Everything posted by Nytro
[DEBINJECT]
Copyright 2017 Debinject
Written by: Alisson Moretto - 4w4k3 - UndeadSec
Tool designed for good purposes and pentests. Don't be a criminal! Only download it here; do not trust other sources.

CLONE
git clone https://github.com/UndeadSec/Debinject.git

RUNNING
cd Debinject
python debinject.py
If you have another version of Python:
python2.7 debinject.py

RUN ON TARGET SIDE
chmod 755 default.deb
dpkg -i backdoored.deb

DISCLAIMER
"DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE." Taken from LICENSE.

PREREQUISITES
dpkg
dpkg-deb
metasploit

TESTED ON
Kali Linux - SANA
Kali Linux - ROLLING

SCREENSHOT
More in Screens

CONTRIBUTE
Send me more features if you want them; I need your help to make it better.

LICENSE
This project is licensed under BSD-3-Clause - see the LICENSE file for details.

Sursa: https://github.com/UndeadSec/Debinject
filewatcher
A simple auditing utility for macOS.
Filewatcher is an auditing and monitoring utility for macOS. It can audit all events from macOS's system auditpipe and filter them by process or by file. You can use this utility to:
- Monitor access to a file, or a group of files.
- Monitor the activity of a process, and which resources are accessed by that process.
- Build a small host-based IDS by monitoring access or modifications to specific files.
- Do dynamic malware analysis by monitoring what the malware touches on the filesystem.
If you want to read more about how it works, check my blog.

Installation
Just run make to compile it, then ./bin/filewatcher.

Usage: ./bin/filewatcher [OPTIONS]
  -f, --file     Set a file to filter
  -p, --process  Set a process name to filter
  -a, --all      Display all events (by default only basic events like open/read/write are displayed)
  -d, --debug    Enable debugging messages to be saved into a file
  -h, --help     Print this help and exit

Sursa: https://github.com/m3liot/filewatcher
Win32/Diskcoder.Petya.C
Nytro replied to Technetium's topic in Reverse engineering & exploit development
Super, thanks!
Abstract—We present the password reset MitM (PRMitM) attack and show how it can be used to take over user accounts. The PRMitM attack exploits the similarity of the registration and password reset processes to launch a man in the middle (MitM) attack at the application level. The attacker initiates a password reset process with a website and forwards every challenge to the victim who either wishes to register in the attacking site or to access a particular resource on it. The attack has several variants, including exploitation of a password reset process that relies on the victim’s mobile phone, using either SMS or phone call. We evaluated the PRMitM attacks on Google and Facebook users in several experiments, and found that their password reset process is vulnerable to the PRMitM attack. Other websites and some popular mobile applications are vulnerable as well. Although solutions seem trivial in some cases, our experiments show that the straightforward solutions are not as effective as expected. We designed and evaluated two secure password reset processes and evaluated them on users of Google and Facebook. Our results indicate a significant improvement in the security. Since millions of accounts are currently vulnerable to the PRMitM attack, we also present a list of recommendations for implementing and auditing the password reset process. Download: https://www.ieee-security.org/TC/SP2017/papers/207.pdf
KeenLab's MOSEC 2017 iOS 10 Kernel Security Presentation is Now UP!

Resources
- OASP
- Pangu 9 Internals
- Hacking from iOS 8 to iOS 9
- Analysis of iOS 9.3.3 Jailbreak & Security Enhancements of iOS 10
- iOS内核漏洞挖掘-Fuzz & 代码审计 (iOS kernel vulnerability hunting: fuzzing & code audit)
- The Userland Exploits of Pangu 8 (CanSecWest)
- Improving Mac OS X Security Through Gray Box Fuzzing Technique
- OS X Kernel is As Strong as its Weakest Part
- Optimized Fuzzing IOKit in iOS
- Pangu 9.3 (女娲石)
- Don't Trust Your Eye: Apple Graphics Is Compromised! (CanSecWest, Liang Chen)
- Hack in the (sand)Box (Jonathan Levin) [Video] [PDF]
- The ARMs race to TrustZone (Jonathan Levin) [Video] [PDF]
- iOS 10 - Kernel Heap Revisited (Stefan Esser) [Video] [PDF]
- iOS Kernel Exploitation (Stefan Esser) [Video] [PDF]
Link: https://github.com/aozhimin/MOSEC-2017
Beginner Guide to Insecure Direct Object References (IDOR)
Posted in Penetration Testing, Website Hacking on July 4, 2017 by Raj Chandel
Insecure Direct Object References (IDOR) has been placed fourth on the OWASP Top 10 list of web application security risks since 2013. It allows an authorized user to obtain other users' information, and it can be found in any type of web application. Basically, it allows requests to be made for specific objects through pages or services without proper verification of the requester's right to that content.
OWASP definition: Insecure Direct Object References allow attackers to bypass authorization and access resources directly by modifying the value of a parameter used to point directly to an object. Such resources can be database entries belonging to other users, files in the system, and more. This is caused by the application taking user-supplied input and using it to retrieve an object without performing sufficient authorization checks.
Here the application uses unvalidated data in a SQL call that accesses account information. Let's consider a scenario where a web application allows the logged-in user to change his secret value. The secret value must refer to some user account in the database. Currently user bee is logged into the web server to change his secret value, but he wants to perform a mischievous action that changes the secret value of another user. Using Burp Suite we capture the browser request; as you can see in the given image, the logged-in user is bee and the secret value is hello. Now we manipulate the username into another user's.
SQLquery = "SELECT * FROM useraccounts WHERE account = 'bee';
Now let's change the username to raj, as shown in the given image. To perform this attack an application needs at least two user accounts.
SQLquery = "SELECT * FROM useraccounts WHERE account = 'raj';
Great!!! We have successfully changed the secret value for raj.
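The broken authorization in the scenario above can be sketched in Python. The handler names, the in-memory SECRETS table, and the session model are hypothetical stand-ins for the real bWAPP code; the point is only that the fixed handler checks object ownership before writing.

```python
# Stand-in for the useraccounts table from the SQL queries above.
SECRETS = {"bee": "hello", "raj": "s3cret"}

def update_secret_vulnerable(session_user, requested_account, new_secret):
    # IDOR: the client-supplied account name is used directly, so user
    # "bee" can overwrite raj's secret just by editing one request
    # parameter in Burp Suite.
    SECRETS[requested_account] = new_secret
    return "updated " + requested_account

def update_secret_fixed(session_user, requested_account, new_secret):
    # Authorization check: the object reference must belong to the
    # authenticated requester, not merely exist.
    if requested_account != session_user:
        raise PermissionError("not your account")
    SECRETS[requested_account] = new_secret
    return "updated " + requested_account
```

With the vulnerable handler, calling it as bee with requested_account="raj" silently changes raj's secret; the fixed handler raises PermissionError for the same request.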
Note: on a real website the attacker would swap his user account for an admin account. Let's take another scenario that looks quite familiar for most IDOR attacks. We often place orders online through web applications, for example booking movie tickets on bookmyshow.com. Consider the same scenario in bWAPP for movie ticket booking, where I booked 10 tickets at 15 EUR each. Now let's confirm it and capture the browser request with Burp Suite. You can see we have intercepted the request; the highlighted text contains the number of tickets and the price of one ticket, i.e. 15 EUR, which means 150 EUR will be deducted from my (the user's) account. Now manipulate this price to whatever you desire. I changed it to 1 EUR, which means only 10 EUR will be deducted from the account; you can observe this in the given image, then forward the request. Awesome!!! We have booked the 10 tickets for only 10 EUR.
Author: Aarti Singh is a researcher and technical writer at Hacking Articles, an information security consultant, social media lover, and gadgets enthusiast. Contact here.
Sursa: http://www.hackingarticles.in/beginner-guide-insecure-direct-object-references/
Published on July 5, 2017
Live workshop walkthrough for the TI addr_limit bug. Using syscalls in the kernel (or simply forgetting to reset the addr_limit value before returning to user space) may lead to this type of bug. We're using a stack info leak together with the buggy get_fs/set_fs code to overwrite the (e)uid and (e)gid of the current process and elevate privileges.
@tjt "Why wouldn't they pay that?" - If you had your own company, would you pay 5000 RON to someone who hasn't worked a single day on what you need? Even if the person has done some reading, if they haven't worked at least 1-2 years with what's required, they won't compare to someone who has. For example, when I was hired as a C++ developer 6 years ago, I knew the language extremely well. And yet, it turned out that wasn't nearly enough. I had never worked with sockets, multi-threading, STL, semaphores and who knows what else, while someone with experience would probably have run into at least some of them. I don't believe that someone who hasn't worked at a company on a particular thing, whatever it may be, has spent several hours almost every day developing their skills. Another example: before my first job I wanted to work as a PHP developer. I had written over 20,000 lines of code and had a few projects, BUT: I had never written MVC code (obviously), never worked with OOP (small projects, obviously), never used any framework (same). So why would they pay me 5000 RON a month when I'd have to spend months learning how those things should really be done? "someone who has invested personal time and even money to improve their knowledge, to obtain certifications" - Nobody has invested enough personal time to be as good as someone who did that thing 8 hours a day for 1-2 years. And nobody will. Reading an article a day, or a book now and then, is fine, but it's not enough. Yes, it shows enthusiasm and that counts for a lot, but it's not enough. Put yourself in the employer's shoes. As for HR, or the "professional LinkedIn browsers", unfortunately they don't have the capacity to look past certain things or to understand certain things. You'll always run into problems with them and can lose good jobs because they may decide that "he doesn't have an IT degree, so he can't work in security", because they don't understand that, for example, there is no degree for such things.
@Philip.J.Fry I don't know whether it's RON or EUR; I doubt it's EUR in Romania. Yes, the diploma shouldn't matter, since I don't know anyone who graduated from a Faculty of Reverse Engineering and Malware Analysis in Romania, but without experience it seems hard to believe. Seriously, would you pay someone 3.7 EUR a month in Romania, someone who probably has some technical knowledge picked up in their free time, over someone who has done this for months on end at some antivirus company? @gigiRoman About 4 years ago, I think, I also interviewed at Avira for a C++ developer position. I had to build a client-server, multithreaded application with various other features in 3 hours. I built it and it worked very well, and the people there said most candidates don't finish it within the 3 hours. Then we had a technical discussion. All fine and well, until we got to talking about antivirus. I told them I had written a crypter, a program that takes a detectable file and makes it undetectable. They said "impossible, our antivirus will catch it". I explained how it works and why it wouldn't be caught, that it loads into memory and so on, but they didn't seem to understand. Then they asked me: "Why would we hire you? How do we know that, with access to the antivirus source code, you wouldn't keep developing such things?" I started laughing and told them I didn't need the source code to do that. They never contacted me again. So, as a general idea, which I also ran into about 6 years ago when I took a job at 1600 RON: do NOT expect employers to throw money at you, because they have no reason to. Besides, you're not the only people looking for a job in IT. Although there are plenty of jobs, entry-level positions get a great many applicants. Also, I'd assume that if someone works on a project in their personal time, or builds something in order to learn, they might also post it here. I haven't seen such things posted in years.
I was a young student once too, and what do you think: did I prefer to sit and write code, or to drink until I dropped?
If you have no work experience, or at least personal projects, anything at all, nobody will give you 5000 RON. But from there it can grow quite a lot.
WSUXploit
Written by Marcio Almeida to weaponize the WSUSpect Proxy created by Paul Stone and Alex Chapman in 2015 and publicly released by Context Information Security.

Summary
This is a weaponized MiTM exploit script that injects 'fake' updates into non-SSL WSUS traffic. It is based on the WSUSpect Proxy application that was introduced to the public in the Black Hat USA 2015 presentation 'WSUSpect - Compromising the Windows Enterprise via Windows Update'. Please read the white paper and the presentation slides listed below:
White paper: http://www.contextis.com/documents/161/CTX_WSUSpect_White_Paper.pdf
Slides: http://www.contextis.com/documents/162/WSUSpect_Presentation.pdf
Sursa: https://github.com/pimps/wsuxploit
How I found a bug in Intel Skylake processors July 3, 2017 By Xavier Leroy Instructors of "Introduction to programming" courses know that students are willing to blame the failures of their programs on anything. Sorting routine discards half of the data? "That might be a Windows virus!" Binary search always fails? "The Java compiler is acting funny today!" More experienced programmers know very well that the bug is generally in their code: occasionally in third-party libraries; very rarely in system libraries; exceedingly rarely in the compiler; and never in the processor. That's what I thought too, until recently. Here is how I ran into a bug in Intel Skylake processors while trying to debug mysterious OCaml failures. The first sighting Late April 2016, shortly after OCaml 4.03.0 was released, a Serious Industrial OCaml User (SIOU) contacted me privately with bad news: one of their applications, written in OCaml and compiled with OCaml 4.03.0, was crashing randomly. Not at every run, but once in a while it would segfault, at different places within the code. Moreover, the crashes were only observed on their most recent computers, those running Intel Skylake processors. (Skylake is the nickname for what was the latest generation of Intel processors at the time. The latest generation at the time of this writing is nicknamed Kaby Lake.) Many OCaml bugs have been reported to me in the last 25 years, but this report was particularly troubling. Why Skylake processors only? Indeed, I couldn't reproduce the crashes using SIOU's binary on my computers at Inria, which were all running older Intel processors. Why the lack of reproducibility? SIOU's application was single-threaded and made no network I/O, only file I/O, so its execution should have been perfectly deterministic, and whatever bug caused the segfault should cause it at every run and at the same place in the code. My first guess was flaky hardware at SIOU: a bad memory chip? overheating? 
Speaking from personal experience, those things happen and can result in a computer that boots and runs a GUI just fine, then crashes under load. So, I suggested SIOU to run a memory test, underclock their processor, and disable hyperthreading (HT) while they were at it. The HT suggestion was inspired by an earlier report of a Skylake bug involving AVX vector arithmetic, which would show up only with HT enabled (see description). SIOU didn't take my suggestions well, arguing (correctly) that they were running other CPU- and memory-intensive tests on their Skylake machines and only the ones written in OCaml would crash. Clearly, they thought their hardware was perfect and the bug was in my software. Great. I still managed to cajole them into running a memory test, which came back clean, but my suggestion about turning HT off was ignored. (Too bad, because this would have saved us much time.) In parallel, SIOU was conducting an impressive investigation, varying the version of OCaml, the C compiler used to compile OCaml's runtime system, and the operating system. The verdict came as follows. OCaml: 4.03, including early betas, but not 4.02.3. C compiler: GCC, but not Clang. OS: Linux and Windows, but not MacOS. Since MacOS uses Clang and they used a GCC-based Windows port, the finger was being firmly pointed to OCaml 4.03 and GCC. Surely, SIOU reasoned, in the OCaml 4.03 runtime system, there is a piece of bad C code -- an undefined behavior as we say in the business -- causing GCC to generate machine code that crashes, as C compilers are allowed to do in the presence of undefined behaviors. That would not be the first time that GCC treats undefined behaviors in the least possibly helpful way, see for instance this security hole and this broken benchmark. The explanation above was plausible but still failed to account for the random nature of crashes. When GCC generates bizarre code based on an undefined behavior, it still generates deterministic code. 
The only source of randomness I could think of is Address Space Layout Randomization (ASLR), an OS feature that causes absolute memory addresses to change from run to run. The OCaml runtime system uses absolute addresses in some places, e.g. to index into a hash table of memory pages. However, the crashes remained random after turning ASLR off, in particular when running under the GDB debugger. We were now in early May 2016, and it was my turn to get my hands dirty, as SIOU subtly hinted by giving me a shell account on their famous Skylake machine. My first attempt was to build a debug version of OCaml 4.03 (to which I planned to add even more debugging instrumentation later) and rebuild SIOU's application with this version of OCaml. Unfortunately this debug version would not trigger the crash. Instead, I worked from the executable provided by SIOU, first interactively under GDB (but it nearly drove me crazy, as I had to wait sometimes one hour to trigger the crash again), then using a little OCaml script that ran the program 1000 times and saved the core dumps produced at every crash. Debugging the OCaml runtime system is no fun, but post-mortem debugging from core dumps is atrocious. Analysis of 30 core dumps showed the segfaults to occur in 7 different places, two within the OCaml GC and 5 within the application. The most popular place, with 50% of the crashes, was the mark_slice function from OCaml's GC. In all cases, the OCaml heap was corrupted: a well-formed data structure contains a bad pointer, i.e. a pointer that doesn't point to the first field of a Caml block but instead points to the header or inside the middle of a Caml block, or even to invalid memory (already freed). The 15 crashes in mark_slice were all caused by a pointer two words ahead in a block of size 4. All those symptoms were consistent with familiar mistakes such as the ocamlopt compiler forgetting to register a memory root with the GC. 
However, those mistakes would cause reproducible crashes, depending only on the allocation and GC patterns. I completely failed to see what kind of memory management bug in OCaml could cause random crashes! By lack of a better idea, I then listened again to the voice at the back of my head that was whispering "hardware bug!". I had a vague impression that the crashes happened more frequently the more the machine was loaded, as would be the case if it were just an overheating issue. To test this theory, I modified my OCaml script to run N copies of SIOU's program in parallel. For some runs I also disabled the OCaml memory compactor, resulting in a bigger memory footprint and more GC activity. The results were not what I expected but striking nonetheless:

  N    system load    w/default options    w/compactor turned off
  1    3+epsilon      0 failures           0 failures
  2    4+epsilon      1 failure            3 failures
  4    6+epsilon      12 failures          19 failures
  8    10+epsilon     17 failures          23 failures
  16   18+epsilon     16 failures

The number of failures given above is for 1000 runs of the test program. Notice the jump between N = 2 and N = 4? And the plateau for higher values of N? To explain those numbers, I need to give more information on the test Skylake machine. It has 4 physical cores and 8 logical cores, since HT is enabled. Two of the cores were busy with two long-running tests (not mine) in the background, but otherwise the machine was not doing much, hence the system load was 2 + N + epsilon, where N is the number of tests I ran in parallel. When there are no more than 4 active processes at the same time, the OS scheduler spreads them evenly between the 4 physical cores of the machine, and tries hard not to schedule two processes on the two logical cores of the same physical core, because that would result in underutilization of the resources of the other physical cores. This is the case here for N = 1 and also, most of the time, for N = 2.
When the number of active processes grows above 4, the OS starts taking advantage of HT by scheduling processes to the two logical cores of the same physical core. This is the case for N = 4 here. It's only when all 8 logical cores of the machine are busy that the OS performs traditional time-sharing between processes. This is the case for N = 8 and N = 16 in our experiment. It was now evident that the crashes happened only when hyperthreading kicked in, or more precisely when the OCaml program was running along another hyperthread (logical core) on the same physical core of the processor. I wrote SIOU back with a summary of my findings, imploring them to entertain my theory that it all has to do with hyperthreading. This time they listened and turned hyperthreading off on their machine. Then, the crashes were gone for good: two days of testing in a loop showed no issues whatsoever. Problem solved? Yes! Happy ending? Not yet. Neither I nor SIOU tried to report this issue to Intel or others: SIOU because they were satisfied with the workaround consisting in compiling OCaml with Clang, and because they did not want any publicity of the "SIOU's products crash randomly!" kind; I because I was tired of this problem, didn't know how to report those things (Intel doesn't have a public issue tracker like the rest of us), and suspected it was a problem with the specific machines at SIOU (e.g. a batch of flaky chips that got put in the wrong speed bin by accident). The second sighting The year 2016 went by without anyone else reporting that the sky (or more exactly the Skylake) was falling with OCaml 4.03, so I gladly forgot about this little episode at SIOU (and went on making horrible puns). Then, on January 6th 2017, Enguerrand Decorne and Joris Giovannangeli at Ahrefs (another Serious Industrial OCaml User, member of the Caml Consortium to boot) report mysterious random crashes with OCaml 4.03.0: this is PR#7452 on the Caml bug tracker. 
In the repro case they provided, it's the ocamlopt.opt compiler itself that sometimes crashes or produces nonsensical output while compiling a large source file. This is not particularly surprising, since ocamlopt.opt is itself an OCaml program compiled with the ocamlopt.byte compiler, but it makes the issue easier to discuss and reproduce. The public comments on PR#7452 show rather well what happened next, and the Ahrefs people wrote a detailed story of their bug hunt as a blog post. So, I'll only highlight the turning points of the story. Twelve hours after opening the PR, and already 19 comments into the discussion, Enguerrand Decorne reports that "every machine on which we were able to reproduce the issue was running a CPU of the Intel Skylake family". The day after, I mention the 2016 random crash at SIOU and suggest disabling hyperthreading. The day after that, Joris Giovannangeli confirms that the crash cannot be reproduced when hyperthreading is disabled. In parallel, Joris discovers that the crash happens only if the OCaml runtime system is built with gcc -O2, but not with gcc -O1. In retrospect, this explains the absence of crashes with the debug OCaml runtime and with OCaml 4.02, as both are built with gcc -O1 by default. I go out on a limb and post the following comment: "Is it crazy to imagine that gcc -O2 on the OCaml 4.03 runtime produces a specific instruction sequence that causes hardware issues in (some steppings of) Skylake processors with hyperthreading? Perhaps it is crazy. On the other hand, there was already one documented hardware issue with hyperthreading and Skylake (link)." Mark Shinwell contacts some colleagues at Intel and manages to push a report through Intel customer support. Then, nothing happened for 5 months, until...
* Likely fix nightmare-level Skylake erratum SKL150. Fortunately, either this erratum is very-low-hitting, or gcc/clang/icc/msvc won't usually issue the affected opcode pattern and it ends up being rare. SKL150 - Short loops using both the AH/BH/CH/DH registers and the corresponding wide register *may* result in unpredictable system behavior. Requires both logical processors of the same core (i.e. sibling hyperthreads) to be active to trigger, as well as a "complex set of micro-architectural conditions" SKL150 was documented by Intel in April 2017 and is described on page 65 of 6th Generation Intel® Processor Family - Specification Update. Similar errata go under the names SKW144, SKX150, SKZ7 for variants of the Skylake architecture, and KBL095, KBW095 for the newer Kaby Lake architecture. "Nightmare-level" is not part of the Intel description but sounds about right. Despite the rather vague description ("complex set of micro-architectural conditions", you don't say!), this erratum rings a bell: hyperthreading required? check! triggers pseudo-randomly? check! does not involve floating-point nor vector instructions? check! Plus, a microcode update that works around this erratum is available, nicely packaged by Debian, and ready to apply to our test machines. A few hours later, Joris Giovannangeli confirms that the crash is gone after upgrading the microcode. I run more tests on my shiny new Skylake-based workstation (courtesy of Inria's procurement) and come to the same conclusion, since a test that crashes in less than 10 minutes with the old microcode runs 2.5 days without problems with the updated microcode. Another reason to believe that SKL150 is the culprit is that the problematic code pattern outlined in this erratum is generated by GCC when compiling the OCaml run-time system. 
For example, in byterun/major_gc.c, function sweep_slice, we have C code like this:

  hd = Hd_hp (hp);
  /*...*/
  Hd_hp (hp) = Whitehd_hd (hd);

After macro-expansion, this becomes:

  hd = *hp;
  /*...*/
  *hp = hd & ~0x300;

Clang compiles this code the obvious way, using only full-width registers:

  movq (%rbx), %rax
  [...]
  andq $-769, %rax    # imm = 0xFFFFFFFFFFFFFCFF
  movq %rax, (%rbx)

However, gcc prefers to use the %ah 8-bit register to operate upon bits 8 to 15 of the full register %rax, leaving the other bits unchanged:

  movq (%rdi), %rax
  [...]
  andb $252, %ah
  movq %rax, (%rdi)

The two codes are functionally equivalent. One possible reason for GCC's choice is that it is more compact: the 8-bit constant $252 fits in 1 byte of code, while the 32-bit-extended-to-64-bit constant $-769 needs 4 bytes. At any rate, the code generated by GCC does use both %rax and %ah, and, depending on optimization level and bad luck, such code could end up in a loop small enough to trigger the SKL150 bug. So, in the end, it was a hardware bug. Told you so!

Epilogue

Intel released microcode updates for Skylake and Kaby Lake processors that fix or work around the issue. Debian has detailed instructions to check whether your Intel processor is affected and how to obtain and apply the microcode updates. The timing for the publication of the bug and the release of the microcode updates was just right, because several projects written in OCaml were starting to observe mysterious random crashes, for example Lwt, Coq, and Coccinelle. The hardware bug is making the rounds on the technical Web sites, see for example Ars Technica, HotHardware, Tom's Hardware, and Hacker News. Sursa: http://gallium.inria.fr/blog/intel-skylake-bug/
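Leroy's test harness, the "little OCaml script that ran the program 1000 times", can be sketched in Python. The binary path and the failure-counting criterion are assumptions (his script also saved the core dump from every crash, which is omitted here):

```python
import subprocess

def stress(cmd, runs=1000):
    """Run `cmd` repeatedly and count abnormal exits.

    On Unix, a process killed by a signal gets a negative returncode
    (-11 for SIGSEGV), which is how random segfaults would show up;
    any nonzero code is counted as a failure here.
    """
    failures = 0
    for _ in range(runs):
        result = subprocess.run(cmd, capture_output=True)
        if result.returncode != 0:
            failures += 1
    return failures

if __name__ == "__main__":
    import sys
    # Demo with a command that always succeeds; Leroy would have
    # pointed this at SIOU's OCaml binary instead.
    print(stress([sys.executable, "-c", "pass"], runs=3))
```

Running N copies of such a harness in parallel is what exposed the hyperthreading dependence in the table above.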
Symmetric Encryption

The only way to encrypt today is authenticated encryption, or "AEAD". ChaCha20-Poly1305 is faster in software than AES-GCM; AES-GCM will be faster than ChaCha20-Poly1305 with AES-NI. Poly1305 is also easier than GCM for library designers to implement safely. AES-GCM is the industry standard.

Use, in order of preference:
- The NaCl/libsodium default
- ChaCha20-Poly1305
- AES-GCM

Avoid:
- AES-CBC, AES-CTR by itself
- Block ciphers with 64-bit blocks, such as Blowfish
- OFB mode
- RC4, which is comically broken

Symmetric Key Length

See The Physics of Brute Force to understand why 256-bit keys are more than sufficient. But remember: your AES key is far less likely to be broken than your public key pair, so the latter key size should be larger if you're going to obsess about this.

Use:
- Minimum: 128-bit keys
- Maximum: 256-bit keys

Avoid:
- Constructions with huge keys
- Cipher "cascades"
- Key sizes under 128 bits

Symmetric Signatures

If you're authenticating but not encrypting, as with API requests, don't do anything complicated. There is a class of crypto implementation bugs that arises from how you feed data to your MAC, so, if you're designing a new system from scratch, Google "crypto canonicalization bugs". Also, use a secure compare function.

Use:
- HMAC

Avoid:
- HMAC-MD5
- HMAC-SHA1
- Custom "keyed hash" constructions
- Complex polynomial MACs
- Encrypted hashes
- Anything CRC

Hashing/HMAC Algorithm

If you can get away with it, you want to use hashing algorithms that truncate their output, which sidesteps length extension attacks. Meanwhile: it's less likely that you'll upgrade from SHA-2 to SHA-3 than from SHA-2 to BLAKE2, which is faster than SHA-3, and SHA-2 looks great right now, so get comfortable and cuddly with SHA-2.
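The symmetric-signature advice above (plain HMAC plus a secure compare function) can be sketched with Python's standard library. One assumption to flag: this truncates HMAC-SHA-512 output to 256 bits, which is standard MAC truncation but not the same function as HMAC built on the SHA-512/256 hash (which uses different initial values).

```python
import hmac
import hashlib

def sign(key: bytes, message: bytes) -> bytes:
    # HMAC-SHA-512 truncated to 256 bits: truncation hides the full
    # internal state, the same property that sidesteps length extension.
    return hmac.new(key, message, hashlib.sha512).digest()[:32]

def verify(key: bytes, message: bytes, tag: bytes) -> bool:
    # Constant-time comparison: the "secure compare function" the text
    # warns you to use instead of ==.
    return hmac.compare_digest(sign(key, message), tag)
```

The constant-time compare matters because a naive byte-by-byte == lets an attacker recover a valid tag through timing differences.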
Use, in order of preference:
- HMAC-SHA-512/256
- HMAC-SHA-512/224
- HMAC-SHA-384
- HMAC-SHA-224
- HMAC-SHA-512
- HMAC-SHA-256

Alternately, use, in order of preference:
- BLAKE2
- SHA3-512
- SHA3-256

Avoid:
- HMAC-SHA-1
- HMAC-MD5
- MD6
- EDON-R

Random IDs

When creating random IDs, numbers, URLs, nonces, initialization vectors, or anything else that must be random, always use /dev/urandom.

Use:
- /dev/urandom
Create:
- 256-bit random numbers
Avoid:
- Userspace random number generators
- /dev/random

Password Hashing

When using scrypt for password hashing, be aware that it is very sensitive to its parameters, making it possible to end up weaker than bcrypt, and that it suffers from a time-memory trade-off (source #1 and source #2). When using bcrypt, make sure to use the following construction to prevent the leading-NULL-byte problem and the 72-character password limit: bcrypt(base64(sha-512(password))). I'd wait a few years, until 2020 or so, before implementing any of the Password Hashing Competition candidates, such as Argon2; they just haven't had time to mature yet.

Use, in order of preference:
- scrypt
- bcrypt
- sha512crypt
- sha256crypt
- PBKDF2

Avoid:
- Plaintext
- Naked SHA-2, SHA-1, MD5
- Complex homebrew algorithms
- Any encryption algorithm

Asymmetric Encryption

It's time to stop using vanilla RSA and start using NaCl/libsodium. Of all the cryptographic "best practices", this is the one you're least likely to get right on your own. NaCl/libsodium has been designed to prevent you from making stupid mistakes, is highly favored in the cryptographic community, and focuses on modern, highly secure cryptographic primitives. It's also time to start using ECC. Here are several reasons you should stop using RSA and switch to elliptic curve software:
- Progress in attacking RSA --- really, all the classic multiplicative group primitives, including DH and DSA and presumably ElGamal --- is proceeding faster than progress against elliptic curve.
- RSA (and DH) drag you towards "backwards compatibility" (i.e., downgrade-attack compatibility) with insecure systems. Elliptic curve schemes generally don't need to be vigilant about accidentally accepting 768-bit parameters.
- RSA begs implementors to encrypt directly with its public key primitive, which is usually not what you want to do: not only does accidentally designing with RSA encryption usually forfeit forward-secrecy, but it also exposes you to new classes of implementation bugs. Elliptic curve systems don't promote this particular foot-gun.
- The weight of correctness/safety in elliptic curve systems falls primarily on cryptographers, who must provide a set of curve parameters optimized for security at a particular performance level; once that happens, there aren't many knobs for implementors to turn that can subvert security. The opposite is true in RSA. Even if you use RSA-OAEP, there are additional parameters to supply and things you have to know to get right.

If you have to use RSA, do use RSA-OAEP. But don't use RSA. Use ECC.

Use:
- NaCl/libsodium

Avoid:
- RSA-PKCS1v15
- RSAES-OAEP
- RSASSA-PSS with MGF1+SHA-256
- Really, anything RSA
- ElGamal
- OpenPGP, OpenSSL, BouncyCastle, etc.

Asymmetric Key Length

As with symmetric encryption, asymmetric encryption key length is a vital security parameter. Academic, private, and government organizations provide different recommendations, with mathematical formulas to approximate the minimum key size requirement for security. See BlueKrypt's Cryptographic Key Length Recommendation for other recommendations and dates.
To protect data up through 2020, it is recommended to meet these minimum requirements for asymmetric key lengths:

Method           | RSA  | ECC | D-H Key | D-H Group
Lenstra/Verheul  | 1881 | 161 | 151     | 1881
Lenstra Updated  | 1387 | 163 | 163     | 1387
ECRYPT II        | 1776 | 192 | 192     | 1776
NIST             | 2048 | 224 | 224     | 2048
ANSSI            | 2048 | 200 | 200     | 2048
BSI              | 3072 | 256 | 256     | 3072

See also the NSA Fact Sheet Suite B Cryptography and RFC 3766 for additional recommendations and math algorithms for calculating strengths based on calendar year.

Personally, I don't see any problem with using a 2048-bit RSA/DH group and 256-bit ECC/DH key lengths. So, my recommendation would be:

Use:
- 256-bit minimum for ECC/DH keys
- 2048-bit minimum for RSA/DH groups

Avoid:
- Not following the above recommendations

Asymmetric Signatures

In the last few years there has been a major shift away from conventional DSA signatures and towards misuse-resistant "deterministic" signature schemes, of which EdDSA and RFC 6979 are the best examples. You can think of these schemes as "user-proofed" responses to the PlayStation 3 ECDSA flaw, in which reuse of a random number leaked secret keys. Use deterministic signatures in preference to any other signature scheme.

Use, in order of preference:
- NaCl/libsodium
- Ed25519
- RFC 6979 (deterministic DSA/ECDSA)

Avoid:
- RSA-PKCS1v15
- RSASSA-PSS with MGF1+SHA256
- Really, anything RSA
- Vanilla ECDSA
- Vanilla DSA

Diffie-Hellman

This is the trickiest one. Here is roughly the set of considerations:

- If you can just use NaCl, use NaCl. You don't even have to care what NaCl does.
- If you can use a very trustworthy library, use Curve25519; it's the modern ECDH curve with the best software support and the most analysis. People really beat the crap out of Curve25519 when they tried to get it standardized for TLS. There are stronger curves, but none supported as well as Curve25519. But don't implement Curve25519 yourself or port the C code for it.
- If you can't use a very trustworthy library for ECDH but can for DH, use DH-2048 with a standard 2048-bit group, like Colin says, but only if you can hardcode the DH parameters. But don't use conventional DH if you need to negotiate parameters or interoperate with other implementations.
- If you have to do handshake negotiation or interoperate with older software, consider using NIST P-256, which has very widespread software support. Hardcoded-param DH-2048 is safer than NIST P-256, but NIST P-256 is safer than negotiated DH. But only with very trustworthy library support, because NIST P-256 has some pitfalls. P-256 is probably the safest of the NIST curves; don't go down to P-224.
- Isn't crypto fun? If your threat model is criminals, prefer DH-1024 to sketchy curve libraries. If your threat model is governments, prefer sketchy curve libraries to DH-1024. But come on, find a way to follow one of the previous recommendations.

It sucks that DH (really, "key agreement") is such an important crypto building block, but it is.

Use, in order of preference:
- NaCl/libsodium
- 2048-bit Diffie-Hellman Group #14

Avoid:
- Conventional DH
- SRP
- J-PAKE
- Handshakes and negotiation
- Elaborate key negotiation schemes that only use block ciphers
- srand(time())

Website security

By "website security", we mean "the library you use to make your web server speak HTTPS". Believe it or not, OpenSSL is still probably the right decision here, if you can't just delegate this to Amazon and use HTTPS elastic load balancers, which makes it their problem, not yours.

Use:
- OpenSSL, LibreSSL, or BoringSSL if you run your own site
- Amazon AWS Elastic Load Balancing if Amazon does

Avoid:
- PolarSSL
- GnuTLS
- MatrixSSL

Client-server application security

What happens when you design your own custom RSA protocol is that 1-18 months afterwards, hopefully sooner but often later, you discover that you made a mistake and your protocol had virtually no security. A good example is Salt Stack. Salt managed to deploy e=1 RSA.
It seems a little crazy to recommend TLS given its recent history:

- The Logjam DH negotiation attack
- The FREAK export cipher attack
- The POODLE CBC oracle attack
- The RC4 fiasco
- The CRIME compression attack
- The Lucky13 CBC padding oracle timing attack
- The BEAST CBC chained IV attack
- Heartbleed
- Renegotiation
- Triple Handshakes
- Compromised CAs

Here's why you should still use TLS for your custom transport problem:

- Many of these attacks only work against browsers, because they rely on the victim accepting and executing attacker-controlled JavaScript in order to generate repeated known/chosen plaintexts.
- Most of these attacks can be mitigated by hardcoding TLS 1.2+, ECDHE, and AES-GCM. That sounds tricky, and it is, but it's less tricky than designing your own transport protocol with ECDHE and AES-GCM!
- In a custom transport scenario, you don't need to depend on CAs: you can self-sign a certificate and ship it with your code, just like Colin suggests you do with RSA keys.

Use:
- TLS

Avoid:
- Designing your own encrypted transport, which is a genuinely hard engineering problem
- Using TLS in a default configuration, like with "curl"
- Using "curl"
- IPSEC

Online backups

Of course, you should host your own backups in house. The best security is the security where others just don't get access to your data. There are many tools to do this, all of which should be using OpenSSH or TLS for the transport. If using an online backup service, use Tarsnap. It's withstood the test of time.

Use:
- Tarsnap

Avoid:
- Google
- Apple
- Microsoft
- Dropbox
- Amazon S3

Sursa: https://gist.github.com/atoponce/07d8d4c833873be2f68c34f9afc5a78a
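The "hardcode TLS 1.2+, ECDHE and AES-GCM" mitigation above can be sketched with Python's standard ssl module. A minimal client-side sketch; the pinned certificate file name is a placeholder:

```python
import ssl

# Hardcode TLS 1.2+ and ECDHE+AES-GCM cipher suites, per the advice above.
# ssl.PROTOCOL_TLS_CLIENT verifies certificates and hostnames by default.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
ctx.set_ciphers("ECDHE+AESGCM")  # refuses RC4, CBC suites, static RSA key exchange

# For a custom transport you can pin your own self-signed certificate instead
# of trusting the public CA ecosystem (file name is a placeholder):
# ctx.load_verify_locations(cafile="my_pinned_cert.pem")

assert ctx.minimum_version >= ssl.TLSVersion.TLSv1_2
```

Handing this context to any socket wrapper (or to a library that accepts an `ssl_context`) gives you the hardened configuration without re-implementing the transport yourself.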
PPEE (puppy) is a professional PE file explorer for reversers, malware researchers and those who want to statically inspect PE files in more detail. Puppy is free and tries to be small, fast, nimble and friendly as your puppy.

Download v1.09 (Visual C++ 2010 Redistributable Package required)

Features

- Puppy is robust against malformed and crafted PE files, which makes it handy for reversers, malware researchers and those who want to inspect PE files in more detail
- All directories in a PE file are supported, including Export, Import, Resource, Exception, Certificate (relies on Windows API), Base Relocation, Debug, TLS, Load Config, Bound Import, IAT, Delay Import and CLR
- Both PE32 and PE64 support
- Examine YARA rules against the opened file
- VirusTotal and OPSWAT Metadefender query report
- Statically analyze Windows native and .NET executables
- Robust parsing of exe, dll, sys, scr, drv, cpl, ocx and more
- Edit almost every data structure
- Easily dump sections, resources and .NET assembly directories
- Entropy and MD5 calculation of the sections and resource items
- View strings embedded in files, including URL, registry and suspicious strings
- Detect common resource types
- Extract artifacts remaining in the PE file
- Anomaly detection
- Right-click for copy, search on the web, whois and dump
- Built-in hex editor
- Explorer context menu integration
- Descriptive information for data members
- Refresh, Save and Save as menu commands
- Drag and drop support
- List view columns can sort data in an appropriate way
- Open file from command line
- Checksum validation
- Plugin enabled

Link: https://www.mzrst.com/
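The per-section entropy and MD5 feature listed above can be approximated in a few lines of Python. This is an illustrative sketch of the underlying calculation, not PPEE's implementation; the section names and data are made up:

```python
import hashlib
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte: 0.0 for constant data, 8.0 for uniformly random data."""
    if not data:
        return 0.0
    total = len(data)
    return -sum((n / total) * math.log2(n / total) for n in Counter(data).values())

def section_summary(name: str, data: bytes) -> dict:
    # The kind of per-section summary a PE explorer displays.
    return {"name": name, "md5": hashlib.md5(data).hexdigest(), "entropy": shannon_entropy(data)}

# Packed/encrypted sections tend toward 8.0; zero padding tends toward 0.0.
print(section_summary(".text", bytes(range(256))))  # entropy 8.0
print(section_summary(".pad", b"\x00" * 4096))      # entropy 0.0
```

High entropy in a section is a quick heuristic for packing or embedded encrypted payloads, which is why tools like this surface it next to the hash.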
Sounds really shady. Ban.
See you in November at DefCamp 2017

Want to experience a conference that offers outstanding content infused with a truly cyber security experience? For two days (November 9th-10th) Bucharest will become once again the capital of information security in Central & Eastern Europe, hosting at DefCamp more than 1,300 experts, enthusiasts and companies interested to learn the "what" and "how" of keeping information & infrastructures safe. Now it's getting really close: this year's conference is only months away, and that means very early bird tickets are now available.

Register Now at DefCamp 2017 (50% Off)

What can you expect from the 2017 edition?

- 2 days full of cyber (in)security topics: GDPR, cyber warfare, ransomware, malware, social engineering, offensive & defensive security measures
- 3 stages hosting over 35 international speakers and almost 50 hours of presentations
- Hacking Village hosting more than 10 competitions where you can test your skills or see how your technology stands
- 1,300 attendees with a background in cyber security, information technology, development, management, or students eager to learn

How to get involved?

- Speaker: Call for Papers & Speakers is available here.
- Volunteer: Be part of the DefCamp #8 team and see behind the scenes the challenges an event like this can have.
- Partner: Are you searching for opportunities for your company? Become our partner!
- Hacking Village: Do you have a great idea for a hacking or cyber security contest? Consider applying to the Hacking Village Call for Contests.
- Attendee: Register at DefCamp 2017 right now and you will benefit from very early bird discounts.

Register Now at DefCamp 2017 (50% Off)

Use the following code to get an extra 10% discount off the Very Early Bird tickets by June 27th. This is the best price you will get for the 2017 edition.

Code: DEFCAMP_2017_VEB_10
Website: https://def.camp/
A bit simpler than in C, but there isn't much of a difference.
Title: WordPress 2.3-4.7.5 - Host Header Injection in Password Reset
Nytro replied to cotos93's topic in Discutii incepatori
I haven't read up on the vulnerability; I just saw that he wrote "Host header injection". Yeah, lame. RCE or GTFO.
Title: WordPress 2.3-4.7.5 - Host Header Injection in Password Reset
Nytro replied to cotos93's topic in Discutii incepatori
Did you modify the "Host" HTTP header in the password reset request? If so, and if the WordPress install is vulnerable, the person whose password you are trying to reset will probably receive a reset link of the form https://site-ul-tau.com/date-de-resetare-inclusiv-token/ (i.e., your site, with the reset data including the token) and they MUST click that link; you will then receive the request on your site and will be able to initiate the password reset.
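The mechanics described above come down to the reset link being built from the attacker-controlled Host header. A minimal illustrative sketch, where the helper function and the URL shape are assumptions, not WordPress's actual code:

```python
def build_reset_link(host_header: str, token: str) -> str:
    # Vulnerable pattern: trusting the client-supplied Host header instead of
    # a configured canonical site URL when composing the reset e-mail.
    return f"https://{host_header}/reset?token={token}"

# The attacker sends "Host: attacker.example"; the victim's e-mail then
# carries a link pointing at the attacker's server, which captures the
# secret token only if the victim actually clicks it.
link = build_reset_link("attacker.example", "SECRET-TOKEN")
assert link == "https://attacker.example/reset?token=SECRET-TOKEN"
```

This is also why the attack depends entirely on the victim clicking the link: the token never leaves the legitimate server otherwise.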
OWASP Bucharest AppSec Conference 2017 is a one-day conference taking place on October 6, 2017. As last year, there will be trainings/workshops and a capture the flag competition. Talk submissions are made here. Training proposals are submitted here. Sponsorship opportunities are listed in this document. You can submit talks or workshops in the following areas, among others:

- Security aspects of new / emerging web technologies / paradigms / languages / frameworks
- Secure development: frameworks, best practices, secure coding, methods, processes, SDLC, etc.
- Security of web frameworks (Struts, Spring, ASP.Net MVC, RoR, etc.)
- Vulnerability analysis (code review, pentest, static analysis, etc.)
- Threat modelling of applications
- Mobile security and security for the mobile web
- Cloud security
- Browser security and local storage
- Countermeasures for application vulnerabilities
- New technologies, paradigms, tools
- Application security awareness and education
- Security in web services, REST, and service oriented architectures
- Privacy in web apps, web services and data storage

Important:
- the deadline for talk submissions is August 28
- the list of confirmed speakers will be announced on September 1
- the conference takes place on October 6
- each talk will be 40 minutes long
- there will be a speaker agreement

Link: https://www.owasp.org/index.php/OWASP_Bucharest_AppSec_Conference_2017
An application built by IT specialists from Cluj is used by NASA on the International Space Station

Several IT specialists from Cluj, developers of a backup application, have seen their product reach the International Space Station, after NASA bought 20 licenses of their software, which is currently at its sixth version and sold worldwide, News.ro reports.

NASA purchased 20 licenses of Backup4all, an application developed by a team of programmers from Cluj who own the company Softland. Since May, the application has been used on the International Space Station for the agency's backup activities. A license for this application costs 49.99 dollars, but because NASA bought a larger quantity of licenses, it also received a discount, so the total price was 770 dollars. Also, because it is used in an environment without an internet connection, the application had to be modified.

"In January this year we received an email from NASA saying they would like to install Backup4all in a very secure environment, without internet access. They explained that our activation method would not work in their environment, and that is when we found out they wanted to install the application on the International Space Station. A whole month of tests and configuration followed for what they needed, and on May 31 it began to be used. It now runs on eight laptops aboard the International Space Station," explained Lóránt Barla of Softland.

The Cluj team, who have brought Backup4all to its sixth version, explained that they stay in touch with NASA in case the agency needs help with support.

"The people at NASA bought the application from our website like any normal customer. We didn't even know. Maybe we have other equally important customers, but we don't know. They would have had other options, as competition in the backup space is quite strong. Why did they choose our application? Because they felt it was the best solution they could configure according to their needs. As for our NASA customers, we keep in professional contact with them, and if they need support, they can count on our help. But, as a rule, Backup4all is configured once and then backs up automatically without any further interaction with the developers," said Lóránt Barla.

As a company, Softland has been operating since 1999, initially doing outsourcing work. Since 2002, however, the team has focused on developing and selling its own software. Softland currently has 13 employees, who also handle marketing, customer relations and sales.

Sursa: http://www.digi24.ro/stiri/externe/o-aplicatie-realizata-de-it-istii-din-cluj-folosita-de-catre-nasa-pe-statia-spatiala-internationala-737922
I don't remember ever paying 10 pounds (50 RON) for a beer here at home. And I don't think a monthly metro pass costs 13 pounds either. As for rent, I had found a two-room apartment (a "flat room" or whatever they call it, I don't remember) for 1,200 pounds (1,300 EUR) in zone 8. For that money I can live in a penthouse on Dorobanti.
I'm too lazy to read all this, just tell me who to ban.
MS17-010: EternalBlue's Large Non-Paged Pool Overflow in SRV Driver
Posted on: June 2, 2017 at 1:10 am
Author: William Gamazo Sanchez (Vulnerability Research)

The EternalBlue exploit took the spotlight last May as it became the tie that bound the spate of malware attacks these past few weeks: the pervasive WannaCry, the fileless ransomware UIWIX, the Server Message Block (SMB) worm EternalRocks, and the cryptocurrency mining malware Adylkuzz. EternalBlue (patched by Microsoft via MS17-010) is a security flaw related to how a Windows SMB 1.0 (SMBv1) server handles certain requests. If successfully exploited, it can allow attackers to execute arbitrary code on the target system. The severity and complexity of EternalBlue, alongside the other exploits released by the hacking group Shadow Brokers, can be considered medium to high.

We further delved into EternalBlue's inner workings to better understand how the exploit works and provide technical insight on the exploit that wreaked havoc among organizations across various industries around the world.

Vulnerability Analysis

The Windows SMBv1 implementation is vulnerable to a buffer overflow in Large Non-Paged kernel Pool memory through the processing of File Extended Attributes (FEAs) in the kernel function srv!SrvOs2FeaListToNt. The function srv!SrvOs2FeaListToNt calls srv!SrvOs2FeaListSizeToNt to calculate the received FEA list size before converting it to an NTFEA (Windows NT FEA) list. The following sequence of operations happens:

1. srv!SrvOs2FeaListSizeToNt calculates the FEA list size and updates the received FEA list size.
2. The resulting FEA size is greater than the original value because of a wrong WORD cast.
3. When the FEA list is iterated to be converted to an NTFEA list, there is an overflow in the non-paged pool because the original total size of the list was miscalculated.

Overflow Analysis

Our analysis of the overflow applies to srv.sys 6.1.7601.17514_x86. The vulnerable code can be triggered using srv!SrvSmbOpen2.
The trace is as follows:

00 94527bb4 82171149 srv!SrvSmbOpen2 -> SrvOs2FeaListSizeToNt()
01 94527bc8 821721b8 srv!ExecuteTransaction+0x101
02 94527c00 8213b496 srv!SrvSmbTransactionSecondary+0x2c5
03 94527c28 8214a922 srv!SrvProcessSmb+0x187
04 94527c50 82c5df5e srv!WorkerThread+0x15c
05 94527c90 82b05219 nt!PspSystemThreadStartup+0x9e
06 00000000 00000000 nt!KiThreadStartup+0x19

To analyze the overflow, we set the following breakpoint:

bp srv!SrvSmbOpen2+0x79 ".printf \"feasize: %p indatasize: %p fealist addr: %p\\n\",edx,ecx,eax;g;"

When the breakpoint is hit, we have the following (in hex and decimal values):

feasize: 00010000 (65536)
indatasize: 000103d0 (66512)
fealist addr: 89e980d8

From here we can see that the IN-DATA size of 66512 (the same value as the Total Data Count in the NT Trans Request) is bigger than the FEA list size of 65536.

Figure 1: Snapshot of code showing IN-DATA size (highlighted)

What's notable here is that the pointer to IN-DATA will be cast to the FEA list structure, as shown below:

Figure 2: FEA List structure

After casting the IN-DATA buffer, we will have the FEA size 00010000 (65536) stored in FEALIST->cbList. The next step in the SMB driver is to allocate a buffer to convert the FEA list to an NT FEA list. This requires calculating the NTFEA list size, which is done by calling the srv!SrvOs2FeaListSizeToNt function. To see the values returned by this function, we set the following breakpoints:

bp srv!SrvOs2FeaListToNt+0x10 ".printf \"feasize before: %p\\n\",poi(edi);r $t0 = @edi;g;"
bp srv!SrvOs2FeaListToNt+0x15 ".printf \"NTFEA size: %p feasize after: %p\\n\",eax,poi(@$t0);g;"

After breaking we get:

feasize before: 00010000
feasize after: 0001ff5d
NTFEA size: 00010fe8

Accordingly, we found that FEALIST->cbList was updated from 0x10000 to 0x1ff5d. But what part of the code is making the wrong calculation?
The code below shows how the error happens:

Figure 3: Code snapshot showing error in calculating FEALIST->cbList

In the code snapshot above, line 40 onwards shows an example of the calculation error. Because the original FEA list size was updated, the iteration that copies the values to the NT list goes beyond the NTFEA size returned in v6 (which was 00010fe8). Note that if the function returns at line 28 or at line 21, the FEA list is not updated. The other condition that leads to the update of v1, besides the one used by EternalBlue, is trailing data at the end of the FEA list that is not large enough to store another FEA structure.

We also analyzed what happens in kernel memory during a buffer overflow of the LARGE NON-PAGED kernel pool. When SrvOs2FeaListSizeToNt returns, the size required to store the NTFEA list is 00010fe8, which requires a large kernel pool allocation in srv.sys. Using the following breakpoints helps track exactly what happens when the FEA list is converted to an NTFEA list:

bp srv!SrvOs2FeaListToNt+0x99 ".printf \"NEXT: FEA: %p NTFEA: %p\\n\",esi,eax;g;"
bp srv!SrvOs2FeaToNt+0x4d ".printf \"MOV2: dst: %p src: %p size: %p\\n\",ebx,eax,poi(esp+8);g;"
bp srv!SrvOs2FeaListToNt+0xd5

To sum it up, once SrvOs2FeaListSizeToNt has been called and the pool allocated, the function SrvOs2FeaToNt is used while iterating over the FEA list to convert the elements of the list. Inside SrvOs2FeaToNt there are two _memmove operations where all the buffer copy operations happen. With the aforementioned breakpoints it is possible to track what happens during the FEA list conversion. The trace will take quite some time, however.

Figure 4: Code snapshot showing copy operations

After the trace, the breakpoint srv!SrvOs2FeaListToNt+0xd5 will hit and we can get all the data required to analyze the buffer overflow.
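The size growth observed in the debugger output (cbList going from 0x10000 to 0x1ff5d) is consistent with the wrong WORD cast described earlier: a 16-bit store into the 32-bit cbList field updates only the low word and leaves the stale high word in place. An illustrative sketch of that arithmetic, not the actual srv.sys code, with the recomputed low word assumed for illustration:

```python
def buggy_cblist_update(cb_list: int, recomputed: int) -> int:
    """Model a (USHORT) store into a 32-bit size field."""
    low_word = recomputed & 0xFFFF            # WORD cast truncates to 16 bits
    return (cb_list & 0xFFFF0000) | low_word  # stale high word survives

# Values matching the debugger trace above:
before = 0x10000       # original FEALIST->cbList ("feasize before")
recomputed = 0xFF5D    # low word written back (assumed for illustration)
after = buggy_cblist_update(before, recomputed)
assert after == 0x1FF5D   # "feasize after" from the trace
assert after > before     # the size grew, so later copies overrun the pool
```

Because the recorded list size now exceeds the allocation sized from the real data, the conversion loop walks past the end of the pool buffer, which is exactly the overflow the copy-operation trace exposes.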
There are 605 copy operations with size 0 because, at the beginning of the payload, the FEA list contains 0-byte values corresponding to 605 FEA structs. The next FEA size is F3B3 (copy 606), and the resulting copy ends at 85915ff0. After copy operation 606 we see the buffer at its end: 85905008 + 10FE8 = 85915FF0. However, another FEA iteration happens, this time with size A8, which overwrites the next memory area. Note how, after the overwrite, the data lands in a different pool: in this case, the SRVNET.sys pool. After copy operation 607 there is a corrupted FEA, and the server returns STATUS_INVALID_PARAMETER (0xC000000D) for the last FEA in the final NT Transaction sent to the server.
The technique requires reversing some Structure that can be allocated in the overflow area, as shown below: Figure 6: EternalBlue’s exploit mechanism The creation of multiple SRVNET buffers (Kernel Grooming) approximates what happens in memory and simply used to represent the idea. Note that we’ve also intentionally omitted other details to prevent our analysis from being misused. Figure 7: EternalBlue’s exploit chain EternalBlue’s Exploit Chain EternalBlue goes through a chain of processes in order to successfully exploit a vulnerable system or network, as shown above. EternalBlue first sends an SRV buffer except the last packet. This is because the Large NON-PAGED POOL buffer will be created when the last data in the transaction arrives at the server. The SMB server will then accumulate the DATA in an Input buffer until all transaction data are read. The total transaction data will be specified in the initial TRANS packet. Once all transaction data have arrived, the SMB server will process the data. In this case, the data is dispatched to the SrvOpen2 function to read the data via Common Internet File System (CFIS). At this point, EternalBlue ensures that all sent data is received by the server and sent to an SMB ECHO packet. Because the attack can be implemented over a slow network, this echo command is important. In our analysis, even if we sent the initial data, the “Vulnerable Buffer” isn’t created in memory yet. Kernel grooming tries to allocate an SRV vulnerable buffer just before the SRVNET buffer. 
Kernel grooming employs these steps: FreeHole_A: EternalBlue will start creating a kernel hole A by sending SMBv1 packet SMBv2_1n: Send a group of SMBv2 packets FreeHole_B: Send another free hole buffer; this one should be sent before the previous hole is free to make sure another one is created FreeHole_A_CLOSE: close the connection to make the buffer free, after which close A in order to create free hole SMBv2_2n: Send a group of SMBv2 packets FreeHole_B_CLOSE: close the connection to make the buffer free FINAL_Vulnerable_Buffer: Send the last packet of the vulnerable buffer A Vulnerable Buffer will be created in memory just before the SRVNET buffer and part of the SRVNET is overwritten. The conversion from FEA List to NTFEA List will return an error because FEA structs are invalid after a certain point, in which case the server will return with STATUS_INVALID_PARAMETER (0xC000000D). Patch your systems Given how EternalBlue served as the doorway for many of the malware that severely impacted end users and enterprises worldwide, it also serves as a lesson on the importance of applying the latest patches and keeping your systems and networks updated. EternalBlue has already been issued a fix for Windows systems, including unsupported operating systems. Apart from implementing regular patch management to systems and networks, IT/system administrators are also recommended to adopt best practices such as enabling intrusion detection and prevention systems, disabling outdated or unnecessary protocols and ports (like 445), proactively monitoring network traffic, safeguarding the endpoints, and deploying security mechanisms such data categorization and network segmentation to mitigate damage in case of exposure. Employing virtual patching can also help against unknown vulnerabilities. 
Trend Micro Solutions Trend Micro™ Deep Security™ and Vulnerability Protection provide virtual patching that protects endpoints from threats such as fileless infections and those that abuse unpatched vulnerabilities. OfficeScan’s Vulnerability Protection shields endpoints from identified and unknown vulnerability exploits even before patches are deployed. Trend Micro™ Deep Discovery™ provides detection, in-depth analysis, and proactive response to attacks using exploits and other similar threats through specialized engines, custom sandboxing, and seamless correlation across the entire attack lifecycle, allowing it to detect these kinds of attacks even without any engine or pattern update. More in-depth information on Trend Micro’s solutions for EternalBlue and the malware that leverage the exploit can be found in these technical support pages: https://success.trendmicro.com/solution/1117192 https://success.trendmicro.com/solution/1117391 Sursa: http://blog.trendmicro.com/trendlabs-security-intelligence/ms17-010-eternalblue/
-