Everything posted by Aerosol
-
Introduction

In this series of articles, we will learn about a not-so-new type of attack, but one of the most difficult attacks to control. Yes, we will learn about the demon that is Fast Flux! In this article, we will learn what exactly Fast Flux is, the types of Fast Flux, and how Fast Flux works. In the next article of this series, we will learn why it is difficult to detect Fast Flux in an environment, and then finally the recommended ways to detect it.

What is Fast Flux?

The Fast Flux technique is generally used by botnets around the world to hide their phishing and malware delivery sites behind an ever-changing network of compromised hosts. It can also refer to the combination of peer-to-peer networking, distributed command and control, web-based load balancing and proxy redirection used to make malware networks more resistant to discovery and countermeasures.

How Fast Flux Works

The basic idea behind Fast Flux is to have numerous IP addresses associated with a single fully qualified domain name, where the IP addresses are swapped in and out with extremely high frequency through changing DNS records, using a combination of round-robin IP addresses and a very short Time-To-Live (TTL) for any given DNS Resource Record (RR). Website hostnames may be associated with a new set of IP addresses as often as every 3 minutes, which means that an end-user client (i.e., a browser) connecting to the same website every 3 minutes would actually be connecting to a different infected computer each time.

The large pool of rotating IP addresses is not the final destination of the request for the content. Instead, compromised front-end systems are merely deployed as redirectors, called flux agents, which funnel requests and data to and from backend servers that actually serve the content. Essentially, the domain names and URLs for advertised content no longer resolve to the IP address of a specific server, but instead fluctuate amongst many front-end redirectors or proxies, which in turn forward content to another group of backend servers. In addition, the attackers ensure that the compromised systems they use to host their scams have the best possible bandwidth and service availability. They often use a load-distribution scheme which takes node health-check results into account, so that unresponsive nodes are taken out of flux and content availability is always maintained.

Types of Flux Networks

Fast Flux networks are classified under two major categories.

Single flux networks: These are networks in which a set of compromised nodes register and deregister their addresses as part of the DNS A record list for a single DNS name. In the figure below we can see that, in normal client-server communication, an end-user agent such as a web browser sends a request to the server and the server fulfils it, whereas in a single flux network the browser's communication with the server is proxied via a redirector normally called a flux-bot. For example, the figure shows the victim requesting example.com: the browser is actually communicating with the flux network, and the request then gets redirected to the target website.
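This rotation is easy to observe from the outside with nothing more than repeated DNS lookups. Below is a minimal sketch (Python, assuming the third-party dnspython package; the domain name is a placeholder) that samples a name once a minute and tallies the distinct A-record IPs and nameservers it sees, along with the advertised TTL. A large, constantly growing IP set behind a TTL of only a few minutes is the single-flux signature described above; churn in the NS records points to the double-flux variant described below.

# Minimal fast-flux observation sketch. Assumes dnspython >= 2.0
# (pip install dnspython); "suspect-domain.example" is a placeholder.
import time
import dns.exception
import dns.resolver

DOMAIN = "suspect-domain.example"   # hypothetical domain under suspicion

seen_ips, seen_ns = set(), set()
for i in range(10):                 # ten samples, one minute apart
    try:
        a = dns.resolver.resolve(DOMAIN, "A")
        ips = sorted(rr.address for rr in a)
        seen_ips.update(ips)
        ns = dns.resolver.resolve(DOMAIN, "NS")
        seen_ns.update(str(rr.target) for rr in ns)
        print(f"sample {i}: TTL={a.rrset.ttl}s A={ips}")
    except dns.exception.DNSException as e:
        print(f"sample {i}: lookup failed: {e}")
    time.sleep(60)

# Many distinct IPs plus a very short TTL suggests single flux;
# rotating nameservers on top of that suggests double flux.
print(f"{len(seen_ips)} distinct A-record IPs, {len(seen_ns)} distinct nameservers seen")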
Single flux service networks change the DNS records for their front-end node IP addresses as often as every 3-10 minutes, so even if one flux-agent redirector node is shut down, many other infected redirector hosts are standing by and available to quickly take its place. Because Fast Flux techniques utilize blind TCP and UDP redirects, any directional service protocol with a single target port would likely encounter few problems being served via a Fast Flux service network: along with DNS and HTTP, this includes services such as SMTP, IMAP and POP.

Double flux networks: These networks are characterized by multiple nodes registering and deregistering as part of the DNS NS record list. Both the DNS A record sets and the authoritative NS records for a malicious domain are continually changed in a round-robin manner and advertised into the Fast Flux service network.

The figure below outlines how double flux networks actually work and how they differ from single flux networks. First, let's review how single flux networks handle resolution. Suppose the user requests a resource named http://abc.example.com. In the figure we can see that the end-user client (i.e., the browser) first asks the DNS root NS to resolve the top-level domain, .com, and the root NS responds with the address of the respective NS. In the next step, the browser queries that NS for the domain example.com and receives as an answer a referral to the nameserver ns.example.com. Then the browser queries ns.example.com for the address abc.example.com. The NS responds with an IP address, and since it is a single flux network, this IP address changes frequently.

Now let's see how double flux networks work. Everything is the same except for the last step, where the client asks the authoritative NS to resolve abc.example.com. In double flux networks, the IP address of the authoritative NS itself changes frequently. When a DNS request for abc.example.com is received from the client, the current authoritative nameserver forwards the query to the mothership node for the required information. The client can then attempt to initiate direct communication with the target system, although the target system will itself be a dynamically changing front-end flux-agent node. This provides an additional layer of redundancy and survivability within the malware network.

I think readers will now have a better understanding of Fast Flux networks, their types, and how they work. In the next article, we will see how an attacker can benefit from this type of attack, why it is difficult to detect Fast Flux networks, and the recommended ways to detect them.

References

Fast flux - Wikipedia, the free encyclopedia

Source
-
Introduction

Years of discussion on the right to a free and open Internet have not yet settled the matter, and the issue is still a subject of heated debate for stakeholders: users, telecommunications companies and governments. The discussion revolves not only around the ability of government to control the information and services that travel on the Net, but also around the ability of telecommunications providers to decide which content to prioritize. Much has been written on the argument, but where do we currently stand? This article will highlight recent proposals and statements by government officials that have brought the subject back to light, and the different stances that European countries and the U.S. are taking on the subject.

The Basis of the Argument

The fundamental issue in the network neutrality debate is the principle of open access to the net, and the possibility for broadband traffic providers to ease or slow users' access to information, applications, sites and platforms as they please, most likely favoring higher-paying customers. Those who argue for and against network neutrality bring compelling reasons to the table.

Network neutrality advocates believe that whenever an Internet Service Provider (ISP) treats some broadband traffic or some network users differently, it goes against basic principles that have governed the Internet so far, including free access to information for all. They are concerned that this form of discrimination could eventually make the Internet less valuable to end users. In favor of network neutrality are not only many users but also renowned Internet companies like Amazon, eBay and Vonage, worried that broadband providers would suddenly have the power to decide which competing applications to favor. ISPs should not control what data arrives to users or how fast, and definitely should not be concerned with what content is transmitted (as long as it is lawful), only that it is transmitted properly. Such a power for telecom providers would stifle competition and slow the pace of broadband deployment that was designed to empower consumers.

Many advocates of this retained freedom are also in the academic world. Lawrence Lessig, an American academic and political activist, for instance, is a supporter of the network neutrality movement. He has asked Congress to defend Michael Powell's four Internet freedoms (i.e., freedom to access content, to use applications, to attach personal devices, and to obtain service plan information), believing openness will trigger a new wave of innovation. Tim Wu, an American academic and professor at Columbia Law School, whose paper Network Neutrality, Broadband Discrimination explores a non-discrimination regime and a self- or non-regulation approach to preserving network neutrality, is devoted to keeping the Internet open and argues that network service providers should not be allowed to deny people access to web resources or to prioritize certain content. Wu has also pressed the FCC for network neutrality in wireless computing and insists telecom carriers should carry content without discrimination. Even U.S. politicians have come forward to express concern over net neutrality.
President Obama, for example, says he is unequivocally committed to net neutrality and would be opposed to any Federal Communications Commission plan that creates a so-called Internet "fast lane."

It is no surprise, instead, that many telecom providers are pushing for less neutrality and more control. With the Internet becoming more data-intensive, fitting for new technologies (e.g., VoIP, IPTV, Wi-Fi, power-line communication) and suiting ongoing demands for modern audio and video applications that keep increasing bandwidth requirements, telecom carriers believe they have a right to decide what is transmitted on their networks, and at what price. They therefore want to be allowed to favor their own content and charge extra fees to give others the VIP treatment; in addition, they want to be able to refuse those who cannot afford the high-class service. In general, cable companies want to end strict net neutrality and are against treating all data on the Internet equally; being for-profit companies, they believe in their right to charge different fees according to the services provided and to slow down any sites that will not pay up. In other words, cable carriers want to charge Internet content companies, such as Google and Netflix, for faster and better Internet service. Those that pay will be guaranteed that their content reaches end users ahead of those who do not. Higher costs are justified by the need to upgrade the infrastructure frequently and to meet the increasing demand for speed and bandwidth driven by the growth in audio and HD video content sharing. Network providers could favor some clients and give them higher-speed networks while degrading the service of specific content providers.

This matter would potentially have a major impact not only on individual Internet users but also on online companies, especially smaller businesses that rely on the open Internet to launch their products and reveal their goods, applications and services to customers. This, of course, requires all data on the Internet to be treated equally and broadband carriers to route all traffic in a neutral manner, without blocking, speeding up, or slowing down particular applications or content. People want to preserve the Internet as it is, with no government regulation or legal intervention that would allow network operators to dictate what people can do online. This is how the whole debate started and progressed; it has grown into a major topic of discussion with regard to the regulatory framework set forth by the Federal Communications Commission, which has increased oversight of this area.

FCC's Proposal and Reactions

The FCC's stance on the subject of network neutrality was clear. In December 2010, the commission issued the Open Internet order to establish three basic rules for ISPs to follow:

Transparency of network management practices, performance and terms of service
No blocking of lawful content, applications and services
No unreasonable discrimination when transmitting lawful traffic

The FCC was protecting net neutrality, in the beginning at least. But after a federal court struck down key parts of those rules, Chairman Tom Wheeler's proposal for a new open Internet framework is now challenging the concept. His set of rules allowing for an online fast lane has been criticized.
In fact, a number of complaints have come forward saying that it would undermine the goal of net neutrality; an "overwhelming surge" of commenters providing feedback, mostly criticism, to the FCC's online comment filing system about the recent proposal shows how touchy the subject is. The proposal is still in line with the FCC's net neutrality stance, but for the first time it possibly introduces different rules for "wholesale" and "retail" transactions, with the former regulated in lighter ways.

Alongside end users, Internet activists and many U.S. politicians believe in a free and open Internet with no arbitrary fees or slow lanes for sites that cannot pay for technology that serves their interests. Those who support net neutrality are deeply concerned about the FCC's controversial proposal and about where the Internet is going. It is also becoming an economic problem, which probably explains the recent interventions of higher political figures, including President Obama, in the debate. The President released an official statement affirming, "An open Internet is essential to the American economy, and increasingly to our very way of life. By lowering the cost of launching a new idea, igniting new political movements, and bringing communities closer together, it has been one of the most significant democratizing influences the world has ever known." The President urges the FCC not to "allow Internet service providers (ISPs) to restrict the best access or to pick winners and losers in the online marketplace for services and ideas [...]" and to "implement the strongest possible rules to protect net neutrality."

On the other side of the argument stands Michael Powell, a former chairman of the FCC (2001-05) and the current president of the lobbyist trade group the National Cable & Telecommunications Association (NCTA). He has contributed numerous editorials (in the National Journal and other media outlets) and opinion pieces opposing net neutrality, calling for ISPs to have pay-to-play fast lanes for the sake of Internet availability. Powell argues that net neutrality impairs rather than helps advancement in technology. He believes regulations discourage new competitive entries in the broadband provider world, favor larger regulated companies, discourage investment, and effectively kill innovation.

Net neutrality in Europe

The debate over the issue of net neutrality has gone on for years in the U.S. Is the rest of the world immune? Certainly not. The debate has extended internationally, where the problem has become terrestrial-network centered, especially in Europe. This has turned out to be a problem on a wide scale, influenced by state-level politics and ultimately affecting consumer choices relating to broadband Internet access services. Just a few months ago, in the spring of 2014, the European Parliament discussed new rules and regulations for managing the Internet arena. The response of European lawmakers was unmistakable: a strong stance in defense of net neutrality. They voted to limit the ability of telecommunication providers to charge third parties for faster network access. Providers will be able to limit and slow down services only for a limited time in particular cases, such as network security needs or court orders.
Acknowledging that ISPs are still commercial entities with expenses (due to costly network upgrades to provide more advanced services to clients), telecom providers will be allowed to offer specialized services at a premium (including video services and some cloud business applications), but these cannot come at the disadvantage of other clients; in addition, the services must be provided by the ISPs themselves and not by third parties. It will now be up to each individual European country to adopt and enforce the rules, but the European Parliament vote is definitely a step towards securing the net neutrality principle in Europe.

Conclusion

Conceivably, the viewpoint of Jon Peha from Carnegie Mellon University in his paper "The Benefits and Risks of Mandating Network Neutrality, and the Quest for a Balanced Policy" could clear up the issue at hand and offer a way out of the debate, as it proposes a balanced approach to the concerns of those for and against net neutrality. "Success depends on moving the debate from vague principles to specific details about what practical forms of discrimination should and should not be allowed, and where one can prohibit the harmful without prohibiting the beneficial," stated Peha back in 2006. The question may not be who ought to pay for certain Internet services but rather, more importantly, how to work out the rights and freedoms consumers and carriers deserve. With Obama having spoken last month demanding that the FCC keep the Internet free and open, one may wonder whether the FCC will listen and consider reclassifying broadband Internet service as a telecommunications service in order to preserve net neutrality. It seems that the U.S. president's take on the matter will strongly influence the debate in the U.S. and may push the government to issue clear regulations to control the perceived power of ISPs.

References

Carlsmith, E. M. & Wendell, L. C. (2006). Testimony of Lawrence Lessig - In Support of Network Neutrality. Retrieved from http://moritzlaw.osu.edu/students/groups/is/files/2012/02/lessig-formatted.pdf
Kastrenakes, J. (2014, September 16). FCC received a total of 3.7 million comments on net neutrality. The Verge.
King, A. (2014, September 10). King Calls on FCC to Adopt Stronger Net Neutrality Rules. Press release, Office of Senator Angus King.
Miller, Z. L. (2014, October 9). Obama Signals Opposition to 'Fast Lanes' in Support of Net Neutrality. TIME.
Open Internet Coalition. (n.d.). Why an Open Internet - Openness is a Fundamental Principle of the Internet. Retrieved from the Open Internet Coalition.
Peha, J. M. (2006). The Benefits and Risks of Mandating Network Neutrality, and the Quest for a Balanced Policy. Carnegie Mellon University. Retrieved from http://repository.cmu.edu/cgi/viewcontent.cgi?article=1021&context=epp
Powell, M. (2014, October 31). Michael Powell: The FCC and competition. The Orange County Register.
The White House, United States Government. (n.d.). Net Neutrality: President Obama's Plan for a Free and Open Internet.
Wyatt, E. (2014, April 23). F.C.C., in a Shift, Backs Fast Lanes for Web Traffic. The New York Times. Retrieved from http://www.nytimes.com/2014/04/24/technology/fcc-new-net-neutrality-rules.html?_r=0

Source
-
Introduction

The rapid diffusion of mobile technology, and the convergence onto it of numerous services including social networking, cloud computing and payment, are pushing the IT and security industries to develop new solutions for user authentication. Passcodes, PINs and thumbprints are a few examples of mechanisms that can be adopted to protect mobile devices. Security experts are aware that human behavior represents the weakest link in the security chain. For this reason, one of their principal goals is to improve the user's experience with effective and easy-to-use security measures. The above methods, for example, are effective, but users are induced into misbehavior by laziness and carelessness.

Mobile devices are becoming an essential component of our daily life. They manage a huge quantity of information that contributes to the definition of our digital identity. Mobile devices are used to maintain relationships on social networks, to complete payments as part of two-factor authentication schemes for web services, and to store sensitive user data. Traditional authentication methods are perceived by mobile users as a waste of time, and the majority of them do not use authentication on their mobile phones at all. The problem is that users are, in the majority of cases, totally unaware of the principal cyber threats and ignore the importance of the authentication process. Research groups and mobile device vendors are trying to improve the user experience around authentication by introducing user behavior and biometric analysis. The research community is trying to develop implicit authentication mechanisms that rely on user behavior, accomplished by building so-called user profiles from various sensor data.

The User Behavior Modelling approach with mobile device sensors

To overcome users' bad habits and improve their experience while maintaining a significant level of security, a group of researchers at Glasgow Caledonian University (Hilmi Güneş Kayacık, Mike Just, Lynne Baillie, David Aspinall and Nicholas Micallef) has developed a sensor-based authentication method that could simplify the verification of a phone user's identity. The proposed approach is based on the definition of a user profile built from the data collected by the numerous sensors present in a mobile phone. If the user behavior observed by the device sensors appears consistent with the profile, the device maintains a high comfort level, while in the presence of discrepancies a new authentication action is required and alternative measures are triggered, such as requiring a passcode. This kind of approach is clearly more comfortable for the user, because it reduces the occurrences of explicit authentication, and it encourages more individuals to adopt such authentication mechanisms on their devices.

"We propose a lightweight, and temporally and spatially aware user behaviour modelling technique for sensor-based authentication. Operating in the background, our data driven technique compares current behaviour with a user profile. If the behaviour deviates sufficiently from the established norm, actions such as explicit authentication can be triggered. To support a quick and lightweight deployment, our solution automatically switches from training mode to deployment mode when the user's behaviour is sufficiently learned.
Furthermore, it allows the device to automatically determine a suitable detection threshold," reads the abstract of the paper, titled "Data Driven Authentication: On the Effectiveness of User Behaviour Modelling with Mobile Device Sensors".

User Behaviour Modelling could prevent unauthorized access to a user's phone, because the technique is able to discriminate the legitimate owner of the device from anyone else. The technique developed by the researchers is very effective, and the way they create the user's profile from habits is particularly interesting: for example, they examine the nearest cellphone towers to create contextual "anchors" used to define user behavior throughout the day. This means that the technique uses location data related to the user's movements during an ordinary day. The researchers based their analysis on the concept of an "anchor", a sort of snapshot used to gather information on the user's habits, including the mobile apps used, Wi-Fi networks accessed, and connections with other devices through Bluetooth. The "anchor" is also used to collect information related to the environment surrounding the mobile device, for example the noise and light levels of the areas visited by the user.

All the data collected by the researchers allowed them to profile users, and the experts defined an algorithm to match real-time behavior against normal behavioral patterns. The results of the experiments conducted over a few weeks are surprising: the model allowed them to discover that a stranger had stolen a user's smartphone within two minutes, with 99% accuracy. The researchers also ran a series of tests in a worst-case scenario. For example, they simulated the theft of the mobile by a roommate who was even given a list of the apps the owner generally used. In this case, the software designed by the team discovered the theft in about ten minutes with 53% accuracy. The experts also explained that the algorithm presents a low rate of false positives.

The User Behavior Modelling technique

The researchers designed their software to operate in training mode until it is able to build a user's profile. This activity is transparent to the end user and completes once the application has defined a user's profile through the analysis of his routine. The team highlighted that, unlike previous works, the "learning mode" implemented in their solution is incremental and collects data until it is able to build the profile. "We however argue that training duration must be set automatically on a per user basis since, as our evaluation shows, there is no one-size-fits-all," states the paper. Other similar works incorporated user feedback to refine tracked profiles and did not consider the duration of training. That approach is not viable for deploying the technology on a large scale, because it is reasonable to expect that a user will not expend any effort in 'teaching' the device by providing feedback, "but they will quickly grow tired if frequent and labor-intensive feedback is required".

The profiling technique elaborated by the experts is based on the definition of temporal and spatial models built from the data in a lightweight and non-parametric way. Once the algorithm has qualified a profile, the training is completed and the application switches from training mode to deployment mode.
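To make the idea concrete, here is an illustrative sketch (Python; this is not the authors' code, and the feature names, window size and novelty cutoff are invented for the example) of an incremental trainer that records contextual "anchor" snapshots and switches itself from training mode to deployment mode once new observations stop adding anything novel.

# Illustrative only: a toy profile that learns (feature, value) pairs
# from contextual "anchor" snapshots and stops training automatically.
from collections import Counter

class BehaviourProfile:
    def __init__(self, novelty_cutoff=0.05, window=100):
        self.counts = Counter()     # how often each (feature, value) was seen
        self.samples = 0
        self.recent_novel = []      # 1 if a sample contained unseen values
        self.novelty_cutoff = novelty_cutoff
        self.window = window
        self.training = True

    def observe(self, snapshot):
        """snapshot: dict of contextual features, e.g. an 'anchor' of
        {'hour': 9, 'cell': 'tower-123', 'wifi': 'office-ap', 'app': 'mail'}"""
        novel = any((k, v) not in self.counts for k, v in snapshot.items())
        for item in snapshot.items():
            self.counts[item] += 1
        self.samples += 1
        self.recent_novel.append(1 if novel else 0)
        self.recent_novel = self.recent_novel[-self.window:]
        # Switch to deployment mode once novel observations become rare:
        if self.training and len(self.recent_novel) == self.window:
            if sum(self.recent_novel) / self.window < self.novelty_cutoff:
                self.training = False   # behaviour sufficiently learned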
At this point, every time the parameters defined by the model fall below the detection threshold, which is calculated taking user settings into account, the software launches an authentication challenge.

The dataset used by the researchers for the tests is composed of data collected from seven staff and students of Glasgow Caledonian University. The data, collected in 2013 from Android devices, includes various kinds of information such as sensor data from Wi-Fi networks, cell tower data, application usage, the surrounding environment's parameters (light and noise levels), and device system stats. The data composing the dataset was collected over periods of variable duration, from 2 weeks to 14 weeks for different users. To improve the efficiency of the analysis, the experts included in the dataset a detailed diary for each profile, which allowed them to investigate anomalies further.

Figure 1 – Summary of the GCU, Rice and MIT datasets used in the tests

The researchers examined different attack scenarios based on the attacker's level of access to a user's frequent locations and his knowledge of the victim's habits. The experts defined two adversarial levels: the uninformed adversary, who knows very little about the victim and his behavior, and the informed adversary, who has deep knowledge of the target user and his behavior. Additionally, the researchers defined an outsider as a person who steals the mobile and runs away, while an insider has access to a location that the user frequently visits and attempts to use the mobile device at the same places as the legitimate user.

The results are very interesting. Informed attacks produce higher comfort levels than uninformed attacks, but even they are not able to bypass the detection mechanism developed by the researchers.

Figure 2 – Test Results

The researchers announced that they will continue investigating the use of behaviour modelling, in particular by analyzing different supervised learning techniques for profiling.

Is User Behavior Modelling an efficient theft deterrent?

Although the results of the tests conducted by the experts demonstrate that the technique could be very effective against the theft of mobile devices, there are serious considerations to be made about users' privacy. This kind of algorithm processes an impressive amount of data to profile users and to define a pattern for its analysis. It is easy to predict that privacy advocates will criticize the technique for its possible use for surveillance purposes. Data tracking and user profiling through the definition of contextual anchors is very invasive. For this reason, it is crucial to understand how to implement the technique in a real commercial scenario. Principal mobile OS providers and hardware vendors like Google and Apple are very interested in implementing the technique in their operating systems. The researchers explained that their User Behavior Modelling algorithm could be very effective for payment systems like Apple Pay, and could help secure a user's daily purchases without constantly typing in secret passcodes. The future applications of User Behavioral Modelling techniques depend on the ability of developers to implement the models without user data ever leaving the device. The work we have analyzed proposes a lightweight, non-parametric modelling approach that can be implemented on modern mobile devices and that determines when to stop the learning mode and the threshold for detection, both automatically from the data.
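Continuing the illustrative BehaviourProfile sketch from earlier, the deployment-mode check might look like the following. Again, the familiarity score and the threshold value are invented for illustration; the paper derives its threshold automatically from the training data.

# Builds on the BehaviourProfile sketch above; scoring rule is invented.
def familiarity(profile, snapshot):
    # Average relative frequency of each (feature, value) pair in the
    # snapshot, as recorded during training; 0.0 means never seen.
    if profile.samples == 0:
        return 0.0
    return sum(profile.counts[item] for item in snapshot.items()) \
           / (len(snapshot) * profile.samples)

def check(profile, snapshot, threshold=0.02):
    if profile.training:
        profile.observe(snapshot)   # still learning; no challenge yet
        return "learning"
    if familiarity(profile, snapshot) < threshold:
        return "challenge"          # unfamiliar context: require the passcode
    profile.observe(snapshot)       # familiar context; keep the profile current
    return "ok"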
References

[1410.7743] Data Driven Authentication: On the Effectiveness of User Behaviour Modelling with Mobile Device Sensors
Spying Software Spots Phone Theft In 2 Minutes, No Password Needed | Co.Design | business + design
http://arxiv.org/ftp/arxiv/papers/1410/1410.7743.pdf

Source
-
https://bitcointalk.org/index.php?topic=895413.100
Quiet, man, stop commenting nonsense.
-
"Din motive de siguran??, rug?m to?i utilizatorii platformei BTCXchange.ro s? î?i retrag? toate fondurile, atât FIAT cât ?i BTC (Bitcoin), din contul aferent platformei pân? în data de 19 decembrie 2014 inclusiv ?i s? nu mai tranzac?ioneze", se arat? într-o notificare publicat? mar?i pe site-ul platformei. Îndemnul c?tre utilizatori de a retrage banii a fost reiterat miercuri, reprezentan?ii platformei ad?ugând c? nu mai au acces la server. În momentul de fa??, nu este clar dac? serviciile de tranzac?ionare vor fi reluate sau dac? platforma se va închide definitiv Anun?ul privind întreruperea tranzac?iilor cu bitcoin a venit la doar câteva s?pt?mâni dup? ce procesatorul de pl??i mobile Netopia mobilPay ?i BTCXchange au încheiat un parteneriat pe baza c?ruia cei 6.000 de comercian?i ai Netopia care folosesc sistemul mobilPay pot accepta pl??i în moneda virtual?. Potrivit reprezentan?ilor Netopia, problemele cu care se confrunt? platforma digital? nu vor afecta eventualele tranzac?ii cu bitcoin la comercian?ii parteneri, în contextul în care BTCXchange nu a procesat pân? în prezent nicio astfel de tranzac?ie, transmite CoinDesk. "Planurile ?i interesul nostru în ceea ce prive?te bitcoin sunt mai mari ca oricând. Dorim s? ne pozi?ion?m pe primul loc în regiune în cadrul mi?c?rii bitcoin", a declarat pentru CoinDesk Antonio Eram, directorul general al Netopia. Potrivit datelor BTCXchange, în total 5.165,18 de bitcoin, cu o valoare de circa 8,3 milioane de lei, au fost tranzac?ionate anul acesta pe platforma digital?. Platforma este de?inut? de o persoan? fizic? - Horea Vu?can. Valoarea bitcoin a variat între 250 ?i 3.405,56 lei pe parcursul anului. Joi, moneda virtual? se tranzac?iona la 1.285 lei. Source ======================================================== Eh, asta se intampla doar pentru ca adminsitratorul e un mare zgarcit, oricum programatorul le-o dat grav peste bot, si-a dat demisia si nu le mai da datele (el fiind singurul om ce avea acele date).
-
Free PDF Merger + (100% discount)
Free Thief Assistant (100% discount)
Free Activity Timer – Pomodoro Edition (100% discount)
Free Silly Family (100% discount)
Free Jack Vs Ninjas (100% discount)
-
Free Easy Access Video Training (100% discount)
Free Smart UnInstaller (100% discount)
Free Control Your Expenses (100% discount)
Free Super Manatee! (100% discount)
Free Pet Vet Hospital (100% discount)
Free The Witches of Pumpkin Avenue (100% discount)
-
Free Sergeant Crash (100% discount)
Free Swippy Motion (100% discount)
Free Dots in the line (100% discount)
Free Top Buy (100% discount)
Free SWAK! (100% discount)
Free Tablik (100% discount)
Free Zen Juggling (100% discount)
Free PrimeFactor (100% discount)
Free Sushi Run! (100% discount)
Free Garden Island Plant Village: Grow & Harvest Fruits & Vegetables on your country farm! (100% discount)
Free Alyia (100% discount)
Free Escape the Hellevator! (100% discount)
-
An online "hacktivist" group that calls itself Anonymous has claimed responsibility for hacking into email accounts of Swedish government in response to the seizure of world renowned The Pirate Bay website and server by Swedish police last week. Apart from Sweden government officials, the Anonymous hacktivist group also claimed to have hacked into the government email accounts of Israel, India, Brazil, Argentina, and Mexico, and revealed their email addresses with passwords in plain-text. The Anonymous group also left a message at the end of the leak: The hack was announced by Anonymous group on their official Twitter account. The tweet also shared a link of Pastebin where leaked data has been dumped with the list of the emails. The tweet reads: Last Tuesday, an infamous Torrent website predominantly used to share copyrighted material such as films, TV shows and music files, free of charge — The Pirate Bay went dark from the internet for almost half a day after Swedish Police raided the site's server room in Stockholm and seized several servers and other equipment. The piracy site remained unavailable for several hours, and appeared back online in the late hours with a new URL hosted under the top-level domain for Costa Rica (.cr). However, some torrent users said that the downloads were neither properly working, nor were free of charge, some said that The Pirate Bay service with .cr domain came by a different group, while others referred to it as a scam. At the moment it is unclear how the group got access to the login credentials of several countries government officials and which server they exactly belong. However, this is not first time, Swedish internet giant Telia was attacked on December 12 following The Pirate Bay raid, reported by The Local. At the time, the online services by Telia were affected as well as user connections were disturbed, RT reported. Also a chief security researcher from Kaspersky Lab, David Jacoby, said the attack on Telia was a distributed denial-of-service (DDoS) attack and was likely a response to the seizer of The Pirate Bay in Stockholm by Swedish police. The company also encountered cyber attacks on both December 9 and 10 as well. However, The Pirate Bay has previously been shut down number of times and had its domain seized, prompting the BitTorrent site to change its top level domain many times. Earlier in September, The Pirate Bay claimed that it ran the notorious website on 21 "raid-proof" virtual machines, which means if one location is raided by the police, the site would hardly took few hours to get back in action. Source
-
There are a number of critical, remotely exploitable command injection vulnerabilities in Schneider Electric's ProClima software, which is used in manufacturing and energy facilities. The ProClima application is a utility that customers use to design control panel enclosures in industrial facilities, helping to manage the heat from enclosed electrical devices. The bugs affect ProClima versions 6.0.1 and earlier, according to an advisory released by ICS-CERT.

The flaws exist in two separate components of the ProClima software, MDraw30.ocx and Atx45.ocx. "MDraw30.ocx control can be initialized and called by malicious scripts potentially causing buffer overflows, which may allow an attacker to execute code remotely," the advisory says. The same scenario is true for the vulnerabilities in Atx45.ocx. All of the vulnerabilities can be exploited remotely, and ICS-CERT said that an attacker with relatively low skills would be able to exploit the bugs. There aren't any known exploits for the vulnerabilities at this point, however.

The vendor has pushed out a new version of the ProClima package that contains fixes for the vulnerabilities. "Schneider Electric has released an updated version of the ProClima software, Version 6.1.7, which mitigates these vulnerabilities. Customers are encouraged to download the new version and update their installations. It is important that customers first uninstall the current version," the ICS-CERT advisory says.

The vulnerabilities were reported to Schneider Electric by Ariele Caltabiano, Andrea Micalizzi, and Brian Gorenc through the Zero Day Initiative.

Source
-
Google yesterday announced that it has released the source code for its End-to-End extension for Chrome via GitHub. End-to-End enables Gmail users to encrypt, sign and verify email messages within the Chrome browser using OpenPGP. "We've always believed strongly that End-To-End must be an open source project, and we think that using GitHub will allow us to work together even better with the community," wrote Stephan Somogyi, Product Manager, Security and Privacy for Google.

Google is calling the updated version of End-to-End an alpha version and hopes to get community feedback. This version already includes contributions from Yahoo's security team. In August, during the Black Hat USA conference in Las Vegas, Yahoo CISO Alex Stamos announced that the company would enable end-to-end encryption for Yahoo Mail users, in addition to announcing a partnership with Google. Yahoo, Google and other companies have been implicated on several occasions as being tacitly cooperative with intelligence agencies gathering user data from Internet companies. Both tech giants, as well as many others, have taken great pains to distance themselves from such allegations, announcing several initiatives aimed at encrypting web-based services. Yahoo, for example, also announced this summer that it is working on enabling HSTS on its servers, as well as certificate transparency. HSTS (HTTP Strict Transport Security) allows websites to tell users' browsers that they only want to communicate over an encrypted connection. The certificate transparency concept involves a system of public logs that list all certificates issued by cooperating certificate authorities. It requires the CAs to voluntarily submit their certificates, but it would help protect against attacks such as spoofed websites or man-in-the-middle.

Google said this version of End-to-End also incorporates fixes for two bugs submitted to its Vulnerability Rewards Program, and it hopes that the alpha will generate feedback on End-to-End's new crypto library. In addition, Google's Somogyi said End-to-End isn't stable enough for release into the Chrome Web Store. "We don't feel it's as usable as it needs to be. Indeed, those looking through the source code will see references to our key server, and it should come as no surprise that we're working on one," Somogyi said. "Key distribution and management is one of the hardest usability problems with cryptography-related products, and we won't release End-To-End in non-alpha form until we have a solution we're content with."

End-to-End is based on OpenPGP, which requires less technical understanding to deploy and run, Somogyi said. While End-to-End will be available to anyone, Google acknowledges it's likely not within the average user's wheelhouse. "We recognize that this sort of encryption will probably only be used for very sensitive messages or by those who need added protection," Somogyi said. "But we hope that the End-to-End extension will make it quicker and easier for people to get that extra layer of security should they need it."

Source
-
Google has added another layer of security for users of Gmail on the desktop, which now supports Content Security Policy (CSP), a standard that's designed to help mitigate cross-site scripting (XSS) and other common Web-based attacks. CSP is a W3C standard that has been around for several years, and it's been supported in a number of browsers for some time. Mozilla has supported CSP since Firefox 4, and the technology is effective at defending against XSS attacks, but one of the issues with it has been that not many sites have deployed it. It's also difficult to implement properly, experts say.

Earlier this year, researchers from Northeastern University released a paper on CSP looking at the question of why it isn't more widely deployed. Michael Weissbacher, one of the researchers, said that he was surprised CSP wasn't in broader use, because the security benefits are clear. "I looked into CSP deployments because it is effective against XSS and could solve lots of problems with web security," Weissbacher explained to Threatpost. "So I was surprised to find that only few websites used it, and those who did, didn't use it fully, marginalizing the benefits. I think it would help the web at large if more websites invest the effort to implement CSP."

For Google, the benefits are clear. Gmail is very high on the list of targets for many kinds of attackers, from run-of-the-mill cybercriminals to APT groups to intelligence services. Gmail's user base is enormous and includes people from all over the world, some of whom are prime targets themselves. Google has beefed up the security of the service several times in the last couple of years, providing HTTPS as the default connection option, adding a two-step verification option and now adding support for CSP. "We know that the safety and reliability of your Gmail is super important to you, which is why we're always working on security improvements like serving images through secure proxy servers, and requiring HTTPS. Today, Gmail on the desktop is becoming more secure with support for Content Security Policy (CSP)," Danesh Irani of Google wrote in a blog post. "There are many great extensions for Gmail. Unfortunately, there are also some extensions that behave badly, loading code which interferes with your Gmail session, or malware which compromises your email's security. Gmail's CSP protects you, by stopping these extensions from loading unsafe code."

XSS attacks are among the more common Web-based attacks, and many popular sites have been found to harbor XSS flaws in the last few years. Attackers can take advantage of these vulnerabilities to load malicious code from a remote site and compromise visitors to a legitimate site. CSP is designed to mitigate these attacks by letting site owners declare which domains can safely load scripts in the browser.
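To make that concrete, here is a minimal sketch of what sending a CSP header looks like on the server side (Python standard library only; the policy string and the allowed script origin are placeholders, not Gmail's actual policy). A browser that supports CSP will refuse to execute inline scripts or scripts fetched from origins not listed in script-src.

# Minimal server sketch that attaches a Content-Security-Policy header.
# The policy and origins below are placeholders for illustration.
from http.server import BaseHTTPRequestHandler, HTTPServer

POLICY = "default-src 'self'; script-src 'self' https://scripts.example.com; object-src 'none'"

class CSPHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<html><body>Scripts may only load from whitelisted origins.</body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        # The browser enforces this policy: inline scripts and scripts
        # from unlisted origins are blocked, which is what defeats many
        # injected-XSS payloads.
        self.send_header("Content-Security-Policy", POLICY)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), CSPHandler).serve_forever()

Source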
-
Security researchers have discovered a backdoor in Android devices sold by Coolpad, a Chinese smartphone manufacturer. The "CoolReaper" vuln has exposed over 10 million users to potential malicious activity. Palo Alto Networks reckons the malware was "installed and maintained by Coolpad despite objections from customers".

It's common for device manufacturers to install software on top of Google's Android mobile operating system to provide additional functionality or to customise Android devices, and some mobile carriers install applications that gather data on device performance. But CoolReaper operates well beyond the collection of basic usage data, acting as a true backdoor into Coolpad devices, according to Palo Alto. CoolReaper has been identified on 24 phone models sold by Coolpad.

"We expect Android manufacturers to pre-install software onto devices that provide features and keep their applications up to date," said Ryan Olson, Intelligence Director, Unit 42, Palo Alto Networks. "But the CoolReaper backdoor detailed in this report goes well beyond what users might expect, giving Coolpad complete control over the affected devices, hiding the software from antivirus programs, and leaving users unprotected from malicious attackers. We urge the millions of Coolpad users who may be impacted by CoolReaper to inspect their devices for presence of the backdoor and to take measures to protect their data."

CoolReaper is capable of a variety of unfriendly actions, including the ability to download, install, or activate any Android application without user consent or notification. It can also clear user data, uninstall existing applications, or disable system applications. Worse yet, the malware can push a fake over-the-air (OTA) update that doesn't update the device but instead installs unwanted applications. It can also send or insert arbitrary SMS or MMS messages into the phone, or dial arbitrary phone numbers. Finally, CoolReaper can upload information about the device, its location, application usage, and calling and SMS history to a Coolpad server.

Palo Alto's Unit 42 research arm began investigating what came to be known as CoolReaper following numerous complaints from Coolpad customers in China posted to internet message boards. In November, a researcher working with Wooyun.org identified a vulnerability in the back-end control system for CoolReaper, which made clear how Coolpad itself controls the backdoor in the software. Chinese news site Aqniu.com reported some details about the backdoor in late November.

Coolpad did not respond to multiple requests for assistance by Palo Alto Networks. The Chinese firm is yet to respond to requests for comment from El Reg. We'll update this story as and when we hear more. More details on Palo Alto's research into CoolReaper can be found in a blog post here and in CoolReaper: The Coolpad Backdoor, a new report from Unit 42 written by Claud Xiao and Ryan Olson. The report contains a list of files to check for on Coolpad devices that may indicate the presence of the CoolReaper backdoor.

Source
-
Diogo Mónica once wrote a short computer script that gave him a secret weapon in the war for San Francisco dinner reservations. This was early 2013. The script would periodically scan the popular online reservation service OpenTable and drop him an email anytime something interesting opened up: a choice Friday night spot at the House of Prime Rib, for example. But soon, Mónica noticed that he wasn't getting the tables that had once been available. By the time he'd check the reservation site, his previously open reservation would be booked. And this was happening crazy fast. Like in a matter of seconds. "It's impossible for a human to do the three forms that are required to do this in under three seconds," he told WIRED last year. Mónica could draw only one conclusion: he'd been drawn into a bot war.

Everyone knows the story of how the world wide web made the internet accessible for everyone, but a lesser-known story of the internet's evolution is how automated code, aka bots, came to quietly take it over. Today, bots account for 56 percent of all website visits, says Marc Gaffan, CEO of Incapsula, a company that sells online security services. Incapsula recently ran an analysis of 20,000 websites to get a snapshot of part of the web, and on smaller websites it found that bot traffic can run as high as 80 percent.

People use scripts to buy gear on eBay and, like Mónica, to snag the best reservations. Last month, the band Foo Fighters sold tickets for their upcoming tour at box offices only, an attempt to strike back against the bots used by online scalpers. "You should expect to see it on ticket sites, travel sites, dating sites," Gaffan says. What's more, a company like Google uses bots to index the entire web, and companies such as IFTTT and Slack give us ways to use bots for good, personalizing our internet and managing the daily informational deluge.

But, increasingly, a slice of these online bots are malicious, used to knock websites offline, flood comment sections with spam, or scrape sites and reuse their content without authorization. Gaffan says that about 20 percent of the Web's traffic comes from these bots, up 10 percent from last year. Often, they're running on hacked computers. And lately they've become more sophisticated: they are better at impersonating Google, or at running in real browsers on hacked computers, and they've made big leaps in breaking human-detecting captcha puzzles, Gaffan says. "Essentially there's been this evolution of bots, where we've seen it become easier and more prevalent over the past couple of years," says Rami Essaid, CEO of Distil Networks, a company that sells bot-blocking software.

But despite the rise of these bad bots, there is some good news for the human race: the total percentage of bot-related web traffic is actually down this year from what it was in 2013. Back then it accounted for 60 percent of the traffic, 4 percent more than today.
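For a sense of how simple the kind of watcher script Mónica describes can be, here is a hedged sketch (Python standard library only; the URL, match string, email addresses and the local mail server are all placeholders, not details from the story): it polls a page and mails its owner when the sought-after text appears.

# Illustrative watcher-bot sketch. All names below are placeholders.
import smtplib
import time
import urllib.request
from email.message import EmailMessage

URL = "https://reservations.example.com/availability"   # placeholder page
PATTERN = "Friday 7:30"                                  # placeholder match

def notify(text):
    msg = EmailMessage()
    msg["Subject"] = "Table open!"
    msg["From"] = "bot@example.com"
    msg["To"] = "me@example.com"
    msg.set_content(text)
    with smtplib.SMTP("localhost") as s:                 # assumes a local MTA
        s.send_message(msg)

while True:
    page = urllib.request.urlopen(URL).read().decode("utf-8", "replace")
    if PATTERN in page:
        notify(f"{PATTERN} just appeared at {URL}")
    time.sleep(300)                                      # poll every 5 minutes

Source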
-
Domain-name overseer ICANN has been hacked and its DNS zone database compromised, the organization has said. Attackers sent staff spoofed emails appearing to come from icann.org. The organization notes it was a "spear phishing" attack, suggesting employees clicked on a link in the messages that took them to a bogus login page, into which staff typed their usernames and passwords, handing hackers the keys to their work email accounts. No sign of two-factor authentication, then.

"The attack resulted in the compromise of the email credentials of several ICANN staff members," ICANN's statement on the matter reads, noting that the attack happened in late November and was discovered a week later. With those details, the hackers then managed to access a number of systems within ICANN, including the Centralized Zone Data System (CZDS), the wiki pages of the Governmental Advisory Committee (GAC), the domain registration Whois portal, and the organization's blog.

The CZDS gives authorized parties access to all the zone files of the world's generic top-level domains. It is not possible to alter those zone files from within that system, but the hackers did manage to obtain information on those who are registered with the system, which includes many of the administrators of the world's registries and registrars. In an email sent to every CZDS user, ICANN has warned that "the attacker obtained administrative access to all files in the CZDS including copies of the zone files in the system. The information you provided as a CZDS user might have been downloaded by the attacker. This may have included your name, postal address, email address, fax and telephone numbers, and your username and password."

ICANN notes that the passwords were stored as salted hash values rather than in plaintext, although the algorithm used is not known. It has since deactivated all pass-phrases and asked users to set new passwords. However, if CZDS users have used the same login details for other systems, the hackers could also gain access to other parts of the internet's basic infrastructure if they can crack the hashes. ICANN says it has found no impact on other systems. "Based on our investigation to date, we are not aware of any other systems that have been compromised, and we have confirmed that this attack does not impact any IANA-related systems," it stated.

Worrying

While the hack is nowhere near the same level as the attack on, say, Sony, which has seen gigabytes of sensitive information leaked onto the internet, it will prove extremely embarrassing to ICANN, which hopes to be handed control of the critical IANA contract next year. IANA is the ICANN-run body that manages the heart of the internet's DNS.

It also comes as the US government revealed yesterday the process by which updates to the internet's root zone files are made through ICANN. When changing the network addresses for the world's top-level nameservers, the process relies on a secure email from ICANN, or a request sent through a secure web portal, a standard-format change request, and self-certification that ICANN has followed its own processes. With the email addresses of staff with access to root zone records having been compromised, and the hack only noticed a week later, there will be significant concern that, had the hackers been luckier, or had an IANA staffer (IANA staff also use icann.org email addresses) logged in to the fake site, the attackers might have gained access to the system used to make changes at the very top of the internet.
ICANN seeks to assure people that it is on top of the situation: "Earlier this year, ICANN began a program of security enhancements in order to strengthen information security for all ICANN systems. We believe these enhancements helped limit the unauthorized access obtained in the attack. Since discovering the attack, we have implemented additional security measures."

That security program began when ICANN suffered a problem with the CZDS system in April. In that case, a number of users were wrongly given admin access to the system. If there is a positive to the news, it is that ICANN has matured in how it deals with security. When the organization experienced a critical failure in its application system for new top-level domains in 2012, which caused it to shut down its entire flagship program for several months, it defensively dismissed the issue as a "glitch" and infuriated thousands of companies by providing very limited information about what had happened and when systems would be back up.

Source
-
@jsonwhite the tool was already posted in an older version at https://rstforums.com/forum/57993-sqli-hunter_v1-1-a.rst. What do you mean it doesn't work? Stop it, and quit posting pointless off-topic replies. It has been tested and works perfectly.
-
Alina POS malware "sparks" off a new variant
Aerosol replied to Nytro's topic in Reverse engineering & exploit development
I found it online a while ago: https://mega.co.nz/#!ZIRhRbyb!oNaSiCt9qzijqklCndlvLrGOxmXHwYmaGhxHK2Rd0DU PM me for the password. Download it and play with it only in a sandbox or in VirtualBox.
-
Easy Macro Recorder is a tool that helps automate repetitive tasks by allowing you to record and play back keyboard and mouse macros. This giveaway has no free program updates or free tech support, must be installed during the giveaway time period, and is for home/personal use only. Get Easy Macro Recorder with free lifetime upgrades if you want free updates, free tech support, business + home use, and the ability to install/reinstall whenever you want. Sale ends in 2 days 19 hrs 06 mins Download
-
ABBYY PDF Transformer+ is a top-rated PDF editor, converter, and creator. Features of ABBYY PDF Transformer+ include the ability to edit regular and scanned PDFs (turn PDFs into editable and searchable formats with the original layout and formatting retained); the ability to convert PDFs to other file formats such as DOCX, XLSX, PPTX, RTF, HTML, EPUB, CSV, and ODT while retaining original content, layout, and formatting; and the ability to create PDFs out of other file formats. Best of all, ABBYY PDF Transformer+ includes ABBYY's world-renowned OCR technology… so there is no PDF you can't handle. Get it now! Sale ends in 19 hrs 06 mins Download
-
fwknop stands for the "FireWall KNock OPerator", and implements an authorization scheme called Single Packet Authorization (SPA). This method of authorization is based around a default-drop packet filter (fwknop supports iptables and firewalld on Linux, ipfw on FreeBSD and Mac OS X, and PF on OpenBSD) and libpcap. SPA is essentially next-generation port knocking (more on this below). The design decisions that guide the development of fwknop can be found in the blog post "Single Packet Authorization: The fwknop Approach".

Download | latest release: 2.6.5
Tutorial
Documentation
Features
Source Code (github)
Code Coverage (for the 2.6.5 release)
Mailing List

You can clone the fwknop git repository as follows from github:

$ git clone https://www.github.com/mrash/fwknop fwknop.git
Cloning into 'fwknop.git'...
remote: Counting objects: 5275, done.
remote: Compressing objects: 100% (1603/1603), done.
remote: Total 5275 (delta 3672), reused 5155 (delta 3552)
Receiving objects: 100% (5275/5275), 2.07 MiB | 3.96 MiB/s, done.
Resolving deltas: 100% (3672/3672), done.

SPA requires only a single encrypted packet in order to communicate various pieces of information, including desired access through a firewall policy and/or complete commands to execute on the target system. By using a firewall to maintain a "default drop" stance, the main application of fwknop is to protect services such as OpenSSH with an additional layer of security, in order to make the exploitation of vulnerabilities (both 0-day and unpatched code) much more difficult. With fwknop deployed, anyone using nmap to look for SSHD can't even tell that it is listening - it makes no difference if they want to run a password cracker against SSHD or even if they have a 0-day exploit. The authorization server passively sniffs SPA packets via libpcap, and hence there is no "server" to which to connect in the traditional sense. Access to a protected service is only granted after an authenticated, properly decrypted, and non-replayed packet is monitored from an fwknop client (see the network diagram below; the SSH session can only take place after the SPA packet is sniffed):

http://www.cipherdyne.org/images/fwknop_tutorial_network_diagram.png

Single Packet Authorization retains the benefits of Port Knocking (i.e. service protection behind a default-drop packet filter), but has the following advantages over Port Knocking (for a complete treatment of all fwknop design goals, see the fwknop tutorial):

SPA can utilize asymmetric ciphers for encryption
SPA is authenticated with an HMAC in the encrypt-then-authenticate model
SPA packets are non-replayable
SPA cannot be broken by trivial sequence-busting attacks
SPA only sends a single packet over the network
SPA is much faster

fwknop started out as a Port Knocking implementation in 2004, and at that time it was the first tool to combine traditional encrypted port knocking with passive OS fingerprinting. This made it possible to do things like only allow, say, Linux-2.4/2.6 systems to connect to your SSH daemon. However, if you are still using the port knocking mode in fwknop, I strongly recommend that you switch to the Single Packet Authorization mode.
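As a usage illustration (the host, IP and username below are placeholders): a typical client run sends one SPA packet authorizing the client's source address for tcp/22, after which the otherwise-invisible SSH daemon accepts the connection.

# Send the SPA packet (1.2.3.4 stands in for the client's public IP):
$ fwknop -A tcp/22 -a 1.2.3.4 -D spa.example.com
# The firewall now briefly permits tcp/22 from 1.2.3.4, so SSH works:
$ ssh user@spa.example.com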
-
Sony sued by ex-staff over daft security, leaked privates
Aerosol posted a topic in Stiri securitate
As if Sony Pictures didn't have enough on its plate, now former employees have launched a class-action lawsuit against the Hollywood giant over the parlous state of its security – and to recoup the damage hackers have allegedly caused them. It comes as people claiming to have hacked the movie studio's servers today made bizarre threats against showings of Sony Pictures' North Korea-poking comedy flick The Interview – including references to 2001's September 11 attacks.

A whole load of new files stolen from Sony's systems by the miscreants have also been leaked via file-sharing networks. That adds to the tens of gigabytes of sensitive records – from employees' salaries, addresses and emails to credit card numbers, scripts and unreleased movies – obtained from Sony Pictures computers by hackers and dumped online.

The two lead plaintiffs in the class-action lawsuit against Sony Pictures are revealed in legal paperwork [PDF] obtained today by The Reg. Michael Corona left Sony seven years ago and claims he, his wife and his child have had attempts made to steal their identities based on personal information leaked from Sony. The other plaintiff, Christina Mathis, left Sony in 2002 but claims to have suffered the same fate due to this Sony ransacking.

The lawsuit, filed on Monday in the Central District of California, claims that Sony should have known that it was a target for hackers, particularly in light of the 2011 PlayStation Network (PSN) breach, which shut its servers down for nearly two months and led to the widespread plundering of gamers' personal information. Sony offered $15m to clear up that mess, and the lawyers in this latest case are seeking $1,000 in compensation for each former employee whose details have been leaked – which, given that over 47,000 Social Security numbers have been released, could add up to a hefty sum.

The PSN hack, and plenty of other breaches at other companies besides, show that Sony should have been more security conscious, the plaintiffs' lawyers argue. Even after such major breaches, the company was still storing critical information in plain text and without proper encryption, and Sony management made a business decision not to invest in proper security mechanisms, despite repeated warnings from IT staff, the suit claims.

Once the scale of this latest hack was uncovered, Sony management warned in an email to employees on December 2 that any and all data given to the company was at risk. The biz set staffers up with credit and identity protection the next day. But it was only on December 12, and after increasing complaints from former staff, that Sony offered the same services to some ex-employees.

The suit also points out that Sony didn't stint on countermeasures to the latest leak, seemingly using Amazon Web Services to spam out false data on torrents and trying to shut down torrenting sites seeding swiped files. It also hired a high-priced lawyer to threaten the press if they dug into the network breach.

"AWS employs a number of automated detection and mitigation techniques to prevent the misuse of our services," a spokeswoman for Amazon told El Reg. "In cases where the misuse is not detected and stopped by the automated measures, we take manual action as soon as we become aware of any misuse. Our terms are clear about this. The activity being reported is not currently happening on AWS."

The plaintiffs' legal firm, Keller Rohrback in Seattle, didn't return calls at the time of going to press, but is assumed to be looking for further former employees to sign up and sue their old bosses for compensation.

Meanwhile, on Monday Sony Pictures' chief executive and chairman Michael Lynton held a series of 20-minute meetings with groups of staff to apprise them of progress in dealing with the attacks and to reassure them about the future. "This won't take us down," he promised, the LA Times reports. "You should not be worried about the future of this studio. I am incredibly sorry that you've had to go through this."

Co-chairman Amy Pascal also addressed the meeting, apologizing for insensitive comments she made in private emails that have since been leaked. "It is your incredible efforts and perseverance that will get us through this," she said.

Source
-
Google is proposing to warn people their data is at risk every time they visit websites that do not use the "HTTPS" system. Many sites have adopted the secure version of the basic web protocol to help safeguard data. The proposal was made by the Google developers working on the search firm's Chrome browser. Security experts broadly welcomed the proposal but said it could cause confusion initially.

Scrambled data

The proposal to mark HTTP connections as non-secure was made in a message posted to the Chrome development website by Google engineers working on the firm's browser. If implemented, the developers wrote, the change would mean that a warning would pop up when people visited a site that used only HTTP, to notify them that such a connection "provides no data security". The team said it was odd that browsers currently did nothing to warn people when their data was unprotected.

HTTPS uses well-established cryptographic systems to scramble data as it travels from a user's computer to a website and back again. The team said warnings were needed because it was known that cyber thieves and government agencies were abusing insecure connections to steal data or spy on people.

Rik Ferguson, a senior analyst at security firm Trend Micro, said warning people when they were using an insecure connection was "a good idea".

(Image: Website operators might need help adopting the HTTPS system, say experts)

Letting people know when their connection to a website is insecure could drive sites to adopt more secure protocols, he said. Currently only about 33% of websites use HTTPS, according to statistics gathered by the Trustworthy Internet Movement, which monitors the way sites use more secure browsing technologies.

'Headache'

Paul Mutton, a security analyst at web monitoring firm Netcraft, also welcomed the proposal, saying it was "bizarre" that an unencrypted HTTP connection gave rise to no warnings at all. Many may resent the cost in time and money required to adopt the technology, he said, even though projects exist to make it easier and free for website administrators to use HTTPS.

The Google proposal was also floated on discussion boards for other browsers and received guarded support from the Mozilla team behind the Firefox browser and those involved with Opera. Many large websites and services, including Twitter, Yahoo, Facebook and Gmail, already use HTTPS by default. In addition, since September Google has prioritised HTTPS sites in its search rankings.

Source
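Since the story turns on the difference between plain HTTP and HTTPS, a quick way to see what the encrypted variant actually negotiates is to open a raw TLS connection with the standard openssl tool; this is a minimal sketch, and the host name is just a placeholder:

# Connect to a site over TLS and inspect what HTTPS provides
$ openssl s_client -connect www.example.com:443 -servername www.example.com
# The output includes the server's certificate chain plus lines such as:
#   Protocol : TLSv1.2
#   Cipher   : ECDHE-RSA-AES128-GCM-SHA256
# A plain HTTP connection on port 80 negotiates none of this, which is
# exactly the absence Chrome's proposed warning would flag.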
-
A 17-year-old Londoner has pleaded guilty to a series of denial-of-service attacks against internet exchanges and the Spamhaus anti-spam service last year. The teenager – who we cannot name for legal reasons – also admitted money laundering and possessing indecent images. He faces a sentencing hearing on 9 January, a police statement confirmed. Juveniles – persons aged under 18 – appearing before youth courts receive automatic anonymity under English law. The case went through London's Camberwell Green Youth Court.

The teenager was arrested and prosecuted following a series of DDoS attacks aimed at Spamhaus and content distribution network CloudFlare that ultimately affected the operation of internet exchanges. Hackers used DNS reflection to amplify the DDoS attack. Peak traffic volumes exceeded 300 Gbps, marking the assault out as the biggest DDoS seen up to that point. Despite this massive volume, the attack failed to break the internet's backbone, contrary to many early reports, as we reported at the time.

Other arrests were made in the case. These and other circumstances make it unlikely that the 17-year-old acted alone.

Source
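To see why DNS reflection amplifies traffic so effectively, consider a minimal sketch with the standard dig tool; the resolver IP and response size below are illustrative assumptions, not measurements from the attack described above:

# A DNS query is only a few dozen bytes on the wire, but an ANY query
# against a record-heavy domain can return a response of several kilobytes
$ dig ANY example.com @198.51.100.53
...
;; MSG SIZE  rcvd: 3200

# By spoofing the victim's address as the query source, attackers make
# open resolvers "reflect" these large responses at the victim, turning
# a modest amount of attacker bandwidth into hundreds of gigabits per second.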