nedo
Active Members
Posts: 2065
Joined: -
Last visited: -
Days Won: 11
Everything posted by nedo
-
Welcome. Have a look at the forum rules. @the rest of you: too much spam. Topic closed.
-
A weakness in Android, Windows, and iOS mobile operating systems could be used to obtain personal information. Researchers at the University of California Riverside Bourns College of Engineering and the University of Michigan have identified a weakness they believe exists across Android, Windows, and iOS operating systems that could allow malicious apps to obtain personal information. Although it was tested only on an Android phone, the team believes the method could be used across all three operating systems because they share a similar feature: all apps can access a mobile device's shared memory.

"The assumption has always been that these apps can't interfere with each other easily," said Zhiyun Qian, an associate professor at UC Riverside. "We show that assumption is not correct and one app can in fact significantly impact another and result in harmful consequences for the user."

To demonstrate the method of attack, a user must first download an app that appears benign, such as a wallpaper app, but actually contains malicious code. Once it is installed, the researchers can use it to access the shared memory statistics of any process, which doesn't require any special privileges. They then monitor the changes in this shared memory and correlate those changes with various activities -- such as logging into Gmail or H&R Block, or taking a picture of a cheque to deposit it online via Chase Bank -- the three apps that were most vulnerable to the attack, with success rates of 82 to 92 percent. Using a few other side channels, the team was able to accurately track what a user was doing in real time.

In order to pull off a successful attack, two things need to happen: first, the attack needs to take place at the exact moment the user is performing the action; second, the attack needs to be conducted in such a way that the user is unaware of it. The team managed this by carefully timing the attacks. "We know the user is in the banking app, and when he or she is about to log in, we inject an identical login screen," said electrical engineering doctoral student Qi Alfred Chen from the University of Michigan. "It's seamless because we have this timing."

Of the seven apps tested, Amazon was the hardest to crack, with a 48 percent success rate. This is because the app allows one activity to transition into another activity, making it harder to guess what the user will do next. To guard against such attacks, Qian suggested, "Don't install untrusted apps," adding that users should also be wary of the information access requested by apps at installation.

The team will present its paper, "Peeking into Your App without Actually Seeing It: UI State Inference and Novel Android Attacks" (PDF), at the USENIX Security Symposium in San Diego on August 23. You can watch some short videos of the attacks in action below.

Source: here. The title is a bit far-fetched.
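The core trick is simply that per-process shared-memory statistics are world-readable. As a rough illustration of the signal involved (my own sketch, not the researchers' code), a Linux/Android process can poll the "shared" field of /proc/<pid>/statm for another process and watch it change as that app redraws its UI:

```python
import sys
import time

def shared_pages(pid: int) -> int:
    """Return the 'shared' field of /proc/<pid>/statm (resident shared pages)."""
    with open(f"/proc/{pid}/statm") as f:
        # Fields: size resident shared text lib data dt
        return int(f.read().split()[2])

def watch(pid: int, interval: float = 0.05) -> None:
    """Print changes in shared memory use; spikes tend to coincide with UI transitions."""
    prev = shared_pages(pid)
    while True:
        time.sleep(interval)
        cur = shared_pages(pid)
        if cur != prev:
            print(f"{time.time():.3f}: shared pages {prev} -> {cur}")
            prev = cur

if __name__ == "__main__":
    watch(int(sys.argv[1]))  # usage: python watch_shm.py <pid>
```

Correlating those deltas with known activity patterns is the part the researchers automate; the point is that no special permission is needed to read them.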
-
This is nothing new; messages like this have been sent for years. It's a scam, nothing more. A rule of thumb worth remembering: "If it sounds too good to be true, in most cases it isn't true."
-
You can see the tool in action here. In short, it tests several types of MySQL injection in order to extract various data (MySQL version, tables, existing databases).
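As a rough sketch of what such a tester does first (this is illustrative, not the tool's actual code, and the URL and parameter below are placeholders), a boolean-based check sends an always-true and an always-false condition and compares the responses; a difference suggests the parameter is injectable, after which version, database and table names can be pulled out with further payloads:

```python
import requests

TRUE_COND = "1' AND '1'='1"
FALSE_COND = "1' AND '1'='2"

def looks_injectable(url: str, param: str) -> bool:
    """Boolean-based probe: an injectable parameter usually returns different
    page content for an always-true versus an always-false condition."""
    true_resp = requests.get(url, params={param: TRUE_COND}, timeout=10)
    false_resp = requests.get(url, params={param: FALSE_COND}, timeout=10)
    return true_resp.text != false_resp.text

if __name__ == "__main__":
    # Placeholder target, for illustration only.
    print(looks_injectable("http://example.com/item.php", "id"))
```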
-
User banned. The only person who has the right to sell that application is @danyweb09, the one who created it.
-
No, it's not worth it. The script is incomplete, and more than that, it was given to him "in good faith" for his own use only, not to be sold. He does not have the script author's permission to sell it. Can you tell me who sent you the message?
-
It's not coming from GRUB -- at least I don't think so. That message says it can't find any bootable medium. Does GRUB show the location of the Ubuntu image correctly?
-
Microsoft has announced the release of the Visual Studio '14' CTP 3 for download, along with an early build of the .NET Framework vNext. If you have been following along recently, you will know that the Visual Studio team has been releasing updates at an extremely fast pace -- so quickly, in fact, that you may have missed the last release that came out earlier this month. This release comes with quite a few enhancements as well, and we have posted some of them below.

• ASP.NET and Web Development vNext Updates. This CTP includes all the Visual Studio 2013 Update 3 web tooling improvements and the ASP.NET vNext alpha 3 runtime packages. It has improved tooling support for ASP.NET vNext, such as support for build configuration and support for unit tests, and it no longer includes content and compile items inside the ".kproj" file. ASP.NET vNext includes an updated version of the RyuJIT JIT compiler. For details, please read the full post on the .NET Web Development Tools blog.
• .NET Native Updates. .NET Native is now integrated into Visual Studio "14." It includes initial support for calling WCF services within .NET Native apps and the associated Add Service Reference experience in Visual Studio.
• PerfTips in the Debugger. In CTP 3 you can see how long your code took to execute as you hit breakpoints and step through code with the debugger. Simply look at the end of the current line when you are stopped in the debugger to see the performance tooltip. For more information, read the dedicated post on PerfTips on the diagnostics blog.
• Shared Projects. With CTP 3, you can create an empty C#, VB, or JavaScript shared project from the "File > New Project" menu. Windows Store/Phone projects written in C#/VB/JavaScript, as well as some classic desktop projects (Console Application, Class Library, Windows Forms Application, Portable Class Library, WPF) written in C#/VB, can consume one or more of these shared projects.

You can head to the source link below to see the full list of changes and more details about the update. As with any pre-release software, you should avoid installing it on a production machine, and it should be limited to testing purposes only. While this may be a later-stage release, it could still have bugs that cause unwanted side effects in a production environment.

Source: | Download: Visual Studio '14' CTP 3

Article source: here
-
Proposals have turned into major language changes for Java lately, and Python is poised to follow suit. Programmer Guido van Rossum, author of the Python programming language, published a proposal on the Python mailing list suggesting that Python should learn from languages such as Haskell and adopt the function-annotation syntax used by mypy, an experimental Python variant. Mypy, a static Python type checker, finds type errors without running the program by using custom syntax to add type annotations -- but that custom syntax is actually valid Python 3 code.

"The goal is to make it possible to add type checking annotations to third-party modules (and even to the stdlib) while allowing unaltered execution of the program by the (unmodified) Python 3.5 interpreter," van Rossum wrote. "The actual type checker will not be integrated with the Python interpreter, and it will not be checked into the CPython repository."

Van Rossum's proposal listed several reasons why adding type annotations in Python is a good idea:

• Editors/IDEs: Type annotations would call out misspelled method names or inapplicable operations and suggest possible names.
• Linter capabilities: Mypy annotations work much like a linter, which finds certain types of code errors more quickly.
• Standard notation: Developing a standard notation for type annotations will reduce the amount of documentation that needs to be written. Once a standard type annotation syntax is introduced, van Rossum said it should be simple to add support for this notation to documentation generators like Sphinx.
• Refactoring: Type annotations help in manually refactoring code as well as certain automatic refactoring. "Imagine a tool like 2to3 (but used for some other transformation) augmented by type annotations, so it will know whether [for example] x.keys() is referring to the keys of a dictionary or not," van Rossum wrote.

While explaining all the reasons why type annotations would be useful in Python, he made clear that the language will not technically be adopting standard rules for type checking. "Fully specifying all the type-checking rules would make for a really long and boring PEP [Python Enhancement Proposal]," van Rossum wrote. "The worst that can happen is that you consider your code correct but mypy disagrees; your code will still run. That said, I don't want to completely leave out any specification... Maybe in the distant future a version of Python will take a different stance, once we have more experience with how this works out in practice, but for Python 3.5 I want to restrict the scope of the upheaval."

Van Rossum believed it's possible (if the proposal is approved) to implement this feature in time for Python 3.5. His entire proposal is available here.

See more at: Python creator proposes type annotations for programming language « SD Times
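For a flavor of what such annotations look like, here is a small, made-up example in the style mypy checks: ordinary Python 3 annotations that the unmodified interpreter ignores at runtime, while an external checker can flag a call with the wrong argument type:

```python
from typing import List

def average(values: List[float], default: float = 0.0) -> float:
    """Return the arithmetic mean of values, or default if the list is empty."""
    if not values:
        return default
    return sum(values) / len(values)

print(average([1.0, 2.0, 3.0]))  # runs normally and prints 2.0
# average("oops")  # a checker such as mypy flags this; the plain interpreter only fails at runtime
```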
-
It might not be the end of the world, but the design of systemd and the attitudes of its developers have been counterproductive. Now that Red Hat has released RHEL 7 with systemd in place of the erstwhile SysVinit, it appears that the end of the world is indeed approaching. A schism and war of egos is unfolding within the Linux community right now, and it is drawing blood on both sides. Ultimately, no matter who "wins," Linux looks to lose this one.

The idea behind systemd was to replace the aged Init functionality and provide a sleek, common system initialization framework that could be standardized across multiple Linux distributions. systemd promised to speed up system boot times, better handle race conditions, and in general improve upon an item that wasn't exactly broken, but wasn't as efficient as it could be.

As an example, you might be able to produce software that could compile and run on numerous Linux distributions, but if it had to start at boot time, you could be required to write several different Init-style boot scripts, one for each supported distribution. Clearly this is inelegant and could use improvement. Also, there was the problem that traditional Init is slow and bulky, based on shell scripts and somewhat random text configuration files. This is a problem on systems that need to boot as fast as possible, like embedded Linux systems, but is much less of a problem on big iron that takes longer to count RAM in POST than it does to boot to a log-in prompt. However, it's hard to argue that providing accelerated boot times for Linux across the board is not a good thing.

These are all laudable goals, and systemd wasn't the first project aimed at achieving them. It is, however, the first such project to gain mainstream acceptance. This is in no small part due to the fact that the main developers of systemd are employed by Red Hat, which is still the juggernaut of Linux distributions. Red Hat exerted its considerable force on the Linux world. Thus, we saw systemd take over Fedora, essentially become a requirement to run the GNOME desktop, then become an inextricable part of a significant number of other distributions (notably not the "old guard" distributions such as Gentoo). Now you'd be hard-pressed to find a distribution that doesn't have systemd in the latest release (Debian doesn't really use systemd, but still requires systemd-shim and CGManager).

While systemd has succeeded in its original goals, it's not stopping there. systemd is becoming the Svchost of Linux -- which I don't think most Linux folks want. You see, systemd is growing, like wildfire, well outside the bounds of enhancing the Linux boot experience. systemd wants to control most, if not all, of the fundamental functional aspects of a Linux system -- from authentication to mounting shares to network configuration to syslog to cron. It wants to do so as essentially a monolithic entity that obscures what's happening behind the scenes. No matter which side of the argument you're on, this monolithic approach is in violation of the rules of Unix, specifically the rule stating it's best to have small tools that do one job perfectly rather than one large tool that is mediocre at performing many jobs.
Prior to this, all the functions subsumed by systemd were accomplished by assembling small tools in such a way that they performed the desired function. These same tools could be used within a variety of other scripts to perform myriad tasks -- there was no singular way to do anything, which allowed for extreme freedom to address and fix problems. It also allowed for poor implementations of some functions, simply because they were poorly assembled. You can't have both, after all.

That's not the end of the story. There's more happening with systemd than many might realize. First, systemd is rather inelegantly designed. While there are many defensible aspects of systemd, other aspects boggle the mind. Not the least of these was that, as of a few months ago, trying to debug the kernel from the boot line would cause the system to crash. This was because of systemd's voracious logging and the fact that systemd responds to the "debug" flag on the kernel boot line -- a flag meant for the kernel, not anything else. That, straight up, is a bug. However, the systemd developers didn't see it that way and actively fought with those experiencing the problem. Add the fact that one of the systemd developers was banned by Linus Torvalds for poor attitude and bad design, and another was responsible for causing significant issues with Linux audio support, and you have a bad situation on your hands.

There's no shortage of egos in the open source development world. There's no shortage of new ideas and veteran developers and administrators pooh-poohing something new simply because it's new. But there are also 45 years of history behind Unix and extremely good reasons it's still flourishing. Tools designed like systemd do not fit the Linux mold, to their own detriment. Systemd's design has more in common with Windows than with Unix -- down to the binary logging.

My take is that systemd is a good idea poorly implemented, developed by people with enormous egos who firmly believe they can do no wrong. As it stands now, both systemd and the developers responsible for it need to change. In the open source world, change is a constant and sometimes violent process, and upheavals around issues such as systemd aren't necessarily bad. That said, these battles cannot be drawn out forever without causing irreparable harm -- and any element as integral to the stability and functionality of Linux as systemd has even less time than most.

Article taken from here
-
C++14 is done! Following the Issaquah meeting in February, we launched the Draft International Standard (DIS) ballot for the next C++ standard. That ballot closed on Friday. Today, we received the notification that the ballot was unanimously successful, and therefore we can proceed to publication. We will perform some final editorial tweaks, on the order of fixing a few spelling typos and accidentally dropped words, and then transmit the document to ISO for publication this year as the brand new International Standard ISO/IEC 14882:2014(E) Programming Language C++, a.k.a. C++14.

C++ creator Bjarne Stroustrup writes: "C++14 was delivered on schedule and implementations are already shipping by major suppliers. This is exceptional! It is a boon to people wanting to use C++ as a modern language."

Thanks very much to our tireless C++14 project editor Stefanus DuToit and his helpers, and to all the members of the C++ standards committee, for bringing in this work on time and at high quality with a record low number of issues and corrections in the CD and DIS ballots!

Not only is this the fastest turnaround for a new standard in the history of C++, but as Bjarne noted it is historic in another way: there are already multiple substantially or entirely conforming implementations (modulo bugs) of C++14 available today or coming in the near future -- at the same time C++14 is published. That has never happened before for a C++ (or, I believe, C) standard. For C++98, the delta between publishing the standard and the first fully conforming implementation being available was about 5 years. For C++11, it was two years. For C++14, the two have merged and we have achieved "time on target."

Thanks again, everyone. This was a team effort.

Source: here. I suggest you visit the link with the draft.
-
C++ inventor details the language's latest changes and assesses the strengths and weaknesses of its competitors.

Bjarne Stroustrup designed the C++ language in 1979, and the general-purpose language for systems programming has become a mainstay for developers everywhere, despite competition from Java, JavaScript, Python, Go, and Apple's newly unveiled Swift. Now a technologist at Morgan Stanley and a professor at both Columbia University and Texas A&M University, Stroustrup spoke with InfoWorld Editor at Large Paul Krill about C++'s role today and about other happenings in software development, including Google's Go and Apple's Swift languages.

InfoWorld: Where do you see the role of C++ today, when you have popular scripting languages like Python and JavaScript along with languages like Java and even Google's Go? How does C++ manage to survive, thrive, and grow in such a diverse landscape with all these different languages?

Stroustrup: That's a good question. People have been predicting its demise quite enthusiastically for more than 20 years, but it's still growing. Basically, nothing that can handle complexity runs as fast as C++. If you go to some embedded areas, if you go to image processing, if you go to some telecom applications, if you go to some financial applications, C++ rules. You don't see it much if you're into looking at apps and such; that's not where you find it. It's things like Google, Amazon, search engines, where you really need performance -- that's where it is.

InfoWorld: Google's Go language is getting attention lately. What's your perspective on Google Go?

Stroustrup: It seems to be one of these languages that can do a few things elegantly, [but languages] focused on doing those things elegantly lose the edge in performance and lose a little bit in generality. But of course, we have to see what happens.

InfoWorld: Some of these new scripting languages are intended for easy consumption by developers. Would you say C++ requires more attention than that?

Stroustrup: Oh, definitely. C++ is designed for fairly hardcore applications, and it's always been used together with some scripting language or other. When I started, I used C++ for anything that required a real programming language and real performance. Then I used the Unix shell as my scripting language. That was how it [was done], and that's also the way things are done in most cases today. [C++ is for] high performance, high reliability, small footprint, low energy consumption, all of these good things. I'm not saying hobbyists, I'm not saying quick apps. That's not our domain.

InfoWorld: Apple debuted its Swift language on June 2. Do you think the fact that it has Apple's backing means it's going to be a significant language that developers are going to have to pay attention to?

Stroustrup: I think so. They paid attention to Objective-C, and now Swift is moving into that exact domain again.

Article continues here
-
Microsoft is aiming to deliver a "technology preview" of its Windows "Threshold" operating system by late September or early October, according to multiple sources of mine who asked not to be named. And in a move that signals where Microsoft is heading on the "serviceability" front, those who install the tech preview will need to agree to have subsequent monthly updates to it pushed to them automatically, sources added.

Threshold is the next major version of Windows and is expected to be christened "Windows 9" when it is made available in the spring of 2015. Threshold is expected to include a number of new features aimed at continuing to improve Windows' usability on non-touch devices and for those using mice and keyboards alongside touch. Among those features -- according to previous leaks -- are a new "mini" Start Menu; windowed Metro-style applications that can run on the Desktop; virtual desktops; and the elimination of the Charms bar that debuted as part of Windows 8. Cortana integration with Windows Threshold is looking like it could make it into the OS as well.

I've asked Microsoft officials for comment. To date, Microsoft execs have declined to comment on what will be in Threshold, when it will be available, how much it will cost or what it will be named.

When Microsoft was working on Windows 8, the company delivered three external "milestones" before making the operating system generally available in October 2012. First there was a Windows 8 developer preview, which Microsoft released on September 13, 2011, followed by a Windows 8 "consumer" preview on February 29, 2012. The operating system was released to manufacturing on August 1, 2012. These days, Microsoft's operating system team is on a more rapid release schedule, so I'd think there won't be five or six months between any Threshold milestone builds Microsoft plans to make available externally.

I had heard previously from my contacts that Microsoft was aiming to make a public preview of Threshold available to anyone interested toward the end of calendar 2014. I'm not sure if there's still a plan to make a public consumer preview available at that time or if this "technical preview" is the only "preview" Microsoft will release before Threshold is released to manufacturing.

Update: One of my contacts who has provided accurate information on Windows in the past said the Threshold tech preview will be public and available to all those interested.

Source: here
-
When I first started reading Ars Technica, performance of a processor was measured in megahertz, and the major manufacturers were rushing to squeeze as many of them as possible into their latest silicon. Shortly thereafter, however, the energy needs and heat output of these beasts brought that race crashing to a halt. More recently, the number of processing cores rapidly scaled up, but they quickly reached the point of diminishing returns. Now, getting the most processing power for each watt seems to be the key measure of performance.

None of these things happened because the companies making processors ran up against hard physical limits. Rather, computing power ended up being constrained because progress in certain areas -- primarily energy efficiency -- was slow compared to progress in others, such as feature size. But could we be approaching physical limits in processing power? In this week's edition of Nature, the University of Michigan's Igor Markov takes a look at the sorts of limits we might face.

Clearing hurdles

Markov notes that, based on purely physical limitations, some academics have estimated that Moore's law had hundreds of years left in it. In contrast, the International Technology Roadmap for Semiconductors (ITRS), a group sponsored by the major semiconductor manufacturing nations, gives it a couple of decades. And the ITRS can be optimistic; it once expected that we would have 10GHz CPUs back in the Core 2 days.

The reason for this discrepancy is that a lot of hard physical limits never come into play. For example, the ultimate size limit for a feature is a single atom, which represents a hard physical limit. But well before you reach single atoms, physics limits the ability to accurately control the flow of electrons. In other words, circuits could potentially reach single-atom thickness, but their behavior would become unreliable before they got there. In fact, a lot of the current work Intel is doing to move to ever-smaller processes involves figuring out how to structure individual components so that they continue to function despite these issues.

The gist of Markov's argument seems to be that although hard physical limits exist, they're often not especially relevant to the challenges that are impeding progress. Instead, what we have are softer limits, ones that we can often work around. "When a specific limit is approached and obstructs progress, understanding its assumptions is a key to circumventing it," he writes. "Some limits are hopelessly loose and can be ignored, while other limits remain conjectural and are based on empirical evidence only; these may be very difficult to establish rigorously."

As a result, things that seem like limits are often overcome by a combination of creative thinking and improved technology. The example Markov cites is the diffraction limit. Initially, this limit should have kept the argon-fluorine lasers we use from etching any features finer than 65 nanometers. But by using sub-wavelength diffraction, we're currently working on 14nm features using the same laser.

Where are the current limits?

Markov focuses on two issues he sees as the largest limits: energy and communication. The power consumption issue comes from the fact that the amount of energy used by existing circuit technology does not shrink in a way that's proportional to its shrinking physical dimensions.
The primary result of this issue has been that lots of effort has been put into making sure that parts of the chip get shut down when they're not in use. But at the rate this is happening, the majority of a chip will have to be kept inactive at any given time, creating what Markov terms "dark silicon." Power use is proportional to the chip's operating voltage, and transistors simply cannot operate below a 200-millivolt level. Right now, we're at about five times that, so there's potential for improvement there. But progress in lowering operating voltages has slowed, so we may be at another point where we've run into a technological roadblock prior to hitting a hard limit of physics.

The energy use issue is related to communication, in that most of the physical volume of a chip, and most of its energy consumption, is spent getting different areas to communicate with each other or with the rest of the computer. Here, we really are pushing physical limits. Even if signals in the chip were moving at the speed of light, a chip running above 5GHz wouldn't be able to transmit information from one side of the chip to the other. The best we can do with current technology is to try to design chips such that areas that frequently need to communicate with each other are physically close to each other. Extending more circuitry into the third dimension could help a bit -- but only a bit.

Article continues here
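A quick back-of-the-envelope check of that last claim (my own arithmetic, not from the article): even at the speed of light in vacuum, a signal covers only centimetres per clock cycle at multi-gigahertz frequencies, and real on-chip signals over RC-limited, non-straight wires travel far slower than that, which is why the practical cross-chip limit arrives even sooner.

```python
C = 299_792_458  # speed of light in vacuum, m/s

for ghz in (1, 3, 5, 10):
    period_s = 1.0 / (ghz * 1e9)       # duration of one clock cycle, in seconds
    distance_mm = C * period_s * 1000  # distance light covers in one cycle, in mm
    print(f"{ghz:>2} GHz: cycle = {period_s * 1e12:.0f} ps, "
          f"light travels at most {distance_mm:.0f} mm")
```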
-
Details of a new NSA program have emerged from Wired's meticulous Snowden profile this morning. As part of the interview, Snowden described an ongoing NSA project called MonsterMind, planned as a new cyberdefense capability. The system would scan web metadata for signs of an attack in progress, then respond automatically to blunt the attack and potentially even retaliate. The program is still in development and there is no information on if or when it might be deployed, but once put in action, it would represent a huge shift toward American control over the internet, effectively stopping any traffic the NSA deems malicious.

It isn't the first time someone has proposed stamping out malware by monitoring network traffic -- the SecDev group took a similar approach with its ZeroPoint project -- but with a network-level view of most of the traffic traveling over the web, the NSA is uniquely positioned to pull it off. Still, the development of the program raises a number of difficult questions. If MonsterMind is launching automatic counterattacks, how will it prevent collateral damage against intermediary machines caught up in botnet attacks? More importantly, is MonsterMind's protection enough to justify the NSA's continued access to most of the activity on the web?
-
The machines are taking over. Or they will, if we keep teaching machines to think for themselves. And we can't seem to stop.

Two years back, GigaOm's Derrick Harris opined that "it's difficult to imagine a new tech company launching that doesn't at least consider using machine learning models to make its product or service more intelligent." And that's true. But engineers at Google, Twitter and new startups have largely been forced to roll their own machine learning libraries and systems. What's been missing are open-source projects that provide essential building blocks for easily embedding machine learning into applications. The Apache Software Foundation has sought to change this with Apache Mahout, and now PredictionIO just raised $2.5 million in an effort to take open-source machine learning even further. I sat down with PredictionIO founder Simon Chan to better understand the market and why open source matters in the complex world of machine learning.

Making Machine Learning Simple

ReadWrite: You call yourself the "MySQL of prediction." What does that mean?

Simon Chan: Before the birth of MySQL, database management systems (think Oracle, DB2, etc.) were largely inaccessible to many developers and companies. Such systems are complex, expensive and proprietary. MySQL has rewritten the history of the relational database industry. It allows every website and application, regardless of size, to be powered by a database server. The current world of machine learning is similar to the old days of the database industry. Machine learning is still inaccessible to most companies and developers. The cost of development and maintenance of machine learning infrastructure is extremely high. Companies like Google, LinkedIn and Twitter spend huge amounts of money to recruit data scientists. PredictionIO, as MySQL did for the database industry, can be the machine learning server behind every application. It is 100% open source, developer-friendly and production-ready.

RW: Machine learning sounds great, but historically it hasn't worked as advertised, or it's required extensive engineering resources to pull off. What does PredictionIO do differently?

SC: We believe that every prediction problem is unique; therefore, most black-box machine learning solutions don't work as planned. PredictionIO makes the life of developers easier by handling a lot of heavy lifting, such as algorithm evaluation and distributed deployment. It also comes with a number of built-in predictive engines for developers to use right away. But more importantly, PredictionIO is a customizable open-source product. This means that developers can optimize and improve the predictive engines whenever they need to.

Article continues here
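PredictionIO exposes its engines through a server API, but as a rough idea of the kind of "predictive engine" being discussed, here is a tiny sketch (using scikit-learn rather than PredictionIO, with invented toy data) of training a model and scoring new cases from inside an application:

```python
from sklearn.linear_model import LogisticRegression

# Toy training data: [pages_viewed, minutes_on_site] -> did the visitor buy anything?
X = [[1, 2], [2, 1], [8, 15], [12, 30], [3, 4], [10, 22]]
y = [0, 0, 1, 1, 0, 1]

model = LogisticRegression()
model.fit(X, y)

# Embedded in an application, a request handler could score visitors directly.
print(model.predict([[9, 18]]))       # predicted class for an engaged visitor
print(model.predict_proba([[2, 3]]))  # class probabilities for a casual visitor
```

The pitch of projects like PredictionIO and Mahout is to put this kind of model behind a reusable, deployable server instead of leaving it baked into each application.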
-
Simple: people are looking for quick ways to "make money" and are willing to pay for them. Supply and demand -- except the supply is fake. If you really wanted to help someone with these courses, you would simply pick two or three broke guys from the forum who need the money and have their heads set on learning, and teach them. Or, if you wanted credibility, you would talk to two or three VIPs/moderators, explain the method to them, and they could vouch for you. What you are trying to do has been done before on the internet and will keep being done: X sells a guaranteed method of making amount Y, and since he's such a nice guy, he sells the method to anyone willing to pay. Don't you think that if such a method really existed, he would keep it as well hidden as possible so it wouldn't be abused by everyone and eventually stop working? So drop the "money-making methods." I suggest you try selling this method somewhere else, before you get banned.
-
Hi. We do not encourage script-kiddie behavior on this forum. What you want is, for the most part, nonsense. Moreover, there is no efficient "program" for this, because in order to "flood" someone you need more bandwidth than they have, so that you can send them more packets than their internet connection can receive. I also don't recommend doing this -- you'd only be proving your own immaturity, and you won't get a positive answer on this forum.
-
Scientists reconstruct speech through soundproof glass by watching a bag of potato chips: subtle vibrations can be translated back into audio.

Your bag of potato chips can hear what you're saying. Now, researchers from MIT are trying to figure out a way to make that bag of chips tell them everything that you said -- and apparently they have a method that works. By pointing a video camera at the bag while audio is playing or someone is speaking, researchers can detect tiny vibrations in it that are caused by the sound. When later playing back that recording, MIT says that it has figured out a way to read those vibrations and translate them back into music, speech, or seemingly any other sound.

While a bag of chips is one example of where this method can be put to work, MIT has found success with it elsewhere, including when watching plant leaves and the surface of a glass of water. While the vibrations that the camera is picking up aren't observable to the human eye, seemingly anything observable to a camera can work here. For the most part the researchers used a high-speed camera to pick up the vibrations, even using it to detect them on a potato chip bag filmed 15 feet away and through a pane of soundproof glass. Even without a high-speed camera, though, researchers were able to use a common digital camera to pick up basic audio information.

"We're scientists, and sometimes we watch these movies, like James Bond, and we think, 'This is Hollywood theatrics. It's not possible to do that. This is ridiculous.' And suddenly, there you have it," Alexei Efros, a University of California at Berkeley researcher, says in a statement. "This is totally out of some Hollywood thriller. You know that the killer has admitted his guilt because there's surveillance footage of his potato chip bag vibrating."

The research is being described in a paper that will be published at the computer graphics conference Siggraph.

Source: here
-
The HTTP/2 protocol will speed Web delivery, though it also may put more strain on Web servers.

IDG News Service - When it comes to speeding up Web traffic over the Internet, sometimes too much of a good thing may not be such a good thing at all. The Internet Engineering Task Force is putting the final touches on HTTP/2, the second version of the Hypertext Transport Protocol (HTTP). The working group has issued a last call draft, urging interested parties to voice concerns before it becomes a full Internet specification.

Not everyone is completely satisfied with the protocol, however. "There is a lot of good in this proposed standard, but I have some deep reservations about some bad and ugly aspects of the protocol," wrote Greg Wilkins, lead developer of the open source Jetty server software, noting his concerns in a blog item posted Monday. Others, however, praise HTTP/2 and say it is long overdue. "A lot of our users are experimenting with the protocol," said Owen Garrett, head of products for server software provider NGINX. "The feedback is that generally, they have seen big performance benefits."

First created by Web originator Tim Berners-Lee and associates, HTTP quite literally powers today's Web, providing the language for a browser to request a Web page from a server. Version 2.0 of HTTP, based largely on the SPDY protocol developed by Google, promises to be a better fit for how people use the Web. "The challenge with HTTP is that it is a fairly simple protocol, and it can be quite laborious to download all the resources required to render a Web page. SPDY addresses this issue," Garrett said. While the first generation of Web sites were largely simple and relatively small, static documents, the Web today is used as a platform for delivering applications and bandwidth-intensive real-time multimedia content.

HTTP/2 speeds up basic HTTP in a number of ways. HTTP/2 allows servers to send all the different elements of a requested Web page at once, eliminating the serial sets of messages that have to be sent back and forth under plain HTTP. HTTP/2 also allows the server and the browser to compress HTTP headers, which cuts the amount of data that needs to be communicated between the two. As a result, HTTP/2 "is really useful for organizations with sophisticated Web sites, particularly when their users are distributed globally or using slower networks -- mobile users, for instance," Garrett said.

While enthusiastic about the protocol, Wilkins did have several areas of concern. For instance, HTTP/2 could make it more difficult to incorporate new Web protocols, most notably the communications protocol WebSocket, Wilkins asserted. Wilkins noted that HTTP/2 blurs what were previously two distinct layers of HTTP -- the semantic layer, which describes functionality, and the framing layer, which is the structure of the message. The idea is that it is simpler to write protocols for a specification with discrete layers. The protocol also makes it possible to hide content, including malicious content, within the headers, bypassing the notice of today's firewalls, Wilkins said.

HTTP/2 could also put a lot more strain on existing servers, Wilkins noted, given that they will now be fielding many more requests at once. HTTP/2 "clients will send requests much more quickly, and it is quite likely you will see spikier traffic as a result," Garrett agreed. As a result, a Web application, if it doesn't already rely on caching or load balancing, may have to do so with HTTP/2, Garrett said.
The SPDY protocol is already used by almost 1 percent of all websites, according to an estimate from the survey company W3Techs. NGINX has been a big supporter of SPDY and HTTP/2, which is not surprising given that the company's namesake server software was designed for high-traffic websites. Approximately 88 percent of sites that offer SPDY do so with NGINX, according to W3Techs.

Yet NGINX has characterized SPDY to its users as "experimental," Garrett said, largely because the technology is still evolving and hasn't been nailed down yet by the formal specification. "We're really looking forward to when the protocol is rubber-stamped," Garrett said. Once HTTP/2 is approved, "we can recommend it to our customers with confidence," Garrett said.

Joab Jackson covers enterprise software and general technology breaking news for The IDG News Service. Follow Joab on Twitter at @joab_Jackson. Joab's e-mail address is Joab_Jackson@idg.com

Source: here
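As a small client-side illustration (using the Python httpx library, which post-dates this article and is chosen purely for convenience; the URL is a placeholder), negotiating HTTP/2 is a one-flag change, and the multiplexing and header compression happen transparently underneath:

```python
import httpx

# HTTP/2 support is an optional extra: pip install "httpx[http2]"
with httpx.Client(http2=True) as client:
    # Placeholder URL -- substitute any server that speaks HTTP/2.
    response = client.get("https://example.org/")
    print(response.http_version)  # "HTTP/2" if negotiated, otherwise "HTTP/1.1"
    print(response.status_code)
```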
-
Last week MS Press published a free ebook based on the Building Real-World Apps using Azure talks I gave at the NDC and TechEd conferences. The talks and the book walk through a patterns-based approach to building real-world cloud solutions, and help make it easier to understand how to be successful with cloud development.

Videos of the Talks
You can watch a video recording of the talks I gave here:
- Part 1: Building Real World Cloud Apps with Azure
- Part 2: Building Real World Cloud Apps with Azure

eBook Downloads
You can now download a completely free PDF, Mobi or ePub version of the ebook based on the talks using the links below:
- Download the PDF (6.35 MB)
- Download the EPUB file (12.3 MB)
- Download the Mobi for Kindle file (22.7 MB)

Hope this helps,
Scott

Source: here
-
When creators of the state-sponsored Stuxnet worm used a USB stick to infect air-gapped computers inside Iran's heavily fortified Natanz nuclear facility, trust in the ubiquitous storage medium suffered a devastating blow. Now, white-hat hackers have devised a feat even more seminal -- an exploit that transforms keyboards, Web cams, and other types of USB-connected devices into highly programmable attack platforms that can't be detected by today's defenses.

Dubbed BadUSB, the hack reprograms embedded firmware to give USB devices new, covert capabilities. In a demonstration scheduled at next week's Black Hat security conference in Las Vegas, a USB drive, for instance, will take on the ability to act as a keyboard that surreptitiously types malicious commands into attached computers. A different drive will similarly be reprogrammed to act as a network card that causes connected computers to connect to malicious sites impersonating Google, Facebook or other trusted destinations. The presenters will demonstrate similar hacks that work against Android phones when attached to targeted computers. They say their technique will work on Web cams, keyboards, and most other types of USB-enabled devices.

"Please don't do anything evil"

"If you put anything into your USB [slot], it extends a lot of trust," Karsten Nohl, chief scientist at Security Research Labs in Berlin, told Ars. "Whatever it is, there could always be some code running in that device that runs maliciously. Every time anybody connects a USB device to your computer, you fully trust them with your computer. It's the equivalent of [saying] 'here's my computer; I'm going to walk away for 10 minutes. Please don't do anything evil.'"

In many respects, the BadUSB hack is more pernicious than simply loading a USB stick with the kind of self-propagating malware used in the Stuxnet attack. For one thing, although the Black Hat demos feature only USB2 and USB3 sticks, BadUSB theoretically works on any type of USB device. And for another, it's almost impossible to detect a tampered device without employing advanced forensic methods, such as physically disassembling and reverse engineering the device. Antivirus scans will turn up empty. Most analysis short of sophisticated techniques relies on the firmware itself, and that can't be trusted. "There's no way to get the firmware without the help of the firmware, and if you ask the infected firmware, it will just lie to you," Nohl explained.

Most troubling of all, BadUSB-corrupted devices are much harder to disinfect. Reformatting an infected USB stick, for example, will do nothing to remove the malicious programming. Because the tampering resides in the firmware, the malware can be eliminated only by replacing the booby-trapped device software with the original firmware. Given the possibility that traditional computer malware could be programmed to use BadUSB techniques to infect any attached devices, the attack could change the entire regimen currently used to respond to computer compromises. "The next time you have a virus on your computer, you pretty much have to assume your peripherals are infected, and computers of other people who connected to those peripherals are infected," Nohl said. He said the attack is similar to boot sector infections affecting hard drives and removable storage. A key difference, however, is that most boot sector compromises can be detected by antivirus scans. BadUSB infections cannot.
The Black Hat presentation, titled "BadUSB -- on accessories that turn evil," is slated to provide four demonstrations, three of which target controller chips manufactured by Phison Electronics. They include:

- Transforming a brand-name USB stick into a computer keyboard that opens a command window on an attached computer and enters commands that cause it to download and install malicious software. The technique can easily work around the standard user access control in Windows, since the protection requires only that users click OK.
- Transforming a brand-name USB stick into a network card. Once active, the network card causes the computer to use a domain name system server that causes computers to connect to malicious sites impersonating legitimate destinations.
- Programming a brand-name USB stick to surreptitiously inject a payload into a legitimate Ubuntu installation file. The file is loaded onto the drive when attached to one computer. The tampering happens only after it is plugged into a separate computer that has no operating system present on it. The demo underscores how even using a trusted computer to verify the cryptographic hash of a file isn't adequate protection against the attack.
- Transforming an Android phone into a malicious network card.

Remember badBIOS?

The capabilities of BadUSB closely resemble the mysterious badBIOS malware security consultant Dragos Ruiu said repeatedly infected his computers. Nine months after Ars reported security researchers were unable to independently reproduce his findings, that remains the case. Still, Nohl said BadUSB confirms that the badBIOS phenomena Ruiu described is technically feasible. "Everything Dragos postulated is entirely possible with reasonable effort," Nohl said. "I'm pretty sure somebody is doing it already. This is something that's absolutely possible."

No easy fix

Nohl said there are few ways ordinary people can protect themselves against BadUSB attacks, short of limiting the devices that get attached to a computer to those that have remained in the physical possession of a trusted party at all times. The problem, he said, is that USB devices were never designed to prevent the types of exploits his team devised. By contrast, peripherals based on the Bluetooth standard contain cryptographic locks that can only be unlocked through a time-tested pairing process. The other weakness that makes BadUSB attacks possible is the lack of cryptographic signing requirements when replacing device firmware. The vast majority of USB devices will accept any firmware update they're offered. Programming them in the factory to accept only those updates authorized by the manufacturer would go a long way toward preventing the attacks. But even then, devices might be vulnerable to the same types of rooting attacks people use to jailbreak iPhones. Code signing would likely also drive up the cost of devices. "It's the endless struggle between do you anticipate security versus making it so complex nobody will use it," Nohl said. "It's the struggle between simplicity and security. The power of USB is that you plug it in and it just works. This simplicity is exactly what's enabling these attacks."

Source: here
-
First of all, hi and welcome. Second, let me explain what's actually going on with the thing you described above. That thing is just a trap, bait for gullible young people like you: a lure through which some common thieves use you and your skills to STEAL money from others. Nothing more. Now, about what you want to learn: what is the end goal of this "hacking" experience? What do you actually want to do with that knowledge? As for learning to use Backtrack, it's like asking a mechanic to teach you how to drive. Sure, you'll be able to drive the car, but at the first problem you'll be stuck, whereas a mechanic will fix his own problem. Same with Backtrack: you learn to "use" it, but the first time something different or out of the ordinary comes up, you'll be stuck if you don't know how the processes behind the things you do with Backtrack actually work. And one piece of advice to finish: think carefully about what you want to do, because people who only want to "hack websites" don't last long around here, and sooner or later they lose some of their constitutional rights and freedoms.
-
Microsoft will remove the desktop from mobile-oriented versions of Windows 9, codename "Threshold," writes Mary Jo Foley. The mobile operating system, available for both ARM phones and tablets and x86 tablets, won't include a desktop environment at all. Laptop and desktop systems will have the desktop and will default to it. Hybrid systems are described as offering both a Metro-style mode and a regular desktop mode, governed by whether their keyboards are attached or available.

Neowin reports that Microsoft is going a step further, with the live-tile-centric Metro mode disabled by default for desktop machines. Metro apps themselves will still be available, launched from a new hybrid Start menu and residing in regular windows.

Threshold is expected to come in the first half of 2015, possibly even as a free update for Windows 8.1 (and perhaps, Foley's sources speculate, Windows 7) users. Between now and then, a second update to Windows 8.1 is believed to be in the works. Windows 8.1 Update 2 (for want of a better name) is expected to be a much less substantial update than the Windows 8.1 Update that was released in April, and it should make only small user interface adjustments. Foley's sources say that the second update should be rolled out with the August Patch Tuesday release. A preview release of some kind for Threshold should be released to the public some time in the autumn.

Overall, it looks as though just as Apple and Google are embracing (and even extending) Microsoft's type-centric, more geometric, skeuomorphism-free Metro design language, Microsoft is backing away from it -- or at least backing away from its most visible implementation. While the company is understandably gun-shy about pushing Metro -- Windows 8 in particular was obviously rough and unfinished in a lot of ways, and that alienated many early adopters -- backing away from the design concept just as it's going mainstream doesn't seem like the most forward-looking action.

Article source: here
-
Linus Torvalds has called the GCC 4.9.0 compiler "pure and utter sh*t" and "terminally broken" after a random panic was discovered in a load-balance function in Linux 3.16-rc6. "Ok, so I'm looking at the code generation and your compiler is pure and utter *shit*," he wrote in one of the mails on the Linux kernel mailing list. "…gcc-4.9.0 seems to be terminally broken," he added further.

The issue that invited such comments from Torvalds has to do with the compiler apparently spilling a constant and with incorrect stack red-zoning in x86-64 code generation. "Lookie here, your compiler does some absolutely insane things with the spilling, including spilling a *constant*. For chrissake, that compiler shouldn't have been allowed to graduate from kindergarten. We're talking 'sloth that was dropped on the head as a baby' level retardation levels," he added.

Dwelling on the technical bits, Torvalds went on to say that a bug report needs to be filed, as this "is some seriously crazy shit." Torvalds rules out a kernel bug in the load-balance random panic, claiming that the compiler is creating broken code, while also warning that those testing the kernel shouldn't compile it with gcc-4.9.0. Torvalds has already filed a bug report (Bug 61904) regarding incorrect stack red-zoning in x86-64 code generation for gcc-4.9.0. The issue hasn't been observed with gcc version 4.8, which would be the safest bet for now. Also, we are not sure whether the kernel's code compiles perfectly with gcc-4.9.1, which was released recently.

Article source: here