nedo

Active Members
  • Posts: 2065
  • Joined
  • Last visited
  • Days Won: 11

Everything posted by nedo

  1. It seemed like an interesting article to me, but given that I have never worn a suit in my life, I can't really judge how accurate it is. Maybe others have more informed opinions.
  2. OpenDNS plus a browser extension is enough.
  3. It’s been just two months since researcher Karsten Nohl demonstrated an attack he called BadUSB to a standing-room-only crowd at the Black Hat security conference in Las Vegas, showing that it’s possible to corrupt any USB device with insidious, undetectable malware. Given the severity of that security problem—and the lack of any easy patch—Nohl has held back on releasing the code he used to pull off the attack. But at least two of Nohl’s fellow researchers aren’t waiting any longer.

In a talk at the Derbycon hacker conference in Louisville, Kentucky last week, researchers Adam Caudill and Brandon Wilson showed that they’ve reverse engineered the same USB firmware as Nohl’s SR Labs, reproducing some of Nohl’s BadUSB tricks. And unlike Nohl, the hacker pair has also published the code for those attacks on Github, raising the stakes for USB makers to either fix the problem or leave hundreds of millions of users vulnerable.

“The belief we have is that all of this should be public. It shouldn’t be held back. So we’re releasing everything we’ve got,” Caudill told the Derbycon audience on Friday. “This was largely inspired by the fact that [SR Labs] didn’t release their material. If you’re going to prove that there’s a flaw, you need to release the material so people can defend against it.”

The two independent security researchers, who declined to name their employer, say that publicly releasing the USB attack code will allow penetration testers to use the technique, all the better to prove to their clients that USBs are nearly impossible to secure in their current form. And they also argue that making a working exploit available is the only way to pressure USB makers to change the tiny devices’ fundamentally broken security scheme.

“If this is going to get fixed, it needs to be more than just a talk at Black Hat,” Caudill told WIRED in a follow-up interview. He argues that the USB trick was likely already available to highly resourced government intelligence agencies like the NSA, who may already be using it in secret. “If the only people who can do this are those with significant budgets, the manufacturers will never do anything about it,” he says. “You have to prove to the world that it’s practical, that anyone can do it…That puts pressure on the manufacturers to fix the real issue.”

Like Nohl, Caudill and Wilson reverse engineered the firmware of USB microcontrollers sold by the Taiwanese firm Phison, one of the world’s top USB makers. Then they reprogrammed that firmware to perform disturbing attacks: In one case, they showed that the infected USB can impersonate a keyboard to type any keystrokes the attacker chooses on the victim’s machine. Because it affects the firmware of the USB’s microcontroller, that attack program would be stored in the rewritable code that controls the USB’s basic functions, not in its flash memory—even deleting the entire contents of its storage wouldn’t catch the malware. Other firmware tricks demonstrated by Caudill and Wilson would hide files in that invisible portion of the code, or silently disable a USB’s security feature that password-protects a certain portion of its memory.

“People look at these things and see them as nothing more than storage devices,” says Caudill. “They don’t realize there’s a reprogrammable computer in their hands.”

In an earlier interview with WIRED ahead of his Black Hat talk, Berlin-based Nohl had said that he wouldn’t release the exploit code he’d developed because he considered the BadUSB vulnerability practically unpatchable. (He did, however, offer a proof-of-concept for Android devices.) To prevent USB devices’ firmware from being rewritten, their security architecture would need to be fundamentally redesigned, he argued, so that no code could be changed on the device without the unforgeable signature of the manufacturer. But he warned that even if that code-signing measure were put in place today, it could take 10 years or more to iron out the USB standard’s bugs and pull existing vulnerable devices out of circulation. “It’s unfixable for the most part,” Nohl said at the time. “But before even starting this arms race, USB sticks have to attempt security.”

Caudill says that by publishing their code, he and Wilson are hoping to start that security process. But even they hesitate to release every possible attack against USB devices. They’re working on another exploit that would invisibly inject malware into files as they are copied from a USB device to a computer. By hiding another USB-infecting function in that malware, Caudill says it would be possible to quickly spread the malicious code from any USB stick that’s connected to a PC and back to any new USB plugged into the infected computer. That two-way infection trick could potentially enable a USB-carried malware epidemic. Caudill considers that attack so dangerous that even he and Wilson are still debating whether to release it.

“There’s a tough balance between proving that it’s possible and making it easy for people to actually do it,” he says. “There’s an ethical dilemma there. We want to make sure we’re on the right side of it.”

Source here

I think it's time we went back to PS/2 mice and keyboards and to DVDs, at least for a while, because until this gets fixed, absolutely every USB device is vulnerable...
  4. Gnome/Unity desktop manager much?
  5. No bumping. If you have something new to add, add it; if not, the topic stays as it is. Those of you who want to buy something from him, send him a PM. Topic closed.
  6. @Aerosol : He's referring to partner networks; without them you have certain limitations on posting and monetizing the videos you upload. That said, to become a partner with a given network I think you already need a certain number of subscribers, views and so on. More than that I don't know.
  7. You can't do it without lag, because most of these tools have to encode what they capture (usually with the h264 codec in an AVI container), and most of the time that is done by the CPU; if you have a weak CPU, that leads to dropped frames and a choppy clip. For GTX 750 or newer video cards, Nvidia includes hardware recording in GeForce Experience (it can be installed along with the driver), and the encoding is done directly on the video card, which gives a better frame rate in the recorded clip. My open source recommendation would be Open Broadcaster Software, but to run it well and record reasonably new games at a decent frame rate you need at least an i3 CPU (preferably an i5 or i7), 8 GB of RAM, and a GTX 750 or newer video card (ATI or Nvidia). There is also Fraps (it lets you record raw frames, which means it doesn't eat as much CPU, but your in-game FPS will still drop by a few frames).
  8. Bash, aka the Bourne-Again Shell, has a newly discovered security hole. And, for many Unix or Linux Web servers, it's a major problem.

Like many others, I use Bash for my default desktop and server shell, which means I need to get it patched as soon as possible. The flaw involves how Bash evaluates environment variables. With specifically crafted variables, a hacker could use this hole to execute shell commands. This, in turn, could render a server vulnerable to even greater assaults.

By itself, this is one of those security holes where an attacker would already need to have a high level of system access to cause damage. Unfortunately, as Red Hat's security team put it, "Certain services and applications allow remote unauthenticated attackers to provide environment variables, allowing them to exploit this issue."

The root of the problem is that Bash is frequently used as the system shell. Thus, if an application calls a Bash shell command via the web (HTTP) or a Common Gateway Interface (CGI) in a way that allows a user to insert data, the web server could be hacked. As Andy Ellis, the Chief Security Officer of Akamai Technologies, wrote: "This vulnerability may affect many applications that evaluate user input, and call other applications via a shell." (https://blogs.akamai.com/2014/09/environment-bashing.html) That could be a lot of web applications — including many of yours.

The most dangerous circumstance is if your applications call scripts with super-user — aka root — permissions. If that's the case, your attacker could get away with murder on your server.

So what can you do? First, you should sanitize the web applications' inputs. If you've already done this against such common attacks as cross-site scripting (XSS) or SQL injection, you'll already have some protection. Next, I'd disable any CGI scripts that call on the shell. (I'd also like to know why you're still using a 21-year-old way of allowing users to interact with your web services. You might want to use this opportunity to replace your CGI scripts once and for all.)

After that, I'd follow Akamai's recommendation and switch "away from using Bash to another shell." But keep in mind that the alternative shell will not use exactly the same syntax and it may not have all the same features. This means that if you try this fix, some of your web applications are likely to start acting up.

Of course, the real fix will be to replace the broken Bash with a new, secure one. As of the morning of September 24, Bash's developers have patched all current versions of Bash, from 3.0 to 4.3. At this time, only Debian and Red Hat appear to have packaged patches ready to go.

OpenSSH is also vulnerable via the use of AcceptEnv variables, TERM, and SSH_ORIGINAL_COMMAND. However, since to access those you already need to be in an authenticated session, you're relatively safe. That said, you'd still be safer if you blocked non-administrative users from using OpenSSH until the underlying Bash problem is patched.

It's extra work, but if I were a system administrator, I wouldn't wait for my Unix or Linux distributor to deliver a ready-made patch into my hands. I'd compile the patched Bash code myself and put it in place. This is not a bug to fool around with. It has the potential to wreak havoc with your systems. Worse still, a smart attacker could just leave malware mines behind to steal data after the fact.

As Ellis said, "Do you have any evidence of system compromises? No. And unfortunately, this isn't 'No, we have evidence that there were no compromises;' rather, 'we don't have evidence that spans the lifetime of this vulnerability.' We doubt many people do — and this leaves system owners in the uncomfortable position of not knowing what, if any, compromises might have happened."

So patch this bug now or you'll regret it.

News source: here
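To make the environment-variable mechanism above concrete, here is a minimal sketch of the widely circulated vulnerability check, written in C++ to match the other code in this thread. It assumes a POSIX system with bash on the PATH; the crafted value is the well-known test string, and a patched bash simply ignores it.

    #include <stdlib.h>  // setenv() and system() (POSIX / C)

    // The value looks like an exported bash function definition followed by an
    // extra command. A vulnerable bash executes the trailing `echo VULNERABLE`
    // while importing the environment; a patched bash only prints the test line.
    int main() {
        setenv("EVIL", "() { :; }; echo VULNERABLE", 1 /* overwrite */);
        int status = system("bash -c 'echo just a test'");  // child bash inherits EVIL
        return status == 0 ? 0 : 1;
    }

If the word VULNERABLE appears in the output, the installed bash still imports function definitions unsafely and needs the patch described above.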
  9. @karrigan if you plan to stick with algorithmics as well, then I suggest you keep up with the math (logarithms, limits, functions, matrices; those are the ones I've seen matter most, plus mathematical induction).
  10. If you really want to do computer science, stay there and get used to things being hard. To become good in this field you have to keep learning, because the existing technologies keep changing. Arm yourself with plenty of patience and willpower, and always try to do something different with what you learn: if it's math, try to program what you're learning, applications that solve different types of exercises, things like that. If you're learning programming (any language), experiment with what you've learned beyond what your teachers give you. Good luck.
  11. Ban lifted. PS: it wasn't given by Gecko.
  12. Personal details are not made public. If you have concrete evidence, send it to a moderator/admin and it will be dealt with.
  13. On Linux, I think you can replace ls with a script of your own which, when it is passed some specific arguments (preferably adjacent keys that mean nothing to ls), deletes whatever you want in the background and, in the meantime, prints an error message, as if you had accidentally pressed a few extra keys.
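A minimal sketch of the wrapper idea from the post above, in C++ for consistency with the rest of the thread. Everything here is hypothetical: the trigger flag, the target file, and the fake error text are all made up, and the wrapper simply hands any other invocation to the real /bin/ls.

    #include <cstdio>   // std::remove, std::fprintf
    #include <cstring>  // std::strcmp
    #include <unistd.h> // execv (POSIX)

    int main(int argc, char* argv[]) {
        // Hypothetical "meaningless" trigger that real ls would reject anyway.
        if (argc > 1 && std::strcmp(argv[1], "-lkjh") == 0) {
            // Quietly delete a pre-agreed file...
            std::remove("/tmp/secret_notes.txt");
            // ...while printing what looks like an ordinary typo error.
            std::fprintf(stderr, "ls: invalid option -- 'k'\n"
                                 "Try 'ls --help' for more information.\n");
            return 2;
        }
        // Any other invocation is forwarded to the real binary.
        execv("/bin/ls", argv);
        return 127; // reached only if execv fails
    }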
  14. A bug quietly reported on September 1 appears to have grave implications for Android users. Android Browser, the open source, WebKit-based browser that used to be part of the Android Open Source Platform (AOSP), has a flaw that enables malicious sites to inject JavaScript into other sites. Those malicious JavaScripts can in turn read cookies and password fields, submit forms, grab keyboard input, or do practically anything else.

Browsers are generally designed to prevent a script from one site from being able to access content from another site. They do this by enforcing what is called the Same Origin Policy (SOP): scripts can only read or modify resources (such as the elements of a webpage) that come from the same origin as the script, where the origin is determined by the combination of scheme (which is to say, protocol, typically HTTP or HTTPS), domain, and port number. The SOP should then prevent a script loaded from http://malware.bad/ from being able to access content at https://paypal.com/.

The Android Browser bug breaks the browser's handling of the SOP. As Rafay Baloch, the researcher who discovered the problem, found, JavaScript constructed in a particular way could ignore the SOP and freely meddle with other sites' content without restriction. This means that potentially any site visited in the browser could be stealing sensitive data. It's a bug that needs fixing, and fast.

As part of its attempts to gain more control over Android, Google has discontinued the AOSP Browser. Android Browser used to be the default browser on Android, but this changed in Android 4.2, when Google switched to Chrome. The core parts of Android Browser were still used to power embedded Web view controls within applications, but even this changed in Android 4.4, when it switched to a Chromium-based browser engine.

But just as Microsoft's end-of-life for Windows XP didn't make that operating system magically disappear from the Web, Google's discontinuation of the open source Browser app hasn't made it disappear from the Web either. As our monthly look at Web browser usage shows, Android Browser has a little more real-world usage than Chrome for Android, with something like 40-50 percent of Android users using the flawed browser. The Android Browser is likely to be embedded in third-party products, too, and some Android users have even installed it on their Android 4.4 phones because for one reason or another they prefer it to Chrome.

Google's own numbers paint an even worse picture. According to the online advertising giant, only 24.5 percent of Android users are using version 4.4. The majority of Android users are using versions that include the broken component, and many of these users are using 4.1.x or below, so they're not even using versions of Android that use Chrome as the default browser.

Baloch initially reported the bug to Google, but the company told him that it couldn't reproduce the problem and closed his report. Since he wrote his blog post, a Metasploit module has been developed to enable the popular security testing framework to detect the problem, and Metasploit developers have branded the problem a "privacy disaster." Baloch says that Google has subsequently changed its response, agreeing that it can reproduce the problem and saying that it is working on a suitable fix.

Just how this fix will be made useful is unclear. While Chrome is updated through the Play Store, the AOSP Browser is generally updated only through operating system updates. Timely availability of Android updates remains a sticking point for the operating system, so even if Google develops a fix, it may well be unavailable to those who actually need it.

Users of Android 4.0 and up can avoid much of the exposure by switching to Chrome, Firefox, or Opera, none of which should use the broken code. Other third-party browsers for Android may embed the broken AOSP code, and unfortunately for end users, there's no good way to know if this is the case or not.

Update: Google has offered the following statement: "We have reviewed this report and Android users running Chrome as their browser, or those who are on Android 4.4+, are not affected. For earlier versions of Android, we have already released patches (1, 2) to AOSP."

Source: here
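As a side note, the Same Origin Policy check described in the article boils down to comparing the (scheme, host, port) triple of two URLs. A toy sketch of that comparison, in C++; the struct and the values are made up for illustration, real browsers do this on parsed URLs inside the engine:

    #include <iostream>
    #include <string>
    #include <tuple>

    // Two resources share an origin only if scheme, host, and port all match.
    struct Origin {
        std::string scheme;  // e.g. "https"
        std::string host;    // e.g. "paypal.com"
        int port;            // e.g. 443
    };

    bool same_origin(const Origin& a, const Origin& b) {
        return std::tie(a.scheme, a.host, a.port) ==
               std::tie(b.scheme, b.host, b.port);
    }

    int main() {
        Origin attacker{"http", "malware.bad", 80};
        Origin target{"https", "paypal.com", 443};
        std::cout << std::boolalpha
                  << same_origin(attacker, target) << "\n"   // false: access denied
                  << same_origin(target, target) << "\n";    // true: same page
    }

The Android Browser bug lets crafted JavaScript bypass exactly this kind of check, which is why any visited page could read data belonging to another site.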
  15. New, useful features that make the language safer and more convenient.

Voting on the new C++14 standard completed in August with approval of the document. All that remains before we can say it is officially complete is publication by the ISO. In this article, I visit the high points of the new standard, demonstrating how the upcoming changes will affect the way you program, particularly when using the idioms and paradigms of what is termed Modern C++.

The committee seems intent on keeping the standards process in a higher gear than in decades past. This means that C++14, having had just three years since the last standard, is a somewhat constrained release. Far from being disappointing, this is a boon for programmers, because it means implementers are able to push out compliance with the new features in real time. Yes, you can start using C++14 features today — nearly all of them, if you are flexible in your tool chain. At this point you can get a free copy of the draft proposal here. Unfortunately, when the final standard is published, ISO will have it paywalled.

Shortening the time frame between releases is working to help compiler writers keep up with the language changes in something closer to real time. With just three years between releases, there are fewer changes to adjust to. The examples in this article were mostly tested with clang 3.4, which has great coverage of C++14 features; g++ has a somewhat smaller list of features covered; and Visual C++ seems to be trailing the pack.

C++14: What's Important

What follows are descriptions of the C++14 changes that will have significant impact on your coding work, along with working examples and discussions of when and why you would employ the features.

Return type deduction

The capabilities of auto are expanded in this release. C++ itself continues to be typesafe, but the mechanics of type safety are increasingly being performed by the compiler instead of the programmer. In C++11, programmers started using auto for declarations. This was keenly appreciated for things like iterator creation, when the fully qualified type name might be horrendous. Newly minted C++ code was much easier to read:

    for ( auto ii = collection.begin() ; ...

This code is still completely typesafe — the compiler knows what type begin() returns in that context, so there is no question about what type ii is, and that will be checked every place it is used.

In C++14, the use of auto is expanded in a couple of ways. One that makes perfect sense is that of return type deduction. If I write a line of code like this inside a function:

    return 1.4;

it is obvious to both me and the compiler that the function is returning a double. So in C++14, I can define the function return type as auto instead of double:

    auto getvalue() {
        return 1.4;
    }

The details of this new feature are pretty easy to understand. For example, if a function has multiple return paths, they need to have the same type. Code like this:

    auto f(int i) {
        if ( i < 0 )
            return -1;
        else
            return 2.0;
    }

might seem like it should obviously have a deduced return type of double, but the standard prohibits this ambiguity, and the compiler properly complains:

    error_01.cpp:6:5: error: 'auto' in return type deduced as 'double' here but
    deduced as 'int' in earlier return statement
        return 2.0;
        ^
    1 error generated.

There are several good reasons why deducing the return type is a plus for your C++ programs.
First, there are times when you need to return a fairly complex type, such as an iterator, perhaps when searching into a standard library container. The auto return type makes the function easier to write properly, and easier to read. A second (maybe less obvious) reason is that using an auto return type enhances your ability to refactor. As an example, consider this program: #include <iostream> #include <vector> #include <string> struct record { std::string name; int id; }; auto find_id(const std::vector<record> &people, const std::string &name) { auto match_name = [&name](const record& r) -> bool { return r.name == name; }; auto ii = find_if(people.begin(), people.end(), match_name ); if (ii == people.end()) return -1; else return ii->id; } int main() { std::vector<record> roster = { {"mark",1}, {"bill",2}, {"ted",3}}; std::cout << find_id(roster,"bill") << "\n"; std::cout << find_id(roster,"ron") << "\n"; } In this example, I'm not saving many brain cells by having find_id() return auto instead of int. But consider what happens if I decide that I want to refactor my record structure. Instead of using an integral type to identify the person in the record object, maybe I have a new GUID type: struct record { std::string name; GUID id; }; Making that change to the record object will cause a series of cascading changes in things like the return types of functions. But if my function uses automatic return type deduction, the compiler will silently make the change for me. Any C++ programmer who has worked on a large project is familiar with this issue. Making a change to a single data structure can cause a seemingly endless series of iterations through the code base, changing variables, parameters, and return types. The increased use of auto does a lot to cut through this bookkeeping. Note that in the example above, and in the rest of this article, I create and use a named lambda. I suspect that most users of lambdas with functions like std::find_if() will define their lambdas as anonymous inline objects, which is a very convenient style. Due to limited page width, I think it is a little easier to read code in your browser when lambdas are defined apart from their usage. So this is not necessarily a style you should emulate, you should just appreciate that it is somewhat easier to read. In particular, it will be much easier if you are light on lambda experience. Turning back to auto, an immediate consequence of using it as a return type is the reality of its doppelgänger, decltype(auto), and the rules it will follow for type deduction. You can now use it to capture type information automatically, as in this fragment: template<typename Container> struct finder { static decltype(Container::find) finder1 = Container::find; static decltype(auto) finder2 = Container::find; }; Generic Lambdas Another place where auto has insinuated itself is in the definitions of lambda parameters. Defining lambda parameters with an auto type declaration is the loose equivalent of creating a template function. The lambda will be instantiated in a specific embodiment based on the deduced types of the arguments. This is convenient for creating lambdas that can be reused in different contexts. In the simple example below, I've created a lambda used as a predicate in a standard library function. In the C++11 world, I would have needed to explicitly instantiate one lambda for adding integers, and a second for adding strings. With the addition of generic lambdas, I can define a single lambda with generic parameters. 
Although the syntax doesn't include the keyword template, this is still clearly a further extension of C++ generic programming: #include <iostream> #include <vector> #include <string> #include <numeric> int main() { std::vector<int> ivec = { 1, 2, 3, 4}; std::vector<std::string> svec = { "red", "green", "blue" }; auto adder = [](auto op1, auto op2){ return op1 + op2; }; std::cout << "int result : " << std::accumulate(ivec.begin(), ivec.end(), 0, adder ) << "\n"; std::cout << "string result : " << std::accumulate(svec.begin(), svec.end(), std::string(""), adder ) << "\n"; return 0; } Which produces the following output: int result : 10 string result : redgreenblue Even if you are instantiating anonymous inline lambdas, employing generic parameters is still useful for the reasons I discussed earlier in this article. When your data structures change, or functions in your APIs get signature modifications, generic lambdas will adjust with recompilation instead of requiring rewrites: std::cout << "string result : " << std::accumulate(svec.begin(), svec.end(), std::string(""), [](auto op1,auto op2){ return op1+op2; } ) << "\n"; Initialized Lambda Captures In C++11, we had to start adjusting to the notion of a lambda capture specification. That declaration guides the compiler during the creation of the closure: an instance of the function defined by the lambda, along with bindings to variables defined outside the lambda's scope. New, useful features that make the language safer and more convenient. In the earlier example on deduced return types, I had a lambda definition that captured a single variable, name, used as the source of a search string in a predicate: auto match_name = [&name](const record& r) -> bool { return r.name == name; }; auto ii = find_if(people.begin(), people.end(), match_name ); This particular capture gives the lambda access to the variable by reference. Captures can also be performed by value, and in both cases, the use of the variable behaves in a way that fits with C++ intuition. Capture by value means the lambda operates on a local copy of a variable; capture by reference means the lambda operates on the actual instance of the variable from the outer scope. All this is fine, but it comes with some limitations. I think the one that the committee felt it needed to address was the inability to initialize captured variables using move-only semantics. What does this mean? If we expect that lambda is going to be a sink for a parameter, we would like to capture the outer variable using move semantics. As an example, consider how you would get a lambda to sink a unique_ptr, which is a move-only object. A first attempt to capture by value fails: std::unique_ptr<int> p(new int); *p = 11; auto y = [p]() { std::cout << "inside: " << *p << "\n";}; This code generates a compiler error because unique_ptr does not generate a copy constructor — it specifically wants to ban making copies. Changing this so that p is captured by reference compiles fine, but it doesn't have the desired effect of sinking the value by moving the value into the local copy. Eventually, you could accomplish this by creating a local variable and calling std::move() on your captured reference, but this is a bit inefficient. The fix for this is a modification of the capture clause syntax. Now, instead of just declaring a capture variable, you can do an initialization. 
The simple case that is used as an example in the standard looks like this:

    auto y = [&r = x, x = x+1]()->int {...}

This captures a copy of x and increments the value simultaneously. The example is easy to understand, but I'm not sure it captures the value of this new syntax for sinking move-only variables. A use case that takes advantage of this is shown here:

    #include <memory>
    #include <iostream>

    int main() {
        std::unique_ptr<int> p(new int);
        int x = 5;
        *p = 11;
        auto y = [p=std::move(p)]() { std::cout << "inside: " << *p << "\n"; };
        y();
        std::cout << "outside: " << *p << "\n";
        return 0;
    }

In this case, the captured value p is initialized using move semantics, effectively sinking the pointer without the need to declare a local variable:

    inside: 11
    Segmentation fault (core dumped)

That annoying result is what you expect — the code attempts to dereference p after it was captured and moved into the lambda.

The [[deprecated]] Attribute

The first time I saw the use of the deprecated attribute in Java, I admit to a bit of language envy. Code rot is a huge problem for most programmers. (Ever been praised for deleting code? Me neither.) This new attribute provides a systematic way to attack it. Its use is convenient and simple — just place the [[deprecated]] tag in front of a declaration — which can be a class, variable, function, or a few other things. The result looks like this:

    class [[deprecated]] flaky {
    };

When your program uses a deprecated entity, the compiler's reaction is left up to the implementer. Clearly, most people are going to want to see some sort of warning, and also to be able to turn that warning off at will. As an example, clang 3.4 gave this warning when instantiating a deprecated class:

    dep.cpp:14:3: warning: 'flaky' is deprecated [-Wdeprecated-declarations]
      flaky f;
      ^
    dep.cpp:3:1: note: 'flaky' declared here
    flaky {
    ^

Note that the syntax of C++ attribute-tokens might seem a bit unfamiliar. The list of attributes, including [[deprecated]], comes after keywords like class or enum, and before the entity name. This tag has an alternate form that includes a message parameter. Again, it is up to the implementer to decide what to do with this message. clang 3.4 apparently ignores the message. The output from this fragment:

    class [[deprecated]] flaky {
    };

    [[deprecated("Consider using something other than cranky")]]
    int cranky() {
        return 0;
    }

    int main() {
        flaky f;
        return cranky();
    }

does not contain the error message:

    dep.cpp:14:10: warning: 'cranky' is deprecated [-Wdeprecated-declarations]
      return cranky();
             ^
    dep.cpp:6:5: note: 'cranky' declared here
    int cranky()
        ^

Binary Literals and Digit Separators

These two new features aren't earth-shaking, but they do represent nice syntactic improvements. Small changes like these give us some incremental improvements in the language that improve readability and, hence, reduce bug counts. C++ programmers can now create binary literals, adding to the existing canon of decimal, hex, and the rarely used octal radices. Binary literals start with the prefix 0b and are followed by binary digits.

In the U.S. and UK, we are accustomed to using commas as digit separators in written numbers, as in: $1,000,000. These digit separators are there purely for the convenience of readers, providing syntactic cues that make it easier for our brains to process long strings of numbers. The committee added digit separators to C++ for exactly the same reasons. They won't affect the evaluation of a number; they are simply present to make it easier to read and write numbers through chunking.

What character to use for a digit separator? Virtually every punctuation character already has an idiosyncratic use in the language, so there are no obvious choices. The final election was to use the single quote character, making the million dollar value render in C++ as: 1'000'000.00. Remember that the separators don't have any effect on the evaluation of the constant, so this value would be identical to 1'0'00'0'00.00.

An example combining the use of both new features:

    #include <iostream>

    int main() {
        int val = 0b11110000;
        std::cout << "Output mask: " << 0b1000'0001'1000'0000 << "\n";
        std::cout << "Proposed salary: $" << 300'000.00 << "\n";
        return 0;
    }

This program gives the unsurprising output:

    Output mask: 33152
    Proposed salary: $300000

The remainder

Some additional features in the C++14 specification don't require quite as much exposition. Variable templates are an extension of templates to variables. The example used everywhere is an implementation of variable pi<T>. When implemented as a double, the variable will return 3.14; when implemented as an int, it might return 3; and "3.14" or perhaps "pi" as an std::string. This would have been a great feature to have when <limits> was being written. The syntax and semantics of variable templates are nearly identical to those for class templates — you should have no trouble using them without any special study.

The restrictions on constexpr functions have been relaxed, allowing, for example, multiple returns, internal case and if statements, loops, and more. This expands the scope of things that are done at compile time, a trend that really took wing when templates were introduced. Additional minor features include sized deallocations and some syntax tidying.

What Next?

The C++ committee clearly feels pressure to keep the language current through improvements, and it is already working on at least one more standard in this decade, C++17. Possibly more interesting is the creation of several spin-off groups that can create technical specifications, documents that won't rise to the level of a standard but will be published and endorsed by the ISO committee. Presumably these can be issued at a more rapid clip. The areas currently being worked on include:

    File system
    Concurrency
    Parallelism
    Networking
    Concepts, the AI of C++ — always one round of specification away

Success of these technical specifications will have to be judged by adoption and use. If we find that all the implementers line up behind them, then this new track for standardization will be a success.

C/C++ has held up well over the years. Modern C++, which we might mark as starting with C++11, has taken dramatic strides in making the language easier to use and safer without making concessions in the areas of performance. For certain types of work, it is hard to think of any reasonable alternative to C or C++. The C++14 standard doesn't make any jumps as large as that in the C++11 release, but it keeps the language on a good path. If the committee can keep its current level of productivity for the rest of the decade, C++ should continue to be the language of choice when performance is the defining goal.

Source: here
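Since the article above mentions the pi<T> variable template and the relaxed constexpr rules without showing code, here is a brief sketch of both. It is my own illustration, assuming a C++14 compiler; the names and the factorial example are not from the article.

    #include <iostream>

    // A pi<T> variable template along the lines the article describes
    // (the constant and its precision here are illustrative).
    template <typename T>
    constexpr T pi = T(3.1415926535897932385L);

    // Relaxed C++14 constexpr: local variables and loops are now allowed
    // in functions evaluated at compile time.
    constexpr long long factorial(int n) {
        long long result = 1;
        for (int i = 2; i <= n; ++i)
            result *= i;
        return result;
    }

    int main() {
        std::cout << pi<double> << "\n";   // prints 3.14159...
        std::cout << pi<int> << "\n";      // prints 3
        static_assert(factorial(5) == 120, "computed at compile time");
        std::cout << factorial(5) << "\n";
    }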
  16. The latest rankings of programming languages show a landscape that's increasingly fragmented, but still dominated by the old guard.

Last week TIOBE released its monthly ranking of computer programming languages for September 2014 under a headline which might keep some developers up at night: "Java and C++ at all time low." Their TIOBE scores, which measure each language's share of web searches for programming languages across a number of search engines, were indeed at all-time lows. Java's share of search results was 14% this month, which continues a steady decline since its high of 26.5% in June 2001. Similarly, C++'s share of web searches was 4.7% this month, down from its all-time high of 17.5% in August 2003.

As the TIOBE team wrote, this is not to say that either Java or C++ has lost its dominant position in the programming world. Both are still highly ranked on the index (numbers 2 and 4, respectively, this month), as they have been for years. Rather, TIOBE theorizes that this loss of search market share reflects the growing fragmentation of the programming language universe. Part of that, they suggest, is due to the growth of other, often more niche languages for specific industries, such as R, which have eroded some of the demand for the more all-purpose languages.

Since TIOBE's results are just one way to measure language popularity, I thought I would take a closer look at how Java and C++ have really been faring using some of the other measures available.

The Popularity of Programming Language (PYPL) Index also ranks programming languages monthly based on web searches, but, more specifically, it looks at Google searches for tutorials about a language, rather than just any search for a language name. Java is still the number one language there, as it has been since 2004, with a 27% share, up slightly from 2013. C++ is #5 on the PYPL list, same as it was last year, though with a smaller share, 8.8%. Over time, C++ is losing ground in this measure; in 2004 it was #3 behind Java and PHP, but has been surpassed by Python and C#. It seems that C++ is losing some ground to C#.

The RedMonk programming language index, released semi-annually, takes a different approach: it looks at a combination of GitHub data (raw lines of code) and Stack Exchange popularity (by number of tags). In the most recent rankings, from June, Java and JavaScript were tied at #1. C++ is tied with Ruby at #6 (PHP, Python and C# are #3, 4 and 5). The RedMonk index has only been around for three years, and things haven't changed much at the top of the list. Some of the more niche languages, however, are showing strong growth in this measure. R has shown gains in the last four rankings, driven mainly by growth in GitHub activity, and is currently ranked #13 (it's 21 on TIOBE, not ranked by PYPL). Go is also on the way up, currently at #21 on RedMonk (#38 on TIOBE), and is expected to crack the top 20 soon.

Finally, I looked at data presented by GitHut, which provides quarterly rankings and trends going back to Q2 2012 based on GitHub Archive data. For Q2 2014, Java was #2, behind JavaScript, in terms of the number of active repositories; it was #3 two years earlier (Ruby was #2). As a percentage of the total repositories, Java's share has grown slightly since 2012, from 9.1% to 9.8%. C++ growth on GitHub has been a little flatter than Java's. It remains at #7 in terms of active repos, just where it was 2 years earlier, while its share of total repos has remained about the same (3.9%). Languages showing real growth on GitHub recently have, again, been R (.3% of repos in Q4 2013, 1.8% in Q2 2014) and Go (.4% in Q1 2013, .86% in the most recent quarter).

Together, all of these findings, more or less, back up what the TIOBE team suggested:

- Java remains one of the most dominant languages in use and there's no evidence that it's in decline relative to other languages.
- C++ also remains solidly in the top tier of languages, though there is some evidence that other languages, such as C#, have made gains at its expense.
- While the top programming languages remain fairly static, the overall universe continues to fragment, with the dominant languages, as a group, losing share to smaller, sometimes more niche languages, such as R and Go.

Anyway, all of this implies that you should sleep well tonight, Java and C++ developers.

Source: click
  17. It seems M2G and I think alike. You know how the saying goes: "No good deed goes unpunished". Let that be a lesson to me. I shouldn't be lenient anymore, since apparently people are now saying I'm abusing my position...
  18. I'm not saying Microshit is better. I'm saying their approach is more logical: one system for desktops, one for servers.
  19. Mixed signs for the two programming languages, but one expert sees good signs for at least one veteran.

Are Java and C++ slipping in popularity? One language index says they are, although both skill sets are still in demand for developer jobs. The Tiobe Index this month has both languages plunging to depths they've never reached before. "Java and C++ are at an all-time low in the Tiobe index since its start in the year 2001. This doesn't necessarily mean that Java and C++ are on their way out. There is still a huge demand for these programming languages," Tiobe says.

Based on a formula that analyzes searches on languages on a number of sites, Java's rating in the September index was 14.14 percent; C++ had a rating of 4.67 percent. Overall, Java ranked second in popularity, while C++ came in fourth. Tiobe believes the spread of domain-specific languages in fields such as biomedical and statistical programming could be reducing the need for general-purpose languages such as Java and C++. But Java, in particular, remains popular, Paul Jansen, Tiobe managing director, notes. "Demand for Java developers is still huge. It is still at second position in the Tiobe index," Jansen said in an email. Java trails only C, with a 16.72 percent rating.

When it comes to developer jobs, both Java and C++ remain promising. A search on the Dice.com IT jobs site Monday finds 17,147 openings related to Java. A similar search on C++, which also mixes in C-related jobs, finds 16,713 jobs. (By contrast, a search on Python-related jobs turns up 5,329 jobs and another on Perl produced 4,368 listings.)

Still, Jansen sees optimism for at least one language: "I don't expect C++ to bounce back; Java might bounce back." "The trends that I see at our customer sites is migration from C to C++ because C doesn't scale," Jansen says. "But on the other hand a lot of companies migrate from C++ to languages with a garbage collector to solve problems with memory management. The influx from C to C++ is lower than the out-flux from C++ to languages with garbage collection." C++ also requires a greater understanding of programming than other languages, Jansen explains. (C++ founder Bjarne Stroustrup, in a recent interview, concurred with this viewpoint.) Also, the cost of ownership of C++ is higher than Java's, according to Jansen: "Almost all good Java tools are open source, i.e. for free, whereas in the C++ market people are used to paying money for good tools."

The rival PyPL Popularity of Programming Language index, which analyzes how often language tutorials are searched for on Google, has Java as its top-rated language, with a 27.2 percent share in August. C++ was ranked fifth, with an 8.8 percent share. Java's share was up slightly during a 12-month assessment, while C++ was down during that same period.

Apple's new Swift language, which rocketed to prominence in the July Tiobe index, only to slip last month, gained momentum again in the Tiobe index, jumping from 23rd place last month to the 18th spot this month, albeit with a share of only 0.85. "Swift is rising again," Jansen says. "I expect it to go up and down for some time around position 20, then gradually entering the top 10 if Apple's popularity remains at the same level."

Elsewhere in the Tiobe index, Objective-C, Apple's predecessor to Swift, placed third with a 9.94 percent share, while C# finished fifth (4.35 percent). In PyPL's index, finishing second through fourth were PHP (12.8 percent share), Python (10.7 percent), and C# (9.8 percent).

Source: here
  20. Newest revision of the LLVM compiler framework battles with GCC on performance, but it has lots of ground to cover.

The latest version of the LLVM compiler infrastructure, version 3.5, is now available for download as it faces potential competition from the up-and-coming version 5 of GCC (the GNU Compiler Collection). It's also staring down the prospect of an alternate version hardened against errors and memory leaks by way of formal mathematical proofs.

LLVM isn't a compiler for a given language, but rather a framework that can be used to generate object code from any kind of source code. The project can target a broad variety of instruction sets, so it's a powerful way to develop compilers for a given language that span hardware types. Version 3.5's new features mostly target the ARM back end and the way code is emitted for MIPS and AArch64 processor architectures, but some languages have also recently added LLVM support. LDC, for example, uses LLVM to compile the D language.

LLVM's most direct competition comes from GCC, the other major open source compiler infrastructure. Both LLVM and GCC support a broad number of languages and libraries, including C/C++, but also Objective-C and Fortran. But the two are licensed and developed in markedly different ways. While LLVM is licensed more permissively, using a three-clause license in the MIT/BSD mold, the project has gone from a University of Illinois endeavor to one more or less directly sponsored by Apple, which hired LLVM developer Chris Lattner back in 2005. Hence the use of LLVM as part of a new JavaScript engine created by Apple. GCC, on the other hand, is GPL-licensed and has a few more restrictions attached to its reuse, but it's developed by a steering committee where the developers are chosen on a personal basis rather than by company affiliation. That said, the presiding developers come from a broad range of companies, including IBM, Red Hat, Suse, Google, and Cisco.

LLVM and GCC have competed on performance as well. Historically, GCC has been credited with better performance, although the latest version of LLVM aims to close the gap. Phoronix ran a comparison of the pre-release version of GCC 5 against LLVM 3.5 and found the biggest differences were in C/C++ compile times -- with LLVM way out in front -- but noted that LLVM did poorly with some encryption algorithms (due to items that didn't land in LLVM 3.5 in time) and in many other cases lagged behind GCC by either a little or a lot.

LLVM has also recently inspired a project named Vellvm, where the design of the program and its output are both formally verified. The compiler's input and production can then be independently proven as consistent to defend against introduced bugs. The CompCert compiler already does this, but only for C; a formally verified version of LLVM could in theory do this for any language.

The need for this kind of integrity in both languages and compilers is becoming clearer. Mozilla's Rust language, for instance, has been designed from the inside out to allow the creation of both system-level software and higher-level applications (such as browser engines) that are nowhere nearly as vulnerable to bugs or memory leaks. Guess which back end Mozilla uses for the Rust compiler? LLVM.

Article source here
  21. Desktop workloads and server workloads have different needs. Why address them in the same distribution?

For decades, Microsoft has released completely separate operating systems for desktops and servers. They certainly share plenty of code, but you cannot turn a Windows 7 system into a Windows Server 2008 R2 system simply by installing a few packages and uninstalling others. The desktop and the server are completely different, and they are treated as such across the board. Naturally, that hasn't stopped more than a few folks from making the questionable decision to place server workloads on Windows XP systems, but by and large, there's no mistaking the two.

This is not so in Linux. You can take a Linux installation of nearly any distribution and turn it into a server, then back into a workstation, by installing and uninstalling various packages. The OS core remains the same, and the stability and performance will be roughly the same, assuming you tune the system along the way. Those two workloads are very different, however, and as computing power continues to increase, the workloads are diverging even more. Maybe it's time Linux is split in two.

I suggested this possibility last week when discussing systemd (or that FreeBSD could see higher server adoption), but it's more than systemd coming into play here. It's from the bootloader all the way up. The more we see Linux distributions trying to offer chimera-like operating systems that can be a server or a desktop at a whim, the more we tend to see the dilution of both. You can run stock Debian Jessie on your laptop or on a 64-way server. Does it not make sense to concentrate all efforts on one or the other?

If we're homogenizing our distributions by saddling them all with systemd, then there's very little distinction between them other than the package manager and the various file system layouts. Regardless of the big gamble of pursuing desktop Linux as a line of business, would it not make sense for several Linux distributions to focus solely on the desktop while others focus solely on the server? Sure, Ubuntu and others offer "server" and "desktop" versions, or different options at install time, but in reality the only differences are the packages installed. On many distros today, even the kernel is the same; it's been merged.

With the release of the popular gaming framework Steam on Linux, we're starting to see some traction for desktop Linux among folks interested in computer gaming and computers in general. They are at least trying Linux on the desktop more than they might have before, and they are finding some success. However, they're also demanding better performance for desktop-centric workloads, specifically in the graphics department and in singular application processing workloads with limited disk and network I/O, rather than the high-I/O, highly threaded workloads you find with servers. If Linux on the desktop has any real chance of gaining more than this limited share, those demands will need to be met and exceeded on a consistent basis. Add to that the need for all kinds of hardware support, peripheral support, power management, and other mostly desktop considerations, and the desktop and server distros drift even further apart.

Also, I'd wager that there are 10 or 100 times more Linux server systems running on virtual machines than on desktops. That is a completely different scenario that should be accounted for when tailoring a distribution. Can Linux do all of these things? Sure. Can we stop trying to make every Linux distribution capable of supporting all of these use cases out of the box? That's a very real possibility.

There are already desktop-centric distributions like Mint, as well as more server-centric distributions like Gentoo and Debian to some degree (at least before systemd). They aren't full-on in either direction, but they definitely lean one way or the other. I'd be hard-pressed to consider RHEL 7 a truly server-centric distribution, given the use of systemd and the inclusion of desktop packages, but it's not really a desktop system either. It's middle-of-the-road in many regards.

There is enough pushback to systemd to warrant a fork of a major distribution that excises systemd and the GNOME dependencies, while providing a more traditional and stable server platform that has no hint of desktop support. No time need be wasted managing the hundreds upon hundreds of desktop packages present in the distro tree, no need to include massive numbers of desktop peripheral and graphical drivers (RHEL 6.3 ships with 57 xorg drivers, for instance).

There's also the matter of security. The security concerns for a desktop are vastly different than those for a server -- and server security concerns are vastly different among servers, depending on what each server is doing. However, it's safe to say that protecting against malware delivered by clicking through a malicious Web page is not high on the list of possible threats for a Memcached server.

I can clearly see the desire to improve the desktop Linux experience in terms of peripheral hardware support, graphics performance, sound, boot times, and ease of maintenance and management. Those are desktop concerns in a desktop distribution, and if ripping and replacing the plumbing helps to achieve the goals, then there may be some merit. However, there's no reason those same concerns should result in a rip and replace of the plumbing on server-class systems. It's shortsighted and dangerous.

Dedicated and tuned server distributions are a good idea anyway, systemd or not, but if that's the catalyst for the creation of a major, mainstream server-only Linux distribution that remains based on the Unix philosophies that have served us amazingly well over the past 45 years, then maybe all of this heated debate about systemd is not wasted after all.

Article source here
  22. nedo

    Wildchild

    Happy birthday, I hope you get everything you wish for.
  23. @FreddieTux for evobook he needs to place an order for a (paid) book before he can download the free ones; I've already tested it.
  24. They don't exist in a single book. You'll have to look for each one separately. Here is the list of the texts/titles that belong to this "category". Here is the first dialogue, Apararea lui Socrate (the Apology of Socrates), which appears to be complete. You can search for them by the name of the work followed by "pdf"; that's how I found this one in about 5 minutes.
  25. Here you should find pretty much all of his works, ~3500 pages. From what I've seen, the dialogues are included as well. Happy reading.