Everything posted by Usr6
-
Computer programs using special algorithms could be better suited than analysts to predict where and when the next crime will take place in a busy city, according to a recent study. Software tested in Los Angeles proved to be twice as good as human analysts at predicting the locations where crimes were most likely to occur, says the report produced by PredPol, the company that makes the system. Moreover, when police patrols were sent to the areas chosen by the software, the crime rate dropped by 25 percent. The software is based on anthropological research carried out by scientists at Santa Clara University and the University of California, Los Angeles. These studies build on earlier crime reports and include the time and place where each offense was committed. The software also takes in sociological information that explains criminal behavior. The system creates special maps and highlights the places that should be monitored as a priority. New crimes are entered into the system's database, so it can generate new maps for each patrol. To demonstrate that the system brings a major advantage, the company first tested how often crimes occurred in the areas identified by the software, compared with the areas predicted by analysts. Between November 2011 and April 2012, the system was six times better than the human analysts at working out the areas where crimes were being committed. The algorithm also cuts down on bureaucratic procedures and gives officers more time to patrol the streets, because, thanks to it, the police no longer have to spend hours planning patrol areas. Source: http://www.descopera.ro/dnews/9819203-ca-in-minority-report-computerul-prezice-unde-vor-avea-loc-delictele Additional details (eng): http://www.technologyreview.com/news/428354/la-cops-embrace-crime-predicting-algorithm/
-
@begood, I proposed removing some restrictions, not adding new ones. Right now, if an ordinary person lands on RST from a Google search, they are forced to create an account just to see a link, a picture and so on. If that person is genuinely interested in IT/security, they will register of their own free will. Advantages: for you, none; for me, the same; the ones who would gain are the occasional visitors, who would get to see the forum as it really is, with all its links and pictures, without having to register. ------------------------- @kabron the future sounds good, even if we may not live to see those times
-
The figures: Romanian Security Team Statistics Threads: 51,113 Posts: 348,265 Members: 81,162. My take: out of those ~80,000 members, around 60,000 probably have 0 posts (if an admin has time, exact figures from the DB could be posted). Of those 60,000, let's say 5,000-10,000 are readers who drop in now and then to see what's new; the rest, more than 50,000 users, have single-use accounts, people who arrived here from Google and created an account just to view one or two links. I propose lifting the restriction that only registered users can see links; that way, only those who genuinely want to be part of the forum will register.
-
McAfee and Guardian Analytics have uncovered a new and elaborate global banking fraud campaign, dubbed "Operation High Roller". McAfee already offers protection against these attacks. Although the campaign was built on the well-known SpyEye and Zeus malware families, it stands out through new attack techniques and methods, with cyber-criminals increasingly using automated techniques for banking fraud. Researchers at McAfee and Guardian Analytics believe the "High Roller" attackers attempted to fraudulently transfer approximately 104,850 euros and that the attempts are still going on today. More than 60 servers were identified that launched attacks in which the theft could have reached sums between 60 million and 2 billion euros. Customers of McAfee solutions are protected: protection against the malware families and variants involved in Operation High Roller is already included in the McAfee DAT files, and all the servers involved have been flagged as such in the McAfee Global Threat Intelligence databases. For customers using McAfee web security solutions, all websites hosting the malicious code are classified as "Red" or "Malicious". Technology advances, and with it the techniques hackers use to steal faster and more easily, without the user even noticing the anomaly. New trends stand out in the execution of these hacking / money-theft campaigns:
- a change in attack method through process automation; an automated attack means no human presence, no delays, no errors and therefore a much higher success rate for criminal actions.
- attacks based on the code discovered in the Italian attack (the first one) adapted themselves to the characteristics of the environment in each bank under attack.
- the illegal transactions and the traces of system penetration were masked better than ever.
- Operation High Roller is the first to feature malware that "works" both the physical smartcard/reader and the PIN.
Financial institutions, individuals and corporations alike have a duty, for optimal security, to re-examine their controls and their ownership of the task of securing assets. The success of Operation High Roller is one more wake-up call in that regard. Plenty of bank branches fall short when it comes to detecting illegal software and anomalies, and are therefore fully exposed to this type of attack. Companies need to increase both security controls and the effort to educate privileged users about social engineering and phishing techniques. Home computer users, while not the primary target, would do well to improve the security of the devices they use and to watch for anomalies that may appear during an online banking transaction. The full report by the McAfee and Guardian Analytics specialists: http://www.mcafee.com/us/resources/reports/rp-operation-high-roller.pdf Source: OPERATION HIGH ROLLER – Cel mai recent atac global | Devirusare.com
-
Introducing Windows Server 2012 is 256 pages and includes 5 chapters loaded with insider information from the Windows Server Team.
Table of Contents
Chapter 1: The business need for Windows Server 2012
- The rationale behind cloud computing
- Making the transition
- Technical requirements for successful cloud computing
- Four ways Windows Server 2012 delivers value for cloud computing
Chapter 2: Foundation for building your private cloud
- A complete virtualization platform
- Increase scalability and performance
- Business continuity for virtualized workloads
Chapter 3: Highly available, easy-to-manage multi-server platform
- Continuous availability
- Cost efficiency
- Management efficiency
Chapter 4: Deploy web applications on premises and in the cloud
- Scalable and elastic web platform
- Support for open standards
Chapter 5: Enabling the modern workstyle
- Access virtually anywhere, from any device
- Full Windows experience
- Enhanced security and compliance
PDF: Introducing Windows Server 2012 PDF ebook
EPUB: Introducing Windows Server 2012 EPUB ebook
MOBI: Introducing Windows Server 2012 MOBI ebook
Source
-
Pay a visit here, here and here; there you will find all the topics you proposed, plus hundreds of other topics you have not thought of yet. The problem is that people do not like to read, not that the documentation is missing. You cannot pour information down anyone's throat through a funnel.
-
Expert Penetration Testing Lab Manual, 404 pages http://attrition.org/errata/plagiarism/infosec_institute/labbook.pdf - exploits: said to have been plagiarized from the tutorials published by the Corelan Team. Corelan tutorials: 1-10, 11
-
The United States and Israel jointly developed a sophisticated computer virus nicknamed Flame that collected intelligence in preparation for cyber-sabotage aimed at slowing Iran’s ability to develop a nuclear weapon, according to Western officials with knowledge of the effort. The massive piece of malware secretly mapped and monitored Iran’s computer networks, sending back a steady stream of intelligence to prepare for a cyberwarfare campaign, according to the officials. The effort, involving the National Security Agency, the CIA and Israel’s military, has included the use of destructive software such as the Stuxnet virus to cause malfunctions in Iran’s nuclear-enrichment equipment. The emerging details about Flame provide new clues to what is thought to be the first sustained campaign of cyber-sabotage against an adversary of the United States. “This is about preparing the battlefield for another type of covert action,” said one former high-ranking U.S. intelligence official, who added that Flame and Stuxnet were elements of a broader assault that continues today. “Cyber-collection against the Iranian program is way further down the road than this.” Flame came to light last month after Iran detected a series of cyberattacks on its oil industry. The disruption was directed by Israel in a unilateral operation that apparently caught its American partners off guard, according to several U.S. and Western officials who spoke on the condition of anonymity. There has been speculation that Washington had a role in developing Flame, but the collaboration on the virus between the United States and Israel has not been previously confirmed. Commercial security researchers reported last week that Flame contained some of the same code as Stuxnet. Experts described the overlap as DNA-like evidence that the two sets of malware were parallel projects run by the same entity. Spokesmen for the CIA, the NSA and the Office of the Director of National Intelligence, as well as the Israeli Embassy in Washington, declined to comment. The virus is among the most sophisticated and subversive pieces of malware to be exposed to date. Experts said the program was designed to replicate across even highly secure networks, then control everyday computer functions to send secrets back to its creators. The code could activate computer microphones and cameras, log keyboard strokes, take screen shots, extract geolocation data from images, and send and receive commands and data through Bluetooth wireless technology. Flame was designed to do all this while masquerading as a routine Microsoft software update; it evaded detection for several years by using a sophisticated program to crack an encryption algorithm. “This is not something that most security researchers have the skills or resources to do,” said Tom Parker, chief technology officer for FusionX, a security firm that specializes in simulating state-sponsored cyberattacks. He said he does not know who was behind the virus. “You’d expect that of only the most advanced cryptomathematicians, such as those working at NSA.” Flame was developed at least five years ago as part of a classified effort code-named Olympic Games, according to officials familiar with U.S. cyber-operations and experts who have scrutinized its code. The U.S.-Israeli collaboration was intended to slow Iran’s nuclear program, reduce the pressure for a conventional military attack and extend the timetable for diplomacy and sanctions. 
The cyberattacks augmented conventional sabotage efforts by both countries, including inserting flawed centrifuge parts and other components into Iran’s nuclear supply chain. The best-known cyberweapon let loose on Iran was Stuxnet, a name coined by researchers in the antivirus industry who discovered it two years ago. It infected a specific type of industrial controller at Iran’s uranium-enrichment plant in Natanz, causing almost 1,000 centrifuges to spin out of control. The damage occurred gradually, over months, and Iranian officials initially thought it was the result of incompetence. The scale of the espionage and sabotage effort “is proportionate to the problem that’s trying to be resolved,” the former intelligence official said, referring to the Iranian nuclear program. Although Stuxnet and Flame infections can be countered, “it doesn’t mean that other tools aren’t in play or performing effectively,” he said. To develop these tools, the United States relies on two of its elite spy agencies. The NSA, known mainly for its electronic eavesdropping and code-breaking capabilities, has extensive expertise in developing malicious code that can be aimed at U.S. adversaries, including Iran. The CIA lacks the NSA’s sophistication in building malware but is deeply involved in the cyber-campaign. The CIA’s Information Operations Center is second only to the agency’s Counterterrorism Center in size. The IOC, as it is known, performs an array of espionage functions, including extracting data from laptops seized in counterterrorism raids. But the center specializes in computer penetrations that require closer contact with the target, such as using spies or unwitting contractors to spread a contagion via a thumb drive. Both agencies analyze the intelligence obtained through malware such as Flame and have continued to develop new weapons even as recent attacks have been exposed. Flame’s discovery shows the importance of mapping networks and collecting intelligence on targets as the prelude to an attack, especially in closed computer networks. Officials say gaining and keeping access to a network is 99 percent of the challenge. “It is far more difficult to penetrate a network, learn about it, reside on it forever and extract information from it without being detected than it is to go in and stomp around inside the network causing damage,” said Michael V. Hayden, a former NSA director and CIA director who left office in 2009. He declined to discuss any operations he was involved with during his time in government. Years in the making The effort to delay Iran’s nuclear program using cyber-techniques began in the mid-2000s, during President George W. Bush’s second term. At that point it consisted mainly of gathering intelligence to identify potential targets and create tools to disrupt them. In 2008, the program went operational and shifted from military to CIA control, former officials said. Despite their collaboration on developing the malicious code, the United States and Israel have not always coordinated their attacks. Israel’s April assaults on Iran’s Oil Ministry and oil-export facilities caused only minor disruptions. The episode led Iran to investigate and ultimately discover Flame. “The virus penetrated some fields — one of them was the oil sector,” Gholam Reza Jalali, an Iranian military cyber official, told Iranian state radio in May. “Fortunately, we detected and controlled this single incident.” Some U.S.
intelligence officials were dismayed that Israel’s unilateral incursion led to the discovery of the virus, prompting countermeasures. The disruptions led Iran to ask a Russian security firm and a Hungarian cyber-lab for help, according to U.S. and international officials familiar with the incident. Last week, researchers with Kaspersky Lab, the Russian security firm, reported their conclusion that Flame — a name they came up with — was created by the same group or groups that built Stuxnet. Kaspersky declined to comment on whether it was approached by Iran. “We are now 100 percent sure that the Stuxnet and Flame groups worked together,” said Roel Schouwenberg, a Boston-based senior researcher with Kaspersky Lab. The firm also determined that the Flame malware predates Stuxnet. “It looks like the Flame platform was used as a kickstarter of sorts to get the Stuxnet project going,” Schouwenberg said. Sursa: U.S., Israel developed Flame computer virus to slow Iranian nuclear efforts, officials say - The Washington Post
-
Anyone who has ever managed a firewall will know that all too often it’s a one-way street. From the moment the device is plugged into the network, rules are added to allow traffic to flow between the required hosts and ports. We start out with the best of intentions, deploying the absolute minimum number of rules needed to keep things as tight as possible. All is well in the world of firewall management, but then we are hit with the first trouble ticket. “Please allow my machine to contact server X on port 8081, I need this to test a new piece of software.” No problem you say, you grab the source IP and allow it to talk to server X on 8081. This isn’t a big deal, our rule base is still pretty tight, and one addition isn’t going to cause us any compliance problems. Then the next ticket comes in. “My IM client isn’t working anymore, it runs on TCP port 5150, please allow outbound access on this port.” A quick examination of the firewall logs reveals that it’s indeed blocking port 5150 outbound, so you shove a rule in to allow it. You decide that more than one person will likely be using this client, and from the logs you see it requires access to a variety of servers on the outside. Begrudgingly, you add the rule that allows port 5150 outbound, from any source to any destination. Suddenly, your rule base is starting to take on an appearance akin to that of a piece of Swiss cheese. This trend continues over time and is mirrored across multiple firewalls in the organization. The rule base grows and gets more and more complex. Different administrators add different rules to address different issues. Rules overlap and cancel each other out, which in turn causes the performance of the firewall to degrade. Meanwhile, on the inside of the network, servers are decommissioned and their IP addresses are recycled. Unless someone thinks to tell the firewall admin, an old rule stays in place without being removed or amended. This is a huge problem, and unfortunately, a common one. Firewall administrators are always the first to hear about it when a rule is preventing something from working, but when that rule is no longer required – radio silence. This isn’t just untidy, it’s insecure. I can recall many a pen test where I’ve stumbled across a machine that is unwittingly exposed to the outside world thanks to sloppy firewall rules. The solution to this problem: performing regular firewall rule audits. If you have experience with PCI DSS, you’ll know that performing a firewall rule audit at least every six months is a requirement of the standard (requirement 1.1.6 to be precise). So what should you be looking for during an audit, and how do you go about looking for it? Of course a lot of the answer to that question depends on the number, complexity and type of firewalls you are auditing. It also depends on the budget you have available for tools to help speed up the process! What are we looking for? You should never go shopping without a shopping list; otherwise you’ll end up wandering round for hours comparing tins of beans. Trust me, it happens. The same is true of auditing firewalls. We need to have a clear goal of what we should be interested in. Rules that specify “any” source or destination. How much of an issue these rules are depends upon the position of the firewall. There is usually a perfectly acceptable reason for rules with “any” source or destination to be in an edge firewall. Allowing hosts to browse the internet, send or receive email and perform DNS lookups are just a few of those reasons.
If however the firewall is internal, segregating different networks, perhaps in a cardholder data environment – an “any” source or destination rule is not something I’d expect to see. More importantly it’s something a PCI auditor would question. The reasoning behind that should be quite clear. If you implement a firewall to separate your networks, and then put a rule in that undoes all that effort, you must really like making work for yourself!

access-list 101 line 1 extended permit tcp any host 192.168.0.1 eq 3389 (hitcnt=1420) 0xadb3bee1

This rule allows any source to contact 192.168.0.1 on TCP port 3389, commonly used for remote desktop connections. Some will try to justify an “any” source or destination rule by stating that “the any destination is there because that server needs to talk to all networks in the environment, so rather than specifying them all it’s quicker just to put any, it has the same effect after all.” Unfortunately, it’s this kind of thinking that leads to mistakes. First of all, there may be a network that the firewall admin isn’t aware of that doesn’t strictly need access to or from the host specified in the rule. Secondly, while it may have the desired effect at the time the rule is entered, a few months later, as new components are added to the network, the context of the “any” rule may change, and allow more open access than was originally intended. Rules that allow “any” service between two hosts. These are a definite red flag, regardless of the position of the firewall. Rules should adhere to the principle of least privilege. Of course, this means only giving an entity the minimum amount of access required to do its job properly. Why would a server in the DMZ require FTP access to an internal database server? This is all well and good for applications that use well known or registered TCP ports (FTP, SSH, HTTP etc.), but what do we do with those that use dynamic port ranges? This is the most common reason people shove an “any” service rule into a firewall.

access-list 101 line 2 extended permit ip host 192.168.0.1 host 10.0.0.1 (hitcnt=127) 0xfd904f62

In this example, the host 192.168.0.1 is allowed to contact the host 10.0.0.1 on any port. If you can’t pin down the rule to a specific port, you should try and do the next best thing – restrict access to a range of ports used by the application. This could be 10,000 ports, but still, that is better than the full 65,535. Remember, the aim here is to reduce the attack surface as best we can. Rules that have no effect, either because they overlap or are cancelled out by a prior rule. Generally speaking, firewalls handle rules sequentially. They start at the top of the list and work their way down to the very end. Therefore, the order in which those rules are entered into the firewall is extremely important, and often an area of oversight. Let’s take a look at an example of a case where a rule has been put in the wrong place, and as a result is doing little more than taking up disk space. The intention in this example is to deny internal users access to the cardholder data environment, while still allowing system admins remote desktop access. Unfortunately, because the rules have been entered in the wrong order, the second rule will have zero effect. The traffic will have already been dropped by the time the firewall gets around to processing it, and the poor system admins will be denied access.
This example leads to inconvenience (for the system admins); however there are plenty of others out there that led to much more serious security problems. Rules that do not have a defined business purpose, or are unused (zero hit count). If you’ve ever seen a motor racing team bring their car in for a pit stop, you will have likely marveled at how quickly they can refuel and change tires. This is possible because everyone in the pit crew has a specific job to do, and they do it. You don’t see just anyone hanging around the car for the sake of it, because they’d likely get in the way and slow things down. Firewalls should be like a pit crew. Every single rule needs to be doing something to justify being there in the first place. If it’s unclear what a rule is doing, it should be investigated. If after an investigation it’s still unclear what it’s doing, the issue should be raised with senior management, and treated as an incident. Nine times out of ten it’ll turn out that the mysterious rule is a harmless hangover from a legacy system or configuration, but it could also be harboring something more sinister. One way of finding out if a rule is actually doing something is to take a look at the hit count. The hit count is the number of times a packet transiting the firewall has matched a particular rule. How you determine the hit count depends on the platform you are working with.

labfw01# show access-list
access-list 101; 2 elements
access-list 101 line 1 extended permit tcp any host 192.168.0.1 eq 3389 (hitcnt=12304) 0xadb3bee1
access-list 101 line 2 extended permit ip host 192.168.0.1 host 10.0.0.1 (hitcnt=7543) 0xfd904f62

The “show access-list” command on this Cisco PIX displays the access lists in use on the firewall and the hit count for each rule. Rules that aren’t “commented”. Something that makes a firewall audit around a million times easier (especially if you are auditing a client’s firewalls rather than your own), is having comments entered with each rule explaining in plain English exactly what it’s doing. Or at the very least, what it’s supposed to be doing.

labfw01# access-list 102 remark – Permit HTTP access to web server
labfw01# access-list 102 permit tcp any host 10.5.4.1 eq 80

Adding a remark (comment) to an ACL. Lack of an explicit cleanup rule. Not as mission critical as it once was, thanks to the majority of modern day firewalls now implementing a “deny by default” ethos, but still something to be on the lookout for. The lack of a cleanup or “deny everything else” rule at the end of an access control list could allow any traffic not matched by a rule in the firewall to pass unhindered. This would be a very bad thing, as it would allow more access to a host than intended. Even if your firewall does drop unmatched traffic by default, it will most likely not be recording this in a log file. This could hinder troubleshooting and investigation efforts, and is another valid reason for having an explicit cleanup rule in place.

labfw01# access-list 102 deny ip any any log

A traditional clean up rule – notice the addition of “log”, to ensure that all dropped packets are recorded. So now that we know what we are looking for, we should probably start looking. This is a fairly straightforward task if we have only a couple of firewalls to audit, with a few dozen rules in each. If this is the case, we can just ask for a copy of the rules or running configuration, and study them by hand – without getting too much of a headache.
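Even a by-hand review can be sped up with a few lines of script. The sketch below is a minimal, illustrative example only (not a substitute for the tools discussed next); it assumes Cisco PIX/ASA-style "show access-list" output like the samples above and flags two of the red flags described in this article: rules containing "any" and rules with a zero hit count.

```python
import re
import sys

# Matches lines such as:
# access-list 101 line 1 extended permit tcp any host 192.168.0.1 eq 3389 (hitcnt=1420) 0xadb3bee1
RULE_RE = re.compile(r"^(access-list\s+\S+\s+line\s+\d+.*?)\s+\(hitcnt=(\d+)\)", re.IGNORECASE)

def audit(lines):
    """Flag 'any' rules and unused (zero hit count) rules in ACL output."""
    findings = []
    for line in lines:
        m = RULE_RE.search(line.strip())
        if not m:
            continue  # skip headers and anything that is not a rule line
        rule, hits = m.group(1), int(m.group(2))
        if re.search(r"\bany\b", rule):
            findings.append(("'any' source/destination/service", rule))
        if hits == 0:
            findings.append(("zero hit count - no defined purpose?", rule))
    return findings

if __name__ == "__main__":
    # Usage: python acl_audit.py < show_access_list.txt
    for reason, rule in audit(sys.stdin.readlines()):
        print(f"[{reason}] {rule}")
```

Anything the script flags still needs a human decision; it only shortens the hunt.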
But life is never normally that simple, and frequently audits require the review of many firewalls with long and complex rule bases. If this is the case, you may need to call on tools to help automate the audit process. There are many tools and utilities out there that can help with this type of work; I’ll cover just a couple of them to give you an idea of what’s out there, but this is by no means an exhaustive list.

CPDB2HTML CPDB2HTML (Check Point Database to HTML) is a free tool produced by Check Point Software. Known as the “Web Visualization Tool”, essentially it dumps the contents of a Check Point Security Gateway’s security policy to an HTML or XML file. This may sound elementary, and it is, but it’s an extremely useful tool for reviewing a Check Point firewall configuration. CPDB2HTML can be obtained directly from Check Point.

Nipper The name comes from “network infrastructure parser”, which should give you an idea of how it works. You feed it a copy of the running configuration from the device you are auditing (and it supports a great deal of devices including Cisco, Juniper, F5, Dell, Brocade and Checkpoint), wait about half a second and watch as it produces an extremely detailed report. In addition to studying the rules in an access control list, Nipper also takes a look at the general configuration of the target device, to make sure it is up to scratch. The pricing of Nipper makes it an attractive option for consultants. A free trial is available from https://www.titania-security.com/.

Firemon Security Manager Firemon Security Manager is a full-blown suite for keeping tabs on exactly what your firewalls are up to at any given moment. It allows you to map out traffic flows, track changes to device configuration, and suggests ways to simplify a rule base. One of the very best features of Firemon is its ability to crack open a rule that allows “any” service to pass and see exactly which services are being used. This gives the firewall admin a great opportunity to remove the offending “any” statement, and tighten things up a touch. This is certainly a tool that is more likely to be owned and operated by the client than an auditor or consultant, but the data it produces provides value on a daily basis, not just at audit time.

Conclusion So there you have it, a quick guide to performing firewall audits, and just a couple of the tools that can help you along the way. It’s important to remember that sometimes the mere presence of a firewall can create a false sense of security. The reality is that unless they are properly maintained and reviewed, firewalls can become nothing more than an extra hop along the way. Take care of your firewalls and they will take care of you! Source
-
After its human resources, information is an organization’s most important asset. As we have seen in previous chapters, security and risk management is data centric. All efforts to protect systems and networks attempt to achieve three outcomes: data availability, integrity, and confidentiality. And as we have also seen, no infrastructure security controls are 100% effective. In a layered security model, it is often necessary to implement one final prevention control wrapped around sensitive information: encryption. Encryption is not a security panacea. It will not solve all your data-centric security issues. Rather, it is simply one control among many. In this chapter, we look at encryption’s history, its challenges, and its role in security architecture. Cryptography Cryptography is a science that applies complex mathematics and logic to design strong encryption methods. Achieving strong encryption, the hiding of data’s meaning, also requires intuitive leaps that allow creative application of known or new methods. So cryptography is also an art. Early Cryptography The driving force behind hiding the meaning of information was war. Sun Tzu wrote, Of all those in the army close to the commander none is more intimate than the secret agent; of all rewards none more liberal than those given to secret agents; of all matters none is more confidential than those relating to secret operations. Secret agents, field commanders, and other human elements of war required information. Keeping the information they shared from the enemy helped ensure advantages of maneuver, timing, and surprise. The only sure way to keep information secret was to hide its meaning. Early cryptographers used three methods to encrypt information: substitution, transposition, and codes. Monoalphabetic Substitution Ciphers One of the earliest encryption methods is the shift cipher. A cipher is a method, or algorithm, that converts plaintext to ciphertext. Caesar’s shift cipher is known as a monoalphabetic substitution shift cipher. See Figure 7-1. The name of this cipher is intimidating, but it is simple to understand. Monoalphabetic means it uses one cipher alphabet. Each character in the cipher alphabet—traditionally depicted in uppercase—is substituted for one character in the plaintext message. Plaintext is traditionally written in lowercase. It is a shift cipher because we shift the start of the cipher alphabet some number of letters (four in our example) into the plaintext alphabet. This type of cipher is simple to use and simple to break. In Figure 7-1, we begin by writing our plaintext message without spaces. Including spaces is allowed, but helps with cryptanalysis (cipherbreaking) as shown later. We then substitute each character in the plaintext with its corresponding character in the ciphertext. Our ciphertext is highlighted at the bottom. Breaking monoalphabetic substitution ciphers Looking at the ciphertext, one of the problems with monoalphabetic ciphers is apparent: patterns. Note the repetition of “O” and “X.” Each letter in a language has specific behavior, or socialization, characteristics. One of them is whether it is used as a double consonant or vowel. According to Mayzner and Tresselt (1965), the following is a list of the common doubled letters in English. “LL EE SS OO TT FF RR NN PP CC” In addition to doubling, certain letter pairs commonly appear in English text: “TH HE AN RE ER IN ON AT ND ST ES EN OF TE ED OR TI HI AS TO” Finally, each letter appears in moderate to long text with relative frequency. 
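Both the shift cipher and the letter counting that defeats it fit in a few lines of code. The sketch below is purely illustrative: the four-position shift mirrors the Figure 7-1 example described above, while the sample message and helper names are made up for this sketch.

```python
from collections import Counter
import string

def shift_encrypt(plaintext, shift=4):
    """Monoalphabetic shift cipher: each lowercase letter moves 'shift' places,
    and ciphertext is written in uppercase as in the chapter's convention."""
    alphabet = string.ascii_lowercase
    cipher_alphabet = (alphabet[shift:] + alphabet[:shift]).upper()
    table = str.maketrans(alphabet, cipher_alphabet)
    return plaintext.replace(" ", "").translate(table)

def letter_frequencies(text):
    """Count ciphertext letters - the starting point of frequency analysis."""
    letters = [c for c in text.upper() if c.isalpha()]
    return Counter(letters).most_common()

ciphertext = shift_encrypt("send more troops to the eastern hill")
print(ciphertext)
print(letter_frequencies(ciphertext))
```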
According to Zim (1962), the following letters appear with diminishing frequency. For example, “e” is the most common letter in English text, followed by “t,” etc. “ETAON RISHD LFCMU GYPWB VKXJQ Z” Use of letter frequencies to break monoalphabetic ciphers was first documented by Abu Yusuf Ya’qub ibnis-haq ibn as-Sabbath ibn ‘om-ran ibn Ismail al-Kindi in the ninth century CE (Singh, 1999). al-Kindi did what cryptanalysts (people who try to break the work of cryptographers) had been trying to do for centuries: develop an easy way to break monoalphabetic substitution ciphers. Once the secret spread, simple substitution ciphers were no longer safe. The steps are:
1. If you know the language of the plaintext hidden by the ciphertext, obtain a page-length sample of any text written in that language.
2. Count the occurrence of all letters in the sample text and record the results in a table.
3. Count the occurrence of all cipher alphabet characters in the ciphertext.
4. Start with the most frequently occurring letter in the plaintext and substitute it for the most common character in the ciphertext. Do this for the second most common character, the third, etc.
5. Eventually, this frequency analysis begins to reveal patterns and possible words.
Remember that the letters occur with relative frequency. So this is not perfect. Letter frequency, for example, differs between writers and subjects. Consequently, using a general letter frequency chart provides various results depending on writing style and content. However, by combining letter socialization characteristics with frequency analysis, we can work through inconsistency hurdles and arrive at the hidden plaintext. Summarizing, monoalphabetic substitution ciphers are susceptible to frequency and pattern analysis. This is one of the key takeaways from this chapter; a bad cipher tries to hide plaintext by creating ciphertext containing recognizable patterns or regularly repeating character combinations. Polyalphabetic Substitution Ciphers Once al-Kindi broke monoalphabetic ciphers, cryptographers went to work trying to find a stronger cipher. Finally, in the 16th century, a French diplomat developed a cipher that would stand for many decades (Singh, 1999). Combining the work and ideas of Johannes Trithemius, Giovanni Porta, and Leon Battista Alberti, Blaise de Vigenère created the Vigenère cipher. Vigenère’s cipher is based on a Vigenère table, as shown in Figure 7-2. The table consists of 27 rows. The first row of lower case letters represents the plaintext characters. Each subsequent row represents a cipher alphabet. For each alphabet, the first character is shifted one position farther than the previous row. In the first column, each row is labeled with a letter of the alphabet. In some tables, the letters are replaced with numbers representing the corresponding letter’s position in the standard alphabet. For example, “A” is replaced with “1,” “C” with “3,” etc. A key is required to begin the cipher process. For our example, the key is FRINGE. The message we wish to encrypt is “get each soldier a meal.” Write the message with all spaces removed. Write the key above the message so that each letter of the key corresponds to one letter in the message, as shown below. Repeat the key as many times as necessary to cover the entire message. Identify the rows in the table corresponding to the letters in the key, as shown in Figure 7-3. Since our key is FRINGE, we select the rows designated in the first column as F, R, I, N, G, and E.
Each of these rows represents a cipher alphabet we use to encrypt our message. Replace each letter in the message with its corresponding ciphertext character. For example, the first letter in our message is “g.” It corresponds to the letter “F” in the key. To find its cipher character, we find the F row in the table and follow it to the column headed by “g” in the first row. The cipher letter at the intersection is “M.” Following these steps to locate the appropriate cipher characters, our final encrypted message is: MWCSHHNKXZKNKJJALFR Our encrypted message used six cipher alphabets based on our key. Anyone with the key and the layout of the table can decrypt the message. However, messages encrypted using the Vigenère cipher are not vulnerable to frequency analysis. Our message, for example, contains four e’s as shown in red below. A different cipher character represents each instance of an “e.” It is not possible to determine the relative frequency of any single letter. However, it is still vulnerable to attack. MWCSHHNKXZKNKJJALFR Breaking the Vigenère cipher Although slow to gain acceptance, the Vigenère cipher was a very strong and seemingly unbreakable encryption method until the 19th century. Charles Babbage and Friedrich Wilhelm Kasiski demonstrated in the mid and late 1800s respectively that even polyalphabetic ciphers provide trails for cryptanalysts. Although frequency analysis did not work, encrypted messages contained patterns that matched plaintext language behaviors. Once again, a strong cipher fell because it could not distance itself from the characteristics of the plaintext language. Transposition Ciphers Other attempts to hide the meaning of messages included rearranging letters to obfuscate the plaintext: transposition. The rail fence transposition is a simple example of this technique. See Figure 7-4. The plaintext, “giveeachsoldierameal,” is written with every other letter on a second line. To create the ciphertext, the letters on the first line are written first and then the letters on the second. The resulting cipher text is GVECSLIRMAIEAHODEAEL. The ciphertext retains much of the characteristic spelling and letter socialization of the plaintext and its corresponding language. Using more rows helped, but complexity increased beyond that which was reasonable and appropriate. Codebooks In addition to transposition ciphers, codes were also common prior to use of contemporary cryptography. A code replaces a word or phrase with a character. Figure 7-5 is a sample code. Using codes like our example was a good way to obfuscate meaning if the messages are small and the codebooks were safe. However, using a codebook to allow safe communication of long or complex messages between multiple locations was difficult. The first challenge was creating the codes for appropriate words and phrases. Codebooks had to be large, and the effort to create them was significant: like writing an English/French dictionary. After distribution, there was the chance of codebook capture, loss, or theft. Once compromised, the codebook was no longer useful, and a new one had to be created. Finally, coding and decoding lengthy messages took time, time not available in many situations in which they were used. Codes were also broken because of characteristics inherent in the plaintext language. For example, “and,” “the,” “I,” “a,” and other frequently occurring words or letters could eventually be identified. This provided the cryptanalysts with a finger hold from which to begin breaking a code. 
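Returning to the Vigenère walkthrough above, the whole table lookup reduces to modular arithmetic. The sketch below is a minimal illustration; the shift convention (the row labeled A shifts by one, matching the 27-row table described earlier, rather than the more common A = 0 textbook convention) is what reproduces the MWCSHHNKXZKNKJJALFR result.

```python
def vigenere_encrypt(plaintext, key):
    """Vigenere cipher following the table described above:
    the row labeled 'A' is shifted one position, 'B' two, and so on."""
    plaintext = plaintext.replace(" ", "").lower()
    out = []
    for i, p in enumerate(plaintext):
        k = key[i % len(key)].upper()
        shift = ord(k) - ord("A") + 1          # A=1, B=2, ... as in the table
        c = (ord(p) - ord("a") + shift) % 26
        out.append(chr(ord("A") + c))
    return "".join(out)

print(vigenere_encrypt("get each soldier a meal", "FRINGE"))
# MWCSHHNKXZKNKJJALFR
```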
Nomenclators To minimize the effort involved in creating and toting codebooks, cryptographers in the 16th century often relied on nomenclators. A nomenclator combines a substitution cipher with a small code set, as in the famous one shown in Figure 7-6. Mary Queen of Scots and her cohorts used this nomenclator during a plot against Queen Elizabeth I (Singh, 1999). Thomas Phelippes (cipher secretary to Sir Francis Walsingham, principal secretary to Elizabeth I) used frequency analysis to break it. Phelippes’ success cost Queen Mary her royal head. Contemporary Cryptography Between the breaking of the Vigenère cipher and the 1970s, many nations and their militaries attempted to find the unbreakable cipher. Even Enigma fell to the technology-supported insights of Marian Rejewski and Alan Turing. (If you are interested in a good history of cryptography, including transposition ciphers and codes, see “The Code Book” by Simon Singh.) Based on what we learn from the history of cryptography, a good cipher …makes it impossible to find the plaintext m from ciphertext c without knowing the key. Actually, a good encryption function should provide even more privacy than that. An attacker shouldn’t be able to learn any information about m, except possibly its length at the time it was sent (Ferguson, Schneier, & Kohno, 2010, p. 24). Achieving this ideal requires that any change to the plaintext, no matter how small, must produce a drastic change in the ciphertext, such that no relationship between the plaintext and the resulting ciphertext is evident. The change must start at the beginning of the encryption process and diffuse throughout all intermediate permutations until reaching the final ciphertext. Attempting to do this before the late 20th century, and maintain some level of business productivity, was not reasonable. Powerful electronic computers were the stuff of science fiction. Today, we live in a different world. Block Cipher Modes The standard cipher in use today is the Advanced Encryption Standard (AES). It is a block cipher mode that ostensibly meets our definition of an ideal cipher. However, it has already been broken… on paper. AES is a symmetric cipher, meaning that it uses a single key for encryption and decryption. Cryptanalysts have theoretically broken it, but we need better computers to test the discovered weaknesses. It will be some time before private industries have to worry about changing their encryption processes. A block cipher mode “…features the use of a symmetric key block cipher algorithm…” (NIST, 2010). Figure 7-7 depicts a simple block cipher. The plaintext is broken into blocks. In today’s ciphers, the block size is typically 128 bits. Using a key, each block passes through the block algorithm resulting in the final ciphertext. One of the problems with this approach is lack of diffusion. The same plaintext with the same key produces the same ciphertext. Further, a change in the plaintext results in a corresponding and identifiable change in the ciphertext. Figure 7-7: Simple Block Cipher (“Electronic codebook,” 2012) Because of the weaknesses in simple block algorithms, cryptographers add steps to strong ciphers. Cipher block chaining (CBC), for example, adds diffusion by using ciphertext, an initialization vector, and a key. Figure 7-8 graphically depicts the encipher process (⊕ = XOR). The initialization vector (IV) is a randomly generated and continuously changing set of bits the same size as the plaintext block. The resulting ciphertext changes as the IV changes.
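The lack of diffusion in a simple block cipher, and the way the IV restores it in CBC, can be seen directly in a few lines of code. The sketch below is illustrative only and assumes the third-party Python cryptography package is available; the chapter itself does not prescribe any particular library.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(16)                     # 128-bit AES key
block = b"ATTACK AT DAWN!!"              # exactly one 16-byte block

def encrypt(mode, data):
    enc = Cipher(algorithms.AES(key), mode).encryptor()
    return enc.update(data) + enc.finalize()

# ECB-style: identical plaintext blocks produce identical ciphertext blocks (no diffusion)
print(encrypt(modes.ECB(), block * 2).hex())

# CBC: a fresh random IV changes the ciphertext even for identical plaintext
for _ in range(2):
    iv = os.urandom(16)                  # never reuse a key/IV pair
    print(encrypt(modes.CBC(iv), block * 2).hex())
```

In the ECB output the first and second 16-byte halves are identical; in the CBC output they are not, and the whole ciphertext changes each time the IV does.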
Since the key/IV pair should never be duplicated, the same plaintext can theoretically pass through the cipher algorithm using the same key and never produce the same ciphertext. Figure 7- 8: Cipher-block Chaining Cipher Mode (“Cipher-block chaining,” 2012) When the CBC cipher begins, it XORs the plaintext block with the IV and submits it to the block algorithm. The algorithm produces a block of ciphertext. The ciphertext from the first block is XORed with the next block of plaintext and submitted to the block algorithm using the same key. If the final block of plaintext is smaller than the cipher block size, the plaintext block is padded with an appropriate number of bits. This is stronger, but it still fell prey to skilled cryptanalysts. AES, another block cipher mode, uses a more sophisticated approach, including byte substitution, shifts, column mixing, and use of cipher-generated keys for internal processing (NIST, 2001). It is highly resistant to any attack other than key discovery attempts. However, cryptanalysts have theoretically broken AES (Ferguson, Schneier, & Kohno, 2010). This does not mean it is broken in practice; it is still the recommended encryption method for strong data protection. For additional information on attacks against modern ciphers, see “Cryptography Engineering: Design Principles and Practical Applications” by Niels Ferguson, Bruce Schneier, and Tadayoshi Kohno. Key Management The processes underlying all widely accepted ciphers are and should be known, allowing extensive testing by all interested parties: not just the originating cryptographer. We tend to test our expectations of how our software development creations should work instead of looking for ways they deviate from expected behavior. Our peers do not usually approach our work in that way. Consequently, allowing a large number of people to try to break an encryption algorithm is always a good idea. Secret, proprietary ciphers are suspect. A good encryption solution follows Auguste Kerckhoffs’ principle: The security of the encryption scheme must depend only on the secrecy of the key… and not on the secrecy of the algorithm (Ferguson, Schneier, & Kohno, 2010, p. 24) If a vendor, or one of your peers, informs you he or she has come up with a proprietary, secret cipher that is unbreakable, that person is either the foremost cryptographer of all time or deluded. In either case, only the relentless pounding on the cipher by cryptanalysts can determine its actual strength. Now that we have established the key as the secret component of any well-tested cipher, how do we keep our keys safe from loss or theft? If we lose a key, the data it protects is effectively lost to us. If a key is stolen, the encrypted data is at higher risk of discovery. And how do we share information with other organizations or individuals if they do not have our key? AES is a symmetric cipher; it uses the same key for both encryption and decryption. So, if I want to send AES-encrypted information to a business partner, how do I safely send the key to the receiver? Principles of Key Management Managing keys requires three considerations: Where will you store them? How will you ensure they are protected but available when needed? What key strength is adequate for the data protected? Key Storage Many organizations store key files on the same system, and often the same drive, as the encrypted database or files. While this might seem like a good idea if your key is encrypted, it is bad security. 
What happens if the system fails and the key is not recoverable? Having usable backups helps, but backup restores do not always work as planned… Regardless of where you keep your key, encrypt it. Of course, now you have to decide where to store the encryption key for the encrypted encryption key. None of this confusion is necessary if you store all keys in a secure, central location. Further, do not rely solely on backups. Consider storing keys in escrow, allowing access by a limited number of employees (“key escrow,” n.d.). Escrow storage can be a safe deposit box, a trusted third party, etc. Under no circumstances allow any one employee to privately encrypt your keys. Key Protection Encrypted keys protecting encrypted production data cannot be locked away and only brought out by trusted employees as needed. Rather, keep the keys available but safe. Key access security is, at its most basic level, a function of the strength of your authentication methods. Regardless of how well protected your keys are when not used, authenticated users (including applications) must gain access. Ensure identity verification is strong and aggressively enforce separation of duties, least privilege, and need-to-know. Key Strength Most, if not all, attacks against your encryption will try to acquire one or more of your keys. Use of weak keys or untested/questionable ciphers might achieve compliance, but it provides your organization, its customers, and its investors with a false sense of security. As Ferguson, Schneier, and Kohno (2010) wrote, In situations like this (which are all too common) any voodoo that the customer [or management] believes in would provide the same feeling of security and work just as well (p. 12). So what is considered a strong key for a cipher like AES? AES can use 128-, 192-, or 256-bit keys. 128-bit keys are strong enough for most business data, if you make them as random as possible. Key strength is measured by key size and an attacker’s ability to step through possible combinations until the right key is found. However you choose your keys, ensure you get as close as possible to a key selection process in which all bit combinations are equally likely to appear in the key space (all possible keys). Key Sharing and Digital Signatures It is obvious from the sections on keys and algorithms that secrecy of the key is critical to the success of any encryption solution. However, it is often necessary to share encrypted information with outside organizations or individuals. For them to decrypt the ciphertext, they need our key. Transferring a symmetric cipher key is problematic. We have to make sure all recipients have the key and properly secure it. Further, if the key is compromised in some way, it must be quickly retired from use by anyone who has it. Finally, distribution of the key must be secure. Luckily, some very smart cryptographers came up with the answer. Asymmetric Cryptography In 1978, Ron Rivest, Adi Shamir, and Leonard Adelman (RSA) publicly described a method of using two keys to protect and share data; one key is public and the other private. The organization or person to whom the public key belongs distributes it freely. However, the private key is kept safe and is never shared. This enables a process known as asymmetric encryption and decryption. As shown in Figure 7-9, the sender uses the recipient’s public key to convert plaintext to ciphertext. The ciphertext is sent and the recipient uses her private key to recover the plaintext. 
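To make the two-primes description above concrete, here is a toy numeric sketch using classic small textbook values. It is purely illustrative: real RSA keys use primes hundreds of digits long plus padding schemes this chapter does not cover, and the private-exponent step shown here is one of the details the simplified description glosses over.

```python
# Toy RSA with deliberately tiny primes -- never use values this small in practice.
p, q = 61, 53
n = p * q                    # 3233, part of the public key
phi = (p - 1) * (q - 1)      # 3120
e = 17                       # public exponent, coprime to phi
d = pow(e, -1, phi)          # private exponent: modular inverse of e (2753)

message = 65                 # a message encoded as a number smaller than n
ciphertext = pow(message, e, n)      # encrypt with the public key (n, e)
recovered = pow(ciphertext, d, n)    # decrypt with the private key (n, d)

print(ciphertext, recovered)         # recovered == 65
assert recovered == message
```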
Only the person with the private key corresponding to the public key can decrypt the message, document, etc. This works because the two keys, although separate, are mathematically entwined. Figure 7-9: Asymmetric Cryptography (Microsoft, 2005) At a very high level, the RSA model uses prime numbers to create a public/private key set: Creation begins by selecting two extremely large prime numbers. They should be chosen at random and of similar length. The two prime numbers are multiplied together. The product becomes the public key. The two factors become the private key. There is more to asymmetric key creation, but this is close enough for our purposes. When someone uses the public key, or the product of the two primes, to encrypt a message, the recipient of the ciphertext must know the two prime numbers that created it. If the primes were small, a brute force attack can find them. However, use of extremely large primes and today’s computing power makes finding the private key through brute force unlikely. Consequently, we can use asymmetric keys to share symmetric keys, encrypt email, and various other processes where key sharing is necessary. The Diffie-Hellman key exchange method is similar to the RSA model and it was made public first. However, it allows two parties who know nothing about each other to establish a shared key. This is the basis of SSL and TLS security. An encrypted session key exchange occurs over an open connection. Once both parties to the session have the session key (also known as a shared secret), they establish a virtual and secure tunnel using symmetric encryption. So why not throw out symmetric encryption and use only asymmetric ciphers? First, symmetric ciphers are typically much stronger. Further, asymmetric encryption is far slower. So we have settled for symmetric ciphers for data center and other mass storage encryption and asymmetric ciphers for just about everything else. And it works… for now. Digital Signatures Although not really encryption as we apply the term in this chapter, asymmetric keys have another use: digital signatures. If Bob, for example, wants to enable verification that he actually sent a message, he can sign it. Refer to Figure 7-10. The signature process uses Bob’s private key, since he is the only person who has it. The private key is used as the message text is processed through a hash function. A hash is a fixed length value that represents the message content. If the content changes, the hash value changes. Further, an attacker cannot use the hash value to arrive at the plain text. Figure 7-10: Digital Signing (“Digital signature,” 2012) When Alice receives Bob’s message, she can verify the message came from Bob and is unchanged: if she has Bob’s public key. With Bob’s public key, she rehashes the message text. If the two hash values are the same, the signature is valid, and the data reached Alice unchanged. If hash values do not match, either the message text changed or the key used to create the signature hash value is not Bob’s. In some cases, the public key might not be Bob’s. If an attacker, Eve, is able to convince Alice that a forged certificate she sends is Bob’s key, Eve can send signed messages using a forged “Bob” key that Alice will verify. It is important for a recipient to be sure the public key used in this process is valid (a short code sketch of this sign-and-verify flow appears below). Public Key Infrastructure (PKI) Verifying the authenticity of keys is critical to asymmetric cryptography.
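Before turning to how public keys themselves are validated, here is the sign-and-verify flow just described, sketched with the third-party Python cryptography package (an assumption about tooling; RSA-PSS with SHA-256 is one common choice, not the only one). Bob signs with his private key, Alice verifies with Bob's public key, and any change to the message makes verification fail.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Bob's key pair; only the public half is ever shared.
bob_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
bob_public = bob_private.public_key()

message = b"Transfer the funds on Friday."
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)

# Bob hashes and signs the message with his private key.
signature = bob_private.sign(message, pss, hashes.SHA256())

# Alice verifies with Bob's public key; verify() raises if the message or signature changed.
try:
    bob_public.verify(signature, message, pss, hashes.SHA256())
    print("signature valid - message unchanged and signed by Bob's key")
    bob_public.verify(signature, message + b"!", pss, hashes.SHA256())
except InvalidSignature:
    print("signature invalid - content changed or wrong key")
```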
We have to be sure that the person who says he is Bob is actually Bob or that the bank Web server we access is actually managed by our bank. There are two ways this can happen: through hierarchical trust or a web of trust. Hierarchical trust Private industry usually relies on the hierarchical chain-of-trust model that minimally uses three components: Certificate authority (CA) Registration authority (RA) Central directory/distribution management mechanism The CA issues certificates binding a public key to a specific distinguished name provided by the certificate applicant (subject). Before issuing a certificate, however, it validates the subject’s identity. One verification method is domain validation. The CA sends an email containing a token or link to the administrator responsible for the subject’s domain. The recipient address might take the form of postmaster@domainname or root@domainname. The recipient (hopefully the subject or the subject’s authorized representative) then follows verification instructions. Another method, and usually one with a much higher cost for the requestor, is extended validation (EV). Instead of simple administrator email exchange, a CA issuing an EV steps through a rigorous identity verification process. The resulting certificates are structurally the same as other certificates; they simply carry the weight of a higher probability that the certificate holder is who they say they are, by Establishing the legal identity and physical existence/presence of the website owner Confirming that the requestor is the domain name owner or has exclusive control over it Using appropriate documents, confirming the identity and authority of the requestor or its representatives A simple certificate issuance process is depicted in Figure 7-11. It is the same whether you host your own CA server or use a third party. The subject (end-entity) submits an application for a signed certificate. If verification passes, the CA issues a certificate and the public/private key pair. Figure 7-12 depicts the contents of my personal VeriSign certificate. It contains identification of the CA, information about my identity, the type of certificate and how it can be used, and the CA’s signature (SHA1 and MD5 formats). Figure 7- 11: PKI (Ortiz, 2005) The certificate with the public key can be stored in a publicly accessible directory. If a directory is not used, some other method is necessary to distribute public keys. For example, I can email or snail-mail my certificate to everyone who needs it. For enterprise PKI solutions, an internal directory holds all public keys for all participating employees. Figure 7- 12: Personal Certificate The hierarchical model relies on a chain of trust. Figure 7-13 is a simple example. When an application/system first receives a subject’s public certificate, it must verify its authenticity. Because the certificate includes the issuer’s information, the verification process checks to see if it already has the issuer’s public certificate. If not, it must retrieve it. In this example, the CA is a root CA and its public key is included in its root certificate. A root CA is at the top of the certificate signing hierarchy. VeriSign, Comodo, and Entrust are examples of root CAs. Figure 7- 13: Simple Chain of Trust Using the root certificate, the application verifies the issuer signature (fingerprint) and ensures the subject certificate is not expired or revoked (see below). If verification is successful, the system/application accepts the subject certificate as valid. 
Root CAs can delegate signing authority to other entities. These entities are known as intermediate CAs. Intermediate CAs are trusted only if the signature on their public key certificate is from a root CA or can be traced directly back to a root. See Figure 7-14. In this example, the root CA issued CA1 a certificate. CA1 used the certificate’s private key to sign certificates it issues, including the certificate issued to CA2. Likewise, CA2 used its private key to sign the certificate it issued to the subject. This can create a lengthy chain of trust. When I receive the subject’s certificate and public key for the first time, all I can tell is that it was issued by CA2. However, I do not implicitly trust CA2. Consequently, I use CA2‘s public key to verify its signature and use the issuing organization information in its certificate to step up the chain. When I step up, I encounter another intermediate CA whose certificate and public key I need to verify. In our example, a root CA issued the CA1 certificate. Once I use the root certificate to verify the authenticity of the CA1 certificate, I establish a chain of trust from the root to the subject’s certificate. Because I trust the root, I trust the subject. Figure 7- 14: Chain of Trust This might seem like a lot of unnecessary complexity, and it often is. However, using intermediate CAs allows organizations to issue their own certificates that customers and business associates can trust. Figure 7-15 is an example of how this might work. A publicly known and recognized root CA (e.g., VeriSign) delegates certificate issuing authority to Erudio Products to facilitate Erudio’s in-house PKI implementation. Using the intermediate certificate, Erudio issues certificates to individuals, systems, and applications. Anyone receiving a subject certificate from Erudio can verify its authenticity by stepping up the chain of trust to the root. If they trust the root, they will trust the Erudio subject. Revocation Certificates are sometimes revoked for cause. When this happens, it is important to notify everyone that the revoked certificates are no longer trusted. This is done using a certificate revocation list (CRL). A CRL contains a list of serial numbers for revoked certificates. Each CA issues its own CRL, usually on an established schedule. Figure 7- 15: Business as Intermediate CA Web of Trust (WoT) Although most medium and large organizations use the hierarchy of trust model, it is important to spend a little time looking at the alternative. The WoT also binds a subject with a public key. Instead of using a chain of trust with a final root, it relies on other subjects to verify a new subject’s identity. Phil Zimmerman (1994), creator of Pretty Good Privacy (PGP), puts it this way: As time goes on, you will accumulate keys from other people that you may want to designate as trusted introducers. Everyone else will each choose their own trusted introducers. And everyone will gradually accumulate and distribute with their key a collection of certifying signatures from other people, with the expectation that anyone receiving it will trust at least one or two of the signatures. This will cause the emergence of a decentralized fault-tolerant web of confidence for all public keys. WoT implementations are available for free download and use by anyone (Download). But the simplicity of the approach for one or two people can disappear as the size of the subject organization grows. 
Further, complexity is introduced as WoT-based organizations try to interact with hierarchy of trust participants. Consider all business challenges carefully before implementing a WoT solution in your organization. Enterprise Key Management Managing key sharing is easy; managing keys for enterprise encryption implementations is often difficult and ineffective. Good keys are difficult or impossible to remember, must be kept secure, and must be available when production processes require access to encrypted data. Further, all this provides an additional point of attack that, if successful, completely invalidates encryption as a security control (Ferguson, Schneier, & Kohno, 2010). Distributed management of keys is not the answer, as it introduces even more opportunities for key compromise. Centralized Key Management Example Central key management governed by organization-specific encryption policies is likely the best key management option. In addition, only two or three people should have administrative access to key management controls. Vormetric Data Security (Vormetric, 2010), as shown in Figure 7-16, is an example of a product providing these capabilities. In addition to ensuring key security, this type of solution also allows auditing of key creation, use, and retirement. Further, Figure 7-17 depicts the key administrator’s ability to grant custodial access to sets of keys based on job role, enforcing separation of duties. Figure 7- 16: Centralized Key management Services Figure 7- 17: Separation of Key Administration Centralized encryption helps ensure keys are always available and that data is not encrypted when it is not necessary, appropriate, or wanted. Keys are encrypted and easy to backup or export for escrow storage. Cryptography’s Role in the Enterprise Encrypting every piece of data in your organization does not guarantee it is protected from unauthorized access. The only thing guaranteed with this approach is unnecessary costs and potentially unhappy production managers. Before looking at when and what to encrypt, it is important to understand where encryption fits in overall security controls architecture. Just Another Layer Encryption is just another security control; it adds an additional prevention layer, nothing more. The biggest mistake many organizations make is relying on it as a panacea for all security risk. For example, data is decrypted on servers and end-user devices when processed. What good is encryption in transit when the system attack surfaces are ignored? An attacker who compromises an online payment server could care less whether or not you use encryption. Everything he needs is in plaintext. In other cases, a key might be compromised. Assuming encryption provides 100% protection might cause a security team to ignore the importance of detection (e.g., SIEM) or response policies and controls. Again, inspect what you expect. Never assume anything, including encryption, is achieving expected outcomes. 
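To make the centralized key-management idea from this section more concrete, here is a minimal sketch of key wrapping, the pattern such products are built around: data is encrypted with a data key, and the data key itself is stored only in encrypted (wrapped) form under a master key that never leaves the key manager. This is a generic illustration using the cryptography package's Fernet recipe, not a description of any specific vendor's implementation.

from cryptography.fernet import Fernet

master_key = Fernet.generate_key()        # held only by the key-management service
data_key = Fernet.generate_key()          # per-application or per-dataset key

wrapped_data_key = Fernet(master_key).encrypt(data_key)     # what gets stored and escrowed
ciphertext = Fernet(data_key).encrypt(b"sensitive record")

# When production needs the data, an authorized process asks the key manager to unwrap the key:
recovered_key = Fernet(master_key).decrypt(wrapped_data_key)
print(Fernet(recovered_key).decrypt(ciphertext))             # b'sensitive record'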
Before or while deploying encryption, implement the following controls or processes (Olzak, 2006): Reduce attack surfaces for applications, systems, and the network Implement strong application access controls Implement log management and monitoring solutions to detect anomalous activity across the network, on systems, and in/around databases Separate data access from data management; business users of sensitive data should access it through applications with strong, granular user access controls Ensure reasonable and appropriate physical security for network, storage, and system components Implement strong authentication and authorization controls between applications and the databases they access Follow best practices for securing databases and the servers that house or manage them When to Encrypt A few years ago, Rich Mogull (2005) wrote a paper for Gartner that defines three laws for deciding if to encrypt data. They still apply today; I added number four: Encrypt data that moves Data moving from one trust zone to another, whether within your organization or between you and an external network, is at high risk of interception. Encrypt it. Data moving from trusted portions of the network to end-user devices over wireless LANs almost always at high risk. Encrypt it. Encrypt for separation of duties when access controls are not granular enough For flat file storage, encrypting a spreadsheet file in a department share provides an additional layer of separation. Only employees with the right authorization have access. Application access controls protecting databases often do not provide granular control to strictly enforce need-to-know or least privilege. Using database solutions with field- and row-level encryption capabilities can help strengthen weak application-level access controls. Encrypt when someone tells you to And then there are local, state, and federal regulations. Couple regulatory constraints with auditor insistence, and you often find yourself encrypting because you have to. This type of encryption is often based on generalizations instead of existing security context. For example, just because you encrypt protected health information does not mean it is secure enough… but it satisfies HIPAA requirements. Encrypt when it is a reasonable and appropriate method of reducing risk in a given situation This law is actually a summary of the previous three. After performing a risk assessment, if you believe risk is too high because existing controls do not adequately protect sensitive information, encrypt. This applies to risk from attacks or non-compliance. How to Encrypt Implementing secure and operationally efficient encryption solutions is not easy, and maintaining them adds to total cost of ownership (TCO). Further, data is often spread between internal and cloud-based storage. Any solution you select must support all current and future data storage and transport characteristics. One approach is to purchase a system, install it in your data center, and assign in-house staff to manage it. While this might seem like a good idea, the opportunity costs are high. As with most commodity security controls, encryption solutions can be managed by anyone; they do not require the special knowledge of the business possessed by you or other members of the internal security and LAN teams. Your skills are better applied to projects, assessments, and other business critical activities. Consequently, consider outsourcing encryption and key management. 
Encryption-as-a-Service (EaaS) vendors provide all the services and protection we discussed, including key management and encryption according to business policy. In addition to encrypting the data center, they can also serve as a third-party that ensures all data housed by your other cloud service providers is managed by encryption policies as if it were in your own data center. Figure 7-18 is an example of an EaaS solution. The EaaS provider does not house your data, only your keys. Your in-house administrator, via a Web interface, performs configuration of encryption policies and subject access. Software as a service (SaaS) or storage as a service providers have no access to data while at rest. Finally, the “cloud” can also mean your own data center. Whether in house or outsourced, make sure your centralized encryption solution meets the following requirements: Central storage, management, protection, and management of keys Enforcement of your data encryption policies across all relevant data, wherever it is in your network or in the cloud Granular access to policy and key management functions based on separation of duties and least privilege Publicly known, tested, and unbroken ciphers used for all encryption Figure 7- 18: EaaS Configuration (enStratus, 2012) Tokenization Although not considered encryption, the payment card industry’s acceptance of tokenization as a secure method of managing payment card data makes tokenization an important concept to understand. Primary account numbers (PANs) are not encrypted; they are replaced by a series of alphanumeric characters of the same length. Also called aliasing, tokenization substitutes an arbitrary value for a PAN. If the PAN is all digits, the token is all digits. In other words, the token takes on the same length and type characteristics of the PAN (RSA, 2009). This allows use of tokens in existing business applications where data length and type matter. After a token is assigned, employees, point-of-sale systems, and other applications use it instead of the actual PAN. This limits the number of points of possible compromise. Figure 7-19 shows how a financial institution might use tokens. Customer PANs are converted to tokens by a token management system. Token/PAN pairs are stored in a secure database. When various departments access customer information, the token appears instead of the actual PAN. Figure 7- 19: Tokenization Process (RSA, 2009, p. 6) The process as it appears in Figure 7-19… An employee enters customer data into the data capture system. The data includes the customer’s real PAN. The data capture system sends the PAN to the tokenization server where a token is assigned and the PAN/token relationship established. The data capture system receives back a token. All future transactions by employees dealing directly with customers use the token instead of the PAN. If an application, such as the settlement application, needs the actual PAN, it sends a request. If the settlement application is authorized to receive the PAN, the tokenization system honors the request. Our example reflects a process occurring in financial institutions. However, it also applies to retail stores. If a store’s payment processor uses tokens, the retail point-of-sale system can retain payment card information (with the PAN replaced by a token) and retain compliance with the payment card industry data security standard (PCI DSS). Figure 7-20 provides a closer look at tokenization architecture. Tokens and associated PANs are encrypted. 
Instead of PANs existing in business transaction files, only the token appears. If an application requires the actual PAN, employee authentication is not enough. The application must be authorized to retrieve it. Further, all access to PANs is logged and anomalies identified. Instead of tracking PAN use at various locations across an organization, monitoring and control of sensitive customer information is centrally controlled and managed. Figure 7- 20: Tokenization Architecture (RSA, 2009, p. 6) Finally, tokenization provides an easy way to move production data to test environments. If supported, a tokenization server can filter sensitive field data as it moves from production to test. All sensitive fields not already tokenized are filled with tokens for testing changes or new applications, eliminating another potential point of attack. Conclusion The history of cryptography is filled with the back-and-forth between cryptographers creating “unbreakable” ciphers and cryptanalysts breaking the unbreakable. However, valuable lessons from the centuries-old battle are used to strengthen today’s ciphers. Particularly, any cipher creating ciphertext containing frequency and character/word socialization characteristics of the plaintext language is not secure. The more the ciphertext changes after a change to the plaintext the stronger the cipher. Key management is an important and often overlooked aspect of enterprise encryption. Ensuring keys are always available, secure, and locked away from everyone except a handful of key administrators is a good start. Further, central key management usually comes with the ability to apply common encryption policies across all data on all managed devices. The battle cry, “ENCRYPT EVERYTHING!” is not reasonable and appropriate application of encryption as a security control. Decide when and where to encrypt based on risk assessments and management’s appetite for risk. Finally, tokenization is sometimes a better solution than encryption for protecting individual data items. Social security numbers, credit card numbers, and patient insurance information are good examples of possible token targets. If you have to keep these data elements, resist distributing them across desktop and laptop screens when a token will suffice. Sursa
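As a rough illustration of the tokenization flow described above, the toy sketch below replaces a PAN with a random token of the same length and type and keeps the PAN/token pair in a vault that only authorized callers may query. It is deliberately simplified and hypothetical: a real tokenization server encrypts the vault, logs every lookup, and enforces per-application authorization.

import secrets

vault = {}    # token -> PAN; stands in for the tokenization server's secure store

def tokenize(pan):
    token = "".join(secrets.choice("0123456789") for _ in pan)   # all digits, same length as the PAN
    vault[token] = pan
    return token

def detokenize(token, caller_is_authorized):
    if not caller_is_authorized:          # e.g. only the settlement application
        raise PermissionError("caller may not retrieve the PAN")
    return vault[token]

token = tokenize("4111111111111111")
print("Stored and displayed everywhere:", token)
print("Returned only to authorized applications:", detokenize(token, caller_is_authorized=True))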
-
Security of a website is very crucial thing for any organization or for personal websites. It’s always advised to check the security of the website because it’s better and safer to know the loopholes of our website before any attackers finds and exploits it. The commonly seen web application vulnerabilities are SQL Injection Cross Site Scripting (XSS) Cross Site Request Forgery (CSRF) Insecure Session Handling Session Fixation Information Disclosure Header Injection Insecure Configuration Weak randomness Over the times, there are many tools which are being developed for providing better security and for uncovering various vulnerabilities. A vulnerability scanner is developed in order to detect the loopholes and some of these tools also suggest the prevention methods which can be implemented. The main reason why we prefer automated tools is because the process of manual exploitation could produce improper results or can crash the applications. A large number of web application scanning tools are available, both commercial and open source. We will see about few top web–application scanners and some of them are as follows Websecurify Net Sparker Community Edition. WSSA NStalker W3af Acunetix 1.Websecurify: URL: Websecurify | Web Application Security Scanner and Manual Penetration Testing Tool Websecurify is a tool that can be run on Windows, Linux and MAC. It is the best tool to find the common web vulnerabilities. Once the target has been scanned, the tool starts fuzzing and gives the results once it is done. The scan results will contains the description about the vulnerability, solution and the URL which is vulnerable which helps us to understand and fix the vulnerability as soon as possible. A sample scan results are shown below It can detect the vulnerabilities like sqli, LFI/RFI, XSS, CSRF and other categories which come under the OWASP top 10. This tool also has an online version of scanner. The url is Websecurify Scanner | Automated web application security testing at your fingertips and the user has to sign up for further scanning. For further reading, go to Websecurify Scanner | Automated web application security testing at your fingertips Note: The online scanner is beta version. Features of this tool: Easy to use Simultaneous testing Advanced Reporting 2. NetSparker: URL:Netsparker, False Positive Free Web Application Security Scanner - Mavituna Security NetSparker is again an effective website vulnerability scanner and also very simple to use. It comes in both community edition and paid version. To perform the scan, press the “Start New Scan” option and enter the target URL and press start scan to scan the website. You can find three tabs namely vulnerability, browser view and Http request/response. In the vulnerability tab, you can get the information about the vulnerable URL, description, impact and how it can be fixed. You can view the vulnerability chart also to understand the various loopholes along with the seriousness level. Features: Some of the features are 1. Post Exploitation feature takes automated exploitation to the next level. 2. We have in built encoder in this scanner. 3. We have an option for controlled scan. 3.Web Site Security Audit – WSSA URL:Web Site Security Audit - WSSA by Beyond Security Beyond Security have a very good web application vulnerability scanner and also integrates with network scanning. The WSSA also comes with Automated Vulnerability Detection System (AVDS) focus is accuracy. A problem in web application scanning is the ‘false positive. 
There are many scanners that give a long list of possible vulnerabilities but on further investigation it can be found that some of these are not actually present. The technology used does the testing based on the behavior instead of version checking. Testing behavior of a host is like penetration testing. Checking the version number and using that to assume vulnerability is present is inaccurate. For more information on these two testing methods At the same time, some scanners have a false positive rate of 10%. Most of them have 3%. AVDS is at .1%. This low rate means reduced time spent looking for problems that are reported but do not actually exist. It also increases confidence that the vulnerability report is accurate and that repair and remediation action is truly required. This scanner is a paid version but comes with a 15 days trial. Beyond Security also provide an online website scanner which provides the vulnerability report in a very detailed and can be understood easily even by a normal person. The scan reports are pretty fast and accurate and all this service comes free and if interested, you can also upgrade your account by paying some. The first step would be, entering your website Step 2: Enter the email id which is related to the domain name. Once done, you will be getting the detailed report about the domain to the email address which you have specified. This online service can detect almost all the vulnerabilities like poorly coded web pages, database connections that have issues, Examples are: SQL injection, XSS (cross site scripting), Remote File Inclusion, PHP/ASP Code Injection, Directory Traversal and File Disclosure. With this service, we can also identify the results of an attack by a virus, trojan or worm. Example: malicious code may open a TCP port for unauthorized access from the internet. System mis-configuration. Example: a service using a known default user name or password; or omitted security updates/patches. URL:https://scanmyserver.com 4. N-Stalker URL:N-Stalker - Free Edition 2012 N-Stalker Web Application Security Scanner is a Web security assessment tool. It incorporates with a well-known N-Stealth HTTP Security Scanner and 35,000 Web attack signature database. This tool also comes in both free and paid version. Before scanning the target, go to “License Manager” tab, perform the update. Once update, you will note the status as up to date. Now goto “Scan Session”, enter the target URL. In scan policy, you can select from the four options, Manual test which will crawl the website and will be waiting for manual attacks. full xss assessment owasp policy Web server infrastructure analysis. Once, the option has been selected, next step is “Optimize settings” which will crawl the whole website for further analysis. In review option, you can get all the information like host information, technologies used, policy name, etc. Once done, start the session and start the scan. The scanner will crawl the whole website and will show the scripts, broken pages, hidden fields, information leakage, web forms related information which helps to analyze further. Once the scan is completed, the NStalker scanner will show details like severity level, vulnerability class, why is it an issue, the fix for the issue and the URL which is vulnerable to the particular vulnerability? 5. W3af: W3af is another famous open source web application auditing and attack tool. 
It’s basically divided into various modules like attack, audit, exploit, discovery, evasion, brute force, and mangle which can be used accordingly. These modules in w3af comes with various sub modules like for example, we can select sqli option in Audit module if we need to perform a particular type of auditing. The below diagram shows a brief flowchart of the target website and makes it’s more easy to understand. Once the scan is completed, the W3af framework shows a details information about the vulnerabilities found in the target website which can be compromised accordingly for further exploitation. Once the vulnerability is found, we can configure the plugins in the “Exploit” tab and perform further attack which can help us to get a webshell in the target site. Another major advantage is W3af also comes with MSF for taking the attack to next level. 6. Acunetix: URL: Website Security -? Acunetix Web Security Scanner Acunetix is another famous website vulnerability scanner. It comes in both free and commercial version. To download the scanner, signup in their website and you will be getting the download link. The drawback in the free edition, we cannot perform a full web vulnerability scan. To perform the scan, click the “New Scan” option and enter the target website. Once the target site is selected, the next step is where you can select the scan profile. This option really saves the time while performing a vulnerability scan. If you want to perform a specific scan like for example Blind sql injection, the scanner would test only for Bsqli rather than performing other scan tests. Once the profile is selected, next option is where are you will get options like the details of the website and the technologies used by the target website. Once the details are done, click the next button and perform the scan. The scan time will depend on the size of the website. For example, if the website is small, then the scan would complete in less time period. Conclusion: There are many other scanners too in the market which is good and open source and the only thing which makes them stand out is the efficiency and changes of less false positives. Also the difference between the free edition and commercial version of scanners have many limitations and commercial version of scanners have a better sensors, exploit package but all these come in a high price which is one of the negative point. While getting a scanner, always do some research on the market in order to understand which is better, efficient, less damage causing to the website as well as network, etc. My all time favorite is scanmyserver.com which gives a simple and easy to understand as well the recommendations for the vulnerabilities are easy to follow. Sursa
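To give a rough idea of the kind of check these scanners automate, the sketch below sends a harmless marker in a request parameter and looks for it reflected unencoded in the response, the first hint automated XSS tests look for. It assumes the third-party requests package, uses a placeholder URL, and should only ever be pointed at an application you are authorized to test.

import requests

marker = "rst_probe_12345"
url = "http://testsite.example/search"     # placeholder; use your own application

resp = requests.get(url, params={"q": marker}, timeout=10)
if marker in resp.text:
    print("Parameter is reflected without encoding; review output escaping (possible XSS).")
else:
    print("Marker not reflected in the response.")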
-
Flame Malware: An Overview
Flame Malware: How It Spreads
Flame Malware: Architecture And Characteristics
Sourcefire.com
-
Another strategic acquisition by Google. Official announcement, June 4th, 2012: "... We are happy to announce that Meebo has entered into an agreement to be acquired by Google..." (meeblog » Blog Archive » Google is acquiring Meebo!)
Through Meebo you could access the following services: Windows Live, AIM, Facebook, Yahoo, MySpace, GTalk, ICQ, Jabber, myYearbook. It is quite possible that before long we will be able to log in to all of these accounts directly from google.ro.
-
Web application security is always an important topic to discuss because websites seem to be the first target of malicious hackers. Hackers use websites to spread their malwares and worms, and they use the compromised websites for spamming and other purposes. OWASP has created an outline to secure a web application from the most dangerous vulnerabilities in web application, but it is always good to be actively learning about the new weaknesses and the new ways that an attacker might use to hack into a web application. Hackers are always trying to discover new ways to trick a user so from a penetration tester’s point of view a website administrator should take care of each and every vulnerability and the weaknesses that an attacker may exploit to hack into a website. There are so many automatic tools and manual techniques available to test a website for the most common vulnerabilities, like SQL-injection, cross site scripting, security misconfiguration and others, but we should take care about the variant of these vulnerabilities. SQL-injection is dangerous because an attacker may get access into a database and steal the information of the user and the administrator of the website, but what if an attacker simply hijacks the user or simply redirects your visitor to a malicious website. This can break the trust of the visitor on your website. In this article, we will discuss the attack at HTML level or attack at HTML codes, iframe is the part of HTML or a technique used in HTML to embed some file (document, video and others) in the same HTML page. The simple way to explain iframe is that “iframe is the technique to display the information from another web page within the same (current) page”. Security risk in iframe is an important topic to discuss because the usage of iframe is very common- even the most famous social networking websites are using iframe. The simple attribute to use iframe is as follows: <iframe src=”http://www.infosecinstitute.com”></iframe> The above picture shows how to display another website within a website. Example 2: <iframe src=’http://infosecinstitute.com/’ width=’500? height=’600? style=’visibility: hidden;’></iframe> Width and height of an iframe has been defined, but since the frame visibility is hidden there is no physical presence of Infosec Institute’s website. This technique is not used by the attacker because the frame occupies the area (width and height). <iframe src=’http://infosecinstitute.com/’ width=’1? height=’1? style=’visibility: hidden;’></iframe> Now it is completely hidden from the user’s eye, but the iframe is working as normal. Look at the picture below. Here I put Infosec Institute’s website URL, but an attacker can insert the URL of some malware and spamming website. Obfuscated iFrame Injection Attacks Obfuscated iframe injection attack is a dangerous and tricky attack because it is very difficult to detect and find the malicious injection code on a website. Obfuscated is the way to hide the meaning of the communication so that it is difficult to find the injected code. The aim of this attack is the same- to trick the user and then redirect to the third party web page to exploit the user. If a website has been compromised by using iframe injection attack, then it is easy to find and locate the injection code because the code is easy to read. However, in an obfuscated iframe injection attack, it is not easy to read the injected code. 
Let’s consider an example- A website has been compromised and it redirects or displays another web page within a page to sell some products. The visitor of this website trusts your website, and they usually purchase products so you need to make sure to clean the website from this tricky attack. A simple way is to review the index page for the possible iframe and redirect code. Let’s suppose you have reviewed but have not found any URL of the third party website. Now, there is no URL of the third party website so what is the problem? Sometimes attackers use human weaknesses (social engineering technique) in a web application attack. Let’s suppose there is a code like: ++++%23wp+/+GPL%0A%3CScript+Language%3D%27Javascript%27%3E%0A++++%3C%21--%0A++++document.write%28unescape%28%273c696672616d65207372633d27687474703a2f2f696e666 f736563696e737469747574652e636f6d2f272077696474683d273127206865696768743d273127207374 796c653d277669736962696c6974793a2068696464656e3b273e3c2f696672616d653e%27%29%29%3B%0A ++++//--%3E%0A++++%3C/Script%3E It seems to be normal and an important code for this website; but in reality, it is the root cause of the problem. Let’s decode it by using the java decoding function and the result is: #wp / GPL <Script Language='Javascript'> <!-- document.write(unescape('3c696672616d65207372633d27687474703a2f2f696e666f73656369 6e737469747574652e636f6d2f272077696474683d273127206865696768743d273127207374796c653d 277669736962696c6974793a2068696464656e3b273e3c2f696672616d653e')); //--> </Script> Again, it seems to be a legitimate piece of code because the attacker has created it very carefully and used the term “GPL” “wp” and “Java” so the code seems to be legitimate. In actuality, it is the root cause but how can this be confirmed. Everything is good with the code, but the numbers and letters seems to be HEX. In the next step, we need to decrypt it via hex decoder. Remember take only: 3c696672616d65207372633d27687474703a2f2f696e666f736563696e737469747574652e636f6d2f272 077696474683d273127206865696768743d273127207374796c653d277669736962696c6974793a206869 6464656e3b273e3c2f696672616d653e The result is: <iframe src=’http://infosecinstitute.com/’ width=’1? height=’1? style=’visibility: hidden;’></iframe> Now, you can imagine why it is difficult to fight against the obfuscated iframe injection attack. How To Clean Iframe Injected Code Shut down your website, and display the regular maintenance message. It is always good to shut it down immediately otherwise the malware can spread in your visitor computer. Create a backup of your website (core files, database and all other folders). Even though it is an infected website, backup is necessary because if something wrong were to happen during the cleaning process you could recover your website at the same condition. If some previous backup is available, then that is wonderful. Make sure your computer is clean from the malware and viruses; if you have any doubt on your computer, then clean it first. It is a necessary step because the malware has an ability to record the FTP credentials. Passwords – Change all of the passwords associated with your website (FTP password, SSH password, Admin password, Cpanl or other hosting panel password, database password and so on). If a clean backup of your website is available, then kindly scan it once by using an Anti-virus software to make sure that it is clean. After that, upload it on your web server, and check the functionality and everything. 
If there is still some problem, then you need to manually check the files to identify the injected code. If there is no clean backup available, then manually locate the injected code and remove it. I have discussed both (simple iframe injection and hidden iframe injection) possible cases of iframe injection attack, so follow the previous procedure to analyze the code for the possible injection. (It is recommended to make a backup of each change.) Make sure that the website no longer contains the injected code. Now, it is recommended to find the possible ways and the root cause of the problem. You need to find out how the hacker has injected the code for future prevention. The most common and possible ways that an attacker may use are: Outdated CMS (content management system) software (make sure to update all the software and plugins to the newest version) Vulnerability at the server software (the web host company is responsible to keep update the server software’s) FTP and other credentials have been compromised, SFTP is recommended over FTP. Vulnerability at the application level (in web application code). 8. Your computer must have an Anti-virus and anti-malware software. Do not forget to scan your computer before going to the FTP of your website. It is not recommended to save the password of FTP and SSH. iframe & Phishing Phishing attack vector in iframe is important to discuss because some famous social networking websites, like Facebook, allow users and developers to integrate the third party web page to their fan pages and other applications by using iframe. So the iframe is dangerous because an attacker might use it for phishing purposes. The proven concept of iframe phishing attack has been discussed by f-secure lab. In the analyses, they have successfully demonstrated the phishing and other scamming by using iframe. html> <head> <title>Infosec Institute iFrame by Irfan</title> </head> <iframe src="http://resources.infosecinstitute.com/author/irfan/" width="1450" height="300" frameborder="0"></iframe> <iframe src="http://phishing.com/wp-login" width="1450" height="250" frameborder="0"></iframe> </body> </html> Now you can see how easy is to trap the user into a phishing website, an attacker might exploit the cross site scripting (XSS) vulnerability on a web application to inject the phishing code as an iframe. The other dangerous variant of iframe attack is that an attacker might redirect the user to a web page that automatically downloads some malicious file (the malicious file might be hidden behind a general file). An attacker could also exploit the vulnerability of software- for example, having the user download a malicious PDF file and then run into an old and vulnerable version of adobe reader. This scenario would allow an attacker to own the remote computer. Conclusion Hackers are always using some new way to trick users, so it is your job to keep updated with the dangerous and common security threats that can exploit your website. Iframe is a dangerous attack because it breaks the trust that a user has in your website. It is always good to build a relationship and establish trust with your users, and making sure that your website is clean will allow users to easily trust your website. Sursa
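The manual decoding steps walked through above (URL-unescape the injected script, then hex-decode the string it writes) are easy to script. The sketch below is a rough helper under those assumptions, not a substitute for a proper malware scan: it expects the suspicious block to have been saved to a file with the hypothetical name suspicious.txt, prints any long hex runs it can decode, and flags 1x1 or hidden iframes.

import re
from urllib.parse import unquote

raw = open("suspicious.txt", encoding="utf-8", errors="ignore").read()
stage1 = unquote(raw)                               # undo the %xx escaping

# Decode long hex runs such as the one hidden inside document.write(unescape('3c69...'))
for run in re.findall(r"[0-9a-fA-F]{20,}", stage1):
    if len(run) % 2 == 0:
        print("Decoded:", bytes.fromhex(run).decode("ascii", errors="replace"))

# Flag iframes that are 1x1 or styled as hidden, the pattern used by injected code
for tag in re.findall(r"<iframe[^>]*>", stage1, re.IGNORECASE):
    lowered = tag.lower()
    if "visibility: hidden" in lowered or ("width='1'" in lowered and "height='1'" in lowered):
        print("Suspicious iframe:", tag)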
-
- 1
-
-
Before looking how we can prevent ourselves from Google hackers, let’s see what Google hacking is. Google Hacking: Google hacking is a hacking technique that uses Google Search and other Google applications to find security holes in the configuration and computer code that websites use - Wikipedia Google is a very powerful web search engine and is capable of doing many things which are very useful for a hacker. Using simple Google dorks, people are able to hack a website and many web developers are not or unable to protect themselves or their customers data from such attacks. For example, using Google dorks, the attacker can extract various information like the database configuration details, username, passwords, directory listings, error messages, etc. For example, intitle:index.of.config These directories can give information about a web server’s configuration. This is not meant to be public since it contains files with passwords depending on the level of security. It can also contain information on various ports, security permissions. The major reason for such leaks of data is improper security policy related to data on the internet. There are few methods by which we can protect our web server. The public server is always used for storing data which are mostly being accessed by the public and if you are really concerned of keeping the data private, then the easiest and the best way is to keep it away from the public server. Though such documents are kept isolated, it is easy to get access to such pages. All know the risk associated with directory listings, which can allow user to see most of the files stored inside the directory, the sub directories, etc. Sometimes even the .htaccess file is being listed which actually is used to protects the directory contents from unauthorized access but a simple misconfiguration allows this file to be listed and also read. Since many have the habit of uploading important data on their servers to enable access from anywhere and they are indexed by the web search crawlers. One of the simple rules is that Web site administrators can create a robots.txt file that specifies particular locations, so that the search engine should not explore and store in its cache. To protect yourself,use robots.txt file to avoid indexing of such documents or folders. E.g. User-agent: *Disallow: /documents Also to block individual pages or if you don’t want the page to be indexed by any search engine, we can use something like meta tag “meta name=’sipder_name’ content=’NOarchive’ Robots.txt Examples: The following allows all robots to visit all files. User-agent: * Disallow: This entry will keep all robots out of all directories. User-agent: * Disallow: / We can specify particular directories that we don’t want. The following example, will keep all robots out of the /infosec/ directory, as well as any subdirectories. User-agent: * Disallow: /infosec/ By not including the trailing /, we can stop the spiders from crawling files as well. The following example will stop Google’s robots (googlebot) from crawling anything on our site, but allows all other robots access to the whole site. User-agent: googlebot Disallow: / The following meta tag will prevent all robots from scanning any links on the site. <META NAME="ROBOTS" CONTENT="NOINDEX, NOFOLLOW"> We can also deny or allow certain spiders using this tag. Example: <META NAME="GOOGLEBOT" CONTENT="NOINDEX, NOFOLLOW"> For more information, you can visit : The Web Robots Pages. 
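One quick way to double-check the rules you publish is the standard library's robot parser: point it at your live robots.txt and confirm that the paths you meant to keep out of the index really are disallowed. The host and paths below are placeholders for your own, and remember that robots.txt only instructs well-behaved crawlers; it is not an access control.

from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("http://www.example.com/robots.txt")    # your own site here
rp.read()

for path in ("/documents/", "/infosec/secret.doc", "/index.html"):
    url = "http://www.example.com" + path
    print(path, "-> crawlable" if rp.can_fetch("*", url) else "-> disallowed")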
Google Dork for checking the .htaccess file is intitle:index of “.htaccess” would list the websites with the file .htaccess in the directory listing. Directory listing should be disables unless required. The directory listing also happens when the index file defined by the server configuration is missing. On apache servers, we can disable the directory listings by using a dash or minus sign before the word Indexes in the httpd.config file. Check your own site: This article is to show how you can test your own website to get an idea about the potential security loopholes and prevent it by using both manual and automated testing. Many web developers don’t have knowledge about how to hack or other related technicalities like a pen tester does. In this topic we will cover how to prevent your website from Google hacking. I will show how you can see your own site in the Google perspective. Starting with manual method, the most common and simple Google dork is site keyword. The keyword site can be used if you want to narrow down your search results with a particular domain or server. For example site:infosecinstitute.com can list all the Google cached pages from the domain infosecinstitute.com. Now you can click and open all the links listed and check if the data shown is supposed to be public or not, but this seems to be time consuming if the query results has more than hundred or thousand links. So in this scenario, we can go for some automated testing. The tools which we are going to look are Gooscan Sitedigger Wikto Gooscan: Gooscan is a Linux-based tool and can be used for bulk Google searches. This tool violates the Google TOS (Terms of Service) since it doesn’t use the Google API. And if you’re using a tool which violates the Google TOS, then there are chances of the getting few IP address blocked. Gooscan options: There are list of options available in this tool for getting various results. There are two required argument which has to be passed for performing the scan and also you have other optional arguments. The Required arguments are -t target : This is used to scan a target. A target can be a host name or an IP address. -q query | -I query_file : This argument is used to send the query to get a particular search result. The –q takes only single argument or in other words a single Google dork. For example: -q intitle:index of ".htaccess" The tool can also take multiple queries which will be read from the query file. The optional arguments are -o output_file : If you want to create a html output file, then you can use this argument. The output file will contain all the links which were fetched by the query used. -p proxy:port : To use a html proxy server -v : Verbose mode. -s site : As said before, this can be used to get the results from the particular domain or target. Using Gooscan: Gooscan can be used to in two ways wither by sending a single query or by sending multiple queries. A simple example would be Gooscan –q "hack" –t www.google.com –s infosecinstitue.com To create an output file Gooscan –q "hack" –t www.google.com –o infosec.html Performing a multiple query based search using the Gooscan can cause problems. In order to avoid that, we can send small batches of queries rather than sending huge amount of files. To create a small data file, use the head command. Head -5 data_files.gdork.gs > data_files/small_dorks.gs Gooscan –t Google –i data_files/small_dorks.gs –o multiplequeries.html Once the output file has been created, click on the links which you find suspicious. 
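Instead of (or before) sending bulk queries to Google, you can probe your own server directly for the kinds of files these dorks tend to uncover. A rough sketch, assuming the third-party requests package, a placeholder host, and that you only run it against servers you own or are authorized to test:

import requests

host = "http://www.example.com"    # your own server
paths = ["/.htaccess", "/config.php.bak", "/backup.zip", "/.git/config", "/admin/"]

for path in paths:
    r = requests.get(host + path, timeout=10, allow_redirects=False)
    if r.status_code == 200:
        print("Exposed:", host + path, "(", len(r.content), "bytes )")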
SiteDigger: The first and most basic tool is sitedigger, written by Foundstone. Sitedigger is integrated with the Google hacking database and uses the Google API. Sitedigger only allows you to select a site to test and choose which Google hacking signatures to run against it or just you can select any category of dork and run the query which will return the links accordingly. Select any query and click the urls which are shown in the results. Wikto: Wikto is another tool which is used for Google hacking. It is a complete web assessment tool which means that you can use this tool for testing the server, applications which are running on the server. To perform Google hacking, we have an applet named Googler. This applet will search the certain file types in the Google index which are then imported and used as a backend. There is another applet which can be used in Wikto and its called GoogleHacks which import the GHDB and execute the queries from the GHDB automatically on any particular site Google Hack Honeypot: Google Hack Honeypot (GHH) is designed to provide reconnaissance against attackers that use search engines as a hacking tool. This implements the honeypot concept to provide additional security to your web. The best factor of this is that it allows us to monitor any attempts by malicious attackers to compromise your security. GHH also have a logging functionality which allows us to administer it and take actions accordingly. You can download this from GHH - The "Google Hack" Honeypot Installation details can be found in GHH - The "Google Hack" Honeypot Conclusion: It is very much essential to follow secure coding practices and implement security code reviews in this approach. For a better understanding, you can go through OWASP guide for secure coding practices. There is also an option to request for immediate removal of the content from Google’s index. This can be achieved by sending a request to Google after registering through a Google account with the Google’s automatic URL removal system after creating either the META tags or the ‘robots.txt’ file in the web server. Sursa
-
Cryptography Tutorial #1 - Intro
Cryptography Tutorial #2 - Checksum
Cryptography Tutorial #3 - Binary numeral system
Cryptography Tutorial #4 - CRC
Cryptography Tutorial #5 - Hex
Cryptography Tutorial #6 - Base32
Cryptography Tutorial #7 - Base64
Cryptography Tutorial #8 - MD5
Sursa: http://www.softwaresprogramming.com/search/label/Cryptography
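For quick experiments while following these tutorials, the checksums and encodings they cover can all be reproduced with the Python standard library; a small reference snippet:

import base64, hashlib, zlib

data = b"Hello RST"

print("hex     :", data.hex())
print("base32  :", base64.b32encode(data).decode())
print("base64  :", base64.b64encode(data).decode())
print("CRC-32  :", hex(zlib.crc32(data)))
print("MD5     :", hashlib.md5(data).hexdigest())
print("checksum:", hex(sum(data) & 0xFF))           # simple 8-bit additive checksum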
- 1 reply
-
- 1
-
-
Free access to scientific research: 1,755 books, 13 journals, 60,000+ authors.
Physical Sciences, Engineering and Technology: Chemistry; Computer and Information Science; Earth and Planetary Sciences; Engineering; Materials Science; Mathematics; Nanotechnology and Nanomaterials; Physics; Robotics; Technology
Life Sciences: Agricultural and Biological Sciences; Biochemistry, Genetics and Molecular Biology; Environmental Sciences; Immunology and Microbiology; Neuroscience
Health Sciences: Medicine; Pharmacology, Toxicology and Pharmaceutical Science; Veterinary Medicine and Science
Social Sciences and Humanities: Business, Management and Economics; Psychology; Social Sciences
http://www.intechopen.com/
I can only say: congratulations to those behind this initiative and to the authors of these books.
-
Publication Date: March 9, 2012
Reverse engineering encompasses a wide spectrum of activities aimed at extracting information on the function, structure, and behavior of man-made or natural artifacts. Increases in data sources, processing power, and improved data mining and processing algorithms have opened new fields of application for reverse engineering. In this book, we present twelve applications of reverse engineering in the software engineering, shape engineering, and medical and life sciences application domains. The book can serve as a guideline to practitioners in the above fields to the state-of-the-art in reverse engineering techniques, tools, and use-cases, as well as an overview of open challenges for reverse engineering researchers.
Part 1: Software Reverse Engineering
Chapter 1: Software Reverse Engineering in the Domain of Complex Embedded Systems
Chapter 2: GUIsurfer: A Reverse Engineering Framework for User Interface Software
Chapter 3: MDA-Based Reverse Engineering
Chapter 4: Reverse Engineering Platform Independent Models from Business Software Applications
Chapter 5: Reverse Engineering the Peer to Peer Streaming Media System
Part 2: Reverse Engineering Shapes
Chapter 6: Surface Reconstruction from Unorganized 3D Point Clouds
Chapter 7: A Systematic Approach for Geometrical and Dimensional Tolerancing in Reverse Engineering
Chapter 8: A Review on Shape Engineering and Design Parameterization in Reverse Engineering
Chapter 9: Integrating Reverse Engineering and Design for Manufacturing and Assembly in Products Redesigns: Results of Two Action Research Studies in Brazil
Part 3: Reverse Engineering in Medical and Life Sciences
Chapter 10: Reverse Engineering Gene Regulatory Networks by Integrating Multi-Source Biological Data
Chapter 11: Reverse-Engineering the Robustness of Mammalian Lungs
Chapter 12: Reverse Engineering and FEM Analysis for Mechanical Strength Evaluation of Complete Dentures: A Case Study
Download:
http://vouzo.com/y5m53pci6p11
http://depositfiles.com/files/adssti86p
Sursa: http://www.opensc.ws/e-books/19723-reverse-engineering-recent-advances-applications-2012-a.html
// the book can also be read online, on the publisher's website: intechopen.com/books/reverse-engineering-recent-advances-and-applications
-
- 1
-
-
crack.exe MD5: 9407df7b41ce3355684b7847ad625645
full akl.exe MD5: 60ea5bd072b07b5a6930c7fb2fdaa103
akl 3.9 full.rar MD5: 59dd5bd38be9abc8303fc929a21cd689
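To verify a download against the hashes above, you can recompute the MD5 locally with the standard library; a small sketch:

import hashlib, sys

def md5_of(path):
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):    # hash the file in chunks
            h.update(chunk)
    return h.hexdigest()

if len(sys.argv) > 1:
    print(md5_of(sys.argv[1]))
else:
    print("usage: md5check.py <file>")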
-
sursa: http://www.securitytube.net/video/4107
- 1 reply
-
- 1
-
-
Introduction: In this post, we’ll look at an application reversing challenge from HTS (hackthissite.org) resembling a real-life protection scheme. You can find a copy of the challenge here: http://www.hackthissite.org/missions/application/app17win.zip Put simple, the program creates a key for your username, and compares it to the one you enter. This tutorial is not meant as a spoiler for HTS since for every username a dedicated password will be computed. This tutorial is purely written to allow you to understand how some (even real-life) protection schemes are implemented. The goal of the HTS challenge is to create a key generator, but in this tutorial I just want to find out my own dedicated password. Note: the length of the password is NOT static, and there are no anti-debugging mechanisms in effect I used Windows XP SP3, so if you have a different windows version the addresses may be different as well. Thanks to HTS, and thanks to NightQuest for coding this nice application. Run the application: Z:\HTS\app17win>app17win.exe ****************************************** * HackThisSite Application Challenge #17 * * Coded by NightQuest - 02-14-2009 * ****************************************** Objective: You need to create a key that is unique to your HackThisSite username. The idea is to create a keygen, but any method is allowed. An example would be: Username: SomeUserName Password: HTS-1234-5678-9012-3456 Username: fancy Password: ********** Username: fancy Password: **** Username: As you can see, when you supply the wrong password, you’ll be asked to input your username again! Find the password: Start the program (inside the debugger right away), enter your username and an arbitrary password. Do NOT press enter after entering the password. The idea is to use the debugger to “hook” into the execution flow at this point, and see what happens. Username: fancy Password: 123456 Press the pause button (or press F12) in the debugger (OllyDbg or Immunity Debugger) to interrupt the execution. Next, press Alt-F9 (return to user). This will tell the debugger to break as soon as it returns from any Operating System code and starts executing code from the application itself again. The process should now be running again. Open the command prompt and press enter. You’ll notice that the debugger will intervene and pause the process again right after a call to kernel32.ReadConsoleInputA The idea now is to continue to step through the application, and try to see where our input was used. Let’s use F8 (step over) to step through the instructions at this time. F8 will step over CALL instructions (to keep things a bit easier at this point), but we do need to keep an eye on the stack, every time we’re about the execute a CALL. In fact, whenever reaching a CALL instruction, before pressing F8, check the stack and see if we can see our username and password on the stack. Press F8, and you should find this “magic call”: 00402210 E8 8AEEFFFF CALL app17win.0040109F ; # magic call Why is this the “magic” call? Well, not only does it use the input, but also returns a value in eax. It basically sets AL to 1 if a wrong password is given. When it returns AL=0, the “TEST AL,AL” would set the zero flag, so the “JNZ” instruction below would not be taken. The app will then tell you: “Congratulations! Enter that password on HackThisSite. Let’s evaluate what happens inside this call. Instead of setting over the call with F8, use F7 to step into the call. Then use F8 to step until you reach 0x004012B2. 
This is where AL is set to 1 (indicating the password is wrong). You’ll notice that the routine jumped directly to the MOV AL,1 instruction, and didn’t execute the XOR AL,AL and JMP just above it: Anyways, AL is set to 1, but we we should avoid this !!! In fact, we should try to make the application jump to 004012AE 32C0 XOR AL,AL There are a lot of compare functions in this routine, and a lot of things are going on. Let’s set a breakpoint at the magic CALL at 00402210, restart the application and enter a password in the format suggested by the application. In fact, you’ll need to use your hackthissite.org username. I used “fancy” before, but my real username is “fancy__004?, so that’s what I’ll use from this point forward Username: fancy__04 Password: HTS-1234-5678-9900-1122 When the breakpoint at 0×00402210 gets hit, we can enter the function again with F7 and step through the routine. You’ll see that there’s a a lot of calls to this function: 004011B3 E8 68150000 CALL app17win.00402720 By stepping into this function you’ll see that there’s a lot of computing done. To understand the algorithm behind calculating the password you have to examine this function. But I’ll continue since I just want to have my password. We will encounter 2 compare functions which are not important to us, like this one: But then this compare function is interesting: Why is this compare interesting? If you follow the jump which is taken when these values are not equal, AL is set to 1 (remember: we need AL = 0) and then the function returns. So here we have a compare of the values 10 and 12. Note: it’s in hex, in decimal it is: 16 != 18 So what does that mean?? Well, based on the input we provided (the username & password), we can derive this relationship: 16 = the number of digits in the password 18 = twice the number of characters of your username Since my username has a fixed length, maybe the password is too short?? Let’s add some digits, according to the password convention, and use this password: Password: HTS-1234-5678-9900-1122-00 Restart the application, use the “new” password and stop again at the same location: This time, we pass the test. Good !!!! Let’s recap real quick. The following compare instruction validates if the number of digits in the password is correct: 004011BA 3BF8 CMP EDI, EAX Let’s step further. The next interesting compare is this one: Why is this compare interesting? Again if the values don’t match we take the jump to the instruction setting AL = 1. What’s compared here? 12 = first 2 digits of my password: HTS-1234-5678-9900-1122-00 11 = ??? Maybe the real digits of the password? I think so So, in short: this instruction validates 2 digits in the provided password by comparing what I entered with what the application has computed. 0040128C 3B75 94 CMP ESI,DWORD PTR SS:[EBP-6C] Let’s set a breakpoint at this location, restart the app and only change the corresponding part of the password. Since we know the first 2 digits should be 11, we simply update the password and replace 12 with 11. When using password HTS-1134-5678-9900-1122-00, the compare will pass: Excellent. Press F9 to let the application to run, and the breakpoint will get hit a second time. This time, it shows the next 2 digits from the password, and it indicates what the calculated value is. Based on that info, we know that the next 2 digits of the password should be 28 Restart the application again, and change the next 2 digits, so the new password would be HTS-1128-5678-9900-1122-00 this time. 
Since the calculated values can be seen at the breakpoint, we can now manually generate the entire password, fixing 2 digits per run until every pair matches the computed value:
HTS-1128-5678-9900-1122-00
HTS-1128-2378-9900-1122-00
HTS-1128-2320-9900-1122-00
HTS-1128-2320-0400-1122-00
HTS-1128-2320-040D-1122-00
HTS-1128-2320-040D-2922-00
HTS-1128-2320-040D-2903-00
HTS-1128-2320-040D-2903-18
Let’s validate whether this works by entering the following credentials:
Username: fancy__04
Password: HTS-1128-2320-040D-2903-18
Z:\HTS\app17win>app17win.exe
******************************************
* HackThisSite Application Challenge #17 *
* Coded by NightQuest - 02-14-2009 *
******************************************
Objective: You need to create a key that is unique to your HackThisSite username. The idea is to create a keygen, but any method is allowed. An example would be:
Username: SomeUserName
Password: HTS-1234-5678-9012-3456
Username: fancy__04
Password: **************************
Congratulations! Enter that password on HackThisSite.
w00t.
Sursa: https://www.corelan.be/index.php/2012/05/14/reversing-101-solving-a-protectionscheme/
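Building on the relationship uncovered earlier with the debugger (the key must contain twice as many hex digits as the username has characters), here is a small, hedged Python sketch of a format checker. It is not a keygen: the actual digit values are produced by the routine at 0x00402720, called repeatedly from 0x004011B3, which you would have to study and reimplement yourself.

import string

def key_format_ok(username, key):
    # Derived above: "HTS-" prefix, then twice as many hex digits
    # as there are characters in the username, grouped by dashes.
    if not key.startswith("HTS-"):
        return False
    digits = key[4:].replace("-", "")
    return len(digits) == 2 * len(username) and all(c in string.hexdigits for c in digits)

print(key_format_ok("fancy__04", "HTS-1128-2320-040D-2903-18"))   # True
print(key_format_ok("fancy__04", "HTS-1234-5678-9012-3456"))      # False: 16 digits instead of 18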
-
How Malicious Code Can Run in Microsoft Office Documents
Usr6 posted a topic in Tutoriale in engleza
One of the most effective methods of compromising computer security, especially as part of a targeted attack, involves emailing the victim a malicious Microsoft Office document. Even though the notion of a document originally involved non-executable data, attackers have found ways to make Microsoft Office execute code embedded within the document. Below are 4 of the most popular techniques used to accomplish this.

VBA Macros

Support for executing code that's embedded as a VBA macro is built into Microsoft Office. Once the victim opens the document and allows macros to run, this code can run arbitrary commands on the user's system, including those that launch programs and interact over the network. The penetration testing tool Metasploit makes it relatively straightforward to generate a payload that attackers could embed in an Office file as a VBA macro. (See one example by Chris Patten.) Such macros can be included in "legacy" binary formats (.doc, .xls, .ppt) and in the modern XML-formatted documents supported by Microsoft Office 2007 and higher. In either case, Office will present the user with a security warning stating that macros have been disabled and offering to "enable content". Social engineering techniques can persuade the victim to click the button that allows the embedded macro to run and infect the system.

Payload of a Microsoft Office Exploit

Another way to execute malicious code as part of an Office document involves exploiting vulnerabilities in a Microsoft Office application. The exploit is designed to trick the targeted application into executing the attacker's payload, which is usually concealed within the Office document as shellcode. In this case, Microsoft Office has to be exploited to execute the attacker's code; this is in contrast to the previous scenario, where the attacker takes advantage of macros, which Microsoft Office supports as a feature. For instance, vulnerability CVE-2012-0141, announced in May 2012, could allow the attacker to craft a malicious Excel file containing an exploit that would "take complete control of an affected system."

Embedded Flash Program

Embedding a Flash program inside an Office document gives attackers yet another way to run malicious code on the victim's system. In this case, the code within the Flash object runs as soon as the victim opens the document, without any warnings and without relying on exploits. This code is still subject to the security restrictions imposed by Flash Player, so to perform escalated actions it would need to exploit a vulnerability in Flash Player. One example of this attack has been described by Mila on the Contagio blog. The malicious Word document "DOC Iran's Oil and Nuclear Situation.doc" was sent to a victim as part of a targeted attack. The document contained a Flash object. (See steps to manually embed a Flash object in an Office document.) Attackers can embed Flash objects in Office documents using automated tools as well as manual steps. In this case, the Flash object instructed Flash Player to download and play an MP4 file that was designed to exploit the CVE-2012-0754 vulnerability in Flash Player, announced in February 2012. This allowed the attacker to infect the victim's system with a malicious Windows executable (trojan).
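As a side note (my own illustration, not from the article): because SWF files begin with the well-known signatures "FWS" (uncompressed) or "CWS" (zlib-compressed), a quick first-pass triage for the embedded-Flash technique is simply to scan a suspicious document for those markers. A minimal Python sketch:

import sys

# SWF headers: "FWS" = uncompressed, "CWS" = zlib-compressed.
SWF_MAGICS = (b"FWS", b"CWS")

def find_swf_offsets(path):
    with open(path, "rb") as f:
        data = f.read()
    hits = []
    for magic in SWF_MAGICS:
        start = 0
        while True:
            idx = data.find(magic, start)
            if idx == -1:
                break
            hits.append((idx, magic.decode("ascii")))
            start = idx + 1
    return sorted(hits)

if __name__ == "__main__":
    for offset, magic in find_swf_offsets(sys.argv[1]):
        # Three-byte matches can be coincidental; a real tool would also
        # sanity-check the SWF version byte and length field that follow.
        print("possible %s SWF header at offset 0x%x" % (magic, offset))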
Embedded JavaScript

Another way to automatically execute code when the victim opens a Microsoft Office document involves embedding the ScriptBridge ActiveX control in the file. This control allows the attacker to embed and execute JavaScript, as was the case with the malicious "World Uyghur Congress Invitation.doc" file I obtained from the Contagio blog. This Word file used the "Microsoft Scriptlet Component", implemented as the ScriptBridge ActiveX control, to execute embedded JavaScript code, which downloaded a malicious Flash file from the specified URL. Microsoft Office automatically invokes embedded ActiveX controls that are marked Safe-For-Initialization, which is the case with ScriptBridge. (I'd love to better understand how ScriptBridge is being used to run JavaScript, so if you have more details, please let me know.) In the case of this Word document, the downloaded Flash file was crafted to exploit the CVE-2012-0779 vulnerability in Flash Player, announced in May 2012.

These are some of the techniques that intruders have used to execute code in Microsoft Office documents in order to compromise the system. The attacker can directly take advantage of a vulnerability in the targeted Office application. In other cases, the attacker uses functionality provided by Microsoft Office either to trick the user into allowing the malicious code to run (VBA macros) or to abuse a weakness in Office settings to run code that exploits vulnerabilities in other applications (Flash Player).

— Lenny Zeltser

Source: How Malicious Code Can Run in Microsoft Office Documents
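To complement the article with a defender's view (again my own sketch, not something from the original post): for the modern XML-based formats, both the VBA-macro and the ActiveX/OLE techniques leave tell-tale parts inside the OOXML zip package. The part-name markers below are the typical ones (for example word/vbaProject.bin); treat them as assumptions used for illustration.

import sys
import zipfile

# Typical OOXML part-name markers: vbaProject.bin holds VBA macro code,
# activeX parts hold embedded ActiveX controls, embeddings holds OLE objects.
MARKERS = ("vbaproject.bin", "/activex/", "/embeddings/")

def triage(path):
    # Only applies to the zip-based 2007+ formats (.docx/.docm/.xlsm/...),
    # not to the legacy binary .doc/.xls/.ppt files.
    with zipfile.ZipFile(path) as pkg:
        names = [name.lower() for name in pkg.namelist()]
    return [marker for marker in MARKERS if any(marker in n for n in names)]

if __name__ == "__main__":
    found = triage(sys.argv[1])
    if found:
        print("parts worth a closer look:", ", ".join(found))
    else:
        print("no macro / ActiveX / OLE markers found")
-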
How to Be a Terminal Pro | 2012 | 02:50:00 | mov 1280x720 | 296 MB

In this course, you'll learn how to take advantage of that scary app you never touch: Terminal! We'll begin with the obligatory "hello world" command, and work our way up to advanced usage.

Introduction
Hello Terminal 4m 47s
Navigating the File System 7m 38s

Working With Files and System Processes
Files, Links, and CRUD 10m 16s
Finding Files 8m 55s
Managing File Permissions 8m 49s
Editing Files 14m 49s
Piping, Redirection, and Output 5m 56s
Managing Processes on Mac OSX 6m 39s

Interacting with Remote Machines
SSH Keys for Password-Less Logins 6m 4s
Working With Remote Files: SCP, SFTP, and cURL 10m 47s

Advanced Terminal Applications
A Primer to Bash Scripting/Aliasing 13m 5s
Automated Background Processes 8m 29s
Configuring the Apache Web Server 5m 17s

Terminal Customization
Customizing the Prompt 5m 16s

http://extabit.com/file/2b4wq9ibfespo/How to BeTerminal Pro.rar
http://rapidgator.net/file/13326672/How_to_BeTerminal_Pro.rar.html
http://bitshare.com/files/684sf2xl/How-to-BeTerminal-Pro.rar.html

Source
-
|_ /\ /\/\ |_| |_ -|- | /\ |\| |.
-
Socket Programming 101 - Introduction
Socket Programming 101 - Part 1 : Network Basics
Socket Programming 101 - Part 2 : More Network Theory
Socket Programming 101 - Part 3 : Tcp And Udp
Socket Programming 101 - Part 4 : Endianness
Socket Programming 101 - Part 5 : Socket Theory
Socket Programming 101 - Part 6 : Some Useful Utilities
Socket Programming 101 - Part 7 : Struct Sockaddr_In
Socket Programming 101 - Part 8 : Two More Structures
Socket Programming 101 - Part 9 : Getaddrinfo()
Socket Programming 101 - Part 10: Ip_Find.C
Socket Programming 101 - Part 11: Socket()
Socket Programming 101 - Part 12: Bind()
Socket Programming 101 - Part 13: Listen()
Socket Programming 101 - Part 14: Accept()
Socket Programming 101 - Part 15: Connect()
Socket Programming 101 - Part 16: Send() / Recv()
Socket Programming 101 - Part 17: Sendto() And Recvfrom()
Socket Programming 101 - Part 18: Close()
Socket Programming 101 - Part 19: Tcp_Messageserver.C
Socket Programming 101 - Part 20: Tcp_Client.C

Source + future episodes: http://www.securitytube.net/tags/socket%20programming
-
- 1
-