Search the Community
Showing results for tags 'computing'.
-
IBM has announced it has surmounted one of the biggest hurdles on the road toward creating the world's first truly usable quantum computer. A number of analysts have predicted that the jump from traditional computing to quantum chips could be on par with the revolution we saw when the world moved from vacuum tubes to integrated circuits back in the early sixties.

The reason for this increased power is that quantum computers are capable of processing far more calculations at once than traditional CPUs, because instead of a transistor existing in only one of two states, on or off, a quantum bit can be in both at the same time. How is that possible? Well, while the specifics of the mechanism involve a bit more math than I could sit through in college, at its essence the computer is taking advantage of a quantum phenomenon known as "superposition," wherein a quantum system can exist in a combination of several states at once rather than settling into just one. In short, this means that, at least in theory, every quantum bit (or "qubit") added to a machine doubles the amount of information it can work on at once.

This has made the race to create the world's first true quantum computer a bit of a Holy Grail moment for big chip makers, who have found themselves inching closer to maxing out Moore's Law as 22-nanometer transistors shrink to 14nm, and 14nm tries to make the jump to 10nm.

Related: Leaked table of Intel's sixth-generation processors packs few surprises

So far we've seen just one company pull out in front of the herd with its own entry, D-Wave, which first debuted all the way back in 2013. Unfortunately for futurists, the D-Wave machine is more a proof of concept that quantum computing is at least possible than something necessarily much quicker than what we have to work with today.

Now, though, according to a statement released by IBM Research, it seems Big Blue may have found a way around one of the biggest obstacles in quantum computing by sorting out the problem known as "quantum decoherence." Decoherence is a stumbling block that quantum computers run into when there's too much "noise" around a chip, whether from heat, radiation, or internal defects. The systems that support quantum chips are incredibly sensitive pieces of machinery, and even the slightest interference can make it impossible to know whether or not the computer successfully figured out that two plus two equals four.

IBM was able to address this by upping the number of available qubits laid out on a lattice grid from two to four, so the computer can catch these errors by running queries against itself and automatically correcting for any difference in the results. In layman's terms, this means that researchers can accurately track the quantum state of a qubit without altering the result through the act of observation alone.

"Quantum computing could be potentially transformative, enabling us to solve problems that are impossible or impractical to solve today," said Arvind Krishna, senior vice president and director of IBM Research, in a statement.

Related: Intel may turn to Quantum Wells to enforce Moore's Law

While that may not sound huge, it's still a big step in the right direction for IBM. The company believes the quantum revolution could be a potential savior for the supercomputing industry, a segment projected to be hardest hit by the imminent slowdown of Moore's trajectory.
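To make "superposition" slightly more concrete, here is a minimal sketch in Python with NumPy (my own illustration, not anything from IBM's actual hardware or software stack) of a single qubit as a two-amplitude state vector: a Hadamard gate puts it into an equal superposition, and measurement probabilities come from the squared magnitudes of the amplitudes.

```python
import numpy as np

# A qubit is a unit vector of two complex amplitudes over the basis states |0> and |1>.
ket0 = np.array([1, 0], dtype=complex)  # the classical-like "off" state

# The Hadamard gate maps |0> to an equal superposition of |0> and |1>.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
psi = H @ ket0  # amplitudes (1/sqrt(2), 1/sqrt(2))

# Measuring collapses the state; outcome probabilities are the squared amplitude magnitudes.
probs = np.abs(psi) ** 2  # [0.5, 0.5]
samples = np.random.choice([0, 1], size=10_000, p=probs)
print(probs, np.bincount(samples))  # roughly 5,000 zeros and 5,000 ones
```

Note that reading the qubit still returns a single classical bit; the "both at once" language describes the two nonzero amplitudes it carries before measurement.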
Other possible applications up for grabs include solving complex physics problems beyond our current understanding, testing drug combinations by the billions at a time, and creating unbreakable encryption through the use of quantum cryptography. It seems these kinds of computers could lead to "ultimate security."

Source: Quantum computing may not be as far off as we think, says IBM | Digital Trends
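The "unbreakable encryption" claim refers to quantum key distribution. As a rough illustration, here is a toy simulation of the sifting step of the BB84 protocol (a classical Python sketch of the bookkeeping only, with no real quantum hardware involved): Alice sends random bits in random bases, Bob measures in random bases, and both keep only the positions where their bases happened to match.

```python
import random

n = 32  # number of qubits Alice sends in this toy run

# Alice picks a random bit and a random basis (0 = rectilinear, 1 = diagonal) per qubit.
alice_bits  = [random.randint(0, 1) for _ in range(n)]
alice_bases = [random.randint(0, 1) for _ in range(n)]

# Bob measures each incoming qubit in his own randomly chosen basis.
bob_bases = [random.randint(0, 1) for _ in range(n)]
# When bases match, Bob reads Alice's bit exactly; otherwise his result is a coin flip.
bob_bits = [b if ab == bb else random.randint(0, 1)
            for b, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

# Sifting: over a public channel they compare bases (never bits) and discard mismatches.
key = [b for b, ab, bb in zip(alice_bits, alice_bases, bob_bases) if ab == bb]
print("shared key bits:", key)  # about n/2 bits survive on average
```

An eavesdropper who measures in the wrong basis disturbs the states, which Alice and Bob can detect by sacrificing and comparing a subset of the sifted key; that detectability is where the security claim comes from.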
-
What is Cryptography?

Cryptography is the science of secret writing: it encrypts a plain-text message to make it unreadable. It is a very ancient art whose roots date back to Egyptian scribes using non-standard hieroglyphs in inscriptions. Today, electronic and Internet communication has become a prevalent and vital part of everyday life, and securing data at rest and data in transit is an ongoing challenge for organizations. Cryptography plays a very important role in the CIA triad of Confidentiality, Integrity, and Availability: it provides mathematical techniques for aspects of information security such as confidentiality, data integrity, entity authentication, and data origin authentication. Over the ages, these techniques have evolved tremendously alongside technological advancement and growing computing power.

Encryption is a component of cryptography, the science of secret communication. The prefix "en" means "to make" and "crypt" means hidden or secret, so encryption can be defined as the process of making information hidden or secret. In this digital age, encryption is based on two major classes of algorithms:

- Asymmetric or public-key cryptography: uses two keys, a public encryption key and a private decryption key.
- Symmetric or secret-key cryptography: uses the same key for both the encryption and decryption processes.
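As a concrete illustration of the two classes, here is a minimal sketch using the Python `cryptography` package (the library choice and the sample message are assumptions made for illustration; the article does not prescribe any implementation). Fernet provides symmetric encryption with one shared key, while RSA with OAEP padding shows the public/private key split.

```python
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

message = b"attack at dawn"

# Symmetric (secret-key): the same key both encrypts and decrypts.
secret_key = Fernet.generate_key()
f = Fernet(secret_key)
assert f.decrypt(f.encrypt(message)) == message

# Asymmetric (public-key): anyone can encrypt with the public key,
# but only the holder of the private key can decrypt.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
ciphertext = private_key.public_key().encrypt(message, oaep)
assert private_key.decrypt(ciphertext, oaep) == message
print("both round trips succeeded")
```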
Challenges in traditional cryptography

The keys used in modern cryptography are so large that a billion computers working in conjunction, each processing a billion calculations per second, would still take a trillion years to definitively crack a key. Though this doesn't seem to be a problem now, it soon will be. Quantum computers are expected to displace traditional binary computing in the near future; because they operate at the quantum level, they are expected to perform calculations at speeds no computer in use today could possibly achieve. So codes that would take a trillion years to break could possibly be cracked in far less time on a quantum computer. Traditional cryptography also has the problems of key distribution and eavesdropping.

Information security expert Rick Smith points out that the secrecy or strength of a cipher ultimately rests on three major things:

- The infrastructure it runs in: If the cryptography is implemented primarily in software, then the infrastructure will be the weakest link. If Bob and Alice are trying to keep their messages secret, Tom's best bet is to hack into one of their computers and steal the messages before they're encrypted. It's always going to be easier to hack into a system, or infect it with a virus, than to crack a large secret key. In many cases, the easiest way to uncover a secret key is to eavesdrop on the user and intercept the key when it's passed to the encryption program.
- Key size: In cryptography, key size matters. If an attacker can't install a keystroke monitor, then the best remaining option is to guess the key through a "brute-force" trial-and-error search. A practical cipher must use a key size that makes brute-force searching impractical. However, since computers get faster every year, the size of a "borderline safe" key keeps growing.
- Algorithm quality: Cipher flaws can yield "shortcuts" that allow attackers to skip large blocks of keys during their trial-and-error search. For example, the well-known compression utility PKZIP traditionally incorporated a custom-built encryption feature that used a 64-bit key. In theory, it should take 2^64 trials to check all possible keys; in fact, there is a shortcut attack against PKZIP encryption that requires only about 2^27 trials to crack the ciphertext. The only way to find such flaws is to actually try to crack the algorithm, usually with tricks that have worked against other ciphers. An algorithm usually only shows its quality after being subjected to such analysis, and even then, the failure to find a flaw today doesn't guarantee that someone won't find one eventually.

At present, an RSA key length of 2048 bits is considered acceptable. In 2009, researchers were able to crack a 768-bit RSA key, which remains the current factoring record for the largest general integer. The Lenstra group estimated that factoring a 1024-bit RSA modulus would be about 1,000 times harder than their record effort with the 768-bit modulus; in other words, on the same hardware, under the same conditions, it would take about 1,000 times as long. Breaking a 2048-bit key would take about 4.3 billion times longer than breaking a 1024-bit key. The symmetric-key algorithm DES is now considered insecure because its 56-bit key size is too small: although DES uses a 64-bit block, only 56 bits are actually used by the algorithm, with the final 8 bits serving as parity checks.

In short, traditional cryptography and its security rest on difficult mathematical problems that are mature both in theory and in implementation. Both the secret-key and public-key methods of cryptology have their own flaws, and as computing power grows, the strength of traditional cryptography may weaken to the point of being breakable.

DNA Computing

A technique for securing data using the biological structure of DNA is called DNA computing (also known as molecular computing or biological computing). It was introduced by Leonard Max Adleman in 1994 for solving complex problems such as the directed Hamiltonian path problem, an NP-complete problem similar to the traveling salesman problem. Adleman is also the 'A' in the RSA algorithm, which in some circles has become the de facto standard for industrial-strength encryption of data sent over the Web. The technique was later extended by various researchers for encrypting data and reducing its storage size, making data transmission over the network faster and more secure.

DNA can be used to store and transmit data, and the concept of using DNA computing in the fields of cryptography and steganography has been identified as a possible technology that may bring new hope for unbreakable algorithms. Strands of DNA are long polymers of millions of linked nucleotides. These nucleotides consist of one of four nitrogen bases, a five-carbon sugar, and a phosphate group, and each is named after the nitrogen base it contains: Adenine (A), Cytosine (C), Guanine (G), and Thymine (T). Mathematically, this means we can use the four-letter alphabet Σ = {A, G, C, T} to encode information, which is more than enough considering that an electronic computer needs only two digits, 1 and 0, for the same purpose. A simple encoder along these lines is sketched below.
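Here is a minimal round-trip sketch in Python of that idea, using the two-bits-per-base mapping (A for 00, C for 01, G for 10, T for 11) that the article cites later; the mapping order and function names are just illustrative choices.

```python
# Two bits per base, using the mapping A=00, C=01, G=10, T=11.
BASES = "ACGT"
BITS = {b: i for i, b in enumerate(BASES)}

def bytes_to_dna(data: bytes) -> str:
    """Encode each byte as four bases, two bits per base, most significant bits first."""
    return "".join(BASES[(byte >> shift) & 0b11]
                   for byte in data
                   for shift in (6, 4, 2, 0))

def dna_to_bytes(strand: str) -> bytes:
    """Invert the encoding: every four bases reassemble into one byte."""
    out = bytearray()
    for i in range(0, len(strand), 4):
        byte = 0
        for base in strand[i:i + 4]:
            byte = (byte << 2) | BITS[base]
        out.append(byte)
    return bytes(out)

strand = bytes_to_dna(b"Hi")
print(strand)                        # CAGACGGC
assert dna_to_bytes(strand) == b"Hi"
```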
Advantages of DNA computing

- Speed: Conventional computers can perform approximately 100 MIPS (millions of instructions per second). Combining DNA strands as Adleman demonstrated made computations equivalent to 10^9 operations or better, arguably over 100 times faster than the fastest computer.
- Minimal storage requirements: DNA stores memory at a density of about 1 bit per cubic nanometer, whereas conventional storage media require 10^12 cubic nanometers to store 1 bit.
- Minimal power requirements: No external power is required while a DNA computation is taking place; the chemical bonds that are the building blocks of DNA form without any outside power source. There is no comparison to the power requirements of conventional computers.

Multiple DNA crypto algorithms have been researched and published, such as symmetric and asymmetric key cryptosystems using DNA, DNA steganography systems, triple-stage DNA cryptography, encryption algorithms inspired by DNA, and chaotic computing.

DNA cryptography can be defined as a technique of hiding data in terms of a DNA sequence: each letter of the alphabet is converted into a different combination of the four bases that make up human deoxyribonucleic acid (DNA). It is a rapidly emerging technology built on the concepts of DNA computing. DNA stores a massive amount of information inside the tiny nuclei of living cells, encoding all the instructions needed to make every living creature on Earth. The main advantages of DNA computation are miniaturization and parallelism relative to conventional silicon-based machines: a square centimeter of silicon can currently support around a million transistors, whereas current manipulation techniques can handle on the order of 10^20 strands of DNA. DNA, with its unique data structure and ability to perform many parallel operations, allows one to look at a computational problem from a different point of view.

A simple mechanism of transmitting two related messages while hiding one of them is not enough to prevent an attacker from breaking the code. DNA cryptography can have special advantages for secure data storage, authentication, digital signatures, steganography, and so on. DNA can also be used to produce identification cards and tickets. "Trying to build security that will last 20 to 30 years for a defense program is very, very challenging," says Benjamin Jun, vice president and chief technology officer at Cryptography Research.

Multiple studies have been carried out on a variety of biomolecular methods for encrypting and decrypting data stored as DNA. With the right kind of setup, it has the potential to solve huge mathematical problems, so it is hardly surprising that DNA computing represents a serious threat to various powerful encryption schemes. Various groups have suggested using the sequence of nucleotides in DNA (A for 00, C for 01, G for 10, T for 11) for just this purpose, as in the encoder sketched above. One idea is to not even bother encrypting the information, but simply to bury it in the DNA so it is well hidden, a technique called DNA steganography; a toy version appears below.

DNA storage of data also has enormous capacity as an ultra-compact medium of information storage: very large amounts of data can be stored in a compact volume. A gram of DNA contains about 10^21 DNA bases, roughly 10^8 terabytes of data, so a few grams of DNA may hold all the data stored in the world.
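Here is a toy sketch of the steganographic idea (my own illustration, far simpler than any published scheme): each message base is buried behind a run of random decoy bases whose lengths are derived from a shared secret, so a reader without the secret sees only a plausible-looking sequence.

```python
import random

BASES = "ACGT"

def hide(message_strand: str, secret: int) -> str:
    """Bury each message base behind a secret-dependent run of random decoy bases."""
    rng = random.Random(secret)
    out = []
    for base in message_strand:
        gap = rng.randint(1, 4)  # how many decoys precede this message base
        out.extend(rng.choice(BASES) for _ in range(gap))
        out.append(base)
    return "".join(out)

def reveal(cover_strand: str, secret: int) -> str:
    """Replay the PRNG with the shared secret to skip the decoy runs."""
    rng = random.Random(secret)
    message, i = [], 0
    while i < len(cover_strand):
        gap = rng.randint(1, 4)
        for _ in range(gap):
            rng.choice(BASES)  # consume draws to keep the streams in sync
        i += gap               # jump over the decoys
        message.append(cover_strand[i])
        i += 1
    return "".join(message)

cover = hide("CAGACGGC", secret=2015)  # the strand that encodes b"Hi" above
print(cover)
assert reveal(cover, secret=2015) == "CAGACGGC"
```

In a real scheme the decoys would be indistinguishable from biological DNA and the message would also be encrypted first; hiding alone, as the article notes, is not enough.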
Conclusion

DNA cryptography is in its infancy; only in the last few years has work in DNA computing seen real progress. DNA cryptography is even less well studied, but ramped-up work in cryptography over the past several years has laid good groundwork for applying DNA methodologies to cryptography and steganography. Research and studies are being carried out to identify a better, unbreakable cryptographic standard, and a number of schemes offering some level of DNA cryptography have been proposed and are being explored. At present, work in DNA cryptography centers on using DNA sequences to encode binary data in some form or another. Though the field is extremely complex and current work is still in the developmental stages, there is a lot of hope that DNA computing will prove a good technique for information security.

References

- An Overview of Cryptography
- Handbook of Applied Cryptography
- Encryption vs. Cryptography - What is the Difference?
- Traditional Cryptology Problems - HowStuffWorks
- Understanding encryption and cryptography basics
- https://www.digicert.com/TimeTravel/math.htm
- http://securityaffairs.co/wordpress/33879/security/dna-cryptography.html
- http://research.ijcaonline.org/volume98/number16/pxc3897733.pdf
- http://searchsecurity.techtarget.com/answer/How-does-DNA-cryptography-relate-to-company-information-security
- http://www.technologyreview.com/view/412610/the-emerging-science-of-dna-cryptography/

Source
-
Tagged with: computing, cryptography (and 3 more)