Are processors pushing up against the limits of physics?

When I first started reading Ars Technica, the performance of a processor was measured in megahertz, and the major manufacturers were racing to squeeze as many of them as possible out of their latest silicon. Shortly thereafter, however, the energy needs and heat output of these beasts brought that race crashing to a halt. More recently, the number of processing cores scaled up rapidly, but that approach quickly reached the point of diminishing returns. Now, getting the most processing power per watt seems to be the key measure of performance.

None of these shifts happened because the companies making processors ran up against hard physical limits. Rather, computing power ended up being constrained because progress in certain areas, primarily energy efficiency, was slow compared to progress in others, such as feature size. But could we actually be approaching the physical limits of processing power? In this week's edition of Nature, the University of Michigan's Igor Markov takes a look at the sorts of limits we might face.

Clearing hurdles

Markov notes that, based on purely physical limitations, some academics have estimated that Moore's law has hundreds of years left in it. In contrast, the International Technology Roadmap for Semiconductors (ITRS), a group sponsored by the major semiconductor manufacturing nations, gives it a couple of decades. And the ITRS tends toward optimism; it once predicted that we would have 10GHz CPUs by the Core 2 era. The reason for this discrepancy is that many hard physical limits never come into play.

For example, the ultimate size limit for a feature is a single atom, which represents a hard physical limit. But well before you reach single atoms, physics limits the ability to accurately control the flow of electrons. In other words, circuits could potentially reach single-atom thickness, but their behavior would become unreliable before they got there. In fact, a lot of the current work Intel is doing to move to ever-smaller processes involves figuring out how to structure individual components so that they continue to function despite these issues.

The gist of Markov's argument is that although hard physical limits exist, they're often not especially relevant to the challenges actually impeding progress. Instead, what we face are softer limits, ones we can often work around. "When a specific limit is approached and obstructs progress, understanding its assumptions is a key to circumventing it," he writes. "Some limits are hopelessly loose and can be ignored, while other limits remain conjectural and are based on empirical evidence only; these may be very difficult to establish rigorously."

As a result, things that seem like limits are often overcome by a combination of creative thinking and improved technology. The example Markov cites is the diffraction limit. On its face, that limit should have kept the argon fluoride lasers we use from patterning any features finer than 65 nanometers. But by exploiting sub-wavelength diffraction techniques, we're currently working on 14nm features using the same lasers.
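
To get a feel for why the diffraction limit turned out to be soft, it helps to run the numbers through the standard Rayleigh resolution criterion, CD = k1 · λ / NA. Here's a minimal sketch; the specific values (the 193nm ArF wavelength, an immersion numerical aperture around 1.35, and k1 factors from the classical 0.61 down to a practical floor of 0.25) are standard lithography textbook figures I'm assuming, not numbers from Markov's paper.

```python
# A rough illustration, not from Markov's paper: the Rayleigh criterion,
# CD = k1 * wavelength / NA, sets the minimum printable feature size.
# All constants below are assumed, standard lithography textbook values.

WAVELENGTH_NM = 193.0  # ArF excimer laser wavelength

def min_feature_nm(k1: float, na: float) -> float:
    """Minimum printable feature (critical dimension), in nanometers."""
    return k1 * WAVELENGTH_NM / na

# Classical single exposure through a dry lens (NA ~ 0.93, k1 ~ 0.61):
print(min_feature_nm(0.61, 0.93))  # ~127 nm -- the "hard-looking" limit
# Immersion lenses raise NA to ~1.35; resolution tricks push k1 toward 0.25:
print(min_feature_nm(0.25, 1.35))  # ~36 nm per exposure
# Multiple patterning then splits a single layer across several exposures,
# which is how ~14 nm features come out of the same 193 nm laser.
```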

Where are the current limits?

Markov focuses on the two issues he sees as the biggest limits: energy and communication. The power consumption problem stems from the fact that the energy used by existing circuit technology does not shrink in proportion to its shrinking physical dimensions. The main consequence has been a large effort to ensure that parts of the chip are shut down whenever they're not in use. But at the rate this is progressing, the majority of a chip will have to be kept inactive at any given moment, creating what Markov terms "dark silicon."
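
To see how dark silicon falls out of the arithmetic, consider a deliberately simplified scaling exercise. This is my own illustration under assumed numbers (transistor count doubling each generation against only a 35% drop in power per transistor), not a model from the paper.

```python
# A deliberately simplified sketch of "dark silicon" (my illustration,
# not Markov's model). Assumptions: transistor count doubles each
# generation, but power per transistor falls only ~35% rather than the
# ~50% that classical Dennard scaling delivered, under a fixed budget.

transistors = 1.0           # relative transistor count
power_per_transistor = 1.0  # relative active power per transistor
budget = 1.0                # fixed total chip power budget

for gen in range(1, 6):
    transistors *= 2.0
    power_per_transistor *= 0.65  # assumed: efficiency lags density
    all_on = transistors * power_per_transistor
    active_fraction = min(1.0, budget / all_on)
    print(f"gen {gen}: {active_fraction:.0%} of the chip can be active")
# The rest must sit idle at any given moment -- Markov's "dark silicon."
```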

Power use scales with the chip's operating voltage; dynamic power, in fact, goes as the square of the voltage, and transistors simply cannot operate below a level of about 200 millivolts. Right now, we're at roughly five times that, so there's potential for improvement. But progress in lowering operating voltages has slowed, so we may again be at a point where a technological roadblock arrives well before the hard physical limit.
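
Because of that square law (P ≈ C·V²·f for dynamic CMOS power), the five-times figure understates the available headroom. A quick sketch, assuming a roughly 1V supply as the "five times" operating point:

```python
# Dynamic CMOS switching power follows P ~ C * V^2 * f. The ~1 V supply
# is my assumption, taken as "about five times" the 200 mV floor; the
# switched capacitance C and frequency f are held fixed for comparison.

V_NOW = 1.0    # volts (assumed current operating point)
V_FLOOR = 0.2  # volts, the practical lower limit for transistors

ratio = (V_NOW / V_FLOOR) ** 2  # power ratio at equal C and f
print(f"Dropping {V_NOW} V -> {V_FLOOR} V cuts dynamic power {ratio:.0f}x")
# -> 25x: a 5x voltage reduction buys a 25x cut in switching power,
# which is why stalled progress on voltage scaling matters so much.
```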

The energy issue is closely tied to communication: most of a chip's physical volume, and most of its energy consumption, goes toward getting different areas to communicate with each other or with the rest of the computer.

Here, we really are pushing physical limits. Even if signals in the chip moved at the speed of light, a chip running above 5GHz wouldn't be able to get a signal from one side of the die to the other within a single clock cycle. The best we can do with current technology is to design chips so that areas which frequently need to communicate are physically close to each other. Extending more circuitry into the third dimension could help a bit, but only a bit.
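
The speed-of-light claim reduces to simple arithmetic, sketched below; the vacuum light speed and the roughly 2cm die size are illustrative figures I'm assuming, not values from the article.

```python
# How far can a signal get in one clock cycle? Illustrative numbers:
# vacuum light speed and a ~2 cm die are assumptions for this sketch;
# real on-chip wires carry signals well below c.

C_VACUUM = 3.0e8  # speed of light, m/s
FREQ_HZ = 5.0e9   # a 5 GHz clock

cycle_s = 1.0 / FREQ_HZ                # 200 picoseconds per cycle
reach_cm = C_VACUUM * cycle_s * 100.0  # distance light covers per cycle
print(f"{cycle_s * 1e12:.0f} ps per cycle; light travels {reach_cm:.0f} cm")

# A large die is ~2 cm across, so a corner-to-corner round trip already
# consumes a big slice of the cycle -- and real interconnect is several
# times slower than light, which rules out cross-chip signalling within
# a single cycle at these frequencies.
```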

Article continues here
