

The end is nigh: Silicon-based computers and what might replace them

For about half a century now, we’ve been accustomed to each successive generation of computers making the last obsolete within months. What does the future hold, JAYLAN BOYLE asks, in our latest science-fiction-inspired article.

Left: Could the computers we use every day soon be a thing of the past?

Many experts are predicting, though, that the entire paradigm – silicon-based processor technology – on which all computers have been based since, well, at least since we could fit one into a single-storey building, is about to reach the end of the line.

So what’s next?

If you know a bit about the way computers work, comparing your latest gaming laptop with computers of yesteryear can be mind-blowing when you’ve got your head around the numbers.

Fundamentally, the guts of any computer is the processor (strictly a ‘microprocessor’, but they’re all ‘micro’ these days, making the prefix redundant). The definition of a computer at the highest level is ‘a machine that accepts input stuff, does something with that stuff, and then outputs the results of all that stuff.’ It’s the processor that ‘does something with that stuff.’

For the layperson, the easiest way to evaluate a computer’s performance is processor speed: how fast it can process input and deliver a result. Over the years, many have been conned into believing that the number on the box boasting ‘XXX GHz (gigahertz) processor’ is the ultimate mark of awesomeness, but that turns out to be little more than a sales pitch: ‘hertz’ is a measure of frequency, in this case referring to ‘clock rate’, and a processor with a high clock rate might take twice as many ‘clock cycles’ as another to execute the same instruction.
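
If you want to see why clock rate alone is a poor yardstick, the arithmetic is simple enough to sketch. Here’s a toy comparison in Python; the two chips and all their numbers are made up for illustration:

```python
# Back-of-the-envelope comparison of two hypothetical processors.
# Clock rate alone doesn't decide the winner: instructions completed
# per clock cycle (IPC) matters just as much.

def instructions_per_second(clock_hz: float, ipc: float) -> float:
    """Effective throughput = clock rate x instructions per cycle."""
    return clock_hz * ipc

# Chip A: high clock rate, but each instruction takes two cycles (IPC = 0.5).
chip_a = instructions_per_second(4.0e9, 0.5)   # 2.0e9 instructions/s

# Chip B: slower clock, but one instruction per cycle (IPC = 1.0).
chip_b = instructions_per_second(3.0e9, 1.0)   # 3.0e9 instructions/s

print(f"Chip A: {chip_a:.1e} instr/s, Chip B: {chip_b:.1e} instr/s")
# The '3 GHz' chip outruns the '4 GHz' chip.
```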

So the metric we’re going with is ‘instructions per second’, and more specifically ‘FLOPS’ – Floating-point Operations Per Second. Go forth and read up on that if you must. Essentially, they both mean ‘speed of stuff manipulation.’
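
Here’s a crude illustration of what the metric actually counts: time a known number of floating-point operations and divide. A rough sketch, not a serious benchmark – pure Python’s interpreter overhead means it wildly understates what the hardware can really do:

```python
import time

# Crude FLOPS estimate: time a known number of floating-point
# multiply-adds. Interpreter overhead dominates in pure Python, so
# this is a severe lower bound on the hardware's true FLOPS; it only
# illustrates what the metric measures.

n = 5_000_000
x = 1.000001
acc = 0.0

start = time.perf_counter()
for _ in range(n):
    acc = acc * x + 1.0          # one multiply + one add = 2 FLOPs
elapsed = time.perf_counter() - start

print(f"{2 * n / elapsed:.2e} FLOPS (pure-Python lower bound)")
```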

Popular comparisons abound when we’re pitting the titanic super-computers of yesteryear against today’s consumer products, to give us an idea of how far we’ve come. A couple of goodies:

- Those that remember the 1980s – and were of a ‘techy’ mentality – remember Cray. The company was synonymous with ‘supercomputer’, and the Cray X-MP that dominated the decade seemed so monolithically powerful you felt like it could probably read your mind or levitate or something. It would also have set you back about 20 million bucks (in 80s money, so a whole lot more in today’s). These days, the poor thing would struggle to run Windows Vista.

- How about the computational muscle required to get to the moon? There must have been a pretty formidable machine guiding Apollo 11, right? Nope. Comparing an iPhone to the Apollo 11 guidance computer using Socrates and a particularly dim infant as our metaphor would be doing the learned Athenian a giant disservice. Aliens would require some lateral thinking to pick up on the fact that these two devices come from the same planet, particularly given that, compared to the age of the species, the two more or less coincided in time. Put it this way: when the IBM PC was released in 1981, it was eight times as fast as the computer that got man to the moon.

Most know that computers have been increasing exponentially in capability since the 1950s. This is down to one principal factor: our ability to pack more and more circuitry into a processor. In fact, way back in 1965, Intel co-founder Gordon E. Moore predicted that the amount of computational power we can squeeze into the same space would double every two years. He was eerily correct, if a bit conservative: it’s proven to be more like every 18 months. That’s why it’s called ‘Moore’s Law.’
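
The doubling rule is easy to play with as arithmetic. A small sketch, taking the oft-quoted 2,300 transistors of the 1971 Intel 4004 as a starting point and the 18-month doubling period from above:

```python
# Moore's Law as arithmetic: transistor counts doubling every 18
# months, starting from the Intel 4004's commonly cited 2,300
# transistors (1971).

def transistors(years_elapsed: float, start: float = 2_300,
                doubling_years: float = 1.5) -> float:
    return start * 2 ** (years_elapsed / doubling_years)

for year in (1971, 1981, 1991, 2001, 2011):
    print(year, f"{transistors(year - 1971):.3g}")
# Each decade multiplies the count by 2**(10/1.5), roughly 100x.
```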

Moore’s law has held thus far, and while it may feel like the pace of obsolescence in computers just keeps increasing, the speed of processor improvement has in fact been slowing since 1998, according to Science magazine. Back then, a general-purpose computer’s computational ability grew by 88 per cent on the previous year. Computers are still improving year on year, but the annual gain has come down to about 60 per cent.

It might seem obvious that there’s only so far we can go with this miniaturisation thing, which is likely to be true, but not for the reason you might think.

Back in the 80s and 90s, the computer world used to get all breathless when a processor manufacturer announced that they’d been able to scale their circuitry down to one micron – one millionth of a metre. In computing terms, these measurements don’t represent the size of the individual transistors on a chip, but rather the space between components, which as we’ll see is crucial.

These days, we’ve abandoned the micron scale: we talk about nanometres (nm; one micron equals 1000 nanometres, a nanometre being one billionth of a metre). Computers of a couple of years ago had 45-60 nm chips: that’s tiny to an astounding degree when you consider that a caesium atom is about 0.3 nm across. In theory, we could get down to atom-sized components, but the problem is, when one out-of-place atom means that a chip isn’t going to work properly, it’s no longer possible to commercially produce reliable and cost-effective circuitry.
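
The arithmetic behind that ‘astounding degree’ is worth a couple of lines, using the figures just quoted:

```python
# How many atoms wide is a chip feature, using the article's numbers?
feature_nm = 45      # a ~45 nm process, as mentioned above
atom_nm = 0.3        # approximate caesium atom diameter

print(f"{feature_nm / atom_nm:.0f} atoms across")   # ~150 atoms
# At a few atoms' width, a single out-of-place atom is a fatal defect.
```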

Therein lies a key difference between an electronic computer and the clump of grey spongy stuff in your head: the brain is ‘fault tolerant’, meaning that one dud neuron doesn’t bring the whole thing to a grinding halt. Computers? Not so much. One errant transistor means the dreaded beach-ball/blue screen of death. (Some may remember Apple’s ‘cartoon bomb with lit fuse’ telling you that all was not well with your machine. Scary.)
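
Engineers do have answers to this, even if none is as elegant as the brain’s. One classic trick is triple modular redundancy: run the same computation on three units and take a majority vote, so a single fault gets outvoted. A toy sketch of the idea:

```python
from collections import Counter

# Triple modular redundancy (TMR): three redundant units compute the
# same thing; a majority vote masks one faulty result.

def majority_vote(results):
    """Return the most common result from the redundant units."""
    return Counter(results).most_common(1)[0][0]

# Three redundant 'processors' compute 2 + 2; one of them is faulty.
results = [4, 4, 5]
print(majority_vote(results))   # 4 -- the fault is outvoted
```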

The other thing is that, while yes, it’s possible in theory to produce components with atom-sized dimensions, the laws of physics mean that their behaviour would become increasingly unreliable before we got there. I’m pretty sure nobody needs a computer that does even more randomly bizarre stuff than mine currently does.

Michio Kaku is a bit of a rock star of the physics world. It seems there are not many science-based TV shows produced over the last few years in which he doesn’t make an appearance. When he pipes up, the science world listens, and in 2012, he boldly made the statement that Moore’s law would collapse by 2022.

"Computer power simply cannot maintain its rapid exponential rise using standard silicon technology … there is an ultimate limit set by the laws of thermodynamics and set by the laws of quantum mechanics as to how much computing power you can do with silicon,” he said in a Daily Galaxy article.

Basically, he’s saying that when chip components get down to a certain size, they will start to overheat because there’s only so much electrical charge that you can have going through the same amount of processor real estate. Essentially, the amount of energy required to run a chip doesn’t scale down at the same rate as size.
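
There’s a standard first-order formula behind that claim – a general engineering rule of thumb, not anything from Kaku’s article. The dynamic (switching) power of a chip scales with the capacitance being switched, the square of the supply voltage, and the clock frequency:

```latex
% P = dynamic power, \alpha = fraction of transistors switching per cycle,
% C = switched capacitance, V = supply voltage, f = clock frequency.
P \approx \alpha \, C \, V^{2} \, f
```

Shrinking components reduces C, but V can only drop so far before transistors stop switching reliably, so packing more circuitry into the same area at ever-higher f pushes the heat generated per square millimetre relentlessly upward.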

From whence then will come the new technology that will rescue us from this silicon corner we’ve painted ourselves into?

"If I were to put money on the table, I would say that in the next ten years as Moore’s Law slows down, we will tweak it. We will tweak it with three-dimensional chips, maybe optical chips, tweak it with known technology pushing the limits, squeezing what we can. Sooner or later, even three-dimensional chips, even parallel processing, will be exhausted and we’ll have to go to the post-silicon era,” Kaku says in the same article.

Quantum computing

A cool name for a mind-bending concept. Many think it’s the next logical step in computers: once we get down to the atomic scale, we might as well start using atoms themselves as components.

If you’ve ever tried to get your head around quantum mechanics, you may have been left feeling like you’d been sucked into an ‘Alice in Wonderland’ universe where nothing is what it seems. Quantum theory is the set of ‘rules’ that applies to the very small – like atoms – and it turns out that particles don’t play by the same laws as large objects, much to Albert Einstein’s disappointment.

I don’t feel like a headache just now, so we’re not going to go too much further into quantum mechanics. Suffice to say, it’s spooky. For example – and this has actually been demonstrated – sub-atomic particles can be in two different points in space at the same time. In fact, they can probably be in an infinite number of places. You’ve probably heard science fiction reference an ‘alternate universe’, where everything is the same but different, where Hitler won or whatever: ‘superposition’, as it’s called, gives rise to this philosophical consequence of quantum theory. And even weirder, particles can become ‘entangled’, so that two of them mirror each other: do something to one, and the other feels the effect, no matter how far apart they are.
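
For those who’d like to see how physicists write this down, the standard textbook notation is surprisingly compact. This is general quantum mechanics, not anything specific to this article:

```latex
% A qubit in superposition: partly |0> and partly |1> at the same time.
% |\alpha|^2 and |\beta|^2 are the probabilities of measuring 0 or 1.
|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle,
\qquad |\alpha|^{2} + |\beta|^{2} = 1

% An entangled pair (a Bell state): measuring one qubit instantly fixes
% what the other will show, however far apart the two are.
|\Phi^{+}\rangle = \tfrac{1}{\sqrt{2}}\,\bigl(|00\rangle + |11\rangle\bigr)
```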

It’s transistors in traditional computers that do the processing. Basically, transistors are simple ‘off-on’ switches, and by switching them on or off, processors generate the 1s and 0s that you’ve heard so much about. There are up to 4.3 billion transistors on a single commercially available processor, which sounds like a lot, but the thing is, that’s all they can do – be in either an ‘on’ or an ‘off’ state. That means each can hold only one value at a time.
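
To make the ‘1s and 0s’ concrete: a handful of those on-off switches, wired into logic gates, is enough to do arithmetic. A minimal sketch of a half-adder – the circuit that adds two one-bit numbers – with the gates modelled as Python operators:

```python
# A half-adder built from two logic gates (each gate is, in real
# hardware, a small arrangement of transistors).

def half_adder(a: int, b: int):
    """Add two bits; return (sum_bit, carry_bit)."""
    sum_bit = a ^ b      # XOR gate: the sum
    carry = a & b        # AND gate: the carry
    return sum_bit, carry

for a in (0, 1):
    for b in (0, 1):
        print(a, "+", b, "=", half_adder(a, b))
# e.g. 1 + 1 = (0, 1): sum 0, carry 1 -- binary for 2.
# It's 1s and 0s all the way down.
```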

A quantum computer makes use of the weirdness of the very small. An atom-sized transistor could theoretically be either ‘on’, ‘off’, or in some combination of the two, all at the same time. Yes, that sounds impossible, but so does the fact that quantum theory also states that particles don’t like being looked at. This will really bake your noodle, but certain sub-atomic particles exist in several states simultaneously, until they’re observed, when they choose one position. Basically, in order to understand quantum mechanics, it’s necessary to forget everything you know about how stuff works, or it gets quite terrifying.

If a component is able to be in multiple states at the same time, it can do multiple calculations at the same time, and that means SPEED. Possibly computers millions of times faster than we currently have: today’s machines run at speeds measured in gigaFLOPS; a quantum computer would easily clear the teraFLOPS mark.
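
You can get a feel for the maths with a toy classical simulation of a single qubit. To be clear, this simulates the equations – it isn’t a quantum computer. (Simulating n qubits classically takes 2**n numbers, which is precisely why the real hardware is so enticing.)

```python
import math, random

# Toy state-vector simulation of one qubit.
# Start in |0>: amplitudes (1, 0) for the states |0> and |1>.
amp0, amp1 = 1.0, 0.0

# Apply a Hadamard gate: puts the qubit into an equal superposition.
h = 1 / math.sqrt(2)
amp0, amp1 = h * (amp0 + amp1), h * (amp0 - amp1)

# Measurement: the superposition collapses to one definite outcome,
# with probabilities given by the squared amplitudes.
p0 = amp0 ** 2
outcome = 0 if random.random() < p0 else 1
print(f"P(0) = {p0:.2f}, measured: {outcome}")
```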

DNA computers

You could think of DNA like this: a microscopic pen drive inside every cell you own, with instructions stored on it that tell the cell what to do. When you were first conceived, your cells were pretty much blanks, before your DNA started telling them to become lungs, or legs, or whatever.
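
As a storage medium, the mapping is straightforward: four bases means two bits per base. A toy encoder/decoder that ignores all the real biochemistry:

```python
# Storing ordinary data in DNA: the four bases A, C, G, T carry two
# bits each. A purely illustrative encoding, no biochemistry involved.

TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
TO_BITS = {v: k for k, v in TO_BASE.items()}

def encode(data: bytes) -> str:
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(TO_BASE[bits[i:i+2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    bits = "".join(TO_BITS[base] for base in strand)
    return bytes(int(bits[i:i+8], 2) for i in range(0, len(bits), 8))

strand = encode(b"Hi")
print(strand)           # CAGACGGC
print(decode(strand))   # b'Hi'
```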

So if we can figure out how to program DNA, we’ve got it made, for several reasons:

- A teardrop-sized computer that derived its processing power from DNA would be more powerful than our best silicon supercomputer. A DNA computer weighing half a kilogram would be more powerful than all the electronic computers ever built, combined.

- DNA computers go about their business in parallel. Electronic computers operate ‘linearly’ – that is, they can do only one thing at a time. Having the ability to compute many things at once means that DNA computers could solve problems in hours that would take our poor silicon dunce computers hundreds of years (there’s a sketch of this after the list).

- There’s no shortage of DNA. You can just grow some. There’s enough clogging the drain after you’ve had a shower to meet the computational needs of the entire world.

- Unlike the toxic chemicals used in manufacturing silicon computer components, DNA computers can be built with no mess whatsoever.
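
And here’s the sketch promised above of what ‘parallel’ buys you. A sequential computer grinds through candidate answers one at a time; the promise of DNA computing is that trillions of strands can each test one candidate simultaneously. The target and check function below are purely illustrative:

```python
from itertools import product

# Brute-force search over all n-bit candidate answers. A sequential
# machine checks up to 2**n candidates one after another; in a DNA
# computer, each candidate would be a distinct strand, all reacting
# in parallel in the same test tube.

def is_solution(candidate):
    return candidate == (1, 0, 1, 1, 0, 0, 1, 0)   # hypothetical target

n = 8
checked = 0
for candidate in product((0, 1), repeat=n):   # sequential: one at a time
    checked += 1
    if is_solution(candidate):
        break
print(f"sequential search checked {checked} of {2**n} candidates")
```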

DNA computing is just one facet of an exciting vision in science: replacing things we’ve built out of stuff like silicon with biological material. The possibilities are boundless, but then Mother Nature has known that for quite a while. For an awesome example of what a bio-computer might look like (as well as the possible future of gaming), check out the movie eXistenZ.

There’s another advantage to be had from building computers out of biological material: in the process, we might learn more than the next-to-nothing we currently know about the most powerful computer known to humanity: the one between your ears.

Metal oxides

Doesn’t sound too exciting? Well, quantum and DNA computers are a hard act to follow, but last year, researchers announced a development that might be just as important to the future of computing. While it’s not an entirely new computing paradigm, on one level, the use of metal oxides in computer chips mimics the way our brain works.

Basically, the researchers quietly made known that they’d figured out how to switch metal oxide materials, which are natural insulators, into a conductive state and back again. That’s nothing new. Yawn. What’s cool about it is that the change in state remains stable even when you turn the power off.

The big problem with all computers is that they’re horrifically inefficient because they must have power flowing through their components at all times. Power equals heat. You’ve experienced this energy inefficiency every time your phone dies or your legs get all hot and sweaty from using a laptop.

So imagine if you could turn a transistor on or off with just a burst of electricity, rather than supplying it constantly with electricity. The power saving would be enormous. Many think that this is what fundamentally separates traditional computers from our brain: the grey stuff in our skulls uses a millionth of the power – in fact, about as much as a dim torch. That neatly sidesteps the issue of components becoming unreliable due to excess heat, too, so maybe the end of Moore’s law isn’t as nigh as first thought.
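
A toy model makes the contrast plain; the class and numbers below are illustrative, not real device physics:

```python
# A toy model of the idea: a metal-oxide cell flipped between an
# insulating and a conducting state by a brief voltage pulse, which
# then holds that state with no power applied at all.

class OxideCell:
    def __init__(self):
        self.conducting = False      # insulating by default

    def pulse(self, volts: float):
        """A short burst of electricity switches the state."""
        if volts >= 1.0:
            self.conducting = True   # 'set'
        elif volts <= -1.0:
            self.conducting = False  # 'reset'

    def read(self) -> int:
        # No standing current is needed to *hold* the bit, unlike a
        # conventional transistor-based memory cell.
        return 1 if self.conducting else 0

cell = OxideCell()
cell.pulse(1.2)         # one burst, then power off...
print(cell.read())      # ...and the 1 is still there
```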

- Jaylan Boyle is a reporter at NZME.
