One Step Closer To Speedier, Bootless Computers
CWmike writes "Physicists at the University of California at Riverside have made a breakthrough in developing a 'spin computer,' which would combine logic with nonvolatile memory, bypassing the need for computers to boot up. The advance could also lead to super-fast chips. The new transistor technology, which one lead scientist believes could become a reality in about five years, would reduce power consumption to the point where eventually computers, mobile phones and other electronic devices could remain on all the time. The breakthrough came when scientists at UC Riverside successfully injected a spinning electron into a material called graphene, which is essentially a very thin layer of graphite. The graphene in this case is one atom thick. The process is known as 'tunneling spin injection.' A lead scientist for the project said the clock speeds of chips made using tunneling spin injection would be 'thousands of times' faster than today's processors. He describes the tech as a totally new concept that 'will essentially give memory some brains.'"
Boolean Memory. (Score:2, Informative)
He describes the tech as a totally new concept that 'will essentially give memory some brains.'
Computer memory combined with logic gates. [trnmag.com]
Re:One step closer to SkyNet (Score:3, Informative)
Re:Wishful thinking... (Score:3, Informative)
Re:Bad summary again... (Score:1, Informative)
If the reporter is talking about devices remaining on without charging, what does he think is going to power the antenna and the display? The scientists haven't invented a free energy device.
I think the implication is that you could stay suspended most of the time and the computer wouldn't use any power to do so. Basically, hibernation would not be necessary, since whatever's in memory stays in memory without using power. Combined with a screen that can remain on while the processor is suspended (like E Ink, or the OLPC XO's in-hardware trick), devices could use far less power than they do now, and you could leave them "on" while they draw little or no power at all.
Well, that's how I understood it.
Booting (Score:5, Informative)
Computers needing to "boot" is a relatively modern invention caused in part by hardware hotplug, backwards compatibility modes and reliability checks.
Most of the boot process is:
- Moving out of legacy modes (e.g. enabling increased capabilities from basic instruction sets to full modern ones, enabling different memory access models, enabling 64-bit, etc.), ramping up core speed, enabling things like DMA, moving from "safe" memory timings to those the chips report they can support once negotiation finally takes place, bringing up the non-boot CPUs, etc.
- Contention. Doing only a certain number of things on a bus at any one time, making buses serial, splitting buses into sub-buses, and so on. Sometimes there is no quicker way to do things. Sometimes it *will* take 1000ms before the disk will respond that it's up to speed.
- Checking that RAM does indeed do what it's told, that a boot loader is present, that a floppy is present (yes, even on some modern BIOS's), checking IDE/SATA channels and retrieving capabilities, checking memory timings, checking PCI and USB buses, checking that disks are spinning, etc.
Some of my servers take up to 3 minutes to get to the point where they can actually load the first byte from disk and begin booting the OS. A lot of this time is the BIOS handing off to the BIOS on the RAID cards (and sometimes the network cards), those RAID cards checking, assembling and enabling the drives, etc. With two RAID cards, we've nearly doubled boot time. Proper (reasonable) memory checks of several GB of RAM still take a while, even for a simple test. And yet there's still a minute or so of complete waste as we start in some 8086 legacy mode and slowly have to ramp up disks, cards and our own CPUs, not to mention external hardware like USB and DVD drives "just in case". And then the OS has to go and do it all again itself later anyway.
This is why things like the LinuxBIOS (now called Coreboot) project actually work better and faster - when we KNOW what the BIOS needs to do, we find that lots of it is done twice, lots of it is unnecessary, lots of it can be delayed until we actually NEED the DVD drive, and some of it can happen in the background because it will ALWAYS take a long time to start, etc. But how many fixed sets of hardware does that project actually work on? Few. Because not only is it tricky to do that sort of analysis, it's also tricky to pin down exactly what the BIOS needs to do and then do it better than the original BIOS.
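To make that "do it in the background" point concrete, here's a toy sketch (plain Python, not real firmware; the device names and delay figures are made up for illustration) comparing a strictly serial bring-up with one that overlaps the slow, independent waits:

import asyncio
import time

# Hypothetical init delays, in seconds (illustrative numbers only).
DEVICES = {"raid_card": 1.5, "nic_boot_rom": 0.8, "usb_bus": 0.5, "dvd_drive": 1.0}

async def bring_up(name: str, delay: float) -> str:
    # Stand-in for waiting on a device's firmware/negotiation to finish.
    await asyncio.sleep(delay)
    return name

async def serial_boot() -> float:
    start = time.monotonic()
    for name, delay in DEVICES.items():
        await bring_up(name, delay)          # wait for each device in turn
    return time.monotonic() - start

async def overlapped_boot() -> float:
    start = time.monotonic()
    # Kick everything off at once; total time is only the slowest device.
    await asyncio.gather(*(bring_up(n, d) for n, d in DEVICES.items()))
    return time.monotonic() - start

if __name__ == "__main__":
    print("serial bring-up:     %.1f s" % asyncio.run(serial_boot()))      # ~3.8 s (sum of delays)
    print("overlapped bring-up: %.1f s" % asyncio.run(overlapped_boot()))  # ~1.5 s (slowest device)

The serial pass pays the sum of every wait; the overlapped pass pays only for the slowest device, which is roughly the saving Coreboot-style firmware goes after once it knows the hardware well enough to skip the redundant probing.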
We can have an "instant on" computer. It's easy. My ZX Spectrum did it nearly 30 years ago. My calculator does it now. The Psion organisers all did it. Most portable games consoles manage it. The thing you have to realise, though, is that it means:
- Booting into a single, fixed OS that's tricky to upgrade.
- Making power management apply to every process perfectly.
- Fixing down a set of hardware that we know can always boot into a certain configuration very quickly.
- Changing the way all our chips work so they start in their best mode, not their worst (and thus probably destroying things like OS installers as we know them and making them specific to a machine type - no more installing a modern OS on an old computer, or an old OS on a modern one).
- Removing any sort of consistency checks and relying on things not going wrong, or on the hardware handling all hardware errors itself (e.g. ECC memory for everything, with reporting of anything it can't handle).
- Building every component so it doesn't "negotiate" or "initialise" but just works (e.g. even a keyboard controller can take some time to come back online at the moment, not to mention graphics, disks, USB buses, etc.).
Instant-on computers are always possible, and some of them are very useful for certain things. But generic PCs and instant-on won't happen until CPU, disk and bus negotiations take literally fractions of a second for any operation (so we still run just as many instructions to initialise, but they cost mere clock cycles rather than milliseconds).
Re:Wishful thinking... (Score:2, Informative)
Isn't 1000x faster too fast? I heard we are already close to the limit set by the speed of light. If we go faster, then chips would have to get smaller so signals can travel across them in one cycle.
The day that the speed of light is holding us back we'll be in pretty good shape technologically speaking. I'm not sure if our planet will last long enough for us to get there, but it's not like we've got any other choice. Damn the electrons, full speed ahead!
Re:Wishful thinking... (Score:3, Informative)
The problem of artificial intelligence is not one of processing power. Even given infinite speed we have no clue how to begin emulating the function of the human brain.
I'm assuming that by "real AI" you mean a self-aware computer program.
Re:Wishful thinking... (Score:3, Informative)
5 GHz means cycling every 0.2 nanoseconds. In 0.2 nanoseconds, light travels about 6 cm. We're already pretty close to the limit for keeping processing synchronised over a large blob of silicon without using methods more cunning than just saying "feh, doesn't matter, light is fast".
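A quick back-of-the-envelope check of those figures in Python (just the parent's arithmetic, assuming signals at the vacuum speed of light):

clock_hz = 5e9                 # 5 GHz clock
period_s = 1.0 / clock_hz      # 2e-10 s, i.e. 0.2 ns per cycle
c = 3.0e8                      # speed of light in vacuum, m/s

distance_cm = period_s * c * 100
print("%.1f cm per clock cycle" % distance_cm)   # ~6.0 cm

Signals in real on-chip interconnect propagate well below c, so the usable distance per cycle is even shorter than that 6 cm.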
Re:Wishful thinking... (Score:1, Informative)
DOSBox is misleading. It gives the impression software started instantly in the late 80s. Performance is very different when you're booting and loading from floppies (because a 10 megabyte hard drive isn't affordable), constantly swapping floppies (because a second floppy disk drive isn't that affordable either), and running on a 4.77MHz 8088 processor with 256 kilobytes of RAM.
Re:Wishful thinking... (Score:2, Informative)
Again, let's just look at the history. Computers are about 1000x faster than they were in 1980.
Math Fail.
// a doubling of speed in 13 months. Not sure if this is accurate.
30 years is 30*12=360 months.
360/13=about 27.7
2^27.7 = 218,037,342.4.
That is way more than 1000 times.
Example: a Cray X-MP (1982) had 400 MFLOPS.
The Cray XT5 (2009) has 1.759 PFLOPS.
That is (1.759x10^15)/(400x10^6) = 4,397,500 times as much. Not as much as predicted by x2 every 13 months, but you get the picture.
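For anyone who wants to re-run the arithmetic, here it is in a few lines of Python, using exactly the figures quoted above:

months = 30 * 12                    # 30 years, in months
doublings = months / 13             # the "doubling every 13 months" assumption
print("%.1f doublings -> %.0fx" % (doublings, 2 ** doublings))   # ~27.7 doublings, ~2.2e8x

xmp_flops = 400e6                   # Cray X-MP (1982): 400 MFLOPS, as quoted
xt5_flops = 1.759e15                # Cray XT5 (2009): 1.759 PFLOPS, as quoted
print("%.0fx" % (xt5_flops / xmp_flops))   # 4397500x, far beyond "1000x"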
Re:Wishful thinking... (Score:5, Informative)
Not enough people, if you ask me (front line support tech). Laptop users especially have completely gotten out of the habit of shutting down their computers, making their systems progressively slower and less stable as time goes on. Then they come into my office or call me on the phone with a problem (e.g. Program X won't start or keeps crashing). I shut down their computer (not just "shut" the lid, but actually "shut down"), turn it back on again, and it's "fixed". A waste of my time... and theirs.
Annoying as it is, the boot process has the benefit of restoring a system to a largely-predictable known-good state. I miss it already.
Re:Wishful thinking... (Score:3, Informative)
This is especially true when we consider spintronic devices, as their diminutive power requirements allow us to hypothesize about a cubic-centimetre supercomputation circuit without having to factor in a cooling system capable of maintaining polar caps on the sun.
While Moore's law holds up nicely as a guideline for incremental improvements in technology, we have to consider that Moore's law is a half-century-old 'invention'. We will eventually have a paradigm shift that surpasses it because of some innovative breakthrough, just as Moore's law itself could only be conceptualized thanks to the invention and progress of integrated circuits.
I suggest we name the successor "Even Moore's Law", because calling it Moore's Law 2.0 would be so 1.0.
Re:Wishful thinking... (Score:3, Informative)
Moore's law is an observation, not a law, and it's actually about the number of transistors per unit area; it doesn't say anything about speed.