


HP Introduces Defect-Tolerant Nano Elements
versicherung writes "With the ever-shrinking feature size in microelectronics, it will soon be prohibitively expensive to manufacture defect-free nano elements. HP has come up with a new way to produce fault-tolerant microchips. Using mathematical techniques borrowed from coding theory, HP will be able to fabricate nano-electronic circuits with nearly perfect yields by using a crossbar architecture and adding 50 percent more wires as an 'insurance policy,' even though the probability of broken components will be high."
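To get a feel for the arithmetic behind that claim, here is a tiny Monte Carlo sketch in Python (the 90 percent wire-survival rate and the 64-wire circuit below are made-up numbers for illustration, not HP's actual defect model): a circuit that needs every one of its wires almost never comes out whole, while fabricating 50 percent extra wires pushes the effective yield to essentially 100 percent.

    # Illustrative Monte Carlo estimate of yield with and without spare wires.
    # The defect rate and wire counts are assumptions, not figures from HP.
    import random

    def yield_estimate(needed: int, fabricated: int, p_good: float,
                       trials: int = 100_000) -> float:
        """Fraction of simulated chips with at least `needed` working wires."""
        ok = 0
        for _ in range(trials):
            working = sum(random.random() < p_good for _ in range(fabricated))
            ok += working >= needed
        return ok / trials

    print(yield_estimate(needed=64, fabricated=64, p_good=0.9))  # ~0.001, no spares
    print(yield_estimate(needed=64, fabricated=96, p_good=0.9))  # ~1.0, 50% spares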
Re:Cool (Score:5, Interesting)
Imagine being able to jump to a lower-micron manufacturing process far earlier because you don't need perfection. Intel and AMD would love that.
Bugs (Score:2, Interesting)
Re:Nanotech & Chinese Military (Score:1)
Re:Bugs (Score:3, Insightful)
quantity over quality? (Score:1)
Re:quantity over quality? (Score:1)
Re:quantity over quality? (Score:1)
My question is, why does size matter? I mean, the bigger you make these things, the more places there are for defects to occur, right? Shouldn't this work the other way around?
Re:quantity over quality? (Score:2)
The idea of coding theory (usually applied to telecommunications, and I'll speak of it as it applies to communicating bits of data here because that's the aspect of it I'm most familiar with) is to introduce predictable redundancy into the data you transmit so that if some of it gets corrupted, you can recover the original message without error.
An example of this is the (23,12) Golay code. For every 12 bits of information, it adds 11 check bits to form a 23-bit codeword, and any pattern of up to three bit errors in that codeword can still be corrected.
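For anyone who wants to see the redundancy idea in running code, here is a minimal sketch using the much smaller Hamming(7,4) code rather than the Golay code above (the function names are just for illustration): 4 data bits are padded with 3 parity bits so that any single flipped bit can be located and corrected.

    # Minimal sketch of forward error correction with the Hamming(7,4) code:
    # 4 data bits + 3 parity bits, any single bit flip is correctable.
    def hamming74_encode(d):
        """d is a list of 4 data bits; returns the 7-bit codeword."""
        d1, d2, d3, d4 = d
        p1 = d1 ^ d2 ^ d4          # covers codeword positions 1,3,5,7
        p2 = d1 ^ d3 ^ d4          # covers codeword positions 2,3,6,7
        p3 = d2 ^ d3 ^ d4          # covers codeword positions 4,5,6,7
        return [p1, p2, d1, p3, d2, d3, d4]

    def hamming74_decode(c):
        """Corrects a single-bit error in a 7-bit codeword, returns the 4 data bits."""
        c = list(c)
        s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
        s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
        s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
        syndrome = s1 + 2 * s2 + 4 * s3   # 0 = clean, else 1-based error position
        if syndrome:
            c[syndrome - 1] ^= 1          # flip the bad bit back
        return [c[2], c[4], c[5], c[6]]

    data = [1, 0, 1, 1]
    word = hamming74_encode(data)
    word[5] ^= 1                          # simulate one broken component
    assert hamming74_decode(word) == data # the original data still comes out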
Re:quantity over quality? (Score:2)
For example, the CRC error-detection scheme, while not cryptographically secure, is good at catching errors such as long streams of 0s or 1s, which are a common failure mode on communication lines.
Another example is error correction on DVDs. The data is coded in a way that lets a scratch be survived, for example by keeping the parity bits physically far from the data they protect.
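A toy illustration of CRC-style error detection in Python (detection only, no correction), assuming the common CRC-8 polynomial x^8 + x^2 + x + 1; the message text is arbitrary:

    # Toy CRC-8 (polynomial 0x07). A degree-8 CRC is guaranteed to catch any
    # burst error no longer than 8 bits, such as a single corrupted byte.
    def crc8(data: bytes, poly: int = 0x07) -> int:
        crc = 0
        for byte in data:
            crc ^= byte
            for _ in range(8):
                crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
        return crc

    message = b"nano-electronics"
    checksum = crc8(message)                 # sent along with the message

    corrupted = b"nano-electr0nics"          # one byte mangled in transit
    print(crc8(message) == checksum)         # True: intact message checks out
    print(crc8(corrupted) == checksum)       # False: the corruption is detected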
Re:quantity over quality? (Score:3, Insightful)
Probably how the brain works too. (Score:2)
Wow. Now if only.... (Score:2)
Re:Wow. Now if only.... (Score:1, Insightful)
Re:Wow. Now if only.... (Score:2)
Re:Wow. Now if only.... (Score:2)
If this is the path they've chosen, it seems like they should get rid of all these researchers (maybe Intel might want them) and just concentrate on making printers and PCs. Of course, their PC division doesn't seem to be doing so well against Dell, so maybe they should just dump that too and just make printers.
Re:Wow. Now if only.... (Score:1)
Re:Wow. Now if only.... (Score:2)
It's True! (Score:4, Funny)
When they do fail, HP will claim it's not their fault and we'll have to tolerate it.
wouldn't the cost be the same (Score:2, Insightful)
Re:wouldn't the cost be the same (Score:2)
Re:wouldn't the cost be the same (Score:1, Interesting)
What I would worry about is more on the chip performance side of things - namely the additional capacitance loading, crosstalk, and the overall routing density for this approach.
Re:wouldn't the cost be the same (Score:2)
Also the power consumption. I don't have a breakdown of where this technology would be used and where chips spend the most power (apart from knowing cache doesn't take much power), but it might hurt on laptops.
Re:wouldn't the cost be the same (Score:1)
Coding theory gives much better returns than a flat 50 percent error reduction for 50 percent more wires.
For instance, a Hamming code will correct one error in a word 2^R - 1 bits long for the cost of only R check bits.
So if your chip processes 32-bit words, you could instead process 32-6=26 data bits (the other 6 go to an extended Hamming code), and if one of your 32 little gatey-things didn't dope correctly, you would still get the right answer.
For the chip to need to be chucked, you would need at least two defects to land in the same word.
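A quick back-of-the-envelope script, under the same single-error-correction assumption as the parent, showing how the check-bit overhead of a Hamming code shrinks as words get wider (the word sizes below are just examples):

    # Smallest number of Hamming check bits r needed to protect k data bits:
    # a length-(2**r - 1) Hamming code carries 2**r - r - 1 data bits.
    def hamming_check_bits(data_bits: int) -> int:
        r = 1
        while 2**r - r - 1 < data_bits:
            r += 1
        return r

    for k in (4, 8, 16, 26, 64, 120):
        r = hamming_check_bits(k)
        print(f"{k:>3} data bits -> {r} check bits ({r / (k + r):.0%} overhead)")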
Re:wouldn't the cost be the same (Score:2)
VLSI chips spend about 30% of their real-estate on the clock and power wires. So, a single particle of dust acts like a meteorite knocking out a whole suburb of a city. The damage caused by a broken power or clock wire is far more substantial as it can knock out other areas not immediately covered by the unwanted object.
If you have redundancy (like texture pipelines on a GPU), you can increase your yield
Re:wouldn't the cost be the same (Score:1)
Re:wouldn't the cost be the same (Score:2)
Re:wouldn't the cost be the same (Score:1)
Re:wouldn't the cost be the same (Score:2)
Re:wouldn't the cost be the same (Score:1)
Why do I even post here?
Re:wouldn't the cost be the same (Score:2)
Re:wouldn't the cost be the same (Score:1)
Re:At last! (Score:3, Informative)
HP Nanotech web page [hp.com]
And the design itself has already been covered here a few times...
http://science.slashdot.org/article.pl?sid=05/02/01/1823256&tid=173&tid=14 [slashdot.org]
The research had probably been going on long before Carly arrived. The biggest connection you could draw between the two is, she didn't axe it during her reign...
Only good as long as defect rate is high (Score:4, Interesting)
This kind of concept is already in use throughout the rest of the microprocessor world - Intel (maybe AMD too, I dunno) builds extra cache lines into their chips, deactivates any defective lines, and reroutes accesses to the "spare" lines to improve yield.
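As a rough sketch of that spare-line idea (a toy model, not Intel's actual redundancy mechanism; the class and method names are invented for illustration): rows that fail a post-fab test get remapped to spares, and the part is only scrapped if the spares run out.

    # Toy model of row-sparing: defective rows are remapped to spare rows.
    class RedundantCache:
        def __init__(self, rows, spares, defective):
            self.remap = {}
            spare_pool = list(range(rows, rows + spares))
            for bad_row in sorted(defective):
                if not spare_pool:
                    raise RuntimeError("more defects than spares: chip is scrap")
                self.remap[bad_row] = spare_pool.pop()
            self.storage = {}

        def _physical(self, row):
            return self.remap.get(row, row)   # healthy rows map to themselves

        def write(self, row, value):
            self.storage[self._physical(row)] = value

        def read(self, row):
            return self.storage[self._physical(row)]

    cache = RedundantCache(rows=1024, spares=16, defective={3, 700})
    cache.write(3, 0xCAFE)       # logical row 3 silently lands in a spare row
    print(hex(cache.read(3)))    # 0xcafe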
Re:Only good as long as defect rate is high (Score:3, Insightful)
Re:Only good as long as defect rate is high (Score:2, Interesting)
Comment removed (Score:4, Interesting)
Re:Brute force! (Score:3, Insightful)
Re:Brute force! (Score:1, Insightful)
Now, we're not talking about communication channels here, but the analogy still holds. There are some factors we as engineers can't control (such as thermal noise, for example), and so we have to work around them. I won't get into the technological details of nano-fabrication right now.
What Would Scotty Say??? (Score:5, Funny)
Okay all fixed. I guess you don't need me anymore, I'll just go and get drunk in the corner.
Big Picture (Score:2)
Re:Big Picture (Score:1)
Re:Big Picture (Score:2)
Re:Big Picture (Score:1)
Re:Big Picture (Score:2)
More than what we are led to believe (Score:1)
Re:More than what we are led to believe (Score:1, Informative)
Nothing sinister here.
Bypassing the technology (Score:1)
HP always #1 (Score:5, Funny)
Because they designed this in the last 2 months. (Score:2, Informative)
Old and new uses of error checking of computations (Score:1, Interesting)
Before there were computers, people sometimes checked the accuracy of their arithmetic by "casting out nines" (google for it). When computers were big things full of vacuum tubes that had a tendency to go out in the middle of a calculation, people used parity checking to ensure the integrity of the calculations. Coding theory has come a long way since then, with new schemes for different applications, such as crypto and telecom (from TFA). The principle is old, but I'm sure these guys had to come up with plenty of new tricks on top of it.
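Casting out nines in a few lines of Python, for the curious (a minimal illustration; it flags most arithmetic slips but misses some, e.g. transposed digits, which leave the digit sum unchanged):

    # Check an addition by comparing digit sums modulo 9 ("casting out nines").
    def cast_out_nines(n: int) -> int:
        return n % 9                      # same residue as the repeated digit sum

    def check_addition(a: int, b: int, claimed_sum: int) -> bool:
        # The nines-residues of the addends must add up (mod 9) to the
        # residue of the claimed sum.
        return (cast_out_nines(a) + cast_out_nines(b)) % 9 == cast_out_nines(claimed_sum)

    print(check_addition(3485, 2769, 6254))   # True: 3485 + 2769 really is 6254
    print(check_addition(3485, 2769, 6154))   # False: the slip is caught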
Graceful degradation vs. constant-spec products (Score:4, Insightful)
For example, I wish my ATA hard drives would let me access all of the space on the drive, including the spare blocks reserved for remapping of bad blocks. A flexi-capacity drive would show higher-than-spec capacity on first install and then gradually degrade. The standard practice of never using 100% of available space would guarantee the availability of at least a few spare blocks. Current drive logic fails the drive once the spare blocks are used up, but a smarter drive would keep working by steadily shrinking the drive capacity. The OS might show this as a steadily growing, locked "BAD_BLOCK" file. A well-used hard disk might last much longer, but shrink below rated capacity and still function adequately.
A dynamic version of this technology would be a real boon to over-clockers. Say you buy a heavily multi-cored CPU (guaranteed to have at least 32 of 40 fabricated cores functioning). It might come with 35 of the 40 fabricated cores working at design clock-speed. Over-clocking might knock out a few cores that were marginal but let the system's user optimize the speed of the cores vs. number of usable cores in realtime. A fully dynamic self-testing, self-healing system might automatically bring marginal cores back online once the clock-speed is dropped.
I realize that companies currently sell the same chip with different ratings by testing for speed or usable components (e.g. usable vertex shaders in a GPU), but what I want is different. Rather than use spares to guarantee some fixed-spec performance (the current industry practice of leaving only a fixed set of available good components active on a chip), users could enjoy both more initial performance and longer life from products using a dynamic self-testing, self-healing system that uses all known-good components. Such systems would gracefully degrade as vertex shaders, disk blocks, RAM cells, or cores die or stop functioning at high speeds and temperatures.
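Here is what the "flexi-capacity" idea above might look like in a toy model (purely hypothetical; real ATA firmware does not behave this way, and the class is invented for illustration): the drive keeps retiring bad blocks and simply reports a shrinking usable capacity instead of failing outright.

    # Hypothetical sketch of a drive that shrinks instead of failing.
    class FlexiCapacityDrive:
        def __init__(self, total_blocks):
            self.total_blocks = total_blocks
            self.bad_blocks = set()

        def mark_bad(self, block):
            self.bad_blocks.add(block)        # retire the block permanently

        def usable_capacity(self):
            return self.total_blocks - len(self.bad_blocks)

    drive = FlexiCapacityDrive(total_blocks=1_000_000)
    for worn_out_block in (10, 99, 2048):
        drive.mark_bad(worn_out_block)
    print(drive.usable_capacity())   # 999997: capacity shrinks, drive keeps working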
Re:Graceful degradation vs. constant-spec products (Score:2)
The only situation that results in greatly accelerated degradation is when you overvolt a chip beyond its specifications. Since that only happens when the chip is run out of spec, there is no need in the manufacturer's eyes to create a gracefully degrading system.
Hard drives, as well, have little use for their internal defect management other than to look pretty to the user, as the magnetic
Re:Graceful degradation vs. constant-spec products (Score:3, Informative)
Silicon circuits most certainly do degrade over time, even in normal use. It just so happens that so far this has been "under control". But as technologists keep reducing the feature size, these effects will become much more important.
Several people in my team work in exactly this area of micro-electronics research by the way: how to optimally compensate for these (and other re
Re:Graceful degradation vs. constant-spec products (Score:1)
The reason I say this is because it would involve lots of complex handling, probably both in the filesystem and the disk firmware. Either the disk knows about the filesystem, and you need ridiculously complex protocols talking to the disk about what the fs really looks like (because stuff yet to be written is in the cache), or you need to handle "this file used to be in block 3532552, but the disk is now only 3532550 blocks large, so it must have moved to...".
A
Re:Graceful degradation vs. constant-spec products (Score:2)
remember it's nano-circuits (Score:1, Informative)
'Crossbar architecture' is an experimental approach that, using carbon nanotubes laid out in a grid with selectively chosen connections, allows you to perform useful functions, such as logic. HP recently (a few months ago) announced the crossbar latch [google.com], which they claim will eventually eliminate the need for transistors.
Unfortunately t
Hippies, strip down in aisle nine here (Score:2)
What happens with components that are not "bad" (Score:4, Interesting)
I see this tech as a temporary crutch for something more advanced - self-diagnosing and self-healing chips. Now that would be frikkin' cool.
Nearly perfect yields (Score:2)
By the time the learning curve flattens out, it could be cheaper just to throw away bad parts made with the old technology than to work around the defects in parts made with the new one.