Protein-Packed Hard Drives Promise High Capacity
Digimax writes "The New Scientist has an interesting article on a technology being developed by NanoMagnetics which involves using a protein responsible for storing iron in the body to store data on a hard drive. Is this the start of the BioTech revolution?"
Re:solid state (Score:5, Informative)
Even with a 15k SCSI drive, if you handle large files, which is becoming more and more common, the hard drive is going to be the bottleneck. Even if you only handle small files, the access time for a hard drive is generally 100 times slower than the access time for RAM, regardless of how you RAID it or what the spindle speed is. That is a lot of idle time when loading large files or accessing lots of small files. Granted, SCSI helps because it takes the load off the CPU.
I can't possibly see how your Athlon 2200 is the bottleneck, except when you are doing CPU-intensive stuff. If I am pulling a filter in Photoshop, yeah. Raytracing, etc. I expect that. But I use Photoshop every day, and the amount of time I spend pulling a filter is still much less than the time I spend loading and saving files.
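To put the parent's point in rough numbers, here is a back-of-envelope sketch. The transfer rates below are era-typical assumptions for illustration, not measurements of any particular drive or memory:

```python
# Back-of-envelope: time to move a large file at assumed transfer rates.
def transfer_time(size_mb, rate_mb_s):
    """Seconds to move size_mb megabytes at rate_mb_s MB/s."""
    return size_mb / rate_mb_s

FILE_MB = 500        # e.g. a large layered Photoshop document (assumed size)
DISK_RATE = 50       # sustained MB/s for a 15k SCSI drive (assumption)
RAM_RATE = 2000      # MB/s for DDR SDRAM of the period (assumption)

print(f"disk: {transfer_time(FILE_MB, DISK_RATE):.1f} s")   # 10.0 s
print(f"ram : {transfer_time(FILE_MB, RAM_RATE):.2f} s")    # 0.25 s
```

Even ignoring seek time entirely, the disk is the long pole whenever the working set has to come off the platters.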
Re:New Scientist article sucks (Score:2, Informative)
Agreed. Here's what the company's website says:
Technology Overview
Hard disk drives currently store information at densities up to 70 billion bits (or gigabits) per square inch, with data stored as microscopic magnetic patterns arranged in circumferential tracks on the media. At extreme magnification, individual bits of data are revealed to be composed of grains of different sizes and shapes. The density at which information can be stored is restricted by how cleanly these patterns can be represented. Current production technology is limited by coarse granularity as well as the presence of some very small grains which spontaneously lose their memory - the superparamagnetic effect. These limitations will likely allow for only a tripling of storage capacity in the future. To significantly extend storage capacity, data patterns would ideally be recorded on orderly and uniform grains.
NanoMagnetics grows tiny magnetic grains within hollow protein spheres called "apoferritin", which are 10,000 times smaller than the diameter of a human hair. The resulting nanoparticles are limited in size by the inner cavity of the spheres, producing a highly uniform material which we call DataInk(TM). Importantly, DataInk(TM) is produced using mild and inexpensive chemical techniques.
The resulting particles can pack closely, like oranges on a grocery shelf. Films of DataInk(TM) are baked (or "annealed") to optimize their magnetic performance, and to also carbonize the protein spheres. What remains is an ordered assembly of uniform, magnetic grains. This type of media is ideal to expand the storage capacity of hard disk drives, as it is able to support smaller and smaller patterns. Using individual grains to represent bits of data, this protein-derived media could ultimately extend densities to between 5,000 and 10,000 Gbits/in2.
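As a sanity check on those density figures, here is a rough capacity calculation for a single 3.5-inch platter surface. The recording-band radii are assumptions for illustration, not drive-specific specifications:

```python
import math

# Rough capacity of one 3.5-inch platter surface at a given areal density.
OUTER_IN, INNER_IN = 1.75, 0.5   # assumed usable recording band radii (inches)
AREA = math.pi * (OUTER_IN**2 - INNER_IN**2)   # about 8.84 square inches

def capacity_gb(density_gbit_per_in2):
    """Gigabytes per surface, treating 1 GB = 8 Gbit."""
    return density_gbit_per_in2 * AREA / 8

print(f"today   (70 Gbit/in^2):     {capacity_gb(70):,.0f} GB")       # ~77 GB
print(f"DataInk (10,000 Gbit/in^2): {capacity_gb(10_000):,.0f} GB")   # ~11,000 GB
```

So the claimed 5,000-10,000 Gbit/in² would mean on the order of ten terabytes per platter surface, versus tens of gigabytes at the quoted 70 Gbit/in² of the day.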
Current Status
Since our Series A round in 1999, NanoMagnetics has sustained a 1700% annual increase in areal density. At this rate, we will overtake the industry's anticipated areal densities by Q3 2003. Leveraging this compelling progress, NanoMagnetics will aim to qualify DataInk(TM)-enhanced media, then partner with one or more hard disk manufacturers for the next generation of drives. NanoMagnetics' phenomenal series of milestones and their dates are as follows:
The Company is currently preparing to scale up the manufacture of DataInk(TM) and is working with a number of key industry players.
Because the read/write method doesn't change! (Score:4, Informative)
Reading and writing is still done the way it is today (magnetically), but with a more regular magnetic matrix, greater storage densities can be achieved...
Re:solid state (Score:3, Informative)
More RAM does and does not incur more overhead.
Your computer already has to deal with the overhead of being able to address 4 GB of memory. It's 32-bit, and that's how much memory it can address. Unless you've got more than 4 GB of RAM installed, the overhead is _already_ built into the system.
This is part of why the 64-bit Opteron, with its 40 bits of physical and 48 bits of virtual address space, adds a slight overhead, and slightly lower memory performance than a 32-bit Athlon at the same clock speed.
Yeah, being able to address 1 TB of RAM is kinda silly if you don't even need to address 4 GB, since there is a performance penalty in being able to address 1 TB. That's why the Opteron doesn't have a 64-bit addressable memory space; it only needs 1 TB for today's 4-8-way server applications.
But adding more RAM only incurs boot-time overhead in 'checking' that RAM to make sure it's good.
It doesn't decrease performance anywhere else, because the addressable range is already designed into the system, so the performance hit is already there.
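The address-space sizes mentioned in the parent work out with simple powers-of-two arithmetic (nothing Opteron-specific is assumed here beyond the bit widths quoted above):

```python
def addressable(bits):
    """Bytes addressable with the given number of address bits."""
    return 2 ** bits

GIB = 2 ** 30   # gigabyte (binary)
TIB = 2 ** 40   # terabyte (binary)

assert addressable(32) == 4 * GIB    # 32-bit x86: 4 GB
assert addressable(40) == 1 * TIB    # 40-bit physical address space: 1 TB
print(addressable(48) // TIB, "TB of virtual address space")   # 256
```

So 40 physical bits get you exactly the 1 TB the parent mentions, and the 48 virtual bits cover 256 TB, which is plenty for the 4-8-way servers of the day.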