
Protein-Packed Hard Drives Promise High Capacity

Digimax writes "New Scientist has an interesting article on a technology being developed by NanoMagnetics that uses a protein responsible for storing iron in the body to store data on a hard drive. Is this the start of the BioTech revolution?"
  • Re:solid state (Score:3, Informative)

    by eenglish_ca ( 662371 ) <(moc.liamg) (ta) (hsilgnee)> on Sunday April 27, 2003 @01:35PM (#5819878) Homepage
    We need to go to a 64-bit architecture so that we can also avoid the more-than-1-byte sector issue, because that takes considerable overhead. In addition, the chemical reactions used in a protein drive would make it much faster than reading and writing to a magnetic drive.
  • Re:solid state (Score:2, Informative)

    by JDevers ( 83155 ) on Sunday April 27, 2003 @02:29PM (#5820110)
    In what way is Intel's NetBurst NOT x86? New bus topology comes and goes, but the instruction set stays the same. I would say that x86-64 is a much more significant change to the core architecture than NetBurst, which is basically marketing-speak for a slightly different bus layout combined with a very deeply pipelined CPU.
  • Re:solid state (Score:5, Informative)

    by Pharmboy ( 216950 ) on Sunday April 27, 2003 @02:30PM (#5820118) Journal
    More RAM incurs more overhead. After a point, it ends up slowing your box down if you are not actually USING the extra RAM.

    Even with a 15k SCSI drive, if you handle large files, which is becoming more and more common, the hard drive is going to be the bottleneck. Even if you only handle small files, the access time for a hard drive is generally 100 times slower than the access time for RAM, regardless of how you RAID it or the spindle speed. That is a lot of idle time when loading large files, or accessing lots of small files. Granted, SCSI helps because it takes the load off the CPU.

    I can't possibly see how your Athlon 2200 is the bottleneck, except when you are doing CPU-intensive stuff. If I am pulling a filter in Photoshop, yeah; raytracing, etc., I expect that. But I use Photoshop every day, and the amount of time I spend pulling a filter is still much less than loading and saving files.
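
    To put rough numbers on that gap, here's a tiny Python sketch (mine, purely illustrative; the read-back will likely be served from the OS page cache, which flatters the disk, and seek-heavy workloads are far worse than this sequential case):

        import os
        import tempfile
        import time

        SIZE = 64 * 1024 * 1024  # 64 MB test buffer

        # Write a scratch file so the "disk" read below does real I/O.
        with tempfile.NamedTemporaryFile(delete=False) as f:
            f.write(os.urandom(SIZE))
            path = f.name

        # Time reading the file back from disk.
        t0 = time.perf_counter()
        with open(path, "rb") as f:
            data = f.read()
        disk_s = time.perf_counter() - t0

        # Time a pure in-memory copy of the same bytes for comparison.
        t0 = time.perf_counter()
        _copy = bytearray(data)
        ram_s = time.perf_counter() - t0

        print(f"disk: {disk_s * 1000:.1f} ms  ram: {ram_s * 1000:.1f} ms  "
              f"ratio: {disk_s / ram_s:.0f}x")
        os.unlink(path)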
  • by TheRealRamone ( 666950 ) on Sunday April 27, 2003 @03:22PM (#5820355)

    Agreed. Here's what the company's website says:

    Technology Overview

    Hard disk drives currently store information at densities up to 70 billion bits (or gigabits) per square inch, with data stored as microscopic magnetic patterns arranged in circumferential tracks on the media. At extreme magnification, individual bits of data are revealed to be composed of grains of different sizes and shapes. The density at which information can be stored is restricted by how cleanly these patterns can be represented. Current production technology is limited by coarse granularity as well as the presence of some very small grains which spontaneously lose their memory - the superparamagnetic effect. These limitations will likely allow for only a possible tripling of storage capacity in the future. To significantly extend storage capacity, data patterns would ideally be recorded on orderly and uniform grains.

    NanoMagnetics grows tiny magnetic grains within hollow protein spheres called "apoferritin", which are 10,000 times smaller than the diameter of a human hair. The resulting nanoparticles are limited in size by the inner cavity of the spheres, producing highly uniform material which we call DataInk(TM). Importantly, DataInk(TM) is produced using mild and inexpensive chemical techniques.

    The resulting particles can pack closely, like oranges on a grocery shelf. Films of DataInk(TM) are baked (or "annealed") to optimize their magnetic performance and to carbonize the protein spheres. What remains is an ordered assembly of uniform magnetic grains. This type of media is ideal for expanding the storage capacity of hard disk drives, as it can support smaller and smaller patterns. Using individual grains to represent bits of data, this protein-derived media could ultimately extend densities to between 5,000 and 10,000 Gbit/in².

    Current Status

    Since our Series A round in 1999, NanoMagnetics has sustained a 1700% annual increase in areal density. At this rate, we will overtake the industry's anticipated areal densities by Q3 2003. Leveraging our compelling progress, NanoMagnetics will aim to qualify DataInk(TM)-enhanced media, then partner with one or more hard disk manufacturers for the next generation of drives. NanoMagnetics' phenomenal series of milestones and their dates are as follows:

    August 1999 - 75 bpi or 0.002 Gbit/in²
    August 2001 - 0.7 Gbit/in²
    December 2001 - 2.2 Gbit/in²
    June 2002 - 6.0 Gbit/in²
    August 2002 - 12.1 Gbit/in²

    The Company is currently preparing to scale up the manufacture of DataInk(TM) and is working with a number of key industry players.
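
    A quick sanity check on those figures (my arithmetic, not the company's; assumes a square bit cell and the quoted milestone dates):

        from math import sqrt

        # Annual growth implied by the milestones: Aug 1999 -> Aug 2002.
        start, end, years = 0.002, 12.1, 3.0
        factor = (end / start) ** (1 / years)
        print(f"annual growth: {factor:.1f}x (~{(factor - 1) * 100:.0f}%/yr)")
        # -> about 18x per year, consistent with the claimed 1700% increase

        # Bit pitch implied by 10,000 Gbit/in², one grain per bit.
        bits_per_in2 = 10_000e9
        nm_per_inch = 2.54e7
        pitch_nm = sqrt(nm_per_inch ** 2 / bits_per_in2)
        print(f"bit pitch at 10,000 Gbit/in²: {pitch_nm:.1f} nm")
        # -> about 8 nm, which matches a sphere 10,000x smaller than
        #    a (roughly 80 micron) human hair, as described above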

  • by YuppieScum ( 1096 ) on Sunday April 27, 2003 @05:59PM (#5821021) Journal
    The process is about how to organise and homogenise the arrangement of magnetic particles on the disc surface.

    Reading and writing is still done the way it is today (magnetically) but, with a more regular magnetic matrix, greater storage densities can be achieved...
  • Re:solid state (Score:3, Informative)

    by kesuki ( 321456 ) on Sunday April 27, 2003 @11:04PM (#5822321) Journal
    His point was that brain-dead operating systems could be using RAM to speed up the hard drive -- instead of using the hard drive to pretend you have more RAM.
    More RAM both does and does not incur more overhead.
    Your computer already has to deal with the overhead of being able to address 4 GB of memory. It's 32-bit, and that's how much memory it can address. Unless you've got more than 4 GB of RAM installed, the overhead is _already_ built into the system.
    This is part of why the 64-bit Opteron, with its 40 bits of physically addressable and 48 bits of virtually addressable space, adds a slight overhead and has slightly lower memory performance than a 32-bit Athlon at the same clock speed.
    Yeah, being able to address 1 TB of RAM is kinda silly if you don't even need to address 4 GB, since there is a performance penalty in being able to address 1 TB. That's why the Opteron doesn't have a 64-bit addressable memory space; it only needs 1 TB for today's 4-8 way server applications.
    But adding more RAM only incurs boot-time overhead in 'checking' that RAM to make sure it's good.
    It doesn't decrease performance anywhere else, because the addressable range is already designed into the system, so the performance hit is already there.
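
    For reference, the raw sizes those bit widths imply (quick Python tally; assumes byte-granular addressing, and the labels are just the figures quoted above):

        for bits, label in [(32, "32-bit virtual (Athlon XP)"),
                            (40, "Opteron physical"),
                            (48, "Opteron virtual")]:
            gib = 2 ** bits / 2 ** 30  # bytes -> GiB
            print(f"{label}: 2^{bits} bytes = {gib:,.0f} GiB")
        # -> 4 GiB, 1,024 GiB (1 TiB), and 262,144 GiB (256 TiB)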
