
Graphene Could Make Magnetic Memory 1000x Denser 123

KentuckyFC writes "The density of magnetic memory depends on the size of the magnetic domains used to store bits. The current state-of-the-art uses cobalt-based grains some 8nm across, each containing about 50,000 atoms. Materials scientists think they can shrink the grains to 15,000 atoms but any smaller than that and the crystal structure of the grains is lost. That's a problem because the cobalt has to be arranged in a hexagonal close packing structure to ensure the stability of its magnetic field. Otherwise the field can spontaneously reverse and the data is lost. Now a group of German physicists say they can trick a pair of cobalt atoms into thinking they are in a hexagonal close packing structure by bonding them to a hexagonal carbon ring such as graphene or benzene. That's handy because the magnetic field associated with cobalt dimers is calculated to be far more stable than the field in a cobalt grain. And graphene and benzene rings are only 0.5 nm across, a size that could allow an increase in memory density of three orders of magnitude."
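As a back-of-envelope check on the summary's numbers (a sketch only — the figures are taken from the summary, and the note about grains-per-bit is my assumption, not from the article): shrinking the bit cell from an 8 nm grain to a 0.5 nm ring gives roughly a 256x areal gain from geometry alone; the quoted "three orders of magnitude" presumably also reflects that today's media use several grains per bit.

```python
# Back-of-envelope check of the density claim, using the summary's numbers.
grain_nm = 8.0   # current cobalt grain diameter, per the summary
ring_nm = 0.5    # graphene/benzene ring diameter, per the summary

# Areal density scales with 1/area, i.e. with the square of the linear shrink.
linear_shrink = grain_nm / ring_nm   # 16x smaller in each direction
areal_gain = linear_shrink ** 2      # 256x more cells per unit area

print(f"{linear_shrink:.0f}x linear, {areal_gain:.0f}x areal")
```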
  • by Geoffrey.landis ( 926948 ) on Monday June 29, 2009 @05:25PM (#28520159) Homepage
    Diet Smith said, "He who controls magnetism, controls the world!"

    -- I'm just not sure he knew exactly how that would come out to be true!

  • Re:Not again! (Score:3, Interesting)

    by denis-The-menace ( 471988 ) on Monday June 29, 2009 @05:40PM (#28520383)

    I wouldn't worry too much.
    Tape seems to be on the way out because it can't keep up with the density requirements (Data silos anyone?)

    Some places now just mirror to other hard drives.
    -Some are smart and take those HDs off line and treat them like tapes.
    -Some are idiots and leave them online only to find them corrupt like the main disks (I read that somewhere...)

    Anyhow, it seems we are going to mimic Star Trek and just not bother to have backups for computer systems...

  • Re:Not again! (Score:5, Interesting)

    by sexconker ( 1179573 ) on Monday June 29, 2009 @05:44PM (#28520439)

    Tape is still very much "in" if you're talking about long term storage.

  • by Anonymous Coward on Monday June 29, 2009 @06:03PM (#28520673)

    As data density increases, so does the rate at which it can be read. Assuming two orders of magnitude increase (100x) and individual bits staying roughly the same shape, the linear density increases by a single order of magnitude. (10x bits per track, 10x tracks). The drive will be able to read at 10x the speed.

    At 3 orders of magnitude, you can expect a read speed of roughly 31.6x (sqrt(1000) ~31.6), i.e. an improvement of about 3000%.
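The scaling argument above can be sketched in a few lines (a simplified model, assuming bits keep the same aspect ratio and spin speed stays fixed, as the comment does):

```python
import math

def read_speedup(areal_density_gain):
    """Sequential read speed scales with linear (per-track) bit density,
    i.e. with the square root of the areal density gain: a 100x denser
    disk packs 10x the bits per track and 10x the tracks."""
    return math.sqrt(areal_density_gain)

print(read_speedup(100))              # 10x for two orders of magnitude
print(round(read_speedup(1000), 1))   # ~31.6x for three orders
```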

  • by Macka ( 9388 ) on Monday June 29, 2009 @06:24PM (#28520907)

    OK, sequential IO is going to improve some, as more data will pass under the head compared to today's disks. But random IO isn't going to feel the same benefit, because that's influenced more by rotational delay (fixed by spin speed) and the time it takes for the head to shift between tracks: disk seek. So your figures are going to be wildly off in real life.
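The point about random IO can be made concrete with a standard latency model (a sketch; the 7200 rpm and 8.5 ms seek figures are illustrative assumptions, not from the comment):

```python
def avg_access_ms(rpm, avg_seek_ms):
    """Average random-access time: seek plus rotational latency.
    Rotational latency averages half a revolution, and neither term
    depends on areal density."""
    rotational_ms = 0.5 * 60_000 / rpm   # half a revolution, in ms
    return avg_seek_ms + rotational_ms

# e.g. a 7200 rpm drive with an 8.5 ms average seek:
print(f"{avg_access_ms(7200, 8.5):.2f} ms")  # ~12.67 ms at any density
```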

  • by Chyeld ( 713439 ) <chyeld@gma i l . c om> on Monday June 29, 2009 @07:06PM (#28521375)

    Back when I was in college one of the 'cool' old Comp Sci professors had a tale he liked to share with his classes on the first day. I had him in a couple of classes, so I heard it over and over again. His presentation made it an amusing story if you could get over the fact that he smelt as if he lived in an ashtray.

    It seems that back in the mainframe days, the standard way of increasing storage on your hard drives was to make a bigger platter. Seems rather simple, right? The storage size grows quadratically with the radius, so adding an inch each time can lead to some fairly nice results, and with some platters topping out at 24 inches, that's some space.
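Since recording area grows with the square of the radius, each extra inch buys more than the one before. A quick sketch (the one-inch hub radius is an illustrative assumption):

```python
import math

def platter_area_sq_in(radius_in, hub_radius_in=1.0):
    """Usable recording area of one platter surface: an annulus
    between the hub and the outer edge."""
    return math.pi * (radius_in ** 2 - hub_radius_in ** 2)

# Adding one inch of radius to a 24-inch (12-inch-radius) platter:
before = platter_area_sq_in(12)
after = platter_area_sq_in(13)
print(f"+{after - before:.0f} sq in ({(after / before - 1) * 100:.0f}% more)")
```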

    Except....

    One day, the university ordered the 'latest' hard drive for one of their mainframes. I'm sure it was a behemoth; it probably held around 50 meg. The vendor came by and installed it, and everything seemed fine till a few months later, when the drive started failing: at about 30% capacity, writes stopped working and anything written seemed to come back corrupted. They were puzzled, but this is why such things have service contracts. The vendor came out, replaced the drive, and everyone went on with life.

    Till it happened again, at about the same capacity. Another replacement was made; the vendor was quite red-faced and explained that they seemed to have run into a batch of dud drives. All was forgiven and life went on.

    Till it happened a third time. At this point it was starting to embarrass everyone: the vendor, the people who ordered the hard drive in the first place, etc. So this time, instead of just letting the vendor take the drive back, the dean of the department demanded they diagnose the issue there on the spot.

    Now, this wasn't the age of sealed drive cases. Drives were certainly still kept 'clean', but we weren't yet at the point where a single grain of dust could wipe out megabytes of info (heck, even the 24-inch platters needed to be in arrays of 50+ to even dream of hitting 100 meg), so cracking open the drive wasn't that big a deal.

    So the vendor's tech, hoping to appease a clearly angry customer in the day and age when parts cost tens of thousands of dollars, popped open the drive.

    Want to guess what they found?

    Larger disks do indeed result in more surface area, but they also result in a higher centrifugal force on the edges. An increased force which the vendor apparently hadn't accounted for. Once the disks began to spin up, the glue holding the magnetic dust to the platter gave way, resulting in the platters being stripped clean after a certain radial length from the center. The disks themselves were fine up to that point, the dust was plastered to the case itself and when the platters came up to speed any dust that had fallen back onto them was once again flung up against the case.

    The reason the disks didn't seem to fail till they reached a certain capacity was simply that they weren't being written to in a random-access fashion but sequentially. The outer portions of the platters were only being hit once the inner portions had been filled.

    Perhaps the reason spindle speeds haven't gone up lately could be part of the same issue. Or perhaps I'm simply indulging in a bit of pointless nostalgia as I wait for this report I'm running to finish. Who knows...?
