The Story Of GMR Heads

lopati writes: "The story of GMR heads--'the breakthrough that boosted the capacity of hard-drives from a few gigabytes to 100 gigabytes and more'--came from chance observation, basic research and a vast, painstaking search for the right materials. Check out the helpful infographic." Background: This is, essentially, a story about how hard drives broke through some of their capacity limitations at the beginning of the 1990s - pretty cool background.
  • Hard drive size... (Score:1, Interesting)

    by NecroPuppy ( 222648 )
    My dad still makes the mistake of referring to hard drive sizes in megabytes, because that's what he started with...

    Makes me wonder how long it will be before we have commercially available (~$200-$300) terabyte drives... And how long it will be before we have apps that require them...
    • Makes me wonder how long it will be before we have commercially available (~$200-$300) terabyte drives...


      And how long it will be before we have apps that require them...


      Mmmmm... A.I.
    • It won't be TOO long, I'd imagine. Remember how long it took to bridge the gap between 100 MB drives and 1 GB drives... We've got 100 GB drives now.

      The problem is, some people will STILL find a way to fill THAT much space up with MP3s, warez and pr0n. *sigh* Oh well...
      • by big.ears ( 136789 ) on Monday December 17, 2001 @11:31PM (#2718241) Homepage
        Yeah....there will come a time, probably within our lives (maybe 20 years), when a $200 hard drive will be able to hold every movie, song, and book ever created. How do you fill that one up? Well, when they get that big, there might not be enough of a market for that much storage so the price will go up. But...

        (For the pedantic, my argument rests on the fact that in 1992, a 100 megabyte HD cost about $200, and today, a 100 gigabyte HD costs about the same (give or take). At the same rate, we'll have 100,000-gigabyte drives in ten years, and 100,000,000-gigabyte drives in 20. Physics blah blah blah.)

        At DVD-size of ~2.5 gigabytes per movie, uncompressed music at about 40 MB/song, and books at (generously) 20 MB/PDF-book, this makes (in 20 years):

        40 million movies, -or-

        2.5 billion (10^9) uncompressed songs

        5 billion books.

        (Please don't flame me if my math is wrong--just correct me politely). Unfortunately, I wouldn't be surprised if in 10 years most people are still using 56k dialup and 4 GB DVDs. Again, I ask you, how are you gonna fill up that disk?

        But, I'm not good at predicting the future of hard drive storage. In 1989, I had a big argument with a buddy about hard drives. My contention was that nobody would be able to use more than 30 (well, maybe 40) megabytes of hard drive space.
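The extrapolation above can be checked with a short script. This is only a sketch of the comment's own back-of-the-envelope figures (1000x growth per decade, 2.5 GB/movie, 40 MB/song, 20 MB/book are the commenter's assumptions); note that the movie count actually comes out to 40 million:

```python
# A sketch of the comment's extrapolation; the 1000x-per-decade rate and
# per-item sizes are the commenter's assumptions, not established figures.
capacity_gb = 100 * 1000 ** 2  # 100 GB today, ~1000x per decade, 20 years out

sizes_gb = {"movie": 2.5, "song": 0.040, "book": 0.020}
for item, size in sizes_gb.items():
    print(f"{item}s: {capacity_gb / size:,.0f}")
# movies: 40,000,000 / songs: 2,500,000,000 / books: 5,000,000,000
```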
        • Current DVDs at 480p normally take up about 3-4 GB. With more storage space available we could easily move to higher resolution movies. Imagine having film resolution on a disc...
        • Forget DVDs, I want an on-line video library that uses the SMPTE-292M [fedele.com] video standard for uncompressed HDTV. That requires 1.485 gigabit/sec or 668 gigabyte/hour. If you want something comparable to 35mm or 70mm film, the data rates will be even higher.
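The quoted SMPTE-292M figures are easy to verify (a sketch, assuming only the 1.485 Gbit/s rate given in the comment):

```python
# SMPTE-292M serial interface rate quoted above: 1.485 Gbit/s sustained.
bits_per_sec = 1.485e9
bytes_per_hour = bits_per_sec / 8 * 3600
print(f"{bytes_per_hour / 1e9:.0f} GB/hour")  # 668 GB/hour
```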
        • At DVD-size of ~2.5 gigabytes per movie

          ~4.5 gigabytes =) (fight club was 7.38 gigabytes)
        • For the pedantic, my argument rests on the fact that in 1992, a 100 megabyte HD cost about $200, and today, a 100 gigabyte HD costs about the same (give or take). At the same rate, we'll have 100,000 gigabyte in ten years, and 100,000,000 gigabyte in 20. Physics blah blah blah.)

          However, there's no guarantee that this will come to pass. You could make the same argument circa 1960 about airplanes: it was amazing how far they had progressed since 1903. As it happens, the exponential progress of aviation technology hit a limit about that time, and only linear improvements have occurred since. The same could happen at any time for any aspect of computer technology.

          The real question for storage is whether they can come up with any new tricks to get 1000X more density out of a hard drive, or whether they will have to switch to new and untested concepts like 3D holographic crystal storage.

        • Well, 20 years ago we were impressed with 240 line interlaced video - in 20 years we'll be dealing with video sources and display devices with orders of magnitude higher quality than what we use now. 1080 line progressive source is around the corner, and that's on the order of 20GB for a film...
        • there will come a time, probably within our lives (maybe 20 years), when a $200 hard drive will be able to hold every movie, song, and book ever created. How do you fill that one up?

          By archiving reality.

          Why should the data you store be limited to "properly" published materials? I currently have on my 2 GB hard drive every single email I've received or sent since 1998 (about 6,000), and it's my electronic memory. And it hardly makes a dent in the drive space.

          So take all those cameras you bought (as instructed by the wonderful popup X10 ads :) and send the live video to be archived onto your hard drive. Months from now you can go look at any feed. So it's just like my email archive, only scaled up by a factor of 10^6 -- from 100 kB of emails a day to 100 GB of video per day (1 Mbps per camera, 10 cameras).
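The comment's scale-up arithmetic holds up, give or take (a sketch using its assumed figures of 10 cameras at 1 Mbps each, around the clock):

```python
# 10 cameras, each streaming 1 Mbps, 24 hours a day (commenter's assumptions).
cameras = 10
bits_per_sec_per_camera = 1e6
bytes_per_day = cameras * bits_per_sec_per_camera / 8 * 86400
print(f"{bytes_per_day / 1e9:.0f} GB/day")  # 108 GB/day, i.e. roughly the quoted 100 GB
```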

          Still not enough? Consider the fact that video is just a tiny fraction of what you perceive as reality itself. Go for insane resolution, 360-degree field of view (or even better, 4-pi steradians :) , 5.1 surround sound, the other three senses, sixth and seventh senses ...

          Now that you've archived your own perceived reality (of your own space), how about experiencing someone else's? Think movies, including porn :)

          Every advance in storage capacity is immediately filled by an increased appetite for storage.

          What is the bitrate of reality?

      • The problem is, some people will STILL find a way to fill THAT much space up with MP3s, warez and pr0n. *sigh* Oh well...

        No such luck

        certain operating systems will likely beat you to the punch

        ;)

        Actually thinking of software generated by genetic [geneticprogramming.com] programming [genetic-programming.com], etc. which produces code that obviously never passed through human fingers.

  • More space makes happy geeks, but of course only to a certain point. Do you really need over 100 GB on a desktop? I have a pretty good sized collection of MP3s but my 40 GB HD isn't even near full.
    • In 1992 I puzzled at how I would ever fill my newly purchased 210 meg harddrive - I didn't even know they came that large!

      The data you consume just gets larger and at the same time your tidying up gets slacker. Sooner than you believed possible you are out of space. You may think that we've reached a limit where you can have more space than you could possibly ever need, but time will prove you wrong :-)

      • A question: The amount of space users require has increased with the types of files being used (or is it vice versa). Either way, as we've gone from text files to graphics to audio to video, we've been using more and more space. What comes after video? What's going to be the next disk space hog? (And who's going to be controlling *that* media?)
          Either way, as we've gone from text files to graphics to audio to video, we've been using more and more space.

          that's a good question. i'm not insightful enough to guess what the Next Big Thing will be, but i do think we'll see the PC (at least in the hands of power users) getting more and more use as a PVR unit.

          also, if hard drives continue to grow, i think you'll see people ripping large amounts of their OWN DVDs to the hard drive, for convenient viewing, much as many people (like me) have ripped most of their CD collection to mp3 for convenience's sake.

        • Simple, full immersion video. Sure, it's still video, but almost a totally different animal that will surely take a huge amount of space.

          Someone will always come up with a way to use resources. The drive manufacturers know that.
    • Enough space? (Score:3, Insightful)

      by PopeAlien ( 164869 )
      ..Of course 640k should be enough for anyone..

      This always comes up in discussions about huge hard-drives. I've got a couple of hundred gigs on my desktop, and I'm currently going through and trying to clean the thing up to make some free space. Granted there is a lot of junk there, but I actually need the space for working with video files - I do graphic design and video editing, and I can tell you that 100 GB sounds great but can fill up fairly fast when you're working with uncompressed files.
  • Comment removed (Score:4, Informative)

    by account_deleted ( 4530225 ) on Monday December 17, 2001 @09:50PM (#2717988)
    Comment removed based on user account deletion
  • Research really needs to be poured into the development of long-term solid-state storage. Even with GMR heads and modern EPRML magnetic encoding techniques, we are rapidly approaching the limitations of the magnetic medium. New technologies seek to enhance drive speed and capacity at the expense of reliability; I have had four 7200-rpm 100 GB drives fail on me within a year of their purchase. I have had no such trouble with older drives. With RAM and other solid-state storage getting progressively cheaper and already at absurdly low prices, it seems foolish to still be reliant on fault-prone mechanical platters for long-term storage.
  • by freebsd guy ( 543937 ) on Monday December 17, 2001 @09:56PM (#2718009)
    I used to be an engineer at Maxtor [maxtor.com]. Though my job was entirely on the software end (maintaining Max-Blast, the drive fitness tester, and other assorted hack jobs), I had several friends who designed and tested drive hardware, and we sat down a few months back to talk about the 75GXP problems. As it turns out, those issues are closely related to the implementation of the GMR head.

    IBM [ibm.com]'s major problem was that, although they were able to scale down the GMR head very easily, they had large stocks of old media that was not certified for use on GMR drives. (Incidentally, most of that media is in an enormous warehouse in Hungary, which is where most of their drives are produced now.) They designed a recertification process that was supposed to allow them to separate the media that would be suitable for the 75GXPs from the media that wasn't suitable, but that process was deeply flawed and this resulted in the high failure rates of their drives.

    You may find it a bit odd to be hearing this from a former Maxtor employee. Well, the dirty little secret of storage companies is that reverse engineering is rampant. My colleagues at Maxtor probed, disassembled, and tested the IBM drives; indeed, they might have known what the bug was even before IBM did.

    So, the obvious RISK of GMR technology is: do not use platters that are not certified for use with the new heads. Those who disregard this creed are certain to meet with a nasty public relations disaster in due time.

    freebsd guy

    • by wass ( 72082 ) on Tuesday December 18, 2001 @12:08AM (#2718376)
      I'm way too busy cramming for my quantum mechanics final (when the TA actually says it's gonna be hard, then it's gonna be friggin' impossible). So I'm too busy to write about stuff (not too busy to browse /. though ;-) )

      Here's a link [slashdot.org] to one of my posts on the Spintronics slashdot article a few weeks ago. I think I posted it a few hours too late for most people (moderators included) to notice it.

      Explains the basics of GMR, which is based on magnetoelectronics, or its catchier nickname Spintronics. Also related to GMR are the non-volatile RAMs commercially available now.

      The cool part is that GMR devices were commercially available only a few years after discovery in the lab. That's an accomplishment usually reserved for potentially ground-breaking devices (i.e., the transistor). T'will be very interesting to see how this field progresses in the future.

    • Anyone know what the status of the suit against IBM over the 75GXPs is? I have one of these puppies and am afraid to touch it now. Magnetic media is scary enough (from a reliability standpoint), but to have to worry about it just going kaput is another matter entirely...
    • are 60GXPs more reliable ?

    • I highly doubt they are trying to use media left over from pre-GMR days...May as well be a century in disk drive years... Maybe they were having screening issues, but it has nothing to do with GMR heads!!

      seibed
    • If you have any information regarding the high failure rate of the IBM 75GXP, please let me know! I am an enthusiast that bought a 75GXP, and had it RMAed by IBM three times. I registered with Sheller, and was later contacted by an attorney. I believe they are trying to form a class, and I am a class representative. I'm posting here to ask that anybody with specific information regarding the high failure rate of these IBM drives contact me and let me know. Thanks, Adcadet acalvin@mediaone.net
  • My local Fry's had an ad in the paper today for a Seagate 80 GB drive for only $119. While this is still more expensive (in $/MB) than other media types, the relative prices are decreasing significantly. Soon an affordable 1 TB of storage won't be just a dream for the average geek!
    • Er... Sorry, that's $129 [outpost.com] after the rebate. I think you can buy from Fry's online at outpost.com (but why would you do that when you can go to the store and enjoy the pleasant holiday experience?)
  • Yup.. (Score:4, Funny)

    by appleprophet ( 233330 ) on Monday December 17, 2001 @10:01PM (#2718027) Homepage
    "the breakthrough that boosted the capacity of hard-drives from a few gigabytes to 100 gigabytes and more--came from chance observation, basic research and a vast, painstaking search for the right materials."

    In summary, the guys at IBM ran out of HD space for their um, 'special files'? ;)
  • So... How long 'til we hit a few yottabytes?
  • by Bob_Robertson ( 454888 ) on Monday December 17, 2001 @10:20PM (#2718075) Homepage
    Back in 1988, when I worked in an IBM mainframe shop, I had the good fortune to run across one of their "technical newsletters", a publication of data from basic research efforts in various IBM labs around the world.

    It's since been my "case in point" in any argument claiming there is no market for "basic research", and that therefore government - taxes, theft - must be used in order to better the human condition.

    Fulton, Bell, Edison, Tesla, and a host of for-profit universities all doing basic research notwithstanding, some people just love using guns to force others to support their theory of "good".

    IBM didn't keep their basic research secret then, and even with something as impossibly profitable as keeping GMR secret now might have been, the article notes that the highest density drive on the market isn't even made by IBM. They're not keeping it secret now, either.

    Bravo.

    Bob-

    • some people just love using guns to force others to support their theory of "good".

      That surprises you? That is the reason guns were invented, and it is their primary purpose. The government has always had and is always going to have more and better guns than you. That's why they collect taxes and you don't.

      Anyway, if IBM and other private research institutions didn't already have a cozy relationship with the government, they'd use their large budgets to buy their own guns and collect taxes from you themselves.

      • As E. S. Raymond mentioned, firearms are just one form of last resort in individual defense, and therefore don't have to be the latest and greatest. The government has tanks and laser-guided bombs, anyway.

        To paraphrase, "The gun, the picket line, the lawsuit. Each is a last resort. No one wants to go on strike, sue someone, or shoot an attacker, but real problems are brewing when there is an effort to take those options away."

        Bob-

  • What's really scary is thinking back to circa 1983, when external hard drives for some of the first PCs were in the 5 megabyte range and cost $4,000 to $5,000. Only for those who could afford such luxuries, or could justify a serious business need for such devices.

    5 megabytes. $5,000.00. That just makes my head hurt now. Every single one of my self-ripped MP3s takes up more space than that!

    And then it occurred to me on this trip down memory lane that the real danger in the science (fiction for now) of time travel is getting ourselves killed by taking our relatively awe-striking hardware back in time and gloating to our younger selves.
    • now THAT'S an idea

      /quickly builds a time machine
      /takes a 1.6 GHz Athlon with 512MB of RAM to 1990, installs DOS (5.5 was the top version in '90, right?)

      /gives it to a magazine to review, but doesn't give them any clue to the specifications

      muhahahaha :)
      • small problem: DOS could only address 64 MB (less back then?) of RAM.

        Amazing how quickly we forget those silly little things we had to deal with back then.
      • > /takes a 1.6Ghz Althon with 512MB of ram to 1990, installs DOS (5.5 was the top version in '90 right?)
        > /gives it to a magazine to review, but doesn't give them any clue to the specifications


        Just for shits and giggles, I tried running an old benchmark on a P3-800. Here's what I got.


        SI-System Information, Advanced Edition 4.50, (C) Copr 1987-88, Peter Norton

        Computer Name: IBM AT
        Operating System: DOS 7.10
        Built-in BIOS dated: Thursday, April 26, 1900
        Main Processor: Intel 80386 Serial Ports: 2
        Co-Processor: Intel 80387 Parallel Ports: 3
        Video Display Adapter: Video Graphics Array (VGA)
        Current Video Mode: Text, 80 x 25 Color
        Available Disk Drives: 6, A: - F:

        DOS reports 640 K-bytes of memory:
        80 K-bytes used by DOS and resident programs
        560 K-bytes available for application programs

        [ ... ]

        Computing Index (CI), relative to IBM/XT: Not computed. Clock inactive.
        Disk Index (DI), relative to IBM/XT: Only hard disks can be tested.

        Performance Index (PI), relative to IBM/XT: Not computed.


        So it looks like they'd say

        "For a '386 without a clock, it suuuuuuuure is fast! Dunno where it puts all that data, though, must be some sort of solid-state RAMdrive, 'cuz there ain't no way it fits all that into 640K, and Norton sez it ain't got no hard drive!"

        Analysis:

        CPU: Looks like it finished the busy-wait-with-some-x87-instructions used to evaluate the "computing index" in less than 1/18 of a second from the internal system clock, and concluded there was no clock, rather than trying to divide by zero. (Mad propz to Peter Norton for thinking ahead.)

        Hard Drive: it probably looked at the partition table, saw how many gigs it was, or that it was FAT32, and said "Fucked if I know! Hard drives aren't supposed to be over 30M per partition!" (So I guess we know that GMR wasn't that great an innovation, 'cuz, hey, all these gigabytes, and I don't have a hard drive :-)

        I ran it a few times and finally got "lucky" and got a number for "computing index" - 62,910 on an 800 MHz P3. (The whole benchmark fits in cache, so it's not surprising that it's over 60000 times faster than a 4.77 MHz XT. I suppose I'd have to run the benchmark 100 times and figure out how many of those runs straddled a 1/18th of a second boundary to derive, statistically, just how much faster than "60000 times faster than an XT" it is... ;-)

        Thanks for the walk down memory lane, dude. Running old benchmarks on new hardware is fun!

  • Edison? (Score:1, Interesting)

    by Anonymous Coward
    "...painstaking search for the right materials."

    Sounds like the light bulb.
  • by Bobartig ( 61456 ) on Monday December 17, 2001 @10:44PM (#2718134)
    One of my physics profs, Yumi Ijiri, moved to my school after doing a few years of research for NIST and IBM regarding GMR technology. Basically, no one could figure out why GMR worked, or how to systematically improve upon the concept. IBM found that a neat combination of thin films created these extremely sensitive magnetic sensors, and instead of finding out why/how it works, they empirically tried some 22,000 or so combinations until they progressively found better and better arrangements. After the fact, they hired Yumi to figure out all the physics, but her research was also inconclusive. It kinda scares me that there's stuff in my harddrive that IBM and NIST couldn't figure out after 4 years of research.
  • by Anonymous Coward
    This is kind of offtopic, but interesting nonetheless. The Economist provides that nice little Flash infographic as part of the story, served off their own URL, and it's actually impressively done.

    But get to the final step, and you'll see that "this translates into very large capacity hard drives that can be made cheaper and more reliable for our IBM customers." It's marketing fluff for IBM!

    It looks like The Economist was happy to be given this material, since it probably looked so snazzy. But I think at best they'll be embarrassed by their lack of (online) journalistic savvy, and at worst it's the start of a new world of checkbook journalism.
  • by apsmith ( 17989 ) on Monday December 17, 2001 @11:49PM (#2718315) Homepage
    I remember first reading about these in some physics articles around 1991 or 1992; we had a presentation from one of our colleagues on the underlying physics about then. The commercial companies really jumped on it to bring these out so quickly! The only other case I can recall of such quick and major deployment of a basic discovery was when Erbium-doped fiber amplifiers came out, within a year or so of the discovery of Erbium's ability to amplify optical signals. That is why we can now double capacity on optical fibers with ease, even on trans-oceanic cables, just by changing the equipment on the ends, and it is one of the major reasons for the rapid increases in bandwidth capacity of the last few years (getting the telcos to actually release that bandwidth for a reasonable price is another story, of course...)
  • Talking about how much modern HDs can hold...
    The present record holder, a pocket-sized 120 gigabyte hard-drive from
    Western Digital, can store the equivalent of a stack of double-spaced
    typewritten pages taller than an 18-storey building.


    Assume that one storey is 10 feet.
    Assume that 300 pages stack 1" high.
    Assume 250 words per typewritten page.
    120,000,000,000 / (18 * 10 * 12 * 300 * 250) = ~740 bytes per word!
    If a word averages 6 characters, then they are using over 100 bytes to represent each word!
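The steps of the stacked-pages estimate above can be sketched directly (all figures are the commenter's assumptions, not measured values):

```python
# The comment's stacked-pages arithmetic, step by step.
capacity_bytes = 120e9                # 120 GB drive
stack_inches = 18 * 10 * 12           # 18 storeys x 10 ft x 12 in/ft
pages = stack_inches * 300            # 300 pages per inch of stack
words = pages * 250                   # 250 words per typewritten page
print(f"{capacity_bytes / words:.1f} bytes per word")  # 740.7 bytes per word
```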
  • by Your Anus ( 308149 ) on Tuesday December 18, 2001 @12:23AM (#2718420) Journal
    With all this great technology, I wonder why these larger capacities are only available on IDE drives.

    It seems to me, SCSI drive capacities used to outstrip IDE by quite a bit, and the price penalty wasn't all that much (~$200). Lately, all I see in the catalogs for SCSI is 18GB or 36GB, while IDE is at 80GB, 120GB, and even 160GB.

    Is there something about this technology that isn't compatible with SCSI, or does SCSI not scale well, or what?
    • by Anonymous Coward
      You must have missed that Seagate has been selling a 180GB drive for nearly a year now, and will supposedly be releasing a 500GB drive sometime next year.

      They just aren't really priced with home users in mind. Pricewatch has 'em though, if you got the spare cash.
      • Let me add that while IDE disks are sold to home users who usually have only one HDD in their computer, I've rarely (if ever) seen a "server"-class system without a RAID controller.
        This means that the absolute minimum configuration I've seen in a "server" is 2 HDDs in mirroring, with the most usual (for an x86-class "server" - let's try and compare apples to pears at least) being 3-5 HDDs in RAID5.
        18 GB is the low-end cut for SCSI drives now, which makes the "standard" storage size for a server anywhere between 36 and 72 GB.
        High-end dedicated storage appliances have LOTS of drives. A fully-loaded NetApp F840 (I'm not sure about the model number, though) can use up to a full 42U rack, with 5 units or so going to the appliance proper and everything else holding only disks (active or hot-standby) and power supply units.
    • by Anonymous Coward

      These large drives are mainly used by home users for their porn/mp3 stuff. Capacity is everything, performance is unimportant. It's all driven by the Wintel market, where SCSI never caught on.

      Well, SCSI is inherently more expensive because some complex controller logic is also built into the drive. This allows SCSI to be smart and use fewer CPU cycles for all tasks. I notice a big difference when transferring large (>200MB) files from one SCSI drive to another vs. one IDE to another. The difference is my computer is still really responsive, almost as if there were no file transfer at all. Now, I know you guys are screaming DMA, you dumb@ss. Well, DMA is fine, but on big file transfers IDE drives still slow down your computer. IDE drives are cheap because they rely on the CPU to do their extra tasks. Yes, DMA on IDE is awesome, but SCSI is still better. I just wish these jerky manufacturers would go to economics class and learn about supply and demand. Although I am a big fan of SCSI, I cannot justify the extremely high pricing for SCSI drives :(

      JOhn
      • my computer is still really responsive, almost like there is no file transfer at all.

        Tell me about it.

        I had a dual P3 with SCSI drives, but these drives were so noisy that one day I got fed up with the noise and sold them off and bought 7200 rpm ATA100 IDE drives instead.

        For a while I enjoyed the near silence of the new drives but soon the disappointing performance hit home. The computer simply choked on the heavy I/O during compilation.

        Now I'm back with a single 18 GB 15 krpm SCSI drive. Yeah, it's small and rather noisy, but at least the I/O isn't a performance bottleneck anymore.


    • Keep in mind that since SCSI is aimed at higher speeds, there are other problems to deal with, things like disk flutter (caused by the high speed), which causes the designers of high speed drives to use smaller disks (3.5 inch form factor, 2.5" disks)... To fit the same capacity in the drive, you need to stack the disks higher and higher, and this gets expensive, as does going to a full-height form factor instead of half height. Of course there is more to it than that...

      seibed
  • the IBM team set about examining elements drawn from practically the whole of the periodic table. All told, the group made and tested no fewer than 30,000 multi-layer combinations of elements.

    The periodic table has about 100 elements (that last for more than a second). I don't know much about this stuff, but isn't the periodic table made so that you don't have to examine each and every element? You can sort of predict across 'rows' and 'columns'? Also, how can you make 30,000 combinations out of multi-layers of 100 elements? Hmm, wonder if they tried Sodium Chloride as one of the combinations? There can't be 30,000 valid combinations in solid state physics that satisfy the criterion needed for GMR heads!

    ... so, the team made an absolutely crucial discovery. They found that varying the thickness of the spacer layer actually affected the behaviour of the magnetic layers.

    I would think that would be obvious. I mean, after 30,000 tests, they find that thickness matters?
    • by apsmith ( 17989 ) on Tuesday December 18, 2001 @01:18AM (#2718550) Homepage
      You start combining the elements of course. 100x99x98/6 is about 160,000, which is the number of different combinations of 3 elements you can have. But then you can also continually adjust their relative concentrations - A B_x C_y allowing x and y to be any number between 0 and infinity - in practice you might sample at 10 different points in x and y to get a rough idea: that's another factor of 100, so about 10 million ways to combine three elements just in terms of chemistry. Go look at alloy phase diagram books for a sample of the complexity you can get combining three metallic elements into alloys. And why stop at 3 elements? The high T_c superconductors take 4 or 5 or more to work.

      But this isn't just chemistry either - the material is nonuniform, layered. Each layer can be composed of some different magnetic or non-magnetic alloy, and each layer can have a different thickness, and the number of layers is itself a variable. The combinatorial possibilities are in the billions! Obviously they narrowed it down considerably to find what they needed in just 30,000 samples - but there may be something even more spectacular out there among the billions of other possibilities, just waiting to be found.

      That's what makes science these days so interesting :-) And so difficult :-(
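The counting argument above can be sketched directly (assuming, as the comment does, ~100 usable elements, triples of elements, and sampling each of the two relative concentrations at about 10 points):

```python
# Counting 3-element combinations, as in the parent comment.
from math import comb

triples = comb(100, 3)                    # 161,700: "about 160,000"
with_concentrations = triples * 10 * 10   # sample x and y at ~10 points each
print(triples, with_concentrations)       # 161700 16170000, same order as "about 10 million"
```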
      • I remember something called the "Buckingham Pi theorem" but never fully understood it : )

        Supposedly it can be used to fully characterize the solution space without having to try all those billions of combinations of different alloys and thicknesses.

        Would it apply in this situation? Does anyone have a link that explains how it works?
  • by cvd6262 ( 180823 ) on Tuesday December 18, 2001 @12:50AM (#2718492)

    My family's been in MR tech (well, magnetic storage) for over 30 years now. I worked 3 years in IBM's MR head manufacturing facility in San Jose (Cottle Rd.). It used to be that the substrate people (the ones who made the actual disks) didn't have much to do because the MR heads could not write small enough to pressure them.

    GMR heads caused quite a stir because they could write smaller than the substrate could resolve.

    Now IBM's "pixie dust" has swung the pendulum the other way, as the head is once again the bottleneck.

    An interesting tid-bit is how many production managers were hired away from IBM soon after GMR heads were released.

  • This whole article seems to come from inside IBM.
    While I am prepared to believe that the company are behind GMR (I remember the original announcement), it seems a bit implausible that they invented all other forms of magnetic mass storage as well - something that this article implies.

    Remington Rand bought up the ENIAC in the early 1960s and tried to make a commercial proposition of it. They must have used some form of mass storage apart from 12" floppies.

    This sounds a bit like Al Gore inventing the Internet (which he apparently never actually claimed anyway).
  • Obviously the writer has no understanding of the subject matter.
    It is written so that it seems like it may be informative, but for anyone with a brain it is clear that it is a mish-mash of facts that when put together make little sense.
    This is the worst kind of obfuscation, but calling it obfuscation implies the writer knows what he/she is talking about, and this I doubt.
    Thankfully, the slashdot crowd can make up for this sorry lacuna of an article with several coherent (not this one!) comments that actually make me think.
    Now, how do we tap into this wealth of knowledge and experience without having to read crap economist no-brain articles like this?
