Hardware

Ask Slashdot: Breaking the Computing Bottleneck?

MidKnight asks: "With CPU speeds venturing into the GHz range, typical DRAM access times falling towards 6ns, and network speeds trying out the 100Mb waters, the major computing bottleneck continues to be the hard drive. While other hardware speeds grow by orders of magnitude, hard drives aren't much faster than they were in the 70's. So, the question is, when will mass storage speeds start catching up, and what will cause the revolution?"
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward
    SCSI hard drives are quite a bit more expensive; they command anywhere from a 40-100% price increase for the same capacity and rotational speed.

    Most people only have 1 hard drive and would never see the performance improvement from SCSI. Most of SCSI's benefits are when you have more than 1-2 drives and lots of simultaneous traffic which the average PC user simply doesn't do.
  • by Anonymous Coward
    I work on mainframes, which have much greater data-moving ability than desktop PCs. The drives they use, however, are basically the same as PC drives; the bus interface they use is much better. Until the PC's bus is updated you won't get the performance you are looking for from PCs. Ideally the bus would run at the same speed as the CPU so you keep a 1:1 ratio for timing. Right now the CPU is twiddling its thumbs waiting for the I/O to occur.
  • by Anonymous Coward
    I saw it operating about 3 years ago on the Discovery channel - the system uses a gallium arsenide cube about an inch square that contained gigabytes of storage capacity. The primary and reference lasers (a row of 100 beams each) did the writing, and the primary did the reading - and it was rewritable. They even played a segment of the host show from it. The entire setup took up a whole table-top - so I'm sure miniaturization is their primary concern now. This is assuming some asshole corporation isn't tying up the technology in court with copyright lawsuits (a common ploy to slow down competitors). In any case - we have the technology. Today. Whether we see it on the market is up to the analysts and whether they think it's economically feasible. Chances are, it'll go the way of the sound-driven refrigeration unit (no pump, no freon - and it works). Our kids will probably see the stuff trickle to market when they're having kids, unless someone is willing to take a chance and do something different now... It sickens me to see how much radically different technology is available but being held back by an industry that doesn't like change.
  • by Anonymous Coward
    Even an optimal caching strategy would still leave a lot of idle CPU cycles while the processor is waiting for disk.

    Only if the CPU has no way to tolerate the latency involved. Go read about multithreaded architectures. The principle is that the CPU should never wait for a high-latency event to complete before continuing. Instead, it just switches to a new thread. Normally these threads are much lighter-weight than the POSIX variety, but they're of a similar flavor. I don't know of a good, on-line summary of the various types of multithreaded CPUs. If you're interested, look around U. Wisc., Stanford, and the ACM Digital Library for phrases like `multithreaded', `SMT', etc. You could also go to a bookstore and flip through Culler and Singh's ``Parallel Computer Architecture'' book for some now-slightly-dated info.

    One of these machines does exist. It's from the Tera Computer Company [tera.com]. It has no data cache, and it's pretty fast for scientific-style jobs. IIRC, we were getting the equivalent of an 8-way UltraSparc-I on only one or two processors (argh, I really can't remember) on a list ranking benchmark. They're only considering the latency to memory locations (around 170 cycles, which is between 9 and 1.5 insn word opportunities for a stream), but the general idea should hold for any high-latency event. The engineering for extremely high latencies probably gets trickier, though.

    Expect multithreaded architectures in the higher end workstations within the next 3-4 years (at the longest). A certain up-coming architecture has the ability to grow in this direction pretty easily (I believe, haven't thought about it too hard), but one of the folks involved in the design was quite down on multithreading a few months ago.
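
    A crude way to see why this wins, as a back-of-the-envelope sketch (the per-thread work figure below is an assumption; the 170-cycle latency is the Tera number above): if each thread does R cycles of work between misses of latency L, you need roughly (R+L)/R ready threads to keep the pipeline full.

```python
# Rough utilization model for a multithreaded CPU (illustrative only).
# Assumes the ~170-cycle memory latency mentioned above and free switching
# between hardware threads; both figures are assumptions, not measurements.

def utilization(num_threads, work_cycles, miss_latency):
    """Fraction of cycles spent on useful work if the CPU switches to
    another ready thread on every long-latency miss."""
    return min(1.0, num_threads * work_cycles / (work_cycles + miss_latency))

for n in (1, 4, 16, 64):
    print(f"{n:3d} threads -> {utilization(n, 10, 170):.2f} utilization")
# A single thread sits ~94% idle; about 18 threads of this mix hide the latency.
```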

    Jason

  • by Anonymous Coward
    I have quite a significant chunk of RAM, but I am looking for something similar to Ramdrive.sys. However, this file gives you a maximum of 32MB, which is pretty pitiful if you want to run something like Quake or Quake 2 at full speed, or capture a large amount of data over a long time without wearing your HD out. Any URLs, people?

    Onei
  • by Anonymous Coward
    I work on a project at Michigan State University on diamond lattice growth. In conjunction with the physics program, we have been attempting to grow carbon nanotubes. There has been success in the growth of the tubes, and the buckyballs, but moving the buckyball from one end of the tube to the other has been a pain in the ass. There needs to be the creation of an EMF to "un-stick" the buckyball. This would eliminate volatile storage (RAM), and terabytes of storage would be available on a single chip. It is a strong possibility, and there is going to be a conference in September/October at MSU to correlate the research from across the world.

    Exciting stuff! More storage/less space.
  • by Anonymous Coward
    SCSI does almost nothing to alleviate the problem to which MidKnight is referring. When computer architects call the hard drive the computing bottleneck, they are referring to its place in the memory hierarchy.

    You can think of all the storage space in your computer as being arranged in a hierarchy from fastest to slowest: CPU registers, L1 cache, L2 cache, RAM, disk, nonlocal media (network, tapes sitting on the shelf, etc.).

    The CPU works directly on registers. When it needs something else, it sends a request to the next level, L1 cache. If it's not there, that level requests the data from the next level, and so on. The first four levels of the memory hierarchy are fairly close to each other in performance:
    • Registers are zero-latency by definition.
    • L1 cache is very nearly zero-latency.
    • L2 cache requires the CPU to idle for a couple of clocks while waiting for the data.
    • RAM makes the CPU wait for a few dozen cycles in the best case, on a fast bus.
    And then there's the disk. Disk latencies are measured in milliseconds. With a modern 500-MHz CPU, a millisecond means 500,000 cycles. When the CPU needs data from a page of virtual memory that's on disk, it has to wait millions of cycles for that data to be loaded into RAM so it can then fetch the data.

    That's five orders of magnitude worse than RAM (by contrast, RAM is one order of magnitude worse than L2 cache). This is why virtual memory thrashing is so disastrous for performance.
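
    To put rough numbers on that (the latencies below are ballpark assumptions for illustration, not measurements of any particular system):

```python
# Stall cycles for a 500 MHz CPU at each level of the memory hierarchy.
# All latencies are rough, assumed figures purely for illustration.
cpu_hz = 500e6

latencies_s = {
    "L1 cache": 2e-9,    # ~2 ns
    "L2 cache": 10e-9,   # ~10 ns
    "RAM":      100e-9,  # ~100 ns
    "disk":     9e-3,    # ~9 ms seek + rotation
}

for level, seconds in latencies_s.items():
    print(f"{level:8s} ~{seconds * cpu_hz:>12,.0f} cycles")
# Disk works out to roughly 4,500,000 cycles: millions, as stated above.
```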

    RAM-to-disk is the biggest critical gap in the memory hierarchy. SCSI hardly does anything to close the gap. A 7-ms latency is not much better than a 9-ms latency; it's still millions of cycles. And SCSI's higher transfer rate doesn't mean a thing for virtual memory, since pages are pretty small to begin with.

    OTOH, applications that are limited by transfer rate rather than latency may benefit from SCSI, but transfer rate has never been the big problem with hard drives.

    ~k.lee
  • by Anonymous Coward
    Researchers at the Hitachi Cambridge(UK) Lab at Cambridge University are developing a new memory device named 'PLEDM' - Phase-state Low Electron (hole)-number Drive Memory, which they believe to be 'a promising candidate for the multi-Gbit memory chip which is scheduled to become available early in the next century'. In principle, it is possible to make a fast non-volatile PLEDM cell by modifying the barrier-structure in the channel, and they are confident that many of the present day data storage devices (eg: computer hard disk drives) could be replaced by PLEDM in the future.

    The Hitachi press release [hitachi.com]
  • A minor correction on your last point: you are correct in saying that the interface speed isn't that much different, but three factors in which SCSI is still far superior to IDE are: expandability to more than 2 drives/channel, simultaneous data transfer to multiple drives on the same channel, and CPU usage.
  • I'm seeing a lot of comments to the effect of drives not being all that slow. Let me provide an example to demonstrate just how slow they are.

    Let's say you have a large forklift which is capable of carrying a few hundred boxes of paper from your archives. You're the forklift operator, and I'm the CPU.

    Ok forklift, go get me box of paper #421769, which is in our on-site storage facility downstairs.

    You drive off, and are back about two minutes later with the paper. That's the speed of memory.

    Now, forklift, I'd like box of paper #090210, which is in our long-term storage facility.

    You drive off. I go home. I have dinner, come back the next day, do stuff, go home ... go on vacation for a while... probably get a new job... new car...

    FOUR YEARS later, you come back. That's the speed of hard disk.

    No matter how big that forklift (bus bandwidth) is, the latency difference between two minutes and four years is a pretty huge one.






    --
  • by Jordy ( 440 )
    Well, either that or place a few thousand heads per platter in a drive. For every head you add per platter, you improve access times, transfer rates, etc. At least logically this makes sense.

    I can imagine one really big head the size of the platter itself with 1 mini reader/writer per track - hell, you wouldn't have to spin the thing :)

    Of course, that might cost too much.

    --
  • Virtually any hard drive on the market today can fill a 10baseT ethernet continuously, even with noncontiguous reads/writes. As for 100BaseT, you can fill that with a RAID setup. For example, cdrom.com fills its 100BaseT ethernet constantly, so the network is the bottleneck there, not the hard drives.

    As for latency, hard drives are behind memory and CPUs, sure, but they're still ahead of networks. Most large networks have much worse than 9ms latency.
  • Posted by Largo_3:

    I disagree. Compare the price of a decent tape drive to that of an average-sized hard drive; the prices are almost on par with each other. Hence tape media is no longer a useful option for most people, unlike years ago when everyone wanted a 'Colorado Jumbo 250'.

    it only takes time...
  • Posted by Largo_3:

    I agree. Fewer moving parts also means SMALLER.
  • Your hard drives do that right now. Data doesn't move to your drive serially; it uses a parallel style. That's why you have that wide ribbon cable that can only be so long. Hard drives can usually write more than one thing at a time also. Much of the technology they use is a 'trade secret' as I recall, though. You may be able to find some more info on how it works at a college research site or something. Trust me, the HD makers are pushing as much as they can. RAID does help a lot, but the problem is still latency. Cache helps that some... We'll have to see what happens.
  • Well, rather than just keeping the same aspect ratio, you'd probably want to increase the resolution of the screen while keeping a relatively constant physical size. It'd be worthwhile for fonts, and maybe still images, depending on what you're doing. I think most animation/game stuff isn't going to need much more than 1280x1024 though. :)
  • It is faster, yes. The problem is that as long as we use a spinning platter to store data, with a physical head moving over it, that imposes some limitations. The platter spins at a finite speed, and the head again moves at a finite speed. We need some sort of solid state mass storage; the problem is that it's *MUCH* more expensive.

    --Zach
  • Yeah, a new hardware architecture can help lots. At Cambridge Computer Labs (part of the University) they have a prototype computer built on the Desk Area Network. The premise here was that the main system bus was replaced with an ATM switching fabric, so any device could talk directly to any other device, and these conversations could go on in parallel.

    A good example is drawing video to the screen from a camera. The data would come off the capture device, and talk straight to the video card. No going thru main memory. Or network packets could go straight into the processor cache.

    I saw it running 6 odd real time video displays (some TV, some local camera) at a very good framerate, yet the processor was only an Arm 6.

    So, to cope with modern loads (especially multimedia data types) a good architecture helps a lot. You can download the PhD thesis describing the DAN architecture from the Cambridge FTP site somewhere. Can't remember the URL off the top of my head though.

    -- Michael

    PS: Little factlet - the DAN prototype sat right next to the (in)famous coffee pot that was the subject of one of the first web cams :-)

  • What an interesting work of fiction.
  • It is the "tube" between our CPUs and our memory....

    Yea Lisp, scheme, Turing, Backus....

    I'm done
  • Silly me, I always thought the bottle neck was the user...
  • There was a discussion [deja.com] recently on comp.arch [comp.arch] about the "Memory Wall", which is a similar phenomenon to the hard disk latency problem. Processors are getting faster faster than other parts of the system.

    One of the more insightful comments is that we need to go back to Knuth's Fundamental Algorithms and see how he handles the memory hierarchy. In this case, tape drives. Tape drives in those days had reasonable bandwidth (for the time) and miserable latency. Sounds like hard drives now. So we need to reread the old algorithms and do a little translation:

    Tape drives -> disk drives

    Disk drives -> DRAM main memory

    Core (RAM) memory -> 2nd level cache

    Cache -> 1st level cache

    This isn't simple, of course, because things have changed; amongst other things there is no detailed control over what is in which cache level. But the gist of it is that we need to drop the assumption that accessing random addresses in RAM is just as fast as accessing RAM in such a way that caching and readahead are possible. This is still a universal assumption in algorithms textbooks.

    Similarly, we need to drop the idea that we can get to any part of the disk within a reasonable time. If we treat modern disks like a tape drive with really good mechanics for fast-forward and rewind, then we are closer to the reality, where you can transfer over a megabyte in the time it takes you to read a random bit from the other end of the disk, (about 10ms).
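
    A quick sketch of how lopsided that is, with assumed late-90s drive parameters (roughly 10ms per random access and 15MB/s streaming; both are guesses, not specs):

```python
# Reading 10 MB sequentially vs. as 10,000 scattered 1 KB reads.
# The seek and transfer figures below are rough assumptions.
SEEK_S    = 0.010      # ~10 ms per random access (seek + rotation)
XFER_MB_S = 15.0       # sustained streaming rate

def sequential_read_s(mb):
    return SEEK_S + mb / XFER_MB_S                      # one seek, then stream

def random_reads_s(count, kb_each):
    return count * (SEEK_S + kb_each / 1024 / XFER_MB_S)

print(f"sequential 10 MB:         {sequential_read_s(10):.2f} s")      # ~0.68 s
print(f"10,000 random 1 KB reads: {random_reads_s(10000, 1):.1f} s")   # ~100 s
```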

    This implies our 'hard disks' should be RAM disks, and the current hard disks should be relegated to some sort of backup role.

    The mainframe I used at Cambridge (Phoenix, a heavily hacked IBM 370 - this is only 10 years ago!) would move your files out to tape if you didn't access them for a few weeks. Next time you accessed them it took a few minutes for the tape robot to find your tape and put it back on the disks.

  • Hard disks are about 3 cents per megabyte [rcsystems.com].

    RAM is about $1 per megabyte [rcsystems.com]. And according to rumour, IBM is about to speed up the capacity race in hard disks, while memory is about to hit the quantum effects wall.

    RAM isn't a real alternative to disks and if you want to use RAM to cache disks, then you need to be very careful that your access patterns are such as to let the cache have an effect.

    Research should be into how we make a large amount of RAM and a humongous amount of disk space work as fast as if the disk were RAM. Perhaps ReiserFS [devlinux.org] will help Linux with this.

    Incidentally, DAT tapes aren't much cheaper than hard disk space these days, though they are more portable, which is important (if you back up by copying to different disks within the same LAN then you have a problem if lightning strikes your ethernet).

  • With RAM prices falling the way they do, we'll soon be able to keep the whole contents of a HDD in the main memory.

    Actually, I think hard disk capacities are pulling away from memory capacities.

    After all, it's almost trivial to have as much as 1GB in your home machine right now

    Yes, if you buy a new PC, but then it will probably come with a 10GB hard disk.

    Argument: how about power failures?
    Reply: can you say UPS?

    Can you say "irritating patronising phrase"?

    If a new fast version of ext2fs came along that always did a reformat instead of a file system check, would you use it? Even if you had a UPS? You'd be crazy. Linux may be stable, but we are not at the point where I would be willing to bet the entire contents of my hard disk that none of the drivers have a lock-up bug in them.

    The solution probably is something that uses RAM to hide the slow disk speed, but it has to be more complex than your suggestion if it is going to work.

  • we can't decrease latency, using current techonology that's bound by the speed of light.

    I don't think that's the limitation just yet. Light can cross 1m in 3ns; that's less than the distance from my processor to its RAM and back, yet the RAM has a latency around 100ns. And it's less than the distance to my hard disks and back, but they have a latency over a million times more than this.

  • Even an optimal caching strategy would still leave a lot of idle CPU cycles while the processor is waiting for disk.
    Only if the CPU has no way to tolerate the latency involved. Go read about multithreaded architectures
    This is the wrong order of magnitude. At the level of waiting for the disk the OS can context switch, there's no need for hardware support. The hardware support is only needed if you want to context switch on very short waits like a L2 cache miss.

    In both cases the main problem is finding some other useful work for the processor to do while it is waiting for the answer. A lot of the time, the user is waiting for one particular job to finish, it isn't easily parallelisable and what the processor does while it waits for high-latency IO isn't that relevant.

    The engineering for extremely high latencies probably gets trickier, though.
    On the contrary, it gets much simpler, and any decent OS will already context switch while waiting for a disk.
    Expect multithreaded architectures in the higher end workstations within the next 3-4 years (at the longest).
    Introducing new architectures is a difficult business. I don't expect more than two new architectures on high-end workstations: the Transmeta processor (if that is what they are doing) and IA64.
    A certain up-coming architecture has the ability to grow in this direction pretty easily
    If you mean IA64 I can't see how. All those registers would make fast context switches more difficult. They have other ways of dealing with high latency memory, though.

    If you mean Transmeta, then I understand why you are posting as an AC :-)

    If I understand correctly, the Alpha 21464 goes a different way, and puts 4 processors with fast interconnect on one chip. Again, the trick is to make your compiler so clever that it can split the job into several parallel parts.

  • Why do you think your IDE cables can't be any longer than [insert spec number here. it's less than 1 meter]. It's because of 0.6c and the latency that follows.

    Rubbish, it's because IDE isn't terminated and isn't differential, and for all I know it isn't asynchronous. You can make Ultra-2 SCSI much much longer, even extend it to a couple of miles with fibre.

    I think we need CPU architecture to build upon single simple devices that communicate in a way that is not bound by the speed of light. Once the CPUs get there, the rest will follow, as is seen to fit. Read ``Communicating Sequential Processes'' by Hoare et al. and think about the communication happening way above c. That's sweet :)

    Either you or Hoare or Einstein has misunderstood something. I can't say for sure which of you it is...

  • IA64 can issue multiple bundles at once (if the stop bit isn't set)

    All modern processors can issue lots of instructions at once, the only difference is that IA64 expects the compiler to do the analysis, whereas others think it is better done at run time

    I was surprised to see how many similarities there were between IA64 and Tera. There are huge differences, though. IA64 has one set of registers, and I think adding 4 or 8 would be quite an overhead. I don't think making IA64 into a low-latency multithreaded architecture would be that simple, and anyway the point is moot, since AFAIK neither Intel nor HP plan to try.

    If CPUs support many suspended threads, DB servers can use them for tolerating disk latency, too. Yes, it's a really long latency, but CPU-level context switching can get rid of the OS-level context switching overhead for responses.

    Context switch times are measured in microseconds, while disk seek times are measured in milliseconds. There's little need to optimise the OS context switch time if you are waiting for the disk. And remember, hardware support for hundreds of Tera-style threads would cost you a lot of performance in other areas.

    And DB servers already do use threads to tolerate disk latency, but that only works if you can find something else to do in the meantime, and a 10ms seek time is an age. And a user waiting several minutes for an answer that needed thousands of disk seeks to find isn't going to be comforted by knowing how many other users got their answers at the same time, especially if they are using the same disk.

  • SCSI may be faster than IDE. The SCSI bus is quick, and SCSI hardware tends to be higher speed. However, the gap between SCSI and IDE is never more than an order of magnitude, even when you consider RAID solutions. RAM is 5 orders of magnitude faster than disk - SCSI or IDE. It's not like doubling disk speeds would help; we need to multiply them by 100,000 to catch up.
  • Actually, diamonds would be an excellent choice, were it not for the DeBeers family. You see, the hell of it is, diamonds aren't even all that rare. It's just that the DeBeers family owns all of the known mines. They exercise monopoly power in ways which would make Bill Gates look like RMS, and that's why diamonds are so expensive.

    Diamonds aren't that tough to manufacture in labs, either, believe it or not (after all, diamonds are just carbon, which is plentiful; add a bit of boron and you get one hell of a semiconductor).

    However, you still have one major problem: speed. Consider the following: CD/DVD has made great strides in the past years speed-wise, but they're still much slower than a hard drive. And that's only on 2-dimensional media. Add a third dimension, and you get something which is absolutely wonderful for backups, but not for general use.

    Though I'll be the first to admit, the day I can use Trek-like "isolinear chips" will be quite cool indeed...
  • Why not build hard disc heads the same way we build CPUs?
    Instead of a single head sitting on an arm that has to move back and forth to reach a particular track, why not use chip manufacturing technologies to build "head chips" which have thousands of heads, at least one per track.
    Then transfers will be purely related to rotation speed.

    Cheaper too.
  • Seek time doesn't scale, ok you got that right.

    But it doesn't either on CPUs or RAM (well it's done some, but it won't go on).

    The fact is, information travels thru copper at less than the speed of light, and certainly never at more than this speed. We can only increase transfer rates (by using wider busses); we can't decrease latency, which with current technology is bound by the speed of light.

    Until we start using tunneling (that some physicists seem to believe can transfer information above the speed of light) we will not scale well in latencies.

    Disk latencies, being so much larger than that of RAM and CPUs, will scale for some time to come. RAM and CPUs are almost dead in the water, until we get this breakthrough.

    Until then, we can only re-arrange our problems to be transfer-rate bound, rather than latency-bound. That requires thinking, of course.

    I don't know much about physics, but I try to keep up by asking the right people the right questions. And there seem to be some possibilities in either tunneling or superconductors, both of which in some cases might allow information to travel above this limit of c. Meanwhile I'll be doing what I can do to make problems rate-bound rather than latency-bound.
  • Still, you have _miles_ of wire in your CPU alone.

    And light travels thru copper wiring at about 0.6c.

    Why do you think your IDE cables can't be any longer than [insert spec number here. it's less than 1 meter]. It's because of 0.6c and the latency that follows. The speed of light is a real limitation even in low end devices _today_.

    Disks can improve latency for quite some time to come, because they're bound by the rotational speed rather than the speed of light, so no worries there. But the rest of the machine is dead in the water in this comparison.

    I think we need CPU architecture to build upon single simple devices that communicate in a way that is not bound by the speed of light. Once the CPUs get there, the rest will follow, as is seen to fit. Read ``Communicating Sequential Processes'' by Hoare et al. and think about the communication happening way above c. That's sweet :)
  • Sorry to follow up on my own posting.

    But light of course doesn't travel thru copper. What I meant was: information travels thru copper at around 0.6 times the speed of light.
  • For an electric signal to travel thru less than 1 meter of copper, it still takes more than three nanoseconds (because we're travelling at less than c).

    Build your disks infinitely fast, with a 1ps seek time, and you'll still have several nanoseconds of latency.
    Build the same disks, but use superconductors/tunneling for transport, and you'll be rockin'.
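
    For reference, the raw propagation numbers at the assumed 0.6c copper speed work out like this:

```python
# One-way propagation delay through copper at an assumed 0.6c signal speed.
C_M_PER_S = 299_792_458.0
V = 0.6 * C_M_PER_S

for metres in (0.45, 1.0, 10.0):
    print(f"{metres:5.2f} m -> {metres / V * 1e9:5.1f} ns one way")
# ~2.5 ns for a 0.45 m IDE-length cable, ~5.6 ns for a full metre.
```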

    I believe neither I, Hoare nor Einstein misunderstood anything in this subject. Because none of us really cared/cares much.

    As long as information travels at a speed slower than some fixed boundary, the information latency is going to become a bottleneck at some point in time.

    Still, to get back to the original point of this discussion, disks have a long way to go, and CPUs, RAM etc. doesn't.

    Expect superconducting/tunneling to become big, or expect disks to scale much better than CPUs/RAM/etc. in the future.

    Or perhaps you have some theory that I'm not aware of? I'd appreciate knowing about it.
  • By turning copper into fibre, you only scale from the copper speed to somewhere nearer light-speed (in vacuum). You'll never _really_ scale.

    As was pointed out in another post, another solution might be (temporarily) to decrease the length of the wiring. Still, in the long run, we need communication faster than the speed of light.
  • Drives are not a bottleneck. If you need performance, go with RAID.

    I can transfer 34 MB/s from my disks (sustained), and around 100 MB/s from my main memory (RAM). As I see it, disks are not the problem here.

    RAM today does some sort of parallel-like access to get the performance it gets. If you do the same with your disks (and why wouldn't you, if performance is that important?), you get an equal speedup. Also put more CPUs in your box, when the CPUs become the bottleneck.

    Today in PCs, the PCI bus is the only real bottleneck, perhaps along with the memory subsystem. The memory subsystem has improved a lot over the last few years, but the majority of PCs still come with 32 bit 33MHz PCI busses.

    When we start seeing motherboards with 64 bit 66 MHz PCI, that's when we'll again be looking at disks, cpus and memory as the bottleneck.
  • I think there's room for solid-state "drives" - which are actually memory, but with hard disk capacities (several gigabytes). They should become cheaper than hard disks (sooner or later ;), and they are already faster.
    Actually you can even buy solid-state drives today, but they cost a lot and aren't big enough for your bucks...
    Can anyone throw in a few URLs?
  • Well, you're never going to send one forklift to the long-term facility. You're going to send a convoy of trucks.

    More than anything, my point is that "only a car is like a car". Can't we just evaluate computer technology using its own terms? Bringing forklifts into the discussion won't help much, I think.
    --

  • >Well, you're never going to send one forklift to the long-term facility. You're going to send a convoy of trucks.

    > More than anything, my point is that "only a car is like a car". Can't we just evaluate computer technology using its own terms? Bringing forklifts into the discussion won't help much, I think.


    I think his point is not about forklifts, it is about time, and the difference between two minutes and 4 years.
    But then, it does not take four years to get data off your drive, and as he pointed out, the cpu can keep working on something else while it waits those four years.

    Cheers
    Eric Geyer

  • Promise Technology ( http://www.promise.com [promise.com]) makes a RAID controller for Ultra DMA drives. I've been looking at these for a while, but just haven't had a good enough reason to go out and buy one. Not yet, at least. (I was planning on getting one for my _next_ computer.) It seems like a great way to get a lot of performance out of cheap drives. I think it is all done "in the hardware" -- it looks like a single drive to the OS, and so it _should_ work with Linux, but don't quote me on that one.

    The price is good, about $125 (US), and four Ultra DMA drives to fill this up would be much, much less expensive than four equivalent SCSI drives. Granted, a uDMA RAID array will be different from a SCSI-based array, but it should be close enough for a meaningful comparison, I think.

    Has anyone used uDMA RAIDs? Heard any reviews?
  • Dang! It looks like the current card, the uDMA one, is plug-and-pray. I think it _could_ work, but booting from a pnp device could be a real drag. I guess if you used a boot floppy. . .
  • The Promise cards don't sound that good, on further thought. I can put two different uDMA drives on my current motherboard, each on a different chain, and set up one for all the data and the other for swap. I'm not sure how different that would be, performance-wise, from a uDMA "RAID".
  • There's a Norwegian company called Opticom ASA [opticomasa.com] which has been working on new storage media for some time now. They've been working with different kinds of polymer and other materials in order to create an all-organic "film" that works as a storage medium. They claim that their technology [opticomasa.com] and coming products will provide storage media that supersede existing solutions by several orders of magnitude in size, speed, and energy use as well as storage capacity.

    They recently presented an operative one-gigabyte ROM version of their product to the public, and from what I understand, they're currently working to commercialize this technology. They also claim to be able to produce a working RAM version of their product in less than a year. This article [yahoo.no] (in Norwegian, I'm afraid) sums up their product and marketing ambitions for next year.

    I'm not qualified to judge either the quality or the feasibility of their technology, but they've been highly debated in Norway. The debate has apparently subsided somewhat now that they have working prototypes, which may indicate they're moving in the right direction. If somebody knowledgeable out there finds their theories worthy of a comment, I'd really like to hear about it!


  • And despite what you said, you would still have to spin a head-per-track drive. Maybe you're thinking of the obvious extension of head-per-bit, but without motion there is no signal for a magnetic change detector.

    More to the point, if you were going to make a head per bit, just use a memory chip: if you're already going to have O(one piece of circuitry) per bit, just make it a flip-flop or a capacitor instead of a magnetic reader/writer.

    Actually, I have been thinking that having maybe two or three arm assemblies, instead of just one, would be a good idea: they could each be assigned to a smaller group of cylinders, reducing seeks, or all cover all cylinders but be arranged around the surface to reduce latency, or some combination of these. It would be expensive, and it would make the electronics in the controller a lot more expensive, but that's what these new faster ICs are for, right?

    I remember the other day someone was complaining that the drive manufacturers keep concentrating on making the drives bigger instead of making them faster, which is the opposite of what is needed for big RAIDs. I guess the problem is that capacity looks more impressive, and the extra expense for a faster drive of the same size wouldn't impress most normal people, so it wouldn't pay off so well.

    David Gould
  • I guess I don't quite understand your point about display resolutions. Why do they need to increase? Aren't they dependent on the size of your monitor and the amount of memory on your graphics card? Who would want to view 3200x2400 on even the largest monitor? Talk about eye strain.

    Steve

  • Heh. I suppose that means that the computer market might be really interesting in a decade or so; instead of just choosing between different variations of similar technology, we'll have to see if we can settle for existing optical and/or biological devices or if we want to upgrade to the new devices based on quantum effects. Much more interesting than RISC or CISC, SCSI or IDE, blah blah...
  • I am an "Average" computer user and have (2) two hard drives, a DVD, and a CD-R.. all IDE unfortunately..
    Stan "Myconid" Brinkerhoff
  • I'd been wondering why I couldn't read stuff on screen as fast as stuff on dead trees, no matter what size or resolution monitor. Got any links to those lab tests?

  • Chevy had a fuel injected V-8 back in '57 or '58.
    ('course Diesels have always been fuel injection)

  • I remember measuring performance on my first drive. About 100k per second. The PC probably contributed to that low number: the drive did 3600 RPM, 17 sectors per track, 0.5k per sector, 60 seconds per minute: 510 kbyte per second.

    My new Quantum does 15MB per second. That's a factor of 30. Moore's law predicts a factor of 30 every 10 years (= a factor of 1000 every 20 years, = a factor of 5 every 5 years).

    How long ago did I have my first hard drive? '87: 12 years ago.

    I'd say that harddisks follow Moore's law pretty closely.
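
    Checking that with the numbers above (510 kbyte/s then, 15 MB/s now, 12 years apart); nothing here is new data, just the arithmetic spelled out:

```python
import math

old_kb_s = 3600 / 60 * 17 * 0.5      # 510 KB/s track rate in '87
new_kb_s = 15 * 1024                 # ~15 MB/s today
years    = 12

factor = new_kb_s / old_kb_s                          # ~30x
doubling_years = years * math.log(2) / math.log(factor)
print(f"{factor:.0f}x in {years} years -> doubling every {doubling_years:.1f} years")
# ~30x in 12 years is a doubling roughly every 2.4 years: Moore-ish territory.
```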

    Indeed, processors seem to have outperformed Moore's law a bit over the last decade, so the gap has increased a bit. (Oh well, a factor of 20... :-)

    Does anybody remember those PC indexes that measured how much your PC outperformed a 4.77Mhz IBM PC? What would a Celeron/400 score now?
    Is that realistic?

    Roger.

  • I saw this before. Did you read it?

    with the assistance of "Tata Industrial Semiconductors" (name changed for security purposes), a Taiwan based (location changed for security purposes) silicon foundry.

    "new Electron Trap design..."
    "We may just NOT sell it to the top 10 PC Companies who monopolize our industry."

    and my favorite:

    "We have no idea where the drawings from which we derived our TCAP came from."

    Umm.. aliens, maybe? And put it in a PII shell? Come on.... The whole thing just sounds ludicrous. :-)

  • by Booker ( 6173 ) on Saturday June 26, 1999 @06:57AM (#1831391) Homepage
    Heh, did some more digging, and they *do* claim that the technology came from aliens! :-)

    Here's a guy who has some info on their claims, mostly debunking them. http://www.uni-weimar.de/~kuehnel/TrueorNot.htm
  • Actually, IBM did this with their (3350?) series of disks for mainframes. It was a conventional disk, with the addition of a separate set of heads fixed over the (I believe) outermost two tracks of the disk. Very handy for swap space and system libraries.
  • I don't think that massive buffering on the disk is the answer, but I imagine there is a lot that could be done with more memory on disk controllers. If a controller could cache several hundred pages, it could more easily order writes according to locality. This could also extend to reads, to a lesser extent. We now have video cards with 16 and 32 MB of memory, why not a disk controller with a similar amount? Of course, there's a problem with system crashes causing "written" blocks to be lost, how long to copy 32M of memory to flash ram?
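
    As a toy illustration of the kind of reordering a big controller cache would allow (a plain elevator/SCAN sort over pending block numbers; real firmware is of course far more involved, and nothing here is vendor-specific):

```python
# Toy elevator (SCAN) ordering of cached writes by block number.
def elevator_order(pending_blocks, head_position):
    """Flush blocks at or past the head first (ascending), then sweep
    back down through the rest, cutting back-and-forth seeking."""
    up   = sorted(b for b in pending_blocks if b >= head_position)
    down = sorted((b for b in pending_blocks if b < head_position), reverse=True)
    return up + down

print(elevator_order([500, 20, 750, 90, 410], head_position=300))
# -> [410, 500, 750, 90, 20]
```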
  • If the people working on this live up to their promises, we'll have terabytes of data stored in a cube one inch on a side, at access speeds similar to those of RAM.
    It's in the labs, but who knows when it'll be available....
    --
  • What kind of crack do you smoke? All the Firewire stats I've seen report throughput in bits per second. SCSI stats are reported in bytes per second. Ultra2 SCSI does 80 megabytes per second, Ultra3 SCSI does 160 megabytes per second. Firewire claims to support 800 megabits per second; this is 100 megabytes per second, which is between Ultra2 and Ultra3. Also, I don't know if you can build a hardware RAID array on the Firewire bus; at least I've never seen it done.

    Andrew N.
    (I probably owe Apple money for saying Firewire)
    --
    ...Linux!
  • Could you give me a link to a RAIDED Firewire array? I've never seen one and don't think they exist.
    --
    ...Linux!


  • Argument: how about power failures?
    Reply: can you say UPS?



    Static RAM. NVRAM.
  • But if price is your consideration, then you aren't trying for a high-performance computer, which is what this discussion is about.
  • In all seriousness it's not the bus speeds that are really killing us for the most part (true, a faster bus helps, but a Pinto can only go so fast, even on the Utah Salt Flats). The big slowdown is still that damned platter and head moving all over the place, which actually does a pretty good job considering the age of the base technology. One thing that really hurts us is fragmentation; people need to defrag, and they don't realize how much it really helps. And yes, 5% fragmentation is real bad. Think about it: you really only use a small percentage of your drive (depending on your size and habits), and that small fragmented percentage is in the most used part of the drive, so the head searching all over the place for stuff you use constantly really hurts. One thing that's been a good move is more cache on the drives. Then there's the cool move by Quantum, the solid state hard drives - real expensive, but damned neat. Last I remember it was some tens of thousands per gig, but oh so fast. According to their site they seem to be up to 3.2 gig now (no price listed of course), so you could actually use these for something cool. No RAID needed I'm sure, but I wonder what it would do in a RAID.....

    matguy
    Net. Admin.
  • OK, so I should read further down before I post. D'oh!

    matguy
    Net. Admin.
  • Far more detailed info at the subject URL.
  • Research has been underway for a few years on the "protein drive." The operative principle is a photosensitive protein which occurs naturally in swamps. I believe that it darkens upon exposure to light to cause the (light/dark)-(1/0) memory mechanism. Throughput is proportional to the time required for the protein to respond to the light condition, and storage density is proportional to molecular "resolution" and the energy of the impinging light waves. The storage capacity of the funny-looking prototypes in the labs in '97 was on the order of 1GB/cm^2. The access time was on the order of 10^-3 s. Improvement should come from tweaking the molecularity of the protein, possibly by mutating the life-forms that produce it (speculation).

  • This is of course not much help for small files, but for larger ones...

    Even if the seek time is not much improved, the total read/write time should be at least half an order of magnitude faster. Think of bigger programs, graphical files and so on.
  • The "problem" with the way hard drives have developed is that most of the engineering resources have been dedicated to making drives:
    • large
    • cheap
    • reliable
    You can see this has been tremendously successful, and hard disk technology has been progressing extremely quickly. I don't think it's outside of "the revolution" at all. Speed has merely taken a back seat. This is why you see things like RAID, to bring higher speed to storage.
  • by trb ( 8509 ) on Saturday June 26, 1999 @06:15AM (#1831405)
    John Mashey (SGI, was MIPS, Bell Labs) has been lecturing recently on "Infrastress" which discusses the problem of computer architecture not keeping up with CPU speed. Not just a problem of slow disks, which can be raided to get parallel speed increases, but bus and net bandwidths, etc. He says: "It's as if cars suddenly became 10 times cheaper, but the roads didn't change." See articles: here [theage.com.au] and here [fairfax.com.au].
  • btw, that's doing RAID 0 - if you want some fault tolerance, and go to the next highest performing RAID level, RAID 1, plan on doubling that.

    RAID 1 is mirroring and therefore gives no performance advantage. In fact, there is a slight performance hit for the overhead of writing the same data twice, once for each disk.

    RAID 5 is a good compromise between data integrity and speed. You should really go for at least 4 disks and more if you can afford it as the % of space lost to parity information goes down as you increase the number of drives.

    1. I said *next* best performing RAID level, as in it will do 50% worse than RAID 0. (You do know you can do RAID 1 with an arbitrary number of drives, right? It's called RAID 1+0.)

    RAID 0/1 requires 4 disks and it is configured with 1 pair of disks (the RAID 1 part) mirroring a striped pair of disks (the RAID 0 part).

    2. RAID 5 will do 50% worse again than RAID 1 for the same number of drives.
    That is just plain wrong. RAID 1 is mirroring. Because the controller has to write the same data to two disks, there is a performance decrease.

    RAID 0 is just a striped pair. The data is spread out over two disks so that reading and writing data can be distributed over both disks.

    RAID 5 is the addition of parity information to a striped set. The data and parity info is spread out over all the drives with the parity info never residing on the same drive as any other part of the file.

    Look at the following diagram (fig 1.1)

    drive #1 | drive #2 | drive #3
    -------------------------------
    file 1.1 | file 1.2 | file 1.p (parity information)
    file 2.p | file 2.1 | file 2.2
    file 3.2 | file 3.p | file 3.1

    If you go to read file 1, the RAID controller will read part 1 from drive #1 and part 2 from drive #2 simultaneously. Read performance from RAID 5 is essentially the same as RAID 0.

    If you want to write file 1, the RAID controller will write part 1 to drive #1, part 2 to drive #2 and the parity information to drive #3. This is slower than RAID 0 because of the parity information, but still faster than RAID 1 because you are not writing the entire file to two places. You are splitting up the write duties between 2 of the drives and taking some overhead time to calculate and write the parity information.
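
    The parity itself is just XOR across the stripe, which is why losing any single drive is recoverable. A minimal sketch, with byte strings standing in for the chunks in fig 1.1:

```python
# XOR parity across one RAID 5 stripe (toy example, equal-sized chunks).
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

part1  = b"file 1.1"                  # chunk on drive #1
part2  = b"file 1.2"                  # chunk on drive #2
parity = xor_bytes(part1, part2)      # parity chunk on drive #3

# If drive #2 dies, its chunk is rebuilt from the survivors:
assert xor_bytes(part1, parity) == part2
print(xor_bytes(part1, parity))       # b'file 1.2'
```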

    3. Only if you have a server where performance isn't much of an issue, e.g. a file server, should you consider RAID 5.
    RAID 5 is excellent for applications where you need performance and data integrity. Add a hot spare to the array and you will have to have 3 disks fail simultaneously to take you down!

    Read a little bit more about RAID. It can be complicated but it'll be nice to know. Good luck.

  • The only performance benefit you might get from RAID 1 is that it is SCSI. The file exists on each drive and must be read from the primary drive. In order to appreciate any performance gain on reads in a RAID you must be reading from multiple sources simultaneously.
    1) RAID 0 can either be striping, doing writes a chunk at a time to each disk in the stripe set, or concatenation, which is slapping multiple disks together to make one big virtual disk and then writing so you fill up the disks in sequence.

    You are correct. I wasn't trying to write a complete discourse on RAID, just point out details relevant to the original post. My mistake for intimating that it was only striping.

    2) RAID 5 can only tolerate losing 1 disk in its set. The volume can continue in "degraded" mode where for every read off of the volume, the system has to calculate the parity bit to make up for the lost disk. If you throw in one hot spare you could lose 2 disks total. (The hot spare takes over for the 1st disk to go..the volume needs a period of time to sync the data/parity to it, but then continues normally. You can then lose 1 more disk..this will put you in "degraded" mode.)

    You will have to re-read my post as none of what you said contradicts me. I made the point that, with a hot spare, not until the 3rd disk fails will you be unable to use the system. Just a matter of how you phrase it, but the point is, you can lose 2 disks and still function. Granted, the performance is degraded by using the parity information.

    3) I'm not sure where you've gotten the impression that RAID 5 is excellent for performance. Calculating parity bits for *every* read is the antithesis of performance. RAID 5 (or "poor man's mirroring") is good for systems that need the data redundancy but cannot afford the cost of doubling their disk usage for mirroring. However, no competent SysAdmin would use the terms "performance" and "RAID 5" in the same sentence.

    RAID 5, according to everything I have been able to read on the subject, does give you increased performance over RAID 1, especially when reading data. Essentially, it is like reading from a RAID 0 striped set. Writing data does take some more time due to the parity bits, as you mentioned. I believe it was pointed out elsewhere that file servers are perfect for RAID 5 due to the increase in performance coupled with the data redundancy, but database servers would not function very well with RAID 5. I was merely contradicting the previous statement that only servers where performance isn't much of an issue should consider RAID 5. To repeat another poster, if money weren't an issue, we'd all have RAID 0/1.

  • I'm shocked that no hard drive manufacturer has thought of this before, but I've had two simple solutions for years that will solve the bottleneck quickly and easily.

    Instead of one carrier arm with read/write heads, why not just two, three, or four with a simple multiplexing controller?

    The positioning commands would be synchronized, while data bits would be fed in alternate, optimally sized data packets to the heads.

    And if they're not already, the heads themselves, on alternate sides of the platters, should be running with a split data stream too. I think that they're not now; otherwise, multiple-platter drives would have a much higher sustained data transfer rate than single-platter drives.

    The old way of thinking has the formatting tracks, as viewed from above with theoretical x-ray vision, laid directly above and below each other on opposite faces of the drive. This is probably thought of as a challenge to maintain, but this is completely unnecessary.

    One set of tracks will be offset by a slight amount under my scheme, but that would be completely harmless, as the second head would always be fed and read by the controller with the exact same delay that was incurred by the drive while formatting. Thus the data remains in sync automatically, although the formatting tracks are offset.

    Simple, cheap, effective, and very quick to develop and implement. If the controller were 100% efficient and the data chunks even, dual multiplexed controller arms would double the performance (and halve the access time), and multiplexed heads would then again double the data transfer rate. Ta-da! 3ms or less access, and a theoretical potential to quadruple the sustained data transfer rate. All without increasing the platter speed any more.

    With further research, a bit of time, and some new logic circuitry, I'm sure that the drive head carrier arms could later be run asynchronously, reducing the random access time even further.

    All rights reserved, manufacturers are invited to contact me at mturley@mindspring.com for reasonable licensing terms. :)
  • If you have ever taken apart a hard drive (some of us have) you'd see that the arms are much too long to fit more than one in there without whacking each other. And if you start to say "make the arms smaller", stop and count to five; if the arm can't reach the inside of the disk then it's going to have a rough time reading the boot sector. And with more than one head you're going to have lots of fragmentation.

    I have taken apart several drives, and know that what I'm proposing is entirely possible with few changes to the current technologies in today's hard drives.

    Maybe you haven't been thinking about this long enough, or perhaps you're locked into the same "that's how it is" pattern that HD engineers are in now. There is more than one way to fit multiple actuator arms on a drive platter.

    The first is obvious. Mount the arms in opposite corners of the drive chassis. This would increase the length of the chassis a little bit, but if Quantum has no trouble selling 5.25" "Bigfoot" drives that don't perform any better than standard 3.5" drives, I don't see how they could have trouble selling 3.5" drives that are an inch longer if they perform significantly better.

    Keep in mind that the drive actuator has a fixed pivot point, which only allows the heads to travel in an arc across the surface. This arc can only sweep across a fixed portion of the total surface area. No drive head carrier arm known to me could ever sweep even close to 180 degrees of the 360 degree circle, always leaving room for at least one additional actuator arm. In the drives I've disassembled, there is more than enough unswept disk surface space to fit at least two more arms. With a synchronous controller, and/or arms made with an arc cut out of their inner span, there is enough platter space for at least four arms, perhaps as many as six, without a redesign of current platters.

    The second solution is just as simple, but would decrease the storage capacity of the drive. I see it as just another tradeoff to be faced in industrial design. Significantly increase the spindle diameter of the drive. This would allow multiple shorter arms without the potential for them to knock into each other.

    Sketch it out on paper with a compass. You'll see that what I'm proposing makes perfect sense.

    Further, regarding fragmentation, if the drive arms are run from a synchronous controller, and the heads are fed a data stream that is composed of striped bits of the data being written, fragmentation is no more of an issue than it already is. (Think a moment about a two-armed drive. The arms are synchronized, so when head 0 on arm 0 is at point A, head 0 on arm 1 is always exactly 180 degrees away, at the exact same distance from the spindle. One file divided into two data streams gets written simultaneously in two locations, just as it gets read back later. The file is intentionally fragmented into two equal parts, but because these parts are always accessed simultaneously, no detrimental fragmentation is induced.)
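
    A rough model of what extra arms buy on the rotational component alone (idealized: it ignores seek time and controller overhead, and assumes any arm can service the request, i.e. the asynchronous variant):

```python
# Average rotational latency with k arms spaced evenly around the platter.
def avg_rotational_latency_ms(rpm, arms):
    revolution_ms = 60_000 / rpm
    return revolution_ms / (2 * arms)     # expected wait for the nearest head

for arms in (1, 2, 4):
    print(f"{arms} arm(s): {avg_rotational_latency_ms(7200, arms):.2f} ms average")
# At 7200 RPM: 4.17 ms with one arm, 2.08 ms with two, 1.04 ms with four.
```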

    I cannot think of any other computer subsystem that is so ready, so ripe for parallelization as the hard drives, yet no one seems to be thinking of parallel computing here.

    As stated before, all rights reserved, and manufacturers are still invited to contact me at mturley@mindspring.com for reasonable licensing terms for use of these ideas. :)

  • The problem with disks is not transfer times but seek times, which are still in the millisecond range. This fact feeds the rather energetic use of intelligent prefetching in all applications to hide seek latency.
  • That's because SCSI drives are aimed at higher-end users / servers.

    The physical side of SCSI drives is exactly the same as with IDE: new stuff is put into higher-end drives first, and higher-end drives have SCSI interfaces.

    The reason why SCSI drives are at 10K RPM and IDE is only at 7200 is that people who want/need/can afford 10K drives have SCSI anyway.

  • Get away with you! There were never any hard drives fitting that description in 1970! In 1970 most computers still only had magnetic tape for mass storage, I don't think even drum storage was widespread yet. The biggest hard drives in the mid-1980's were about that size and capacity.

    How old are you? I suppose you think everything before 1980 was in black and white, etc. etc.
    Consciousness is not what it thinks it is
    Thought exists only as an abstraction
  • I lack the hard (no pun intended) data to back this up, but I think that hard-drive raw transfer rates have been rising and access times have been falling. I will agree that they haven't gotten better anywhere near as quickly as other parts of a computer system though.

    That being said, the only thing I can see as being an eventual competitor to hard-drives is the drawing board idea of holographic storage. Whether or not any products based on it ever make it to market is anyone's guess.

  • The biggest problem I have with my computers or other computers is that the dang hard drives start dying!!! I still have a 386-40 and it works great, except that it's had 5 different hard drives in its life!! But then again I have a 400MB Maxtor that has run every day since it was the biggest drive on the market... It's all about quality. A lot of people hate Western Digital and I know why; I see a lot of them die compared to other drives. All that is needed is for companies to make good drives. Solid state storage is coming, but we'll have to hold on.

  • Couple of corrections:

    1) RAID 0 can either be striping, doing writes a chunk at a time to each disk in the stripe set, or concatenation, which is slapping multiple disks together to make one big virtual disk and then writing so you fill up the disks in sequence.

    2) RAID 5 can only tolerate losing 1 disk in its set. The volume can continue in "degraded" mode where for every read off of the volume, the system has to calculate the parity bit to make up for the lost disk. If you throw in one hot spare you could lose 2 disks total. (The hot spare takes over for the 1st disk to go..the volume needs a period of time to sync the data/parity to it, but then continues normally. You can then lose 1 more disk..this will put you in "degraded" mode.)

    3) I'm not sure where you've gotten the impression that RAID 5 is excellent for performance. Calculating parity bits for *every* read is the antithesis of performance. RAID 5 (or "poor man's mirroring") is good for systems that need the data redundancy but cannot afford the cost of doubling their disk usage for mirroring. However, no competent SysAdmin would use the terms "performance" and "RAID 5" in the same sentence.

  • I've used the solid state devices shown here: http://www.mti.com/database/index.htm with a good deal of success. They have higher throughput than the above-mentioned, low latency, and have a battery and disk inside to help avoid unexpected loss of data when power fails... All in all, for ultra-heavily hit Oracle tablespaces, these are a life-saver. Peter
  • Programmers have been getting lazy. Digital assistants have no hard drives and they have programs that are functional. Their programs are not as bloated as the ones on the desktop.

    A nice laptop with 64 megabytes of memory and half a gig of solid state disk would be enough. Especially if the VM scheme just mapped programs into memory. And if you need a game, pop in a 64 meg solid state floppy with the game on it.

    Now all we need is reasonably priced flash disk memory.
  • Um...Apple didn't invent SCSI or FireWire...
  • If you have ever taken apart a hard drive (some of us have) you'd see that the arms are much too long to fit more than one in there without whacking each other. And if you start to say "make the arms smaller", stop and count to five: if the arm can't reach the inside of the disk, it's going to have a rough time reading the boot sector. And with more than one head you're going to have lots of fragmentation.
  • The chief reason why hard disks are "slow" (ever used a tape as your only means of storage? or how about floppies?) is the RPM of the disk itself. 10,000 RPM drives are much faster than 5400 RPM drives, etc. If you really want to increase the performance of your hard disk, use a RAID system, either EIDE or SCSI. SCSI will be much faster, but DMA/33-66 makes EIDE relatively fast. When more FireWire drives come out we'll probably see nice and fast FireWire RAID setups too, which would be a lot faster than SCSI because all the drives run peer-to-peer, which means you don't have to bother with the SCSI controller spreading data around. Until we start seeing affordable RAM drives we're going to be stuck with our non-solid-state magnetic media. BTW, my bottleneck has usually been my internet connection; even my cable needs a modem for upstream. (Some rough RPM arithmetic is below.)
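
    To put rough numbers on the RPM point (pure arithmetic, not vendor specs): average rotational latency is half a revolution, so spindle speed alone only buys you a few milliseconds:

    # Rough rotational-latency arithmetic; real drives add seek and transfer time on top.
    def avg_rotational_latency_ms(rpm):
        """On average the sector you want is half a revolution away."""
        return (60_000.0 / rpm) / 2

    for rpm in (5400, 7200, 10_000):
        print(f"{rpm:>6} RPM: {avg_rotational_latency_ms(rpm):.2f} ms average rotational latency")
    # 5400 -> ~5.56 ms, 7200 -> ~4.17 ms, 10000 -> ~3.00 ms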
  • I can't believe how many people responded with completely irrelevant replies about "faster hard drives". Improvements in memory & CPU have made my P2-750 ~1000x faster than my 8088, but the hard drive, built on the same basic technology, is only ~10x faster. Even if you throw memory at the bottleneck, you're still ~100x slower than on-chip access. That's why future discontinuous improvements will be made by increasing local memory... but wait, isn't that how the brain works? The processing and memory use the very same architecture! This isn't so far off really; the hypercomputer (http://www.starbridgesystems.com/Pages/about.html) will be CPU-instruction-set + memory programmable on the fly. By the way, check out http://www.sunsite.auc.dk/FreakTech/Hard.html for coming technology.
  • A while ago IBM issued a press release claiming they've developed the "holy grail" of memory:
    - SRAM speed (10ns access)
    - DRAM density
    - nonvolatility

    If they realize the promise of this type of memory, many computers may not need hard disks at all in 10 years, when 4Gb chips are being manufactured.
  • DRAM access is 50ns or so, and that is just the time to get the data off the chip once the chip has been asked for it.

    In reality, it takes more than 120ns to do this:
    1) Miss the cache
    2) Ask for the data on the bus
    3) Get the data off of the chip
    4) Ship the data back to the CPU
    5) stick it in a register

    6ns is roughly the clock period of the memory bus on the very fastest SDRAM interfaces (a 133MHz clock is actually closer to 7.5ns per cycle). A fast bus clock does NOT imply a matching access time: it only means you can move a new piece of data across the bus every cycle. (A rough budget of the steps above is sketched below.)
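
    To make that concrete, here's a back-of-the-envelope miss-path budget (the individual numbers are illustrative guesses, not measurements from any real system):

    # Hypothetical latency budget for a cache miss; numbers are illustrative only.
    miss_path_ns = {
        "miss the cache":                10,  # detect the L1/L2 miss
        "ask for the data on the bus":   30,  # bus arbitration + address cycle
        "get the data off the chip":     50,  # the DRAM access itself
        "ship the data back to the CPU": 25,  # data cycles back across the bus
        "stick it in a register":         5,  # cache-line fill / load completes
    }
    total = sum(miss_path_ns.values())
    print(f"total miss latency ~ {total} ns")   # ~120 ns end to end, vs. a ~1-2 ns CPU cycle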
  • Gallium arsenide? I don't think so.. maybe something like indium phosphide.

    There was an interesting piece recently on sciencedaily [sciencedaily.com] about some research being done on electro-optical effects in photo-reactive crystals.

    If you like that link, Salamo has a (very brief) page [uark.edu] describing some of his other work as well.

  • Actually, SCSI drives tend to be quite a bit
    faster than IDE drives in their media transfer rate and average seek times. Also, note that the top-end scsi drives now spin at 10Krpm, while IDE is only now getting to 7200. With ATA/66, the interface speed isn't actually all that different.

    There is also the point that SCSI can generally queue commands to multiple devices, while ATA is just getting into that.
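
    A toy illustration of why command queuing matters (this is just the classic elevator idea, not the actual SCSI tagged-queuing protocol): if the drive can see several outstanding requests at once, it can service them in positional order rather than arrival order and waste far less time seeking.

    # Toy model: seek cost as head travel across logical block addresses.
    def seek_distance(start, requests):
        pos, travelled = start, 0
        for lba in requests:
            travelled += abs(lba - pos)
            pos = lba
        return travelled

    outstanding = [9000, 120, 7600, 300, 8800]   # hypothetical queued LBAs
    print("FIFO order:  ", seek_distance(5000, outstanding))
    print("Sorted order:", seek_distance(5000, sorted(outstanding)))  # far less head travel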
  • It's not that SCSI is expensive, it's that IDE is disgustingly cheap since Intel decided to stick it in their chipsets. If they had picked SCSI as the interface-of-choice, we'd all be packing UW storage now, and there'd be people evangelizing about some other format, say Firewire.

    Of course, (as mentioned elsewhere on this thread) the point is the actual storage speed, not the interface speed.

    -Chris
  • It's a shame that so many parts of PCs are still made with mechanical parts. And in my experience, those are the ones that fail. The hard drive, floppy drive, plethora of fans, and removable storage of all kinds. And these are all the technologies that "slow us down" on the path to Uber-computerdom. What we need is standardized, solid-state memory devices. Two kinds: a fast one that can be assembled into huge storage devices, and a cheap one that can be used to transfer data between machines. Forget floppies, CDs, Zip disks, and hard drives. We'll have the storage catch up with the memory, cpu, and system bus.

    The questions of course are: reliability, performance, and durability. Even old 30-pin SIMMs in huge arrays attached to a constant power source would probably be faster (overall, not bursting, unless there was a really nice cache on there) than 5400 RPM drives, but how long is that going to last? And if you build a twenty-meg portable "disk" out of this, what happens when you drop it on the floor?

    Oh well. I think I'll go drool over that Atlas 10K now... :)

    -Chris
  • Just talking out my ass here, but haven't we been hearing great things every now and then about holographic crystal storage? Didn't they want those out within 5 years? I can't remember.

    As a second note, with faster/cheaper/denser RAM coming all the time, RAM-based static hard drives might come to replace magnetic media. (Might.) I know I want one. :)


    -Crutcher
  • You can only make a mechanical device operate so fast before the laws of physics start destroying it - or making it prohibitively expensive and large.

    If we want to get out of the slow storage days, solid-state mass storage devices will have to become practical for everyone, or some of the other fancy stuff that only exists in labs right now will have to become a marketplace reality. Hard disks as we know them appear to be coming to the end of their evolution, at least in terms of speed.
  • by molozonide ( 32559 ) on Saturday June 26, 1999 @06:21AM (#1831474) Homepage
    Lithium niobate is one crystalline candidate for holographic data storage. However, it is too expensive to compete with more conventional types of data storage.

    There's ongoing research ( http://nanonet.rice.edu/research/boyd_res.html [rice.edu]) to use photopolymers as a cheaper holographic medium. If such research comes to fruition, you're more likely to see CD-like disks coated with a holographic layer than the typical science fiction "data crystal."

    Other problems w/ holograms:
    - materials are not totally transparent, so "cubes" might be out of the question

    - materials must be chemically resistant to the atmosphere (e.g. oxidation, humidity), which might necessitate a coating. Such a coating might have deleterious effects on the substrate's optical properties.

    - storing a hologram changes the structure of the crystal, which limits data density and beam penetration.

    - multiple holograms can be stored at the same location by rotating the crystal, but each hologram attenuates the possible intensity of subsequent holograms in that location.

    - holographic "efficency" is a funciton of the difference between the refractive indices of the substrate components. photopolymers have a very small range of refractive indices as oppossed to inorganic crystals.

    Overall the medium might not be rewritable, but a high density, long lasting storage medium would be ideal for back-ups.

    Anyway, it's been awhile since I "got out of the business" of chemistry, but this is what I remember.

  • Have you ever seen a hard drive from 1970? I have one. It has a silver aluminum case, which is rather oddly shaped; it's 6"x5"x6" and weighs quite a lot. It has three plastic screws with heads that are about 3/4" in diameter. The data cable looks like those brown unshielded, translucent cables that can be found in inkjet printers (the thing connected to the place where you put the ink). It has a capacity of 100MB. It was taken out of an IBM cluster controller (if you know what that is [was]).

    Hard drives have come a long way since 1970. Now CARS. CARS have gone nowhere since 1970.
  • I heard about this stuff years ago too... it seems like a few times a year you hear about some really cool new storage device and then that's the end of it except for occasional blurbs. If I remember correctly (and that's far from a certainty), the storage space was determined by the number of facets and the clarity of the crystal medium, making high-facet diamonds the most likely prospects. The problem here, obviously, is that flawless diamonds are themselves expensive, and having them cut into any large number of facets isn't cheap either. The other thing was data retrieval. They could pump data in easily enough (don't ask me how... it was something to do with lasers, but what these days isn't something to do with lasers?) but retrieving it was hard. Like I said, I don't remember the details, but the analogy I remember was trying to read from RAM without knowing the memory address, which could well be totally inaccurate.

    Anyway, I'd really like to see some new storage medium; using magnetic storage in the decade of the CD just seems wrong somehow. I remember a short segment on the news a while back (like 4-6 months ago) about some college lab developing a new storage medium using a silicon needle a few atoms wide to read and write on little penny-sized wafers; it was supposed to hold 100 times as much as a CD and be available in wristwatch-sized readers for music in like 5 years... but I haven't heard a word since then. Perhaps somebody wants to post some URLs for the more interesting data storage projects that never made it? (Or are still struggling toward making it.)
    Dreamweaver
  • http://www.spie.org/web/oer/june/jun98/opcwg.html

    I read the above link about two years ago, and was amazed by the technology. Quarter-sized 100TB holographic discs. They're supposed to have something viable in another year; here's hoping!
  • Basically, the ideal would be to have all computer parts run at core processor speed, but this just isn't going to happen. We have registers, L1 cache, L2 cache (plus L3 cache on some K6-III boards), main memory, hard drives...
    Sure, it would be ideal for all storage to be as fast as the processor, but the current cached model works pretty well provided you have enough RAM and an OS that makes good use of it...
    If you really need more performance, use RAID...
    www.cdrom.com doesn't seem to be disk limited; that's because it has beaucoup memory and RAID arrays.
    Besides, hard drives today are about 10x as fast as those of 10 years ago (10x more throughput, with initial latency cut about 4x), which isn't too bad in mechanical engineering terms... (A rough sketch of why the caching helps so much is below.)
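
    A crude model of why "enough RAM and an OS that uses it" goes so far (the latency numbers are illustrative, not benchmarks): the average cost of an access is dominated by how often you fall through to the disk.

    # Average access time across a toy two-level hierarchy: RAM page cache over a disk.
    def avg_access_ns(hit_rate, ram_ns=100, disk_ns=8_000_000):   # ~8 ms per disk access
        return hit_rate * ram_ns + (1 - hit_rate) * disk_ns

    for hit_rate in (0.90, 0.99, 0.999):
        print(f"hit rate {hit_rate:.1%}: avg ~ {avg_access_ns(hit_rate) / 1000:.1f} microseconds")
    # Going from 90% to 99.9% cache hits cuts the average cost by roughly two orders of magnitude.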
  • Here is a web page to check out: they basically have a 90-gig solid-state hard drive that looks like it could plug into a PII processor slot. You would just need a funky motherboard or an expansion card; I would prefer it on the motherboard, or through an AGP slot, for the fastest throughput. Of course, if they could attach it just like a CPU and get the fastest throughput, the storage would run at almost the same speed as the CPU, nearly eliminating the need for RAM.

    http://www.accpc.com/tcapstore.htm
