Data Storage Hardware

Thanks For the Memories: Touring the Awesome Random Access of Old (hackaday.com)

szczys writes: The RAM we use today is truly amazing in all respects: performance, reliability, and price have all been optimized to the point that you can consider memory a solved problem. Equally fascinating is the meandering path we've taken over the last half century to get here: drums, tubes, mercury delay lines, dekatrons, and core memory. They're still as interesting as the day electrons first ran through their circuits. Perhaps most amazing are the cost and complexity, both of which make you wonder how they ever managed to be used in production machines. But here's the clincher: despite being difficult and costly to manufacture, they were all very reliable.
  • DRAM (Score:3, Insightful)

    by John Smith ( 4340437 ) on Thursday March 10, 2016 @05:49AM (#51670741)
    I suspect we may be nearing the end of DRAM, though only time will tell. DRAM is old and really a bottleneck these days; something is likely going to replace it. At the very least in the next few years the form factors will change from DIMMs to perhaps HBM stacked on-die and fiberoptic DIMMs. That would be my guess, anyway.
    • Re: (Score:3, Interesting)

      by drinkypoo ( 153816 )

      At the very least in the next few years the form factors will change from DIMMs to perhaps HBM stacked on-die and fiberoptic DIMMs.

      We don't need fiber-optic links for memory, because it's no trouble to provide a very wide path between the CPU and the RAM. They would provide literally no benefit.

      • Re:DRAM (Score:5, Interesting)

        by Rob MacDonald ( 3394145 ) on Thursday March 10, 2016 @06:40AM (#51670825)
        Yes and no, if you are thinking about your computer or single server sitting beside you. If you are thinking of next gen data centers and virtualized servers, being able to supply a bus to RAM over a fiber link is very interesting. Think of a server component or appliance you install into a rack, then fiber-link to your hosts to supply more RAM. There is a limit to the number of RAM slots on a server, a physical limit. Fiber links would open up the ability to have external RAM that doesn't actually need a slot, and they take considerably less space. If this were an option, I suspect you'd have a fiber trunk coming from the host. This could actually be genius.
        • Re:DRAM (Score:4, Interesting)

          by Anonymous Coward on Thursday March 10, 2016 @07:16AM (#51670873)

          Optical interconnects are very efficient, with good fidelity and low interference, but the ease of manufacturing complex interconnects and creating multiple permanent connections is still lacking compared to electrical/metal ones. In addition, drivers/receivers are bulky and dissipate too much power. Before photonics can replace electronics, there will have to be a revolution in miniaturisation and low power for fiber drivers/receivers, as well as mass-production processes analogous to today's board etching and component soldering.

        • I know. The main barrier to this is how fast you could effectively clock the fiber. If you could push it even to a few GHz it would beat copper. The question, of course, is whether your silicon-to-fiber converter could run that fast. If it's possible it could solve a lot of issues, especially since HyperTransport and similar copper-based tech is nearing its limit.
        • Re:DRAM (Score:4, Informative)

          by serviscope_minor ( 664417 ) on Thursday March 10, 2016 @08:21AM (#51671011) Journal

          Yes and no, if you are thinking about your computer or single server sitting beside you. If you are thinking of next gen data centers and virtualized servers, being able to supply a bus to RAM over a fiber link is very interesting.

          Infiniband essentially already does this: it's a high-speed, low-latency interconnect which provides remote memory access and works over copper or fiber. It's only moderately low latency, though, since the speed of light is limited.

          Every meter adds about 3 nanoseconds of latency (more like 4, because the signals travel slower than light in a vacuum). You won't need a long cable before you add a serious latency penalty compared to local RAM. And that's before the protocol and networking overhead, which for Infiniband (which is designed for low latency on supercomputers) is still around 500 ns, dwarfing the RAM latency itself.

          There have in fact been systems built to essentially create virtual machines with distributed memory like this. The trouble is they suck, because the code is written assuming fast access to RAM.

          Big supercomputer codes, which essentially have to deal with this all the time, use MPI, so they can plan around the high-latency (i.e. 500 ns) transfers and schedule them long in advance.
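
To put rough numbers on that distance penalty, here is a back-of-the-envelope sketch; the per-meter delay and protocol overhead are the estimates quoted in this thread, and the local DRAM latency is an assumed ballpark, not a measurement:

```python
# Rough latency model for remote RAM over a fiber link. The constants
# are the ballpark figures from this thread (assumptions, not specs).

PROP_DELAY_NS_PER_M = 4      # sub-luminal signal: ~4 ns per meter
PROTOCOL_OVERHEAD_NS = 500   # quoted Infiniband protocol overhead
LOCAL_DRAM_LATENCY_NS = 60   # assumed local DRAM access time

def remote_ram_latency_ns(cable_m: float) -> float:
    """A request/response round trip crosses the cable twice."""
    return PROTOCOL_OVERHEAD_NS + 2 * cable_m * PROP_DELAY_NS_PER_M

for meters in (1, 10, 100):
    total = remote_ram_latency_ns(meters)
    print(f"{meters:4d} m cable: {total:6.0f} ns total, "
          f"{total / LOCAL_DRAM_LATENCY_NS:4.1f}x local DRAM")
```

Even a one-meter cable is dominated by the protocol overhead; by 100 meters the wire itself costs more than ten local DRAM accesses.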

          • If I recall, though, Infiniband is still copper.
            • If I recall, though, Infiniband is still copper.

              It apparently has both modes, but fiber is only really useful for long runs. It's speed over distance where it really wins, not raw speed over short runs.

              • by lgw ( 121541 )

                Copper is faster: 4 ns/m vs 5 ns/m for fiber optic. Copper has other difficulties with long runs, since it's very dependent on the impedance of the transmission line remaining consistent, is susceptible to interference, and so on, so fiber is just more practical for long runs. People seem to prefer fiber for short interconnects in the DC because it seems high-tech, but copper is faster.

                • by Bengie ( 1121981 )
                  Fiber in DCs is also nice for isolating surges and ground differentials. Copper may be faster for signal propagation, but the complexity of processing the signal at high bandwidth adds much more latency than the signal does over short runs.
        • by xanthos ( 73578 )

          Careful there. You are at risk of reinventing the mainframe.

        • This is one of the key features of HP's much-hyped "machine": direct, on-chip optical interconnects.

          (Frankly HP's marketing continues to suck: when I read the hype about the "machine" I just yawned. But then I ran into a friend who had moved to HP to work on it and learned that it has some pretty cool features. I guess things like optical interconnect and massive shared address space just don't make interesting news stories.)

          Some stuff on their optical work: http://www.hpl.hp.com/techrepo... [hp.com]

    • Read up on what IBM did for their AS/400 architecture. Very brilliant work.

  • by Anonymous Coward

    Core memory is difficult to manufacture, but flash drives are no big deal? Sure....

    • Re:Difficult? (Score:4, Interesting)

      by vtcodger ( 957785 ) on Thursday March 10, 2016 @10:36AM (#51671529)

      Apparently, back in the day, core memory actually was a bit difficult to manufacture. Back in the 1960s they wired the cores by hand, and that apparently required quite a bit of manual dexterity. The first digital computer I ever saw was SWAC at UCLA (https://en.wikipedia.org/wiki/SWAC_(computer)): 256 37-bit words of cathode-ray-tube memory. I have no real idea how it worked, but I recall that on days when it chose to work, there were a bunch of CRTs displaying an 8x8 matrix of zeros and ones. The professor in charge of the thing told us in his rather thick European accent that they were trying to augment the CRT memory with core, but that so far his graduate student(s) hadn't been able to thread the core wires well enough.

      • I actually have some core memory sitting in a box. I have no recollection how many bits, but it isn't all that many. When you (carefully) remove the cover, you see how small the individual elements are. The stories I heard back at the time were of Asian women with small fingers threading the wires through these things by hand.

        • Re:Difficult? (Score:4, Interesting)

          by GerryGilmore ( 663905 ) on Thursday March 10, 2016 @11:22AM (#51671801)
          Yep, I can verify that. I worked for Data General, and the original Nova series used all core memory. The "core stacks" arrived in Mass from Asia and were then mounted on the memory card itself. I took the cover off a dead one once, and it looked like velcro until I got it under a magnifying lamp. Even then, I was amazed at the dexterity it must have taken!
        • I have a 2" diameter jar with half a megabit of core memory, but it's not strung yet. They are extra-tiny cores. My coworker saved the core from a PDP-11 computer, I think it's 16k bytes. One board for the cores, one for the driver circuits.
      • There's a video in the linked article that shows how core memory was made, and it is indeed a finicky manual process, manipulating things too small for even pliers.

  • by Anonymous Coward

    Yes, the RAM we use today is amazing. But it is never fast enough, never big enough and never cheap enough. Never. RAM access is still the performance bottleneck in many applications.

  • Look ahead (Score:4, Interesting)

    by SamuelProgin ( 2843799 ) on Thursday March 10, 2016 @06:49AM (#51670827)
    Saying that a problem is solved is risky. Remember that physics was considered solved shortly before Einstein et al.! The future might reshape our perception, with for instance RAM and ROM convergence: https://en.wikipedia.org/wiki/... [wikipedia.org]
  • Its Cosmic (Score:2, Interesting)

    by Anonymous Coward

    One thing we have forgotten about is the impact (literally) of cosmic rays on memory cells. The old core planes were not very sensitive to the effect of an alpha particle from space zipping through the little donuts and changing values. But solid state RAM certainly was. In the old days, funny things would occasionally happen as a result of cells having their stored values flipped from 0 to 1 or back. These were rare random events that became more frequent as the amount of memory and its density grew. High

    • Re:Its Cosmic (Score:5, Informative)

      by EmagGeek ( 574360 ) on Thursday March 10, 2016 @07:29AM (#51670897) Journal

      Alpha Particles from space do not penetrate the building that the computer is in, nor the computer case, nor the plastic package of the memory devices themselves.

      Alpha particle bit errors are caused by alpha particle emissions within the memory cell itself, as there is a minute amount of radioactive material in all semiconductor devices, including memory.

      However, radiation-induced bit errors are seldom actually caused by package alpha particle emissions. The more likely space-related culprit is neutron flux. It has been found that DRAM bit error rates increase dramatically with altitude, and that solar events increase the rates further.

      Fun stuff.

      • by Anonymous Coward on Thursday March 10, 2016 @08:54AM (#51671097)

        The bit flips aren't due to neutrons, but to other high energy particles (cosmic rays).
        And modern memory design tolerates this quite well (on-chip EDAC, for instance).

        But that's not the dominant source of errors any more. It's more things like electrical noise (signal integrity is another term). As you reduce the size of the device holding a single bit, you're starting to get down to where the thermal noise is a significant fraction of the "signal" (i.e. the presence or absence of charge in that bit storage).

      • As I recall, if the memory itself was packaged in ceramic rather than plastic, there was potentially a higher error rate from that particular source, because of the greater chance of radioactive material in the ceramic used.
  • by Anonymous Coward

    Are you fucking kidding me? RAM is the SLOWEST part of the entire execution chain, and it's ORDERS OF MAGNITUDE slower than even the slowest CPU cache.

    Memory buses are horribly inefficient, slow, and subject to data corruption unless you take extensive measures to prevent it (which slow them down even more).

    Even assuming we use the entire ~12MB of L3 cache as instruction cache (which is impossible really unless those instructions don't require any data access, which is utterly implausible), any modern CPU

    • Re: (Score:3, Informative)

      by BitZtream ( 692029 )

      slower than even the slowest CPU cache.

      CPU cache IS MEMORY, so how can it be slower than itself?

      And before I quote the rest of your trash and make you look stupid, let's point out the most important fact here:

      You can have RAM that runs as fast as CPU cache, you just can't afford it. That CPU with 12MB of cache is expensive mostly BECAUSE OF THE 12MB OF CACHE, and the difficulty in getting that much RAM to operate reliably at those speeds results in low yields and increased consumer cost.

      Even assuming we use the entire ~12MB of L3 cache as instruction cache (which is impossible really unless those instructions don't require any data access, which is utterly implausible), any modern CPU can blow through that in much less time than it takes a DDRx memory controller to set up a RAS.

      Did you seriously imply that a Xeon CPU can blow thro

      • by AmiMoJo ( 196126 )

        CPU cache IS MEMORY

        Yes, but it's not RAM. The rest of your post is based on that misunderstanding.

        Random Access Memory (RAM) can only access one memory address at a time. If you tried to use it for CPU cache, you would have to store the address of each cached word (collection of bytes) along with the value, and then search through every address individually every time. It would be extremely slow, a massive performance downgrade.

        Cache memory is different. When accessing it the CPU presents an address, and the cache memory insta

        • "Random Access Memory (RAM) can only access one memory address at once. "

          Random Access Memory can access memory randomly. The term includes no prior restraint on the number of ports. Internal caches are (typically) specifically SRAM, where you will note the "RAM" portion of the acronym.

          "If you tried to use it for CPU cache, you would have to store the address of each cached word (collection of bytes) along with the value"

          Caches already store the address of each cached word. This is the Tag RAM. Although onc

          • by AmiMoJo ( 196126 )

            The term includes no prior restraint on the number of ports.

            True, but DRAM in particular doesn't actually allow simultaneous access from two ports. It multiplexes. SRAM can do two simultaneous accesses, but isn't widely used in PCs due to cost. Good point though, I did forget about dual port RAM.

            Caches already store the address of each cached word. This is the Tag RAM.

            Not in the modern sense of the word "CPU cache". Tag RAM is just normal RAM, so while fast it does have to be searched by polling every used address. So if you had a 256 word tag RAM, you would have to make 256 accesses to check every address in your cache. Rather inefficien

            • " SRAM ... isn't widely used in PCs due to cost. "

              All on-die CPU caches are SRAM.

              " it does have to be searched by polling every used address"

              Entries in a tag RAM are mapped the same way the entries in the related caches are mapped. The tag RAM gives you the complete address of the stored cache line, which is then compared to the desired address to determine whether the desired address is actually in the cache.

              "a CPU cache today uses content-addressable memory"
              At most, selection of a line out of a set from an ass

            • You seem to be talking about fully associative caches. Most CPUs do not use those as general memory caches. Tag lookup is trivial, but then you have 1 to N ways/sets to look through. The 1-set "direct mapped" caches are very simple. A 4-way set-associative cache only has to check 4 possible entries, and that's not expensive; 4- or 8-way set-associative caches are extremely common. A fully associative cache just isn't practical unless the amount of memory is relatively small, that's why on a PC w
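
To make the lookup being described concrete, here is a minimal sketch of a set-associative cache; the 4-way geometry and sizes are illustrative assumptions, not any particular CPU's:

```python
# Minimal 4-way set-associative cache lookup. The set index is decoded
# straight from the address bits, so a lookup compares at most WAYS tags.
# All sizes here are illustrative assumptions, not a real CPU's geometry.

LINE_BYTES = 64   # bytes per cache line
NUM_SETS = 64     # number of sets (power of two)
WAYS = 4          # lines per set

# cache[set_index] holds up to WAYS (tag, line_data) pairs
cache = [[] for _ in range(NUM_SETS)]

def split_address(addr: int):
    """Decode an address into (tag, set index, byte offset)."""
    offset = addr % LINE_BYTES
    set_index = (addr // LINE_BYTES) % NUM_SETS
    tag = addr // (LINE_BYTES * NUM_SETS)
    return tag, set_index, offset

def fill(addr: int, data) -> None:
    """Insert a line, evicting the oldest in the set if full (FIFO here)."""
    tag, set_index, _ = split_address(addr)
    ways = cache[set_index]
    if len(ways) == WAYS:
        ways.pop(0)  # simple FIFO eviction, just for the sketch
    ways.append((tag, data))

def lookup(addr: int):
    """Return the cached line on a hit, or None on a miss."""
    tag, set_index, _ = split_address(addr)
    for stored_tag, data in cache[set_index]:  # at most WAYS comparisons
        if stored_tag == tag:
            return data
    return None
```

The point is that the "search" is bounded by the associativity, not by the cache size.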

      • by Bengie ( 1121981 )

        You can have RAM that runs as fast as CPU cache, you just can't afford it.

        Not really. Cache is only fast because of the huge number of traces. Unfortunately, for larger caches you have to choose between two things: an n^2 increase in traces as your cache size increases, or reducing n at the expense of collisions, which means your "memory" is now lossy. That also ignores the nasty increase in latency as cache gets larger. Past a certain point, well within the range of current memory sizes, cache will have higher latency than DRAM.

      • by Agripa ( 139780 )

        The one and only thing slower than memory access is disk access, and even there we are closing the gap. Memory has not gotten appreciably faster in a decade, unless of course you ask marketing people.

        DDR3's base rate is 800 MT/s. DDR4's base rate is 2133 MT/s ... yeah, 2.7x is not appreciably faster or anything.

        The latency has not significantly improved. They are not putting massive amounts of cache on these processors to improve bandwidth but to reduce the penalty due to latency.

        Bandwidth can be increased usi
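
A quick sketch of that bandwidth-versus-latency point; the CAS timings below are typical JEDEC values, assumed here purely for illustration:

```python
# Transfer rates jumped from DDR3-800 to DDR4-2133, but the absolute CAS
# latency in nanoseconds barely moved. CL values are typical JEDEC
# timings, used as illustrative assumptions.

modules = {
    # name: (transfers per second, CAS latency in I/O clock cycles)
    "DDR3-800":  (800e6, 6),
    "DDR4-2133": (2133e6, 15),
}

for name, (transfers_per_s, cas_cycles) in modules.items():
    io_clock_hz = transfers_per_s / 2           # DDR: 2 transfers per clock
    bandwidth_gb_s = transfers_per_s * 8 / 1e9  # 64-bit (8-byte) bus
    cas_ns = cas_cycles / io_clock_hz * 1e9
    print(f"{name}: {bandwidth_gb_s:5.1f} GB/s peak, CAS {cas_ns:4.1f} ns")
```

Peak bandwidth goes up roughly 2.7x while the CAS latency stays around 14-15 ns, which is exactly why the caches keep growing.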

  • Uhh whaaaa? (Score:4, Funny)

    by Dunbal ( 464142 ) * on Thursday March 10, 2016 @07:34AM (#51670905)

    But here's the clincher: despite being difficult and costly to manufacture, they were all very reliable.

    That was kind of built into the design spec. The guy who built unreliable memory (you know, the one who came up with the Alzheimer Machine) - well, he went bankrupt pretty quickly, right alongside the guy who invented a horseless carriage that only needed a horse half the time.

    • szczys is like a 15-year-old who doesn't really understand the world well yet, and when he makes these amazing discoveries that we've all known about for years ... somehow some idiot at Slashdot ... who is entirely unqualified to be anywhere near a 'news for nerds' site posts it to the front page, because they aren't nerds and don't actually know that this shit is rather common knowledge among ACTUAL nerds and geeks.

      If you look at his submission history, it's something you would expect from your high school te

    • Re:Uhh whaaaa? (Score:5, Interesting)

      by hey! ( 33014 ) on Thursday March 10, 2016 @09:45AM (#51671289) Homepage Journal

      On the other hand the relationship between a system's reliability and the reliability of the system's components isn't one-to-one. You can build unreliable systems out of reliable components, and more surprisingly, you can build reliable systems out of unreliable components. That latter principle is the basis for the Internet, which provides end-to-end communication that is more reliable than any of the possible paths between those endpoints.

      Every component is unreliable to some degree; as it becomes increasingly reliable it moves from unusable, to something you can work around, to something whose potential failure you can ignore in many applications.
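
As a toy illustration of building reliability from unreliable parts (the per-attempt loss rate below is made up for the example):

```python
# Toy model: a lossy link plus retransmission yields end-to-end delivery
# far more reliable than any single attempt. The 10% per-attempt loss
# rate is a made-up assumption for illustration.

loss_per_attempt = 0.10

for attempts in range(1, 6):
    delivered = 1 - loss_per_attempt ** attempts
    print(f"{attempts} attempt(s): {delivered:.4%} delivery probability")
```

Five tries over a 90%-reliable link already gets you to 99.999% delivery.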

      • by Dunbal ( 464142 ) *

        and more surprisingly, you can build reliable systems out of unreliable components. That latter principle is the basis for the Internet

        Closer to home, it's also the basis for human bodies. I remember the head of the anatomy department at my school seemed to be obsessed with the notion that human bodies were perfect. Then again, she was a religious nut. I never challenged her on this, of course (there's no point arguing with a religious nut), but merely thought to myself, "OK, if we're so perfect, then explain disease to me, and explain aging..."

        I also agree that having unreliable components helps to "build in" redundancy. The human body, fo

      • I wish I had a supply of mod points.
    • "Reliability" is a bit iffy. Many were prone to failure for various reasons. For example the drivers for the memory would be complex, be on separate cards from the memory, involve vacuum tubes, and so forth. The actual memory and it's ability to reliably store and retrieve the data may be good though.

      There's also the issue of that memory not having to store so many bits. If you've got 1 failure per billion bit accesses, it seems pretty solid if you've only got 1000 bits, but it would be useless if yo

  • Most of the technologies TFA describes were experimental, but that mainframe mainstay, core memory, came down in cost during its run from $1/bit to one cent. Even at that price, a memory stick would cost $10 million per gigabit and would require a room of its own. I love living in the future.

    • You misunderstand: *ALL* the memory systems in the article were used in commercial systems. You can't call any of them "experimental" after that.

  • by T.E.D. ( 34228 ) on Thursday March 10, 2016 @09:54AM (#51671327)

    I've heard some really interesting stories about Drum memory.

    Since you had to wait for your desired read/write location to rotate under the head, and since this was back in the CISC era when the execution time of every instruction was known and published, developers would "optimize" their memory accesses by placing their data on the drum at the exact spot that would be under the head when the instruction to read or write it executed.

    Even more interestingly, at least one platform made use of this architecture by using an assembly language that effectively had a goto at the end of every instruction. That way you could scatter your code on the drum to perform the same optimization.
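
A toy sketch of that placement trick; the drum geometry and timings here are invented for illustration (real machines, such as the IBM 650 with its aptly named Symbolic Optimal Assembly Program, did this with published instruction times):

```python
# Toy model of drum-latency-aware placement: given a known instruction
# execution time, put the next word where the head will be when the
# instruction finishes. The geometry and timings are invented here.

WORDS_PER_TRACK = 50    # assumed words per drum track
WORD_TIME_US = 0.096    # assumed time for one word to pass the head

def best_next_address(current_addr: int, exec_time_us: float) -> int:
    """Drum address under the head exactly when execution completes."""
    words_passed = round(exec_time_us / WORD_TIME_US)
    return (current_addr + words_passed) % WORDS_PER_TRACK

# An instruction at word 7 that takes 0.48 us should have its successor
# (or operand) placed 5 words downstream:
print(best_next_address(7, 0.48))  # -> 12
```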

    I saw another story about early rotating-drum systems being put on USN ships. Supposedly the first time they tried to turn at sea the navigators discovered the hard way that the designers failed to account for the gyroscopic property of having a large rotating metal drum on board...

  • ..was built on perfboard, and had 256B (yes, not a typo: 256 bytes) of static RAM memory. The next incarnation of it had a whole 4096B (yes, again, not a typo: 4096 bytes) of static RAM. When you're writing everything in machine code, it's amazing how much you can get done. Even when I graduated up to a Z80-based computer running CP/M, and had a whole 56kB of program space to work with (on a 16-bit address bus; the other 8kB was the OS.. the entire OS, mind you!), you could accomplish an amazing amount of f
    • by Tablizer ( 95088 )

      built on perfboard, and had 256B (yes, not a typo: 256 bytes) of static RAM memory...When you're writing everything in machine code, it's amazing how much you can get done.

      TFA: from the book "Build Your Own Working Digital Computer" in 1968. The main program storage was an oatmeal container covered in foil...

      One oatmeal container oughtta be enough for anyone. -Gill Bates

      • Are you mocking me, or are you just pointing out something funny?
        • by Tablizer ( 95088 )

          All in jest. Your statement reminded me of Bill's (alleged) statement. Technically he may have been somewhat right, but vendors find ways to bloat things up out of sloth or corner cutting or goofy standards, such as including an entire library or driver even if you only need 3% of its functionality.

  • Miss the days of the 6502 and Z80 with 8K of 2102 RAM. Could heat my sandwich while I wrote assembler code on the card...

  • The Mercury Delay unit is certainly visually impressive. You can really impress and/or scare somebody with a gizmo like that on or near your desk.

  • Not memory exactly, but an advertisement for programming software in a 1980s magazine: a picture of a stone tablet and a hammer and chisel. The tablet had a partial listing of FOR/NEXT statements, with the tagline "Still programming the old-fashioned way?" One thing is certain: stone tablets will last millennia!

"An idealist is one who, on noticing that a rose smells better than a cabbage, concludes that it will also make better soup." - H.L. Mencken

Working...