Thanks For the Memories: Touring the Awesome Random Access of Old (hackaday.com) 89
szczys writes: The RAM we use today is truly amazing in all respects: performance, reliability, price; all have been optimized to the point that you can consider memory a solved problem. Equally fascinating is the meandering path we've taken over the last half century to get here: drums, tubes, mercury delay lines, dekatrons, and core memory. They're still as interesting as the day electrons first ran through their circuits. Perhaps most amazing are the cost and complexity, both of which make you wonder how they ever managed to be used in production machines. But here's the clincher: despite being difficult and costly to manufacture, they were all very reliable.
Re: (Score:1)
Fucking drone bros trying to slashvertise
Re: aight (Score:1)
You're funny.
Re: (Score:2)
It's ok, 4.2BSD still works fine.
Re: (Score:2)
This is a great article in my opinion, definitely "News for Nerds" and definitely Slashdot-worthy.
DRAM (Score:3, Insightful)
Re: (Score:3, Interesting)
At the very least in the next few years the form factors will change from DIMMs to perhaps HBM stacked on-die and fiberoptic DIMMs.
We don't need fiber-optic links for memory, because it is easy to provide a very wide path between the CPU and the RAM. They would provide literally no benefit.
Re:DRAM (Score:5, Interesting)
Re:DRAM (Score:4, Interesting)
Optical interconnection is very efficient, with good fidelity and low interference, but the ease of manufacturing complex interconnects and making many permanent connections still lags behind electrical/metal wiring. On top of that, the drivers and receivers are bulky and dissipate too much power. Before photonics can replace electronics, there will have to be a revolution in miniaturization and low power for fiber drivers/receivers, as well as mass-production processes analogous to today's board etching and component soldering.
Re: (Score:2)
Advancing the manufacturability of photonic technology is one of the goals of the federally funded AIM Photonics program: http://www.aimphotonics.com/ [aimphotonics.com]
Re: (Score:1)
I'm sure this US government project will be every bit as significant as the Japanese government's Fifth Generation program.
You know, from back in the days of Japan, Inc.
Re: DRAM (Score:1)
Re:DRAM (Score:4, Informative)
Yes and no, if you are thinking about your computer or single server sitting beside you. If you are thinking of next gen data centers and virtualized servers, being able to supply a bus to RAM over a fiber link is very interesting.
Infiniband essentially already does this: it's a high speed, low latency interconnect which provides remote memory access and works over copper or fiber. It's only moderately low latency though, since the speed of light is limited.
Every meter adds about 3 nanoseconds of latency; more like 4, since the signals travel below light speed. The cable doesn't have to be long before you pay a serious latency penalty compared to local RAM. And that's before the protocol and networking overhead, which for Infiniband (which is designed for low latency for supercomputers) is still around 500 ns, dwarfing the RAM latency itself.
There have in fact been systems built to essentially create virtual machines with distributed memory like this. The trouble is they suck, because the code is written assuming fast access to RAM.
Big supercomputer codes, which essentially have to deal with this all the time, use MPI, so they can account for the high-latency (i.e. 500 ns) transfers and schedule them long in advance.
Re: DRAM (Score:1)
Re: (Score:2)
If I recall, though, Infiniband is still copper.
It apparently has both modes, but fiber is only really useful for long runs. It's speed over distance where it really wins, not raw speed over short runs.
Re: (Score:2)
Copper is faster - 4ns/m vs 5ns/m for fiber optic. Copper has other difficulties with long runs, since it's very dependent on the impedance of the transmission line remaining consistent, susceptible to interference and so on, so fiber is just more practical for long runs. People seem to prefer fiber for short interconnects in the DC because it seems high tech, but copper is faster.
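The per-meter figures above are easy to sanity-check. A minimal sketch, assuming a velocity factor of roughly 0.8c for copper (matching the 4 ns/m above) and about 0.67c for fiber (refractive index ~1.5); the function name and constants are illustrative only:

```python
# Rough one-way propagation latency for a cable run.
# Assumed velocity factors: ~0.8c for copper, ~0.67c for fiber.

C = 0.299792458  # speed of light in vacuum, meters per nanosecond

def propagation_ns(meters, velocity_factor):
    """One-way latency in nanoseconds for a run of the given length."""
    return meters / (C * velocity_factor)

# A 10 m run, versus a typical ~100 ns local DRAM access:
copper = propagation_ns(10, 0.80)   # roughly 42 ns
fiber = propagation_ns(10, 0.67)    # roughly 50 ns
```

Either way, tens of meters of cable already costs a sizable fraction of a local DRAM access, before any protocol overhead.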
Re: (Score:2)
Re: (Score:2)
Careful there. You are at risk of reinventing the mainframe.
Re: (Score:2)
This is one of the key features of HP's much-hyped "machine": direct, on-chip optical interconnects.
(Frankly HP's marketing continues to suck: when I read the hype about the "machine" I just yawned. But then I ran into a friend who had moved to HP to work on it and learned that it has some pretty cool features. I guess things like optical interconnect and massive shared address space just don't make interesting news stories.)
Some stuff on their optical work: http://www.hpl.hp.com/techrepo... [hp.com]
Overhaul the whole architecture (Score:3)
Read up on what IBM did for their AS/400 architecture. Very brilliant work.
Difficult? (Score:1)
Core memory is difficult to manufacture, but flash drives are no big deal? Sure....
Re:Difficult? (Score:4, Interesting)
Apparently back in the day, core memory actually was a bit difficult to manufacture. Back in the 1960s, they wired the cores by hand, and that apparently required quite a bit of manual dexterity. The first digital computer I ever saw was SWAC at UCLA (https://en.wikipedia.org/wiki/SWAC_(computer)): 256 37-bit words of cathode-ray-tube memory. I have no real idea how it worked, but I recall that on days when it chose to work, there were a bunch of CRTs displaying an 8x8 matrix of zeros and ones. The professor in charge of the thing told us, in his rather thick European accent, that they were trying to augment the CRT memory with core, but that so far his graduate student(s) hadn't been able to thread the core wires well enough.
Re: (Score:3)
I actually have some core memory sitting in a box. I have no recollection how many bits, but it isn't all that many. When you (carefully) remove the cover, you see how small the individual elements are. The stories I heard back at the time were of Asian women with small fingers threading the wires through these things by hand.
Re:Difficult? (Score:4, Interesting)
Re: (Score:3)
Re: (Score:3)
There's a video in the linked article that shows how core memory is made, and it is indeed a finicky manual process, manipulating things too small even for pliers.
RAM problem solved? (Score:1)
Yes, the RAM we use today is amazing. But it is never fast enough, never big enough and never cheap enough. Never. RAM access is still the performance bottleneck in many applications.
Look ahead (Score:4, Interesting)
Re: (Score:2)
Actually - pretty much all of it is made from rare earths mined by slaves in Africa, old-fashioned whips-and-chains unpaid slaves.
Its Cosmic (Score:2, Interesting)
One thing we have forgotten about is the impact (literally) of cosmic rays on memory cells. The old core planes were not very sensitive to the effect of an alpha particle from space zipping through the little donuts and changing values. But solid state RAM certainly was. In the old days, funny things would occasionally happen as a result of cells having their stored values flipped from 0 to 1 or back. These were rare random events that became more frequent as the amount of memory and its density grew. High
Re:Its Cosmic (Score:5, Informative)
Alpha particles from space do not penetrate the building that the computer is in, nor the computer case, nor the plastic package of the memory devices themselves.
Alpha particle bit errors are caused by alpha particle emissions within the memory cell itself, as there is a minute amount of radioactive material in all semiconductor devices, including memory.
However, radiation-induced bit errors are seldom actually caused by package alpha particle emissions. The more likely space-related culprit is neutron flux. It has been found that DRAM bit error rates increase dramatically with altitude, and that solar events increase the rates further.
Fun stuff.
not neutrons, it's cosmic rays (Score:5, Informative)
The bit flips aren't due to neutrons, but to other high energy particles (cosmic rays).
And modern memory design tolerates this quite well (on chip EDAC, for instance).
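As a rough illustration of the on-chip EDAC idea, here is a sketch of the classic Hamming(7,4) single-error-correcting code in Python. Real DRAM ECC uses wider SECDED codes (e.g. 72 bits protecting 64), but the mechanism is the same: parity bits whose failing checks spell out the position of the flipped bit.

```python
# Hamming(7,4): 4 data bits protected by 3 parity bits.
# Layout (1-indexed positions): p1 p2 d1 p3 d2 d3 d4.

def encode(data4):
    """Encode 4 data bits (list of 0/1) into a 7-bit codeword."""
    d1, d2, d3, d4 = data4
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def correct(word7):
    """Return the codeword with any single flipped bit repaired."""
    w = word7[:]
    # Each syndrome bit re-checks one parity group; together they
    # spell out the 1-indexed error position in binary (0 = no error).
    s1 = w[0] ^ w[2] ^ w[4] ^ w[6]
    s2 = w[1] ^ w[2] ^ w[5] ^ w[6]
    s3 = w[3] ^ w[4] ^ w[5] ^ w[6]
    pos = s1 + 2 * s2 + 4 * s3
    if pos:
        w[pos - 1] ^= 1
    return w

# A cosmic-ray-style single bit flip is detected and repaired:
codeword = encode([1, 0, 1, 1])
flipped = codeword[:]
flipped[4] ^= 1
assert correct(flipped) == codeword
```

Hardware ECC does the same parity math combinationally on every access, so a single upset per word is invisible to software.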
But that's not the dominant source of errors any more. It's more things like electrical noise (signal integrity is another term). As you reduce the size of the device holding a single bit, you're starting to get down to where the thermal noise is a significant fraction of the "signal" (i.e. the presence or absence of charge in that bit storage).
Re: (Score:3)
Solved Problem?!?!?? (Score:1, Insightful)
Are you fucking kidding me? RAM is the SLOWEST part of the entire execution chain, and it's ORDERS OF MAGNITUDE slower than even the slowest CPU cache.
Memory busses are horribly inefficient, slow, and subject to data corruption unless you take extensive measures to prevent it (which slow them down even more).
Even assuming we use the entire ~12MB of L3 cache as instruction cache (which is impossible really unless those instructions don't require any data access, which is utterly implausible), any modern CPU
Re: (Score:3, Informative)
slower than even the slowest CPU cache.
CPU cache IS MEMORY, so how can it be slower than itself?
And before I quote the rest of your trash and make you look stupid, let's point out the most important fact here:
You can have RAM that runs as fast as CPU cache, you just can't afford it. That CPU with 12MB of cache is expensive mostly BECAUSE OF THE 12MB OF CACHE, and the difficulty of getting that much RAM to operate reliably at those speeds results in low yields and increased consumer cost.
Even assuming we use the entire ~12MB of L3 cache as instruction cache (which is impossible really unless those instructions don't require any data access, which is utterly implausible), any modern CPU can blow through that in much less time than it takes a DDRx memory controller to set up a RAS.
Did you seriously imply that a Xeon CPU can blow thro
Re: (Score:2)
Glass doesn't have magnetic domains. It still has iron, which is the main component of rust.
Re: (Score:2)
Re: (Score:2)
CPU cache IS MEMORY
Yes, but it's not RAM. The rest of your post is based on that misunderstanding.
Random Access Memory (RAM) can only access one memory address at a time. If you tried to use it for CPU cache, you would have to store the address of each cached word (collection of bytes) along with the value, and then search through every address individually every time. It would be extremely slow, a massive performance downgrade.
Cache memory is different. When accessing it the CPU presents an address, and the cache memory insta
Re: (Score:3)
"Random Access Memory (RAM) can only access one memory address at once. "
Random Access Memory can access memory randomly. The term includes no prior restraint on the number of ports. Internal caches are (typically) specifically SRAM, where you will note the "RAM" portion of the acronym.
"If you tried to use it for CPU cache, you would have to store the address of each cached word (collection of bytes) along with the value"
Caches already store the address of each cached word. This is the Tag RAM. Although onc
Re: (Score:2)
The term includes no prior restraint on the number of ports.
True, but DRAM in particular doesn't actually allow simultaneous access from two ports. It multiplexes. SRAM can do two simultaneous accesses, but isn't widely used in PCs due to cost. Good point though, I did forget about dual port RAM.
Caches already store the address of each cached word. This is the Tag RAM.
Not in the modern sense of the word "CPU cache". Tag RAM is just normal RAM, so while fast it does have to be searched by polling every used address. So if you had a 256 word tag RAM, you would have to make 256 accesses to check every address in your cache. Rather inefficien
Re: (Score:2)
" SRAM ... isn't widely used in PCs due to cost. "
All on-die CPU caches are SRAM.
" it does have to be searched by polling every used address"
Entries in a tag RAM are mapped the same way the entries in the related caches are mapped. The tag RAM gives you the complete address of the stored cache line, which is then compared to the desired address to determine whether the desired address is actually in the cache.
"a CPU cache today uses content-addressable memory"
At most, selection of a line out of a set from an ass
Re: (Score:2)
You seem to be talking about fully associative caches. Most CPUs do not use these as general memory caches. Tag lookup is trivial, but then you have 1 to N ways/sets to look through. The 1-set "direct mapped" caches are very simple. A 4-way set-associative cache only has to check 4 possible entries, and that's not expensive; 4- and 8-way set-associative caches are extremely common. A fully associative cache just isn't practical unless the amount of memory is relatively small, that's why on a PC w
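A toy model of what the parent describes: a 4-way set-associative lookup, where the set index comes straight from the address, so only the 4 ways of one set are ever compared (hardware compares them in parallel). The geometry and the FIFO replacement policy here are arbitrary choices for illustration:

```python
# Hypothetical 4-way set-associative cache model, for illustration only.
NUM_SETS = 64       # assumed geometry
WAYS = 4
LINE_BITS = 6       # 64-byte lines
INDEX_BITS = 6      # log2(NUM_SETS)

def split(addr):
    """Split an address into (tag, set index, line offset)."""
    offset = addr & ((1 << LINE_BITS) - 1)
    index = (addr >> LINE_BITS) & ((1 << INDEX_BITS) - 1)
    tag = addr >> (LINE_BITS + INDEX_BITS)
    return tag, index, offset

# Each set holds up to WAYS (tag, data) pairs.
cache = [[] for _ in range(NUM_SETS)]

def lookup(addr):
    """Return cached data on a hit, or None on a miss."""
    tag, index, _ = split(addr)
    for stored_tag, data in cache[index]:  # at most WAYS comparisons
        if stored_tag == tag:
            return data
    return None

def fill(addr, data):
    """Insert a line, evicting the oldest way when the set is full (FIFO)."""
    tag, index, _ = split(addr)
    ways = cache[index]
    if len(ways) >= WAYS:
        ways.pop(0)
    ways.append((tag, data))
```

The point is that the index bits eliminate the search: a lookup touches one set, never the whole cache.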
Re: (Score:2)
The cache array minus the tag and content elements is just an array of SRAM optimized for use as a cache with features like multiple ports. Before all cache became integrated, external SRAM was used for cache and it became increasingly specialized for cache applications as time went on.
Many processors with cache (or all of them?) including Intel X86 processors boot into a mode where the SRAM in the cache can be used in place of external memory until the external memory can be configured for access. Many p
Re: (Score:2)
You can have RAM that runs as fast as CPU cache, you just can't afford it.
Not really. Cache is only fast because of the huge number of traces. Unfortunately, for larger caches you have to choose between two things: an n^2 increase in traces as cache size grows, or reducing n at the expense of collisions, which means your "memory" is now lossy. That also ignores the nasty increase in latency as cache gets larger. After a certain point, well within the range of current memory sizes, cache will have higher latency than DRAM.
Re: (Score:2)
The latency has not significantly improved. They are not putting massive amounts of cache on these processors to improve bandwidth, but to reduce the penalty due to latency.
Bandwidth can be increased usi
Uhh whaaaa? (Score:4, Funny)
But here's the clincher: despite being difficult and costly to manufacture, they were all very reliable.
That was kind of built into the design spec. The guy who built unreliable memory (you know, the one who came up with the Alzheimer Machine), well, he went bankrupt pretty quickly, right alongside the guy who invented a horseless carriage that only needed a horse half the time.
Re: (Score:1)
szczys is like a 15-year-old who doesn't really understand the world very well yet, and when he makes these amazing discoveries that we've all known about for years ... somehow some idiot at Slashdot ... who is entirely unqualified to be anywhere near a 'news for nerds' site, posts it to the front page, because they aren't nerds and don't actually know that this shit is rather common knowledge among ACTUAL nerds and geeks.
If you look at his submission history, it's something you would expect from your high school te
Re:Uhh whaaaa? (Score:5, Interesting)
On the other hand the relationship between a system's reliability and the reliability of the system's components isn't one-to-one. You can build unreliable systems out of reliable components, and more surprisingly, you can build reliable systems out of unreliable components. That latter principle is the basis for the Internet, which provides end-to-end communication that is more reliable than any of the possible paths between those endpoints.
Every component is unreliable to some degree; as it becomes increasingly reliable it moves from unusable, to something you can work around, to something whose potential failure you can ignore in many applications.
Re: (Score:2)
and more surprisingly, you can build reliable systems out of unreliable components. That latter principle is the basis for the Internet
Closer to home, it's also the basis for human bodies. I remember the head of the anatomy department at my school seemed obsessed with the notion that human bodies were perfect. Then again, she was a religious nut. I never challenged her on this, of course (there's no point arguing with a religious nut), but merely thought to myself, "OK, if we're so perfect, then explain disease to me, and explain aging..."
I also agree that having unreliable components helps to "build in" redundancy. The human body, fo
Re: (Score:2)
Re: (Score:2)
"Reliability" is a bit iffy. Many were prone to failure for various reasons. For example the drivers for the memory would be complex, be on separate cards from the memory, involve vacuum tubes, and so forth. The actual memory and it's ability to reliably store and retrieve the data may be good though.
There's also the issue of that memory not having to store so many bits. If you've got 1 failure per billion bit accesses then it seems pretty solid if you've only got 1000 bits but it would be useless if yo
The best statistic in the article (Score:2)
Most of the technologies TFA describes were experimental, but that mainframe mainstay, core memory, came down in cost during its run from $1/bit to one cent. Even at that price, a memory stick would cost some $80 million per gigabyte and would require a room of its own. I love living in the future.
Re: (Score:2)
You misunderstand: *ALL* the memory systems in the article were used in commercial systems. You can't call any of them "experimental" after that.
Drum Memory (Score:3)
I've heard some really interesting stories about Drum memory.
Since you had to wait for your desired read/write location to rotate under the head, and since this was back in the CISC era when the execution time of every instruction was known and published, developers would "optimize" their memory accesses by placing their data on the drum in the exact spot that each byte would be under the head when the instruction to read or write it was processed.
Even more interestingly, at least one platform made use of this architecture by using an assembly language that effectively had a goto at the end of every instruction. That way you could scatter your code on the drum to perform the same optimization.
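This trick is sometimes called "optimum programming" (the IBM 650's SOAP assembler automated it): since each instruction's execution time was known, you place the next word at whatever drum angle will be under the head when execution finishes. A hypothetical sketch; the drum geometry and instruction timings here are invented for illustration:

```python
# Toy model of optimum drum placement. Geometry and timings are made up.
WORDS_PER_TRACK = 50   # assumed slots per drum revolution
WORD_TIME_US = 96      # assumed time for one word to pass the head

def next_slot(current_slot, exec_time_us):
    """Drum slot under the head once the current instruction finishes."""
    words_passed = -(-exec_time_us // WORD_TIME_US)  # ceiling division
    return (current_slot + words_passed) % WORDS_PER_TRACK

# Lay out a program whose per-instruction times are known in advance,
# so each instruction is read with zero rotational delay:
exec_times = [96, 192, 96, 480]
slot = 0
layout = []
for t in exec_times:
    layout.append(slot)
    slot = next_slot(slot, t)
# layout is [0, 1, 3, 4]
```

A naive sequential layout would instead wait nearly a full revolution after any instruction slower than one word time.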
I saw another story about early rotating-drum systems being put on USN ships. Supposedly the first time they tried to turn at sea the navigators discovered the hard way that the designers failed to account for the gyroscopic property of having a large rotating metal drum on board...
The story of Mel, a real programmer. (Score:1)
https://www.cs.utah.edu/~elb/folklore/mel.html
Re: (Score:2)
My first computer, ever.. (Score:2)
Re: (Score:2)
One oatmeal container oughtta be enough for anyone. -Gill Bates
Re: (Score:2)
Re: (Score:1)
All in jest. Your statement reminded me of Bill's (alleged) statement. Technically he may have been somewhat right, but vendors find ways to bloat things up out of sloth, corner-cutting, or goofy standards, such as including an entire library or driver even when you only need 3% of its functionality.
Do miss the days.. (Score:2)
Miss the days of the 6502 and Z80 with 8K of 2102 ram. Could heat my sandwich while I wrote assembler code on the card...
Got the Look (Score:1)
The Mercury Delay unit is certainly visually impressive. You can really impress and/or scare somebody with a gizmo like that on or near your desk.
programming on stone tablets (Score:2)