What Will Be The Next Generation Of RAM?
Wister285 asks: "I've been hearing a lot about new RAM technologies. Two of the main new forms seem to be RDRAM and DDR SDRAM. Little known to many people right now, though, is MRAM (magnetic RAM, which stores data magnetically, more like a hard drive than conventional electronic memory, so its contents are never erased until the computer says so, even through power-offs). MRAM seems to be the best form of RAM, but it might not be out for another one or two years. With these three choices, what is the next-generation RAM?"
What about Flash ram memory (Score:1)
Um... (Score:2)
- A.P.
--
"One World, one Web, one Program" - Microsoft promotional ad
Bandwidth's great... (Score:2)
- A.P.
--
"One World, one Web, one Program" - Microsoft promotional ad
RAM? How about implementation? (Score:2)
Intel is struggling with 800MB/s? Sun's latest is at 1.9GB/s and SGI's is around 1.6GB/s. Maybe we should optimize what we've got more, without obsoleting everything we already have, like RAM.
Sun hardware goes so far as to allow you to use SIMMs from the SparcStation 20 in the Ultra 2 and even the latest Ultra 60 and Ultra 80 systems!
Obviously, RAM isn't quite the bottleneck that Intel and others would lead you to believe.
Re:personally (Score:2)
And my uncle who runs a donkey farm says the thing really hauls ass.
ducks and runs for cover
--
Re:any programmer worth his salt (Score:2)
--
Re:Do we really want RAM that isn't erased? (Score:2)
Cheers,
da Lawn
Re:Do we really want RAM that isn't erased? (Score:1)
Well, there is actually a function in the BIOS for this, originally intended for 286s to get out of protected mode, because the CPU had to actually be rebooted. I remember playing with it back in my assembly days to see what I could do with it.
Nothing to say (Score:1)
Re:Do we really want RAM that isn't erased? (Score:1)
What would happen if a virus was loaded into your memory and you wanted to shut down and wipe the virus from memory, but your memory was permanent? I don't see that as a good thing at all.
int main() { char *c=malloc(4096); int fd=open("virus.bin", O_RDONLY); read(fd, c, 4096); return 0; } /* needs stdlib.h, fcntl.h, unistd.h -- the "virus" just sits in a heap buffer, never executed */
OH NO! MY COMPUTER HAS A VIRUS IN MEMORY! AAAH!
Here's a free clue for the clueless: memory is useless unless something refers to it. If you "reboot" a computer without powering down, the RAM isn't cleared (until the BIOS walks it). Not that it matters, since until something actually jumps to that memory location it never gets executed. What'll happen to your "virus in static RAM!"? It'll get overwritten by w0rd 2005 when it uses 3-4 gigs of system memory, of course. Duh.
Do they actually TEACH you anything in school anymore?
As for the people that think that powering their computer down is safe... Hah! Only if you're sure nobody gets to it for 20 minutes. If you use something more sensitive than a modern motherboard, you can get bits off a chip for quite a while. Not that that's practical yet (it's not portable, so they'd have to get your SIMMs to a lab within 10-15 minutes), but don't expect that to last forever.
At least memory isn't as bad as hard drives... when you overwrite memory it basically stays overwritten. Drives have some nasty ghosting of previous data that can be seen at high resolutions.
Besides, any security-conscious app rewrites "critical" memory anyway. None of the OSs I've used zero memory before allocating it to a new process... it's actually quite entertaining to malloc a few megs and read through them. memset(0) is so simple. Learn it. Love it.
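A minimal sketch of that habit, for the curious (illustrative only; a truly paranoid program would also have to keep the compiler from optimizing the final memset away):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        char *secret = malloc(64);
        if (!secret) return 1;

        strcpy(secret, "hunter2");   /* pretend this is a password or key */
        printf("using secret: %s\n", secret);

        memset(secret, 0, 64);       /* scrub it before the page goes back to the OS */
        free(secret);
        return 0;
    }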
--Dan
Re:The dizzying pace of change (Score:1)
Mmm, magnetic core. Core wars. Non protected mode. God, those were the days.
Anyone have a good place to send the kids to show them what CORE really was? Most of them have no idea what drum memory was...
--Dan
Re:Cray had it right (Score:2)
Um...any efforts at making SRAM more economical would have the side effect of making DRAM more economical. Each SRAM bit is implemented with 4-6 transistors of different types, whereas each DRAM bit is implemented with one n-type MOSFET. That's a huge decrease in size, and is why people put up with awkward timing schemes, address strobes, and pesky refreshes to use DRAM.
Technology... (Score:1)
It's just a matter of looking at the past. Everyone thought that the old add-in cards of RAM that you put in your ISA slots to add another meg of RAM (remember those days? *shudder*) would last forever. The cards would get bigger and the on-board chips would get larger, but nobody could really have said that SIMMs would take over until they came out and suddenly appeared everywhere. I think the next generation of RAM will be the one that nobody sees right now, the one that is in development in the basement of some company, just waiting to be released. Sorta like DUST PUPPY!
Cray had it right (Score:2)
Resounding YES! (Score:1)
No Windows, no problem. Same old story...
It wouldn't be hard to park Linux "nicely" within a few milliseconds, running on power from the capacitors in the power supply just long enough to do this. When the machine is re-powered, Linux can simply reinit devices a la Two Kernel Monte and then pick up where it left off. That and journalling filesystems equals reliability heaven.
Re:Next advances wont be the memory cells (Score:3)
One of the ways you can avoid having a problem like this is to use a log-structured filesystem, which simply writes the data in one long loop around the device, rather than always starting at the beginning of the device. The exact details escape me, but the general idea is correct.
One of the new Linux filesystems, JFFS (journalling flash filesystem) does this, I believe. It was accidentally added to the 2.4 development kernel recently when one of the developers working on a flash driver submitted a patch to Linus, and forgot to remove the JFFS code from his patch... (Please, no flamewar about reiserfs here, there was enough on lkml already).
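For the curious, the circular-log idea looks roughly like this. It's a toy sketch of the general concept, not JFFS's actual on-flash format:

    #include <stdio.h>

    #define LOG_SIZE 4096               /* pretend flash device, in bytes */
    static unsigned char flash[LOG_SIZE];
    static size_t head = 0;             /* next write position in the circular log */

    /* Append a record at the head of the log instead of rewriting it in place.
       When the head reaches the end, wrap around to the start (a real
       log-structured FS would garbage-collect stale records first). */
    void log_append(const void *data, size_t len)
    {
        const unsigned char *p = data;
        for (size_t i = 0; i < len; i++) {
            flash[head] = p[i];
            head = (head + 1) % LOG_SIZE;   /* one long loop around the device */
        }
    }

    int main(void)
    {
        log_append("hello", 5);
        log_append("world", 5);
        printf("head is now at offset %zu\n", head);
        return 0;
    }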
sounds like "core" memory (Score:2)
I wonder if this is where the name "core" came from with respect to *NIX systems.
--
PRAM (Score:2)
You write information on paper, then stick it inside the computer. Later, when you need to retrieve it, you quickly grab the paper and read it out loud. Fast and cheap solution to everyday computing needs.
Re:Do we really want RAM that isn't erased? (Score:1)
This means that RAM needs consistency bits, like the ones used on hard drives to tell fsck that things were shut down cleanly.
Someone mentioned that you could have the BIOS auto-detect when you purposely shut things down, or hit the reset button. Well, what happens if the BIOS is buggy and that function doesn't work? It's much better to have a bit that says that something claims that memory is in a valid state for shutdown than something that specially flags that you want to erase memory on startup.
This would cause problems in power failure situations, but that could easily (and cheaply) be solved by having a capacitor bank 'UPS' that could keep the machine running for about 5 seconds or so while the OS went through the motions to put itself in a hibernating state.
Re:Do we really want RAM that isn't erased? (Score:1)
Umm... I think you should read about how OSes actually work before you post again and embarrass yourself further.
Programs free up memory all the time and it's cleared by the OS and given to other programs. That's part of the virtual memory subsystem. It's been that way for years and years. The only commonly used OS that didn't do that that I know of in the past 15 years is MS-DOS.
Re:Do we really want RAM that isn't erased? (Score:1)
Well.... When you boot up, your computer starts up from ROM, reads some stuff from disk into memory, runs that, etc. So it doesn't really matter what was left in the RAM from the previous session; it gets overwritten. And viruses prefer to infect stuff on your hard drive anyway. Why bother with the memory?
Actually, now that I think of it, if you can count on the ram contents being unchanged after a power cycle, you could just more or less continue where you left off when you turned off the computer. Sort of like normal hibernation, except way faster, because you don't have to save to/restore from disk. Boot up in like a second!
Re:What goes around comes around (Score:1)
3) You want to get some sleep and the thing is making noise!
About 90% of the time when I boot up it is because the computer was off, not because I rebooted it. If instead I could just hibernate it, that would save me a lot of time.
I think quite often people just leave their computer on all the time because it takes so much time for the thing to boot up again... With MRAM and an appropriate BIOS + OS support, by the time the monitor would finally be fully awake, the system would be up and running.
Actually it will power off completely. Imagine the improvement in notebook battery life if you could just completely turn it off whenever you didn't need it for a few minutes, without having to wait for ages till it booted up again. (Even save-to/restore-from-disk takes some time.)
Scientific American article on MRAM (Score:2)
Not only that (Score:1)
Drool....
Some guy who works across the hall tried using optical interconnects, and got the performance of main memory up to nearly L2-cache levels. Xeon, we don't need no stinkin' Xeon
Re:Optical (Score:1)
Re:PRAM (Score:1)
Re:Do we really want RAM that isn't erased? (Score:3)
Look at it this way: you could program the BIOS to always erase the memory on POST, *unless* there was a power failure (modern ATX supplies can already detect this, I believe).
So when you reboot on purpose, everything will be erased, but when power fails, you'll lose nothing!
This also makes creating a hibernation function much, much easier - no more need for a large image file on your hard disk; just let the BIOS know it should *not* erase memory contents after the next reboot.
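Roughly the decision being described, sketched as pseudo-BIOS logic (entirely hypothetical flag names; no real firmware interface is implied):

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical flags a BIOS might consult at POST time. */
    static bool power_fail_flag   = false;  /* set by the ATX supply on power loss       */
    static bool hibernate_request = false;  /* set by the OS before an orderly power-off */

    void post_memory_step(void)
    {
        if (power_fail_flag || hibernate_request)
            printf("POST: leaving MRAM contents intact, resuming previous session\n");
        else
            printf("POST: deliberate reboot, clearing memory\n");
    }

    int main(void)
    {
        hibernate_request = true;
        post_memory_step();
        return 0;
    }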
Re:PRAM (Score:1)
PRAM is pr0n RAM. It's the next generation because it accesses your pr0n in current memory really, really fast. When you need it. You could probably even throw in some encryption to hide it from family or coworkers.
Short- and long-term predictions will be different (Score:1)
In the long term, however, we will see a transition to bus-based memory, such as RDRAM. (I personally don't think RDRAM will ever fly; some other incarnation of the same idea will likely spring up a few years down the road.)
Abstracting your core memory behind a memory bus gives the advantage that your chipset can talk to any kind of memory that supports the bus standard--it could be of any speed, implemented with any technology (for instance, holographic memory). Its disadvantage--and few people seem to realize this!--is that it's quite slow compared to SDRAM, where the chipset (and the CPU) has direct access to the data lines coming from the RAM.
To compensate for this inadequacy, the makers of Rambus RAM pumped the RAM bus's clock rate to some absurd speed--I recall hearing 400MHz mentioned. They should have realized that memory technology isn't sufficiently advanced yet, and left well enough alone.
NUMA? (Score:2)
I'd say eventually the industry is going to have to give up the idea of expandable RAM, and change the entire architecture of the motherboard so that the CPU and main memory are moved off it, onto a daughter card, like the graphics card is now.
The above, in particular, is extremely interesting. I can see it happening. Indeed, it would fit current trends. We had 30-pin SIMMs forever, but now you're lucky if you keep your memory across two CPU generations. So move all the fast-changing stuff onto a single expansion card, and keep the more stable PCI bus and basic I/O functions on a backplane/mainboard.
I don't think traditional expandable RAM has to go away completely, though. I think the solution would be further extending the NUMA (Non-Uniform Memory Access) concept of cache memory. We've already got very-high-speed L1 and L2 cache. Say this CPU+high-speed-memory card you propose has N ultra-high-speed on-die L1 cache, N*16 super-high-speed off-die L2 cache, and N*(2^10) of very-high-speed, CPU-local RAM. Then have an expandable main memory system of merely high-speed RAM, slower, but expandable and much larger, say, N*(2^12). To fill in some example numbers:
128 KB L1 cache
2 MB L2 cache
128 MB local RAM
512 MB main RAM
That way, you get the best of both worlds.
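A back-of-the-envelope way to see what such a hierarchy buys you is to compute the average access time across all the levels. The hit rates and cycle counts below are invented for illustration, not taken from any real system:

    #include <stdio.h>

    /* Hypothetical hit rates and latencies for the proposed hierarchy:
       L1 -> L2 -> CPU-local RAM -> expandable main RAM -> disk.
       All numbers are illustrative only. */
    int main(void)
    {
        double hit[]     = {0.90, 0.95, 0.98, 0.999};  /* hit rate at each level          */
        double latency[] = {2.0, 10.0, 60.0, 200.0};   /* lookup cost in CPU cycles       */
        double disk_cycles = 5e6;                      /* miss all the way out to disk    */

        double amat = 0.0, reach = 1.0;                /* probability of reaching a level */
        for (int i = 0; i < 4; i++) {
            amat  += reach * latency[i];               /* you always pay the lookup cost  */
            reach *= (1.0 - hit[i]);                   /* fraction that misses, goes deeper */
        }
        amat += reach * disk_cycles;

        printf("average memory access time: %.1f cycles\n", amat);
        return 0;
    }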
Static RAM is much bigger physically (Score:2)
The problem is, static RAM (SRAM) fundamentally takes up more space than dynamic RAM (DRAM). SRAM uses six transistors per memory cell (bit) where DRAM uses one little thingy (I forget what it's called) per cell. This means a much larger package. (It also means lower yields, since SRAM is harder to make.) After a while, trace lengths and the speed of light are going to get in your way.
Optical (Score:2)
Hmmmm. While the zero cross-talk is a big benefit, many optical systems actually have higher latencies than their copper counterparts. More bandwidth, but slower response time. This isn't a factor in networks, so fiber is the medium of choice, but I imagine it would be a factor inside your computer.
I'm confused (Score:2)
I'm confused. They use static RAM as cache memory, right? Because it is faster, right? So how can it also be slower?
Yup (Score:2)
Dead on. Unix geeks often refer to a program loaded into memory as "in core" for this reason.
More NUMA (Score:2)
I don't either, except some basic theory. But basically, good NUMA systems eliminate the duplication of data that traditional PC cache memory systems use. The hardware and OS know that some memory is faster than other memory, and put more frequently used pages in the faster memory.
NUMA is traditionally used in very large multiprocessor systems (IBM, Sun, SGI) and clusters. SGI does a lot of this in their high-end number crunchers. Memory on the local processor node is significantly faster to access than memory on another node, even with a 2 GByte/sec backplane.
True, it does require OS-level support for this, and the circuitry is more complex, and blah blah blah, but I think it is worth it for systems where expandability is important (e.g., high-end workstations, servers). I don't see it happening in the home PC market, though. You might as well stick with the "onboard" very-fast-ram then.
And indeed, that would be the biggest problem with this scheme--the latencies.
Well, yes, but without this system, the latencies are even worse. If the data isn't in cache or "onboard" RAM, then it must be on disk. And even 70ns FPM SIMMs are faster than paging to disk!
But still, I think you'd find on such a system that upgrading the main DRAM--like enlarging your virtual memory swap file--wouldn't have the same effect on system performance as upgrading main memory does today.
I'm sure, but I still think the performance and expandability is worthwhile, especially for something like a web server, where some data (e.g., database engine and indexes) are going to be accessed frequently, while other data (random static web content) is less important (and limited by the speed of the pipe in any event).
Since P != NP ... (Score:1)
1) nasty heuristics, not guaranteed, but workable,
and
2) brute force, perhaps optimized, but still brute.
and since there are so very many brute force problems, software approaches change in KIND as the hardware scales up.
When I can take the time/space to do a brute force search on a problem, I can guarantee certain things about my answer, which is very valuable computationally.
Translation: software is a gas, it expands to use all the space/time given to it, and it will continue to do so.
If you disagree, well, I guess you won't be using any voice recognition software next year when it hits hard, because that is a clear example of the effect of increased resources.
-- Crutcher --
#include <disclaimer.h>
Re:viruses (Score:1)
_______
Scott Jones
Newscast Director / ABC19 WKPT
Re:RAM on the Processor Die... (Score:1)
* IIAC - If I Am Correct
_______
Scott Jones
Newscast Director / ABC19 WKPT
Nice to meet you, Captain Paranoid (Score:2)
I find your excessive concern for keeping secrets disturbing. You must be doing something illegal. That's why your computer (and all the other computers purchased with the "secure memory" feature) will, in fact, be equipped with a remote monitoring device which periodically broadcasts a memory dump and can be used to give you a paralysing shock through the keyboard.
We're out to get you, you know. All of us.
---
Despite rumors to the contrary, I am not a turnip.
Long term predictions (Score:1)
To begin with, may I just point out that most of what has been discussed here about what is and isn't possible is actually about what can and can't be manufactured economically.
For example, ferroelectric DRAM. Basically, a DRAM cell is a switching capacitor, so stick a ferroelectric in there and the capacitor can be made smaller for the same charge storage. The best material to use for this is probably BST (Barium Strontium Titanate). This is difficult to deposit in a standard fab.
It is easy (scientifically) to do. You just etch a flat surface on silicon and grow a layer by MBE (Molecular Beam Epitaxy), or deposit a layer by MOCVD (Metal-Organic Chemical Vapour Deposition). Problem is, getting these to work on silicon is expensive. It can still be done.
I spend my time surrounded by cutting-edge scientific research. Every day I see things that most people would consider impossible, or miraculous. For example, I have seen pure [0] aluminium as strong as steel. That's not specific (per weight) strength. That's per volume strength.
Frequency-tuneable solid-state lasers. Sure. Colour tunable over half the visible spectrum by rotating a part. Smaller than a drinks can.
A slight digression there, but the point is that to see what the future might hold is not too tricky for the next 3 or so years. After that, you need to look at the skunkworks projects. And then into the labs of the academics. Because that's where the future can be glimpsed.
[0] A4N standard.
Re:Do we really want RAM that isn't erased? (Score:1)
__
Re:Wider vs. Faster (Score:1)
Comparing MHz of a GPU, CPU and RAM module is generally fairly useless... saying that the GeForce2U only runs at 250MHz doesn't really mean a whole lot. A more effective comparison would be the throughput/bandwidth of the chip/memory modules, since that is more immediately relevant.
--
Re:Do we really want RAM that isn't erased? (Score:1)
There are solid state drives (HD form factor, basically filled with DRAM) that are rather fast, but one GB will set you back a few thousand $$. That also, of course, isn't preserved across power cycles, but for use as a large cache it's rather exciting (see the new pop favorites "Swapping Out to Disk Never Felt So Good" or "Is My Entire Database in RAM?" and of course the new craze, "Oops, I Cached It Again").
I gotta get more sleep...
--
Re:Cray had it right (Score:1)
--
Re:Do we really want RAM that isn't erased? (Score:1)
--
Re:What about Flash ram memory (Score:2)
The trouble is that the transistors don't turn off completely. There are always some thermal carriers in the channel. If the transistor has a high threshold voltage, the minority carriers are extremely rare and leakage is small. As the threshold voltage goes down, the number of minority carriers increases and the leakage current rises. The ultra-small devices needed for ultrahigh density memory have to have really low threshold voltages for a lot of reasons, so they leak. A lot.
(Speaking as a transistor-level CMOS designer.)
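For reference, the textbook subthreshold-leakage relation behind this trade-off (standard CMOS theory, not something claimed in the post) is roughly:

    I_{\text{leak}} \;\propto\; \exp\!\left(\frac{V_{GS} - V_{th}}{n\,kT/q}\right)

i.e. off-state leakage grows exponentially as V_th comes down--roughly a 10x increase for every 60-100 mV of threshold reduction at room temperature.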
Ferroelectric RAM (Score:2)
DDR-II (next-generation DDR) is targeted for cycle times of 2500 ps in large systems and 1250 ps in small ones. In contrast, current DDR runs at 2500 ps in small systems (e.g., video controllers). One hopes that main memory running at 3.2 GB/s for 64-bit memory will stave off disaster for a little while. The truly greedy will just have to go to 128-bit memory.
WRT storage technology, I'm surprised that nobody has mentioned FRAM. Ferroelectric RAM is nonvolatile and much denser than flash; as dimensions shrink, it's even denser than regular DRAM. Which is why the big memory houses are furiously searching for a way to reliably manufacture it.
What about VC RAM (Score:1)
Re:what ram? (Score:1)
Ryan
Re:Next advances wont be the memory cells (Score:1)
Next advances wont be the memory cells (Score:2)
There have been people adding things like fast static RAM to DRAM chips for a while, but it never took off.
With the widespread use of flash memory, I would love to see a flash package that is smart enough to remap bad blocks once they are detected. It's a real pain that my Rio now can't write to block 0 most of the time because it's developed a problem.
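A toy sketch of the kind of remapping table such a "smart" flash package (or its driver) might keep. Everything here is made up for illustration; it has nothing to do with how any real Rio firmware works:

    #include <stdio.h>

    #define NUM_BLOCKS   64     /* logical blocks exposed to the host          */
    #define SPARE_BLOCKS 8      /* physical blocks held in reserve for remapping */

    static int remap[NUM_BLOCKS];       /* logical -> physical block table      */
    static int next_spare = NUM_BLOCKS; /* next unused spare physical block     */

    void remap_init(void)
    {
        for (int i = 0; i < NUM_BLOCKS; i++)
            remap[i] = i;               /* identity mapping to start            */
    }

    /* Called when a write to a physical block fails: point the logical block
       at a spare and never use the bad one again. */
    int retire_block(int logical)
    {
        if (next_spare >= NUM_BLOCKS + SPARE_BLOCKS)
            return -1;                  /* out of spares: the device is worn out */
        remap[logical] = next_spare++;
        return remap[logical];
    }

    int main(void)
    {
        remap_init();
        printf("block 0 now maps to physical block %d\n", retire_block(0));
        return 0;
    }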
Re:PRAM (Score:2)
--
Re:Ferroelectric RAM (Score:1)
I share your frustration. FRAM is actually being researched and produced by big companies such as SAMSUNG [samsungelectronics.com] in densities as high as 4Mb. You are not correct, though, to say that FRAM is denser than flash. Remember that flash can store two bits in a very small memory cell. So far, flash has also proved more scalable than FRAM, which is why you see flash densities today orders of magnitude better than FRAM even though FRAM is an older technology. A good reference for reading about non-volatile memory technologies can be found at EDN Access [ednmag.com]
Don't forget amorphous silicon RAM. (Score:3)
Amorphous silicon RAM works by melting the switch element and refreezing it, either so it crystallizes and is very conductive (but still resistive enough that you can remelt it) or becomes glassy and very resistive (but with a "breakover" voltage that lets you drive current into it to remelt it). Selection is by the length of time the write current is on, and thus the amount of heat deposited in the meltable bit.
Magnetic fields won't touch it. EMP strong enough to affect it will fry the whole box anyhow. Ditto heat.
Write time is in single-digit nanoseconds. Read is as fast as ROM.
(But will it ever come to market? Same question for MRAM.)
Re:No. MRAM is about FIVE years off, not 1-2 (Score:1)
Re:Pentium 4 Bus (Score:1)
I'm pretty sure I'm right here. For one thing, the maximum bandwidth of the P4 FSB is 3.2 GB/s, which is 2 "pumps" x 100 MHz x 128-bits. For independent proof of this, note that Intel's top-of-the-line P4 chipset, Tehama, uses dual PC800 RDRAM channels, yielding...3.2 GB/s. If it were quad-pumped, 100 MHz and 128 bits wide as you claim, the FSB bandwidth would be 6.4 GB/s.
For another, there's no way I've ever heard of to actually "quad pump" a clock signal. "Double pumping" works because a clock signal actually gives you two usable edges--the so-called "rising edge" when the signal turns on, and the so-called "falling edge" when the signal turns off. In contrast, there's just no natural way to divide a signal into four without using a PLL and a separate clock generator. How do we know Intel isn't doing that? Well...the question becomes: "separate" from what? The FSB clock *is* the only clock in a chipset; if they wanted to make it go twice as fast, they would just clock it at 200 MHz.
And no, Kingston's "quad-pumped" SDRAM isn't really quad-pumped either; it's just DDR which is cleverly interleaved to essentially make it twice as wide.
Re:NUMA? (Score:2)
Thanks for actually reading the whole damn thing!
I don't think traditional expandable RAM has to go away completely, though. I think the solution would be further extending the NUMA (Non-Uniform Memory Access) concept of cache memory. We've already got very-high-speed L1 and L2 cache. Say this CPU+high-speed-memory card you propose has N ultra-high-speed on-die L1 cache, N*16 super-high-speed off-die L2 cache, and N*(2^10) of very-high-speed, CPU-local RAM. Then have an expandable main memory system of merely high-speed RAM, slower, but expandable and much larger, say, N*(2^12). To fill in some example numbers:
128 KB L1 cache
2 MB L2 cache
128 MB local RAM
512 MB main RAM
That way, you get the best of both worlds.
Interesting. I'd first point out that, to me at least, what you're proposing sounds a bit more like a gigantic (128 MB) L3 DRAM-cache than traditional NUMA. Of course, I don't know too much about NUMA, except that it's supposed to have a way to actually manage which data goes in which memory efficiently--which would be a very difficult problem in such a system.
For one thing, it's worth noting that under almost any conceivable implementation, all 128 MB of data in the local DRAM/L3 cache would have to be mirrored in the 512 MB main DRAM--thus essentially "wasting" 1/4 of your main RAM capacity. There are ways around this (e.g. Thunderbird/Duron with their "exclusive" cache hierarchies) but from what I understand they would introduce tremendous latencies into such a system.
And indeed, that would be the biggest problem with this scheme--the latencies. If a program was looking for a piece of data, it would have to first check the L1 cache; then (if it didn't find it there) the L2 cache; then (if it didn't find it there) the very large local DRAM/L3 cache; and only then would it look for it in main memory (and god forbid not find it there either and have to pull it out of virtual memory!).
The upshot of this is that you'd get a pretty large miss-penalty every time you had to search all the way down to your main DRAM to find some data. On the other hand, it wouldn't be as large as the penalty currently associated with virtual memory, and we use that all the time. But still, I think you'd find on such a system that upgrading the main DRAM--like enlarging your virtual memory swap file--wouldn't have the same effect on system performance as upgrading main memory does today.
Could be wrong, though. Certainly an interesting idea.
Re:No. MRAM is about FIVE years off, not 1-2 (Score:4)
Just a clarification: you are completely wrong. The various types of RDRAM do in fact refer to their clock speed, not their bandwidth. PC800 does indeed refer to 800MHz; as RDRAM is 16-bits wide per channel, this means PC800 has a theoretical maximum bandwidth of 1.6 GB/s. By way of comparison, PC133 SDRAM is 64-bits wide and 133 MHz, and so it has a max bandwidth of 1.1 GB/s.
So, to reiterate, you're wrong. Now, however, it begins to get confusing. First off, PC800 RDRAM isn't really running at 800 MHz; it's running double data rate--transmitting twice per clock--at 400 MHz. As far as the PC industry goes, it's an acceptable fudge, and not nearly so bad as Intel saying the double-wide double data rate 100 MHz FSB on the P4 is "400 MHz".
Then it gets even more confusing. See, it turns out that PC 700 RDRAM actually runs at 2x356=712 MHz most of the time (good!) whereas PC 600 RDRAM actually runs at 2x266=533 MHz most of the time (bad!). This has to do with the vagaries of timing these cobbled together brands of RDRAM (only marketed because the yields on PC 800 were so awful) to run with 133 MHz FSB chipsets. If run on a 100 MHz FSB chipset--which they never are--they will run at their advertised 600 and 700 MHz rates.
So...in order to get rid of all this confusion but keep the handy-dandy "PC___" designation (and to one-up Rambus in the "my number's higher than yours" game), JEDEC has decided that from now on all its DRAM standards will be numbered based on their maximum bandwidth rather than their clock speed, actual or DDR or otherwise. Thus, the DDR we will see in DDR motherboards in a couple months will either be branded PC1600 (2 x 100 MHz x 64-bits) or PC2100 (2 x 133 MHz x 64-bits).
All done? Not hardly. It turns out that the first generation of PC2100 will have higher latency timings for the various stages of a random access than will PC1600, thus making it slower in certain situations while faster in others. Of course, within a couple months, lower latency PC2100 will be around, which may or may not be designated PC2100A. See how this all helps the customer and makes things easier???
Of course, the DDR for graphics cards is categorized neither by its maximum bandwidth nor its clock rate but rather by its clock period: i.e. 2x166 MHz DDR is called "6ns DDR" when it's on a video card (because 1 second / 6 nanoseconds = 166 million); 2x183 is 5.5ns, and the new GeForce2 Ultras are shipping with incredible 2x250 MHz 4ns DDR SDRAM.
And, of course, any and all of the above DRAM is overclockable to any speeds and latency timings you want; it's just only guaranteed to work at the marketed speed. Oh, and how fast any of this all is depends just as much on your chipset and, in the case of RDRAM, your power consumption settings. (Even if you're plugged into the wall, don't be too profligate with those power settings or the whole thing will overheat!)
And I forgot to mention VC SDRAM, which is available now, and FCSDRAM, eDRAM, DDR-II and DDR-IIe, any of which might/will make the jump to PC main memory in the coming years (at least before MRAM). Isn't it all so simple now?? Good.
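For anyone keeping score, the arithmetic behind all these PC-numbers is just bus width times clock times transfers per clock. A quick sanity-check program, using the figures quoted above (a sketch for illustration, not anything official from JEDEC or Rambus):

    #include <stdio.h>

    /* Peak bandwidth in MB/s = bus width (bytes) * clock (MHz) * transfers per clock. */
    static double peak_mb_s(int width_bits, double clock_mhz, int transfers_per_clock)
    {
        return (width_bits / 8.0) * clock_mhz * transfers_per_clock;
    }

    int main(void)
    {
        printf("PC800 RDRAM : %6.0f MB/s\n", peak_mb_s(16, 400, 2));  /* 16-bit channel, DDR at 400 MHz */
        printf("PC133 SDRAM : %6.0f MB/s\n", peak_mb_s(64, 133, 1));
        printf("PC1600 DDR  : %6.0f MB/s\n", peak_mb_s(64, 100, 2));
        printf("PC2100 DDR  : %6.0f MB/s\n", peak_mb_s(64, 133, 2));
        return 0;
    }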
DRAM myths debunked (Score:4)
Myth #1: It's Rambus vs. DDR vs. MRAM. It's been mentioned before, but bears repeating: MRAM will not be the next generation memory technology. It will at best be the next-next-next generation memory technology, as it's at least 5 years from commercial viability. However, I'd guess that even in MRAM's wildest dreams it will take longer than that before it ever makes it to PC main memory; first, it will be used as a replacement for what it is actually most like--not DRAM, but flash memory. While it has the potential to maybe one day be faster, smaller, and cheaper than DRAM, until then it will only be used in those places where its most important attribute--nonvolatility--is actually necessary.
Furthermore, there are any number of exotic competing technologies which are a) going to make it to market first and b) actually aimed at the PC main memory market. These include:
VC SDRAM: like SDRAM with a small SRAM cache--already available, but with disappointing performance, due to a bad implementation of a good idea; don't count it out in a future incarnation
FCSDRAM: which allows a more efficient ordering of access requests to cut down latency
DDR-II: the packet-based successor to DDR SDRAM, and the probable next standard
DDR-IIe: DDR-II with caching technology similar but superior to VC SDRAM's
and eDRAM: an exotic technique for putting DRAM directly on a microprocessor, which allows for extraordinary bandwidth and tiny latencies but requires an entirely new manufacturing process.
In any case, the above are not mutually exclusive (indeed, RDRAM is a DDR type of SDRAM), and I wouldn't be at all surprised to see some VC/e FCDDR-II be the PC main memory of choice in a couple years. (It'll have a better name, though.)
Myth #2: DRAM bandwidth is holding back the performance of today's PC's. Actually, the problem is not in the DRAM chips but rather in the bus that connects them to the CPU--that is, the Front Side Bus (FSB). The FSB on all current Intel chips is only 64-bits wide, single pumped. That means you only have 1.1 GB/s of bandwidth to the CPU with a new 133 MHz FSB P3, 800 MB/s with a 100 MHz FSB P3 or P2, and a measly 533 MB/s with a lowly Celeron. Not so coincidentally, the maximum bandwidth of the various standard types of SDRAM, PC133, PC100 and PC66 are...1.1 GB/s, 800 MB/s, and 533 MB/s respectively.
Ever wonder why 1.6 GB/s RDRAM wasn't any faster than 1.1 GB/s PC133 on all those P3 benchmarks earlier this year? At the time you probably either heard from someone else (or decided yourself) that it was just because "Rambus sucks," which, while true, isn't the whole story. Instead, the reason that the faster RDRAM didn't perform any faster is because its extra 533 MB/s of bandwidth is all dressed up with no place to go--it certainly can't go to the CPU, because the FSB is in the way, and it only lets through 1.1 GB/s. Now, there are a couple of edge conditions where that extra bandwidth can be utilized by sending some over the AGP bus and keeping some in buffers on the chipset to send later, but by and large the P3 is completely saturated by plain old PC133. This is the same reason why, when DDR chipsets finally come out for the P3 in a couple months, their performance is going to be a mite disappointing--all this extra bandwidth, no place for it to go. As for why the RDRAM system is actually slower most of the time...well, that's because Rambus sucks. (RDRAM has higher latencies than SDRAM, plus Intel's i820 RDRAM chipset is nowhere near as good as its BX or i815 SDRAM chipsets.)
Luckily, this is a bottleneck that is finally getting removed. AMD's Athlon and Duron CPU's both have double-pumped FSB's, meaning they'll be quite happy slurping up the extra bandwidth they get from their DDR chipsets, due out hopefully by October. Their FSB's can currently be set at either 2x100 MHz (1.6 GB/s) or 2x133 MHz (2.1 GB/s). And Intel's upcoming P4 goes a step further--it has a double-wide double-pumped FSB, allowing 3.2 GB/s @ 100 MHz FSB clock, and 4.3 GB/s @ 133 MHz.
These steps are, to put it mildly, vastly overdue, as the ratio of CPU-clock to FSB-clock has gone from 1:1 in the pre-486 days, to 2:1 with the 486DX2, to, for example, 3.5:1 on the two-year-old P2-350 I'm typing on now, to a ridiculous 8.5:1 on the latest, greatest (nonexistent) P3-1133, to a miraculously exorbitant 10.5:1 on a Celeron-700. What this means is that the CPU is spending a whole lot more of its time waiting every time it needs to access memory--10.5 clock cycles for every 1 cycle of memory access, to be exact. While the impact of this can and has been minimized through all sorts of tricks like bigger caches, out-of-order execution, and prefetching compilers, the overall performance impact is "damn."
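The ratios quoted above, worked out (CPU and FSB clocks as given in the post; the Celeron-700 is assumed to sit on a 66 MHz FSB):

    #include <stdio.h>

    struct chip { const char *name; double cpu_mhz, fsb_mhz; };

    int main(void)
    {
        struct chip chips[] = {
            { "486DX2-66",     66,  33 },
            { "P2-350",       350, 100 },
            { "P3-1133",     1133, 133 },
            { "Celeron-700",  700,  66 },
        };
        for (int i = 0; i < 4; i++)
            printf("%-12s %4.1f:1\n", chips[i].name, chips[i].cpu_mhz / chips[i].fsb_mhz);
        return 0;
    }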
So thankfully these ridiculous ratios will finally be brought down as the next generation of CPU's with decent FSB's ships.
Having read this, you're probably now lulled into believing our third myth. Unfortunately, you're wrong.
Myth #3: DRAM performance will hold back the performance of tomorrow's PC's. As it turns out, that's not true either. For proof, just take a look at the latest generation of PC graphics cards. The latest and greatest offerings from ATi and nvidia both include 64 MB of double-wide DDR SDRAM at speeds up to 2x183=366 MHz. That's 5.9 GB/s of bandwidth, way more than enough to saturate the FSB of top-of-the-line CPU's for at least the next 18 months. All this is available, plus a very complicated GPU, fast RAMDAC, and some other components, on a card selling for about $400--thus we can guess that that 64 MB of 5.9 GB/s RAM costs around $250--or, humorously enough, about the cost of 64 MB of 1.6 GB/s PC800 RDRAM! Furthermore, nvidia just announced the GeForce2 Ultra, with 64 MB of 2x250=500 MHz DDR. That's 8 GB/s!! The cost? Another $100.
But all of this is disregarding that little something called supply-and-demand. There are several legitimate reasons why such high-speed DDR costs more to make than normal-speed DDR (which costs a negligible amount more than plain-old SDRAM), but the main reason for its (not even so) high price is its scarcity and the incredible demand for it by graphics card makers. On the other hand, the main reason RDRAM has come down in price so much (6 months ago it cost around 3 times as much) is because there is a glut of it on the market. Everyone in the industry (except Dell) has realized that the i820 chipset is a dud, a bomb, already shuffled off to obsolescence. RDRAM on the PC is a no-go, at least until the P4 comes out. Thus, excess RDRAM is being sold off at fire-sale prices. Once the P4 is out in enough volume to actually impact prices (i.e. January or February if Intel is lucky), expect another surge in RDRAM prices. Back on the other hand, in a year or so that 8 GB/s (!) 500 MHz DDR SDRAM in the new GeForce2 Ultra will be pretty mainstream stuff, going for but a modest premium over even bottom-of-the-line SDR SDRAM (which will still be around for some time).
"So great!" you might say. "Let's make chipsets with 8 GB/s FSB's, and all our problems will be solved!" Well...there's the rub.
See, the point of this story is, the problem in getting a high-speed memory subsystem in your PC is not the DRAM--they can get that damn fast already. (8 GB/s!! OK, I'll stop now.) The problem is the stuff in between: the motherboard and the chipset. That is, the bus.
It turns out that it's easy to get a super-high-speed bus onto a graphics card, but an electrical engineer's nightmare to get one on a PC motherboard. Let's count the reasons why:
1) The traces (wires) on a motherboard are a whole lot longer than on a graphics card. The higher the capacity of a trace, the higher quality (read: more expensive) it has to be. The longer it is, the higher quality it has to be to have the same capacity. Eventually, it's just beyond the capabilities of our current manufacturing to make traces that are long enough and high enough capacity to work with high-speed DRAM on a big motherboard.
2) There's lots of other components on a motherboard. This means more interference ("crosstalk"). This means--you guessed it--the traces need to be even higher quality.
3) A motherboard has to be designed to work with almost any amount of DRAM--one DIMM, two DIMM's, three DIMM's, of varying amounts, made by anyone from Micron to Uncle Noname. Graphics cards are fixed configurations which can be validated once and forgotten about.
4) The DRAM in a graphics card is soldered to the board. The DRAM in a motherboard has to be removable and communicate through a socket, which adds to the electrical engineering complexity.
Plus there's probably a couple more I can't think of at the moment. The point is, the weak link in the memory subsystem is not the DRAM. Today it's the chip's FSB, next year it will be the motherboard and the chipset, but it's not the DRAM.
However, there are ways the DRAM might be changed to get around this limitation. (Disclaimer: I don't know as much about this part of the equation as I do about the rest.) Apparently the packet-based protocol used in RDRAM is one way to do this--for some reason, communicating in packets minimizes the danger of data loss due to crosstalk. Probably for the same reason it works for networks, the Internet, etc.
Great! The problem is, RDRAM isn't designed to maximize bandwidth, but rather to maximize bandwidth/pin. While this is real neat for itty-bitty embedded devices where you need to keep pin count to a minimum, the problem is that each pin is connected to its own trace...and thus RDRAM ends up requiring the motherboard to carry much more bandwidth/trace than DDR SDRAM. See above (#1) for why this is a bad idea.
So, the packet-based, but-otherwise-more-or-less-normal-DDR DDR-II, due out in 1.5-2 years, looks like a good candidate to solve this problem, at least for the time being.
In my opinion, though, even that is only a temporary solution. I'd say eventually the industry is going to have to give up the idea of expandable RAM, and change the entire architecture of the motherboard so that the CPU and main memory are moved off it, onto a daughter card, like the graphics card is now. That would mean you would have to buy your CPU and your RAM together--no more adding more RAM as a quick performance booster, which would be a considerable loss. However, it seems as if it would get rid of the tremendous memory bandwidth problem PC's are facing today in one fell swoop. In comparison to the performance gains realized, it would be an easy tradeoff for the vast majority of consumers, who never upgrade their RAM anyways.
The other possible solution is similar-but-different: a switch to eDRAM, which I discussed lo these long paragraphs ago (up near the top). This, however, would require an even bigger infrastructure change, although the benefits might be even greater.
Cleaner to make? (Score:2)
Re:Um... (Score:4)
Re:Do we really want RAM that isn't erased? (Score:2)
Just to elaborate: non-volatile memory would allow incredibly fast boot times, since all of your drivers (and even your kernel) could remain resident across power cycles. Assuming a robust enough OS that can withstand months of virtual uptime (ruling out DOS-derived OSes), the boot-up process shifts from initializing your drivers to checking for HW changes / crashes.
Just imagine the possibilities for OS robustness. Currently when we lose power (causing an OS crash), rebooting involves checking the file systems for consistency (a painful process which usually involves loss of some information). Yes, you're supposed to put servers on a UPS, but this is of no consolation to the millions of home-PC users who potentially lose hours of work. If memory were NV, then a copy of the write buffer would still exist, and it would be possible to recover failed disk updates.
Re:Do we really want RAM that isn't erased? (Score:2)
The simplest approach would be to have a massive ram-disk.
However, expanding on this possibility, there would be a fundamental change in database structure. Databases are optimized to allow tiny subsets of data to reside in memory. Most queries have to assume that only a couple of pages will be addressable / comparable at a time. One of the biggest setbacks with this is the lack of "data pointers". You typically refer to data by its primary key, so referencing data items requires table lookups. In most programming languages, you make use of references or even direct pointers (though in DB land, I'm sure references/handles would still be preferred). Thus, if table joins were based on references (for primary / foreign keys), joining would be a trivial operation. I know that Oracle supports a sort of reference type that performs just this sort of activity, but it still has to do disk-index lookups, since the data is not in a static location.
Another big problem with relational databases is that it is very difficult to map them to program code. A big push in DB land is to make Object Oriented DBs. Some systems have had more success than others. The biggest problem (as I see it) is making these objects available to programming languages in a seamless fashion. In an all-memory system, you might very well be able to have a local array of data objects and use them with all the same performance as local objects. The DB would simply have triggers assigned to data-modification methods which update internal relationships and enforce data-integrity rules. This much can already be done in a raw programming language, but it is impossible to separate the rule set from the code (unlike in DB design).
This really does open up a whole new world of computing. Ideally, you have at least three completely independent designs (that can be changed independently of each other): the interface, the data definition / rule set, and the glue code that makes it all work. Currently this is possible if the GUI designer (be it web pages or window design) talks with the glue-logic designer, and a relational DB is used. But there currently is not a seamless integration or high-performance connection between the data and the glue.
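A rough sketch of the "reference instead of key lookup" idea in plain C (hypothetical structs for illustration; this is not Oracle's actual REF mechanism):

    #include <stdio.h>

    /* When everything lives in (non-volatile) RAM at a stable address, a foreign
       key can simply be a pointer -- no index lookup is needed to follow a join. */
    struct customer { int id; const char *name; };

    struct order {
        int id;
        struct customer *who;   /* direct reference instead of a customer_id to look up */
    };

    int main(void)
    {
        struct customer alice = { 1, "Alice" };
        struct order o = { 1001, &alice };

        /* The "join" is just a pointer dereference: */
        printf("order %d belongs to %s\n", o.id, o.who->name);
        return 0;
    }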
-Michael
SRAM and RAM drives... (Score:1)
As far as I know, SDRAM requires 1 capacitor and 1 transistor per cell minimum. SRAM requires about 4 transistors. With companies today manufacturing 256M SDRAM, could they easily build 64M SRAM modules for a similar price?
The latency of SRAM (not reliant on capacitor discharge) could push memory bus clock rates up to several hundred MHz. Data from an SRAM is available in a nanosecond or less, not 3 to 5 ns.
The other part of my comment is on a RAM drive. Could a memory manufacturer revive obsolete memory technology (fastpage, 30-pin SIMMs) that is extremely cheap? If so, they could produce an inexpensive 1-2 GB memory module that sits on an IDE interface. You could easily use the full bandwidth of an UATA-66 channel.
Instead of resorting to (comparatively) super-slow hard drive for virtual memory paging, the OS could just use a slightly slower memory technology. Using RAM negates the seek times that are inherent to hard drives.
Re:The next RAM is IRAM (I hope) (Score:1)
A couple of extra points:
- What happens when you want to upgrade your CPU?
- Memory becomes more expensive as a processor is tightly coupled to it.
I shudder to think of the kiddies at primary school arguing over the benefits of a hypercube compared to a mesh interconnect structure.
viruses (Score:2)
Not to be a prick (Score:1)
thx
Speed (Score:3)
-----------------------
Some links: (Score:5)
EDTN [edtn.com]
Stanford [stanford.edu]
ABC News [go.com]
Hope this helps.
-----------------------
To clear things up... (Score:5)
MRAM is a new technology that stores data magnetically. I don't know too much about this, but I would guess it will be quite a while until we see it in every computer. It will probably be available in portable devices in 2 to 5 years; however, low production quantities (and the high prices that go along with them) will almost certainly keep this memory technology from entering the desktop market for ten years or so. Then again, I could be wrong.
I have seen flash memory mentioned as a possibility. Flash works by storing (or not storing) a charge on a floating polysilicon gate. The charge is stored or removed by using a high voltage to tunnel through the silicon dioxide insulator. While flash can be read about as fast as any other memory technology, writing flash typically takes a long time (from 100's of microseconds to milliseconds). Also, the tunneling action erodes the silicon dioxide and can wear out flash cells after 1,000 to 1,000,000 rewrites (depending on the process).
So what is the next big memory technology? For now, I would say it is DDR SDRAM. However, DRAM technology will eventually fizzle out and I am sure that either SRAM (Static RAM), MRAM (if it is available), or some other new memory technology will take its place.
RDRAM?! Nah (Score:2)
Re:RDRAM?! Nah (Score:2)
The dizzying pace of change (Score:4)
Just goes to show how much things have changed...
It'll all depend on manufacturers... (Score:2)
I don't think the RAM companies are likely to switch from a technology they fully control to another they're less sure about. The only way I see them switching from silicon to something else is if they really have no other choice, e.g. if some technology comes out and increases the performance/price ratio by at least a factor of 10. Even in that case, I suspect they'd simply try to buy out the company that produces it.
The only way I see them abandoning silicon is when it is no longer feasible to cram more transistors into a fixed area (10 nm? 1 nm? 1 Å?).
Re:No. MRAM is about FIVE years off, not 1-2 (Score:2)
I'd like to get a confirmation on this, but I think the "next generation of RAMBUS" will be wider, not faster clocked. As for the rest, I agree that DDR-SDRAM is likely to be the next generation RAM... unless RAMBUS is in bed with more people than we think.
Re:MRAM and Microsoft (Score:2)
Just why do you think this will help Microsoft at all? The whole reason you had to reboot in the first place was because Windows fscked up something in memory and didn't quite know what. Rebooting the computer would mean this: after sitting through your BIOS test sequence, you are presented with a blue screen.
Re:RAM? How about implementation? (Score:2)
Re:What about Flash ram memory (Score:3)
Flash memory is really not like SRAM or DRAM. It actually reminds me more of ROM because bits are actually defined by a single transistor being on or off. The way that flash memory makes a transistor stay on or off is the cool part. Each transistor (one per bit) has two gates. One gate is used when you write to the bit; the other gate is not actually connected to anything. The second gate (called the floating gate) is given a voltage (charge) when you write the bit, through some interesting electrical effects (remember, it is not connected to anything; you have to get the charge on there somehow). After writing the bit, the charge from then on actually stays on the floating gate because it is insulated from everything. The charge ends up making the transistor be on or off. Flash memory does have a limited number of write/erase cycles, but it is usually measured in hundreds of thousands, so I'm not worrying about my Rio failing anytime soon.
Haiku (Score:5)
But no links in this story
Sites stay up today
Wider vs. Faster (Score:2)
But my point about RDRAM is that it has to be clocked at 800MHz to equal the performance of PC133 SDRAM, and that's horrid. Memory performance in SDRAM can also be increased by other means than clockspeed, too, but SDRAM has so much headroom in the clockspeed department that there's no need to worry about that for some time yet, whereas RDRAM clockspeed is horribly fast with so little room for MHz jumps. I mean, most people have CPUs that don't run at 800MHz, ferchrissakes. Even a top-of-the-line graphics processor like the GeForce 2 Ultra runs at a mere 250MHz, with on-card memory running at less than 500MHz DDR. AGP is running at a mere 66MHz. PCI is still the slowpoke at 33MHz, and really needs to be improved because it is a bottleneck. But the point is, the RDRAM is running so fast that it has so little room to increase its clockspeed at all, whereas SDRAM has so much room.
No. MRAM is about FIVE years off, not 1-2 (Score:5)
The next generation of RAM is clearly going to be DDR-SDRAM, and will be for some time. Cheap modules will be PC-200, but PC-266 DDR will be out at the same time, with very little use of the "mere" 200MHz (effective) variety. The tech is there right now, it's just that there's no demand yet since there aren't any chipsets out (VIA to the rescue, in a few months); so, regular SDRAM is tying up production right now, but the switch to DDR will probably be fairly smooth.
Face it, RAMBUS RDRAM is a terrible idea in the first place. When you have to make a new technology like RDRAM run at 800MHz to get similar performance to existing PC-133 SDRAM, that should be a sign that the new technology is worthless--do you really think it will be as easy to make RDRAM at 1.6GHz as it will be to make DDR SDRAM at 266MHz DDR? Hell no. I predict a quick demise for RDRAM within a few months of the release of VIA's first DDR-SDRAM chipset.
MRAM and Microsoft (Score:3)
Oh no! Then what will the Linux advantage be? ;)
personally (Score:5)
with a supercab and a more powerful engine, you just can't beat the deals that most places are offering on it.
FluX
After 16 years, MTV has finally completed its deevolution into the shiny things network
But MRAM can do things the other RAMs can't. (Score:2)
XGNOME vs. KDE: the game! [8m.com]
The next RAM is IRAM (I hope) (Score:2)
Hence, if that was IRAM, you would also have four to 32 individual processors.
The idea, of course, is to distribute processing and increase performance by having the RAM and CPU on the same silicon, thus reducing the path length and eliminating the need to go through a motherboard bus and connectors and all that. More power efficient, lower EM interference, etc.
The question would remain whether to have a central CPU coordinating all of the individual CPU's, or whether the system would be entirely distributed. I think that if there were a central CPU, it would be easier to make the system look like a SIMD machine to software, which would make it easy to program for. That may be possible without the central CPU, but the alternative is MIMD.
Who knows: with a kernel made for distributed processing like Mach, which may see growing attention because GNU Hurd and MacOS X both use it, a large part of the computer market may benefit from IRAM.
Re:What about Flash ram memory (Score:2)
Static RAM requires 2 gates to construct each bit of memory. As long as power is supplied to the gates, the value is held (but when power is taken away, the value is lost).
Dynamic RAM requires 1 gate to construct each bit of memory. With DRAM, the value stored 'erodes' over time, so a 1 would become a 0 after a certain time period. This isn't what we want, so we have a separate controller chip which keeps rewriting the DRAM cells continuously to keep them in the same state.
So given that DRAM is a pain and requires a separate controller to work it, why do we use it? Firstly, there's die size - it takes half as many gates to make DRAM, so you can get more on a wafer, which makes them cheaper. Secondly, there's performance - for SRAM to change state, one gate has to change and the other gate follows it, so it takes twice as long for a state change. This is all approximate, of course.
Neither of these RAM technologies preserve memory after power-off. For that, you need either battery-backed RAM, Flash or EPROM (eraseable programmable ROM), or the new MRAM, which all hold their contents on power-off.
Battery-backed RAM is fine, except eventually the battery runs down and then you lose your data.
EPROM is crap - it has to be erased by UV light and it's slow to reprogram. EEPROM (electrically-erasable PROM) is better - it can be erased with a voltage, but it's still slow to reprogram, and it has a limited number of rewrite cycles. Both hold their contents permanently though.
Flash is similar to EEPROM but has more rewrite cycles and is easier to rewrite. Flash is usually organised into "pages" or "blocks" though, so you can't erase an individual bit/byte, only a whole block of data. The rewrite cycles are still limited on Flash though, so you couldn't use a Flash cell to store a variable - 100,000 rewrite cycles would be up in a few seconds! Plus it does take time to program it - it's still nowhere near as fast as writing to RAM.
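The back-of-the-envelope math behind "a few seconds", assuming a hypothetical variable rewritten, say, 50,000 times per second:

    #include <stdio.h>

    int main(void)
    {
        /* Hypothetical: a variable updated 50,000 times per second,
           stored in a flash cell rated for 100,000 rewrite cycles. */
        double writes_per_second = 5e4;
        double endurance_cycles  = 1e5;

        printf("cell worn out after %.1f seconds\n", endurance_cycles / writes_per_second);
        return 0;
    }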
MRAM is a kind of "holy grail" of memory - one that can be changed on-the-fly like RAM, but which holds its value like EPROM/Flash.
Grab.
ok (Score:4)
Re:What about Flash ram memory (Score:2)
[truncated ASCII sketch of the floating-gate transistor went here]
Basically, one of the pins on the transistor acts as a capacitor plate hiding behind a layer of insulator. This sets up an electric field, but allows no current to flow through the insulator. A current passing through the other two pins, however, will recognise the change in potential because of the field. The charge can stay on the plate "forever" (we had leakage in the picoamps in the lab, so as long as the number of electrons is initially large, it won't matter).
And now, to come back on topic. I think that there really aren't too many limits for SDRAM, as long as you can make a semiconductor that can switch fast with low power consumption. Maybe Si-Ge manufacturing is going to pick up in the future. I know that there are more exotic semiconductors with better properties than Si, but nobody has dumped the money into figuring out a manufacturing process that could bring them within an order of magnitude of the cost of Si manufacturing.
Hmm, I've seen some sketches for all-optical RAM, but I don't know anything about the research that has gone on in that area. Anyone an expert on optical computers?
Re:personally (Score:3)
Re:No. MRAM is about FIVE years off, not 1-2 (Score:2)
RDRAM allows you to design a board with an absolute minimum of components and only a handful of interconnects. It does its memory interface in around 16 wires, rather than the 80+ which parallel RAM interfaces require. It's a really, really good choice for embedded systems.
HOWEVER, the idea that RDRAM should be used in PCs is garbage and needs debunking. RDRAM is slower than other PC technologies, and on a PC motherboard an 80-wire memory interface is no problem.
Re:Do we really want RAM that isn't erased? (Score:2)
I'm assuming that MRAM is going to hit the market at around the same price as normal memory, so it's going to be a lot more $/MB than hard disks, but it still presents some interesting opportunities.
G800 to use FCRAM (Score:2)
Do we really want RAM that isn't erased? (Score:4)
What would happen if a virus was loaded into your memory and you wanted to shut down and wipe the virus from memory, but your memory was permanent? I don't see that as a good thing at all.
There are probably many arguments for why static memory is a good thing, but right now I am definitely leaning toward memory that can be erased by powering down.
Re:What about Flash ram memory (Score:2)
Also, flash memories have a limited number of write/erase cycles, which makes them even more impractical for a RAM.
--
How about QBM RAM? (Score:3)
I'm not sure if this has been mentioned yet or not, but Kentron Technologies [kentrontech.com] is developing a technology known as QBM [kentrontech.com], which, to put it in a nutshell, is basically Quad Bandwidth Memory: it transmits twice each cycle, with overlapping cycles, effectively doubling the DDR effect. Their page on it says that memory running at a 100 MHz clock could get memory bandwidth of approximately 3.2 Gigabytes/second.
Heh, that's the stuff I want when I build my Ultimate Gaming Machine (TM).
Re:PRAM (Score:2)
Re:RAM? How about implementation? (Score:3)
You know, this principle holds for software development too... The potential for a LOT of what we do with computers today was present in the humble old 486. Maybe this mad dash for better, faster hardware spells our own doom. Already people are buckling under the complexities of things like the PSX2, x86 extensions, massive RAM on video cards, etc. The stuff is going to waste just as fast as it can be invented.
It's simply too much to work with or take advantage of with the tools we have nowadays (in the time allotted us). I wish software could advance at the same rate as hardware, but it takes years of tinkering and developing new techniques to get anywhere near taking advantage of ALL of a given piece of hardware's potential.
Just look at an example like 3DStudio: version 3.0 is dramatically more sophisticated and powerful than version 1.0, and v3 runs better (is capable of more, easier to use, faster for certain tasks like modelling low-poly stuff) on a P200 than 1.0 would on a PIII. All the hardware upgrades in the world don't help a bad app very much.
As hardware continues to advance by leaps and bounds, will the gap between it and software grow much? What are the repercussions of this? Lazy and incomplete coding do seem to be becoming the standard rather than the exception...
Maybe there'll be an 'Einstein' who springs up to turn the software engineering world on its ear. Until then, the overall essence of computer use will grow at a fraction of what the state-of-the-art hardware is capable of.
:)Fudboy
Re:What about Flash ram memory (Score:2)
[truncated ASCII sketch of the floating-gate transistor went here]
Hope this works; had to add an [slashdot.org]invisible link just to avoid setting off the lameness filter;
apparently, HTML tags (auto-converted to uppercase) count as lameness these days.
-- Sig (120 chars) --
Your friendly neighborhood mIRC scripter.
Re:No. MRAM is about FIVE years off, not 1-2 (Score:2)
Comparatively, if you look at www.crucial.com [crucial.com], they do the same thing with DDR SDRAM - 200MHz is PC1600 (1.6GB/sec), and 266MHz is PC2100 (2.1GB/sec).
-- Sig (120 chars) --
Your friendly neighborhood mIRC scripter.
Re:Speed (Score:2)