IBM Hardware

DRAM Almost as Fast as SRAM

An anonymous reader writes "IBM says it has been able to speed up DRAM to the point where it's nearly as fast as SRAM. The result is a type of memory known as embedded DRAM, or eDRAM, which helps boost the performance of chips with multiple processing cores and is particularly suited to moving graphics in gaming and other multimedia applications. Conventional DRAM will also continue to be used off the chip."
  • Trust IBM (Score:5, Funny)

    by Frequently_Asked_Ans ( 1063654 ) on Wednesday February 14, 2007 @11:24AM (#18012700)
    to go for title of most patents filed in 2007
    • Comment removed (Score:5, Insightful)

      by account_deleted ( 4530225 ) on Wednesday February 14, 2007 @01:26PM (#18014402)
      Comment removed based on user account deletion
      • "Because everybody knows that companies should invest millions of dollars to develop technologies which should then be given away for free."

        Maybe they wouldn't cost millions if they outsourced the labour!
  • What's the point? (Score:2, Insightful)

    by ArcherB ( 796902 ) *
    With all these improvements in processor and RAM speed, when can I expect a faster HDD? A solid state drive would be nice.

    All chips wait at the same speed. Why not concentrate on the bottlenecks rather than on what is already one of the fastest components in any system?

    • Re: (Score:1, Funny)

      by Anonymous Coward
      All chips wait at the same speed.

      Nuh uh! I guarantee my computer can do nothing a whole lot faster than yours can.
    • by Waffle Iron ( 339739 ) on Wednesday February 14, 2007 @11:40AM (#18012962)

      Why not concentrate on the bottlenecks rather than what is already one of the fastest components in any system.

      Firstly, system memory is not especially fast compared to the CPU, and the recent proliferation of multiple cores is making the situation worse because more CPUs are trying to bang on the same memory.

      Secondly, the most straightforward way to paper over problems with high-latency devices is to put a cache in front of them. Super-fast DRAM would be one way to enable bigger caches that reduce the impact of various system bottlenecks. Sure, we can hope to replace all hard drives with solid-state devices, but since they still cost orders of magnitude more per megabyte, it will probably be quite a while before that happens. In the meantime, better caches couldn't hurt.
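
      To put rough numbers on that caching point, here is a minimal back-of-the-envelope C sketch; the 100 ns "cached" latency, the 8 ms disk seek, and the hit rates are made-up illustrative values, not figures from the article:

      #include <stdio.h>

      int main(void) {
          const double ram_ns     = 100.0;       /* assumed hit cost: data already in the RAM cache */
          const double disk_ns    = 8000000.0;   /* assumed miss cost: one disk seek, ~8 ms         */
          const double hit_rate[] = { 0.90, 0.99, 0.999 };

          for (int i = 0; i < 3; i++) {
              /* average access time = hit_rate * hit_cost + miss_rate * miss_cost */
              double avg = hit_rate[i] * ram_ns + (1.0 - hit_rate[i]) * disk_ns;
              printf("hit rate %5.1f%% -> average access %9.0f ns\n", 100.0 * hit_rate[i], avg);
          }
          return 0;
      }

      Even at a 99% hit rate the average is still dominated by the 1% of accesses that go to disk, which is why bigger caches (and the higher hit rates they bring) keep paying off.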

      • Hm, who just brought out a new product with a metric fuckton of cache on it... Oh, yes.

        Intel's Core 2 Duo.
        • by 10Ghz ( 453478 )
          4MB is a "metric fuckton"? That's less than 3 floppies of data!
          • Didn't you get the updated memo on Slashdot Weights and Measures?

            CACHE MEASUREMENTS
            MB to metric conversion table

            40k = 1 centifuckton
            64k = 1.6 centifucktons
            400k = 1 decifuckton
            640k = 1.6 decifucktons
            1 MB = 2.5 decifucktons
            1.44 MB = 3.6 decifucktons = 1 floppy
            2 MB = 5.0 decifucktons
            2.88 MB = 7.2 decifucktons = 2 floppies
        • Hm, who just brought out a new product with a metric fuckton of cache on it... Oh, yes.

          Intel's Core 2 Duo.


          You may be surprised to know this, but the ratio of on-chip cache to typical system ram has not changed much in the last 12 years.

          In 1995 I bought a mid-range 486 DX-2 50 computer. The chip had 4K cache on-die, and the system had 4MB ram (1/1024 ratio).

          Today, you can buy a mid-range machine with 2-4MB on-die cache and 1024MB ram (either a 1/512 or 1/256 ratio). That's a tiny improvement, and that's to
    • Re: (Score:2, Informative)

      I'm sure it's not far off... after all they've already made a DRAM hard drive [tomshardware.com]
    • Re:What's the point? (Score:4, Informative)

      by tomstdenis ( 446163 ) <tomstdenis&gmail,com> on Wednesday February 14, 2007 @11:43AM (#18013026) Homepage
      Because your spinning magnetic platter is a cheaper storage "solution" than eDRAM, flash, whatever.

      Unless you want to pay $25 per GB [again...], I'd wait until things improve.

      And it isn't like they're not working on smaller/faster memory. Two years ago a 1GB flash drive was $99 [in Canada]; now they're ~$40, and you can get a 2GB flash drive for about the price of the 1GB. I imagine this year we'll see 4GB flash drives become more of a norm, and so on.

      Most likely, ten years from now 80GB flash drives will be commonplace enough and not super expensive. But until then, spinning platters!
      • Re: (Score:3, Informative)

        by maxume ( 22995 )
        Prices too high, sizes too small:

        http://www.newegg.com/Product/Product.asp?Item=N82E16820163159 [newegg.com]
        http://www.newegg.com/Product/Product.asp?Item=N82E16820220156 [newegg.com]
        http://www.newegg.com/Product/ProductList.asp?N=2003240522+1309421175&Submit=ENE&SubCategory=522 [newegg.com]

        4GB flash for $40-$60, SD for $45, so $10-$15 per GB, right now. 1GB cost $60 about 18 months ago (they are less than $15 now); extrapolate that trend and that's 64GB for cheap ($60!) in 6 years, and 128GB+ in 8 years. That doesn't account for a slight depr
        • That's what's happening. Used to be 100KB was massive. Gates once said that 640KB was enough solid state memory and that the rest would be stored on disks. Later came 1.4MB floppies, 5MB hard drives, then 80MB, 120MB, not so long ago 500MB, and today >500GB. Magnetic drives will continue to carry the major, giga- and tera-byte data stores of the world, especially when low power consumption, ruggedness, and speed are not required. While solid state memory continues to move in from the small and fast(and e
          • by maxume ( 22995 )
            Yeah, when the 'huge premium' for a useful size solid state drive (100GB is somehow a lot more useful than 40GB) becomes $200 (or whatever, cheapish), the features start getting a lot more important than the price.

            Also, FF 2.0 thinks cheapish is a word.
      • Re:What's the point? (Score:4, Interesting)

        by joto ( 134244 ) on Wednesday February 14, 2007 @01:45PM (#18014654)

        Most likely, ten years from now 80GB flash drives will be commonplace enough and not super expensive. But until then, spinning platters!

        I expect to see 80GB flash drives long before 10 years. Assuming a growth rate of doubled capacity every 18 months, true enough, we'd reach about 80 GB in 10 years, but so far flash memory has grown much faster than Moore's law. Also, I assume that the amount of data our computers manipulate will continue to increase with each version of Windows/HD-DVD/whatever, so we will still need larger, slower storage media in 10 years, such as hard disks.

        In fact, the whole idea of using a (set of) rotating platter(s) with magnetic coating and radially movable read/write head(s) for storage has been so successful for so long, and continues to improve at such an astonishing rate, that I doubt it will go away any time soon. In the far future it's more difficult to predict what will happen. But even today, wheels are important, fire is our main source of (non-food) energy, primitive cutting tools are regularly used in any household, and in general, assuming things won't change is rarely wrong (we still haven't got flying cars!)
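
        As a sanity check on that growth assumption, here is a tiny C sketch; the 1GB starting point and the strict 18-month doubling are assumptions for illustration (and, as noted above, flash has actually been growing faster):

        #include <stdio.h>

        int main(void) {
            double capacity_gb = 1.0;                 /* assumed starting point: a 1GB drive today */
            for (int months = 0; months <= 126; months += 18) {
                printf("after %4.1f years: ~%.0f GB\n", months / 12.0, capacity_gb);
                capacity_gb *= 2.0;                   /* assumed doubling every 18 months */
            }
            return 0;
        }

        At that rate the 80GB mark falls just short of the ten-year point; any faster growth pulls it in correspondingly.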

      • Re: (Score:3, Interesting)

        by Björn ( 4836 )
        Flash drives are coming much quicker than that. See this article [theinquirer.net] in The Inquirer.

        "PQI, WHICH IS showing an engineering sample of a 64GB flash-based hard disk drive at Computex says the price for the expensive, but desirable, storage devices could fall below $1000 before the end of this year. "It depends on the chip price, but maybe it can get below $1000 this year" said Bob Chiu of PQI's Disk on Module sales dept. A competitor confirmed that such a precipitous fall in price was a possibility."

        Because o

      • Most likely, ten years from now 80GB flash drives will be commonplace enough and not super expensive. But until then, spinning platters!

        But by then it won't matter. Windows Leasta will be out and will require a terabyte of install space and multiple terabytes to run, just for a jazzed-up UI they licensed from someone else and vaporous claims of better security (this time) (we mean it) (really) (we promise) (but buy our Windows as long as you Live No Care, because you will need it).

        :-)

    • Re: (Score:3, Insightful)

      Why not concentrate on the bottlenecks
      In comparison to the processor, is RAM not a bottleneck? An improvement in an area that has less need is still an improvement.
    • Actually, they've had "ram drives" for some time now - Cenatek (http://www.cenatek.com/ [cenatek.com]) is one company that makes them. I am sure there are others, but this would be the coolest.
      • Hesus. $1600. What a load of horse shit. It's cheaper and probably smarter to just jam in an extra 4GB of RAM and make a ramdrive than to use a solid-state device at that cost.
      • Re: (Score:3, Interesting)

        by paeanblack ( 191171 )
        I am sure there are others, but this would be the coolest.

        They all run at similar temperatures.

        The Cenatek RocketDrive you link to is a very dated product...it's not even bootable. Here is a more practical option:
        http://www.gigabyte.com.tw/Products/Storage/Products_Overview.aspx?ProductID=2180 [gigabyte.com.tw]

        It's $115 at Newegg and holds up to 4 x 1GB of 184-pin DDR.

        4 gigs isn't much, but for certain situations, like holding a small database with heavy use, they work great. For random I/O, they are obscenely fast for the
        • They all run at similar temperatures.
          Yes, but since an old college buddy of mine runs the company, they're cooler. :P

          I actually am not involved with servers, and do little which requires fast processing - at least at home - so don't pay much attention to this market. I'm sure there are many options out there.

        • by joto ( 134244 ) on Wednesday February 14, 2007 @01:51PM (#18014734)

          For random I/O, they are obscenely fast for the price...about twice the speed of two striped Raptors with a good controller.

          Yeah, but wouldn't it be better to buy a real computer with room for more RAM, so you didn't have to use a hardware device to imitate another hardware device, so that you could use software to imitate the drivers of the other hardware device, so that you could use it as the first kind of hardware device, just with lower speed and convenience? Or in other words: wouldn't it be better to just run the database in RAM?

          • by Jeremi ( 14640 )
            Or in other words: wouldn't it be better to just run the database in RAM?


            Yes... as long as there was a way to ensure data integrity in the event of an unexpected shutdown. The one nice thing about a journalled filesystem on a persistent store is that it doesn't go away when the lights go out...

          • "Or in other words: wouldn't it be better to just run the database in RAM?"

            Plain system RAM does not operate on a battery; this device does. If you put - as the other poster suggested - a journalled filesystem on it (e.g. ZFS), then this device would not lose data even on an unexpected shutdown, and there is little or no chance of it being corrupted by the OS or another application. Unless the OS or the application messes with the filesystem, of course. It's a bit of a shame that they don't allow more than 4 GB, ECC memory, or hot swap, s
    • Re: (Score:2, Insightful)

      by Anonymous Coward

      when can I expect a faster HDD? [...] Why not concentrate on the bottlenecks
      Ah, the eternal "why not cure cancer instead?". HDDs aren't the bottleneck for MANY applications, so this DRAM news matters greatly. DRAM engineers don't have the skills to improve HDDs, so you can't just have them work on your pet peeve.
    • Because that's a different division or a different company? Only the solid-state or long-term storage people can improve that, and they are working against major limitations in mass storage. Laying down transistors is expensive, and flash memory isn't necessarily faster than hard drives at anything except maybe latency; its throughput is enough lower that the latency advantage doesn't make up for it.

      One problem is that many of the companies have nothing to do with solid state storage, so they can do nothing a
      • by Anonymous Coward
        Note that striping (RAID-0) gives no benefit at all for writes, and destroys *latency* for reads. So it's only beneficial for streaming large files (say, in video editing) -- and of course it doubles your risk of data loss (as one failing drive zaps *all* your data) so it's really only useful for a work/scratch space for your large video/audio/CAD files.

        Better to have a single 10K Rappy (or better a piece of 15K SCSI/SAS goodness -- where are 15K SATA drives already???) as a "system/apps/work cache" and the
      • by dave1g ( 680091 )
        I don't think you can really say that hard drive throughput is faster than a flash drive's. Maybe against a single chip. But it costs nothing to put the chips in parallel and access them as a bank, and you don't need to do any fancy RAID; it's just like memory banks. Put enough of them in parallel and you will beat out disks.

        Of course you can do the same for disks, but it's much more costly to have the RAID controller with the XOR engine and the typical huge cache sitting in front of it.

        Also, RAIDs run rather slowly
    • Re: (Score:3, Insightful)

      by Lazerf4rt ( 969888 )

      Why not concentrate on the bottlenecks rather than what is already one of the fastest components in any system?

      RAM speed is one of the biggest bottlenecks in your system. It's called a cache miss. When your CPU tries to access data outside its local cache, it has to wait for that cache line to come from system RAM. Your CPU currently spends a huge fraction of its execution time doing that. If IBM can provide a significantly faster type of system RAM, they can reduce that huge fraction, which would noticeably improve performance.
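
      To see that wait directly, here is a rough, self-contained C sketch; the 128 MB working set, the single-cycle shuffle, and the assumption that the random chase is mostly misses are illustrative choices, and absolute numbers vary by machine:

      #include <stdio.h>
      #include <stdlib.h>
      #include <time.h>

      #define N (32 * 1024 * 1024)   /* 32M ints = 128 MB, far larger than any on-chip cache */

      int main(void) {
          int *next = malloc((size_t)N * sizeof *next);
          if (!next) return 1;

          /* Sattolo's shuffle: builds one big random cycle, so each load in the
             chase below depends on the previous one and cannot be prefetched. */
          for (int i = 0; i < N; i++) next[i] = i;
          unsigned long long rng = 0x9E3779B97F4A7C15ULL;                     /* arbitrary seed */
          for (int i = N - 1; i > 0; i--) {
              rng = rng * 6364136223846793005ULL + 1442695040888963407ULL;    /* 64-bit LCG */
              int j = (int)((rng >> 32) % (unsigned long long)i);             /* 0 <= j < i */
              int t = next[i]; next[i] = next[j]; next[j] = t;
          }

          /* Pass 1: sequential sweep -- the hardware prefetcher hides most of the latency. */
          clock_t t0 = clock();
          long long sum = 0;
          for (int i = 0; i < N; i++) sum += next[i];
          double seq = (double)(clock() - t0) / CLOCKS_PER_SEC;

          /* Pass 2: dependent random chase -- nearly every load stalls, waiting for
             a cache line to arrive from system RAM. */
          t0 = clock();
          int p = 0;
          for (int i = 0; i < N; i++) p = next[p];
          double rnd = (double)(clock() - t0) / CLOCKS_PER_SEC;

          printf("sequential: %.2fs   random chase: %.2fs   (checksums %lld %d)\n", seq, rnd, sum, p);
          free(next);
          return 0;
      }

      Compiled with optimizations, the dependent random chase typically runs one to two orders of magnitude slower than the sequential sweep over the exact same data; that gap is the cache-miss penalty being described here.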

    • Because Joe Schmoe computer user doesn't care about the bottlenecks. He goes to the store with the impression of "Hey, Faster RAM = Faster Computer" even if there's another problem elsewhere.

      This is how big corps make money - they keep improving the stuff the know-nothing wants, and they make big bucks off minor 'improvements' that don't really help.
      • by joto ( 134244 )

        Because Joe Schmoe computer user doesn't care about the bottlenecks. He goes to the store with the impression of "Hey, Faster RAM = Faster Computer" even if there's another problem elsewhere.

        This is how big corps make money - they keep improving the stuff the know-nothing wants, and they make big bucks off minor 'improvements' that don't really help.

        Apart from the fact that...

        1. RAM speed is a major bottleneck for computer performance
        2. Even if there are other bottlenecks elsewhere, reducing one as important
    • I wanted a solid state drive back when my floppy was just 1MB. Now, they are able to give each core its own 1MB cache.
    • by dfghjk ( 711126 )
      Hard drives get bigger and faster all the time. Solid state drives become more and more viable as well.

      Hard drives aren't the bottleneck in certain applications so it's irrelevant to those.

      Finally, why not improve the system everywhere it's possible? Why blow off CPU improvements just because some apps don't benefit?
  • IBM's new motto
  • SD-RAM (Score:1, Funny)

    by Anonymous Coward
    So SRAM and DRAM are fast? That's nothing... Wait until they combine them into SDRAM!
  • To those wondering (Score:5, Insightful)

    by kestasjk ( 933987 ) * on Wednesday February 14, 2007 @11:40AM (#18012974) Homepage
    To those wondering why it would be good to have DRAM as fast as SRAM: SRAM doesn't need to be "refreshed" constantly, and is faster, but takes up many more transistors and is therefore much less dense and more expensive for the same amount of memory.

    However, with DRAM it takes quite a bit of power just to keep data in memory (because of the constant "refreshes"), which isn't the case with SRAM. So this development wouldn't take SRAM out of production for applications that require its low power usage.
    • by TheRaven64 ( 641858 ) on Wednesday February 14, 2007 @12:00PM (#18013292) Journal
      To add to this:

      Cache misses are expensive. Really expensive. There are two ways of getting around this:

      1. More hardware contexts so that you can switch to another thread instantly when a cache miss happens.
      2. More (SRAM) cache.
      The first one is better if you have highly parallel software, but isn't so good for single-threaded applications. The second is the more common approach. While SRAM uses six transistors per bit, DRAM uses one transistor and one capacitor. This could give something around three times the density, allowing CPU manufacturers to triple the amount of cache without increasing die size. Bigger cache means fewer cache misses, which means less time spent doing nothing.

      For reference, a cache miss typically costs something around 1-200 cycles.
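
      For a feel of what roughly tripling the cache buys, here is a tiny C model; the hit cost, the miss penalty, the 3% base miss rate, and the square-root scaling rule are stock textbook assumptions, not numbers from IBM's announcement:

      #include <math.h>
      #include <stdio.h>

      int main(void) {
          const double hit_cycles  = 14.0;    /* assumed cost of a last-level cache hit */
          const double miss_cycles = 200.0;   /* assumed cost of a miss to system RAM   */
          const double base_miss   = 0.03;    /* assumed miss rate with a 1x cache      */
          const double sizes[]     = { 1.0, 3.0 };   /* 1x SRAM cache vs ~3x eDRAM cache */

          for (int i = 0; i < 2; i++) {
              /* textbook rule of thumb: miss rate scales roughly as 1/sqrt(cache size) */
              double miss = base_miss / sqrt(sizes[i]);
              double avg  = hit_cycles + miss * miss_cycles;   /* average memory access time */
              printf("%.0fx cache: miss rate %.2f%% -> %.1f cycles per access on average\n",
                     sizes[i], 100.0 * miss, avg);
          }
          return 0;
      }

      Under those assumptions the average cost per access drops from about 20 cycles to about 17.5; it looks small, but it applies to every memory access the program makes.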

      • Great for L3 caches (Score:5, Interesting)

        by flaming-opus ( 8186 ) on Wednesday February 14, 2007 @01:20PM (#18014340)
        There are two sources of latency in a cache: the first is the performance of the actual data cells, and the second is the speed of doing a lookup in the cache. The larger the cache and the higher the degree of set associativity, the longer the lookup takes. Thus you're unlikely to see this eDRAM used for L1 caches, and probably not for L2 caches either, as more cache would slow them down even if the cells are just as fast as SRAM. The sweet spot will probably be L3 caches, which are already slow by cache standards but a whole lot faster than system memory. Since L3 caches are large, the cost savings from switching to eDRAM would also be largest there.

        As for power concerns, DRAM's is higher than SRAM's, but a larger L3 cache may reduce the traffic through the memory controller and out to the DIMMs, which will probably more than make up for any increase in power density in the cache.
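
        To make the lookup-time point concrete, here is a minimal C sketch of how a set-associative lookup splits an address; the 8 MB, 64-byte-line, 16-way geometry is an arbitrary example, not IBM's design:

        #include <stdint.h>
        #include <stdio.h>

        int main(void) {
            const uint64_t cache_bytes = 8 * 1024 * 1024;   /* assumed 8 MB L3            */
            const uint64_t line_bytes  = 64;                /* 64-byte cache lines        */
            const uint64_t ways        = 16;                /* 16-way set associative     */
            const uint64_t sets        = cache_bytes / (line_bytes * ways);

            const uint64_t addr = 0x7f3a12345678ULL;        /* an arbitrary example address */
            const uint64_t set  = (addr / line_bytes) % sets;
            const uint64_t tag  = (addr / line_bytes) / sets;

            printf("%llu sets; address 0x%llx -> set %llu, tag 0x%llx\n",
                   (unsigned long long)sets, (unsigned long long)addr,
                   (unsigned long long)set, (unsigned long long)tag);
            printf("each lookup reads and compares %llu stored tags in that set\n",
                   (unsigned long long)ways);
            return 0;
        }

        Grow the cache or the associativity and there are more index bits to decode and more tags to read and compare on every access, which is why the large, highly associative levels are the slow ones.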
        • Comment removed based on user account deletion
          • by Tmack ( 593755 )

            Good lord! I've always wondered what happened to those COAST [wikipedia.org] (Cache On A STick) modules back in the Pentium 1 days. Brings back memories...

            Nah, CoaSt modules were the L2 cache, 'cause back then the CPU only had on-chip L1. PPro was the first to introduce on-die L2. P2 took a small step back by taking L2 back off the die, but leaving it in the CPU cartridge. Sun platforms and IIRC Alpha (and probably a few others) used L3, but x86 did not. AMD just recently released info on their next CPU, which includes plans to implement an L3 that all CPUs can share. Makes sense when you think about it (L1 per core, L2 per CPU, L3 for all!). As for L2, most CPUs hav

            • by Tmack ( 593755 )
              btw.... I have a few CoaSt modules and Pentium CPUs laying around in anti-statics if anyone is interested ;)

              Tm

            • PPro was the first to introduce on-die L2.
              Cache on the PPro was not on-die; it was on a separate die in the same package. This was a really bad idea, because you couldn't test the cache or the core until you had put them both in the package. The P2 put them in separate packages on the same daughterboard, allowing them to throw away cores and cache chips if either didn't work.

              If you get hold of a Pentium Pro, you can actually see both dies in the package.

        • I agree with what you have written but just wanted to add a point.

          The power consumption of SRAM is actually increasing to the point where it doesn't offer any real benefits over DRAM. The problem arises from smaller transistors with greater leakage current. Older SRAM could sit there and draw almost no power - but no longer. Because SRAM requires more transistors than DRAM, the leakage current essentially offsets the power used during the refresh cycle on DRAM.

          Now I'm not claiming that DRAM currently use
          • by julesh ( 229690 )
            The power consumption of SRAM is actually increasing to the point where it doesn't offer any real benefits over DRAM. The problem arises from smaller transistors with greater leakage current.

            Note that both IBM and Intel have recently announced new processes that provide reduced leakage currents.
    • Since this is used for cache memory, it may be possible to eliminate the refresh cycles. A cache row can always be re-fetched from main memory. All you need is some reliable method to tell whether it has expired. Any cache row that has gone unaccessed long enough to expire is, pretty much by definition, not very critical to performance anyway.
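
      A minimal C sketch of that idea (purely an illustration of the proposal above, not how IBM's eDRAM works; the retention window and the structures are made up):

      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>

      #define RETENTION_TICKS 64000ULL   /* assumed worst-case eDRAM retention, in ticks */

      struct cache_line {
          bool     valid;
          uint64_t tag;
          uint64_t fill_tick;            /* when the line was last filled from memory */
      };

      /* A line is usable only if present AND younger than the retention window. */
      static bool line_usable(const struct cache_line *l, uint64_t tag, uint64_t now) {
          return l->valid && l->tag == tag && (now - l->fill_tick) < RETENTION_TICKS;
      }

      /* An expired (or missing) line is simply re-fetched and re-stamped; no
         background refresh pass is ever needed. */
      static void line_fill(struct cache_line *l, uint64_t tag, uint64_t now) {
          l->valid = true;
          l->tag = tag;
          l->fill_tick = now;
      }

      int main(void) {
          struct cache_line l = { false, 0, 0 };
          line_fill(&l, 0x42, 1000);
          printf("tick 2000:  usable = %d\n", line_usable(&l, 0x42, 2000));    /* 1: still fresh */
          printf("tick 90000: usable = %d\n", line_usable(&l, 0x42, 90000));   /* 0: expired     */
          return 0;
      }

      A real controller would still have to write back or refresh dirty lines before they expire; clean lines can simply be dropped and re-fetched.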
    • SRAM is MUCH more power hungry than DRAM. In SRAM, there are at least 6 transistors that must be powered continually to retain the memory, while in DRAM there is typically a single transistor and a capacitor which is charged up to retain the memory. DRAM only needs power occasionally, to refresh the capacitor. Size and power both favor DRAM. Speed has favored SRAM.
      • Hmm, you might want to read up on SRAM and DRAM, because "SRAM is MUCH more power hungry than DRAM" is wrong. DRAM needs power constantly to refresh its capacitors; SRAM only needs a voltage to be maintained and doesn't require any current flow while idle.
        • Here is chapter and verse from a solid reference, not just my word. "So, all in all, designing with and using DRAM is more complex than SRAM. However, their much larger capacities and much lower power consumption make DRAM the memory of choice in systems where the most important design considerations are the keeping down of size, cost, and power." (from the text "Digital Design, Principles and Applications" by Ronald J. Tocci) If it just 'needs a voltage to be maintained' across any practical circuit, the
          • The current associated with keeping a voltage constant is just leakage current, and it's much lower than what the constant refreshes need. The reason DRAM needs to be refreshed is that every capacitor leaks charge constantly, at a much higher rate than transistor leakage.

            If each memory cell were a bucket and electric charge were water, DRAM would be a bunch of buckets with holes in them, with someone constantly running past them, checking whether they're above the halfway mark, and refilling them if they're not.
  • Doesn't the Xbox 360 already use eDRAM?
    • Re: (Score:1, Informative)

      by Anonymous Coward

      eDRAM is used in many game consoles, including Sony PS2 and PlayStation Portable, Nintendo Wii and GameCube, and Microsoft Xbox 360

      Source: eDRAM [wikipedia.org]

      Most of those are IBM processors, and one is MIPS. The news is not the development of eDRAM, but that IBM seems eager to replace SRAM with it in their processors.

  • eDRAM is quite old (Score:4, Interesting)

    by Rolman ( 120909 ) on Wednesday February 14, 2007 @11:49AM (#18013134)
    I don't get why this is news. Embedded-DRAM has been in heavy usage for many years now.

    Both the title and the summary are quite misleading, since eDRAM is on-chip, and that of course is much faster than external off-chip memory, be it SRAM, DRAM, or whatever.

    Some big examples? PS2, Nintendo GameCube, Wii, Xbox 360. All these consoles use eDRAM for their GPUs' on-chip framebuffers to enhance their performance, and that goes back to at least the year 2000, when the PS2 came out.

    Some will be quick to say "no, the Nintendo consoles use 1T-SRAM, not DRAM". Yeah, right, but even 1T-SRAM (despite its name) is a form of embedded-DRAM.
    • by stevesliva ( 648202 ) on Wednesday February 14, 2007 @12:14PM (#18013498) Journal

      Some big examples? PS2, Nintendo GameCube, Wii, Xbox 360. All these consoles use eDRAM for their GPUs' on-chip framebuffers to enhance their performance, and that goes back to at least the year 2000, when the PS2 came out.

      Some will be quick to say "no, the Nintendo consoles use 1T-SRAM, not DRAM". Yeah, right, but even 1T-SRAM (despite its name) is a form of embedded-DRAM.
      First, it is news because IBM is announcing that the performance is on par with SRAM, and because they have integrated their deep-trench eDRAM process with the SOI process used for their Power CPUs. The result? 3x the cache on the die. IBM has offered embedded DRAM with bulk technologies for a few generations, but this is the first real SOI announcement.

      Second, the consoles that have issued PR about using "embedded DRAM" with their GPUs don't actually embed DRAM on the GPU die. The "embedded DRAM" is a process offered by NEC that is separate from the Sony and TSMC processes used to fab the GPUs that supposedly have "embedded DRAM." I am pretty sure that all of the consoles you mention include a separate custom DRAM chip in the same package as the GPU. I am certain this is the case for the Xbox 360 [arstechnica.com]. I am unsure about Sony. That DRAM process substantially modifies the back-end wiring to make room for a MIM cap between the FETs and the first level of metal.
    • There is a significant difference between DRAM used in a framebuffer and DRAM used in a cache. In a framebuffer the data is only needed for the time span of one frame, and refresh is not necessary. As long as it's truly used as a framebuffer, nobody cares if it loses a bit occasionally, it's just a blip on the screen. In a cache, errors are unacceptable and lifetime in the cache is somewhat uncontrolled. Accordingly, the data in a DRAM cache has to be refreshed.

      With small devices leakage is a problem, an

  • by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Wednesday February 14, 2007 @11:54AM (#18013194) Homepage Journal

    If you could stick a crapload of this on the Cell, then those SPEs could have more than 256kB memory each, and utilizing them would become dramatically easier.

    I'd guess the next revision of Cell will have a shitload of eDRAM on it. And it will either have more SPEs, or a new bus that allows multiple Cells to be used. The latter would be more expensive to implement, but probably result in higher yields than substantially growing the Cell to support more coprocessors - the yields are already poor if they just turn all the SPEs on, or else why would they be disabling one?

    • You don't need a new bus to use more than one Cell. The Cell blades IBM sells have two Cells on board already and you can access all 16 SPEs. The blades can also cluster up for more resources. You just need code to manage it all.
      • Sounds good to me. I had heard that they had planned to make them network up, but I thought they dropped that by the end. I probably only thought that because no one is using more than one cell in a single box yet. Or at least, I didn't know anyone was :)
    • by DrYak ( 748999 )

      or a new bus that allows multiple Cells to be used. The latter would be more expensive to implement,

      Just make them support HyperTransport.
      And make them into a BTX board. Or even better: make them Socket F/F+ and Socket AM2/2+/3 compatible, and people will be able to drop them in as vector accelerators in their existing multi-socket motherboards.

      (... hum, technically, that'll require making both the front-side bus HyperTransport *AND* the memory controller DDR (2/3) instead of the current Rambus controller).

        Or even better: make them Socket F/F+ and Socket AM2/2+/3 compatible, and people will be able to drop them in as vector accelerators in their existing multi-socket motherboards.

        Not just that, but it would be a bitchin' way to write a PS3 emulator :D

  • by Wesley Felter ( 138342 ) <wesley@felter.org> on Wednesday February 14, 2007 @11:58AM (#18013264) Homepage
    EE Times article. [eetimes.com] Today SRAM is used for processor caches, but new multicore chips need massive (i.e. expensive) cache. Because eDRAM is much denser than SRAM, it allows chip designers to fit much more cache in the same size chip, increasing overall performance. IBM and AMD use silicon-on-insulator (SOI) technology, while the rest of the industry uses bulk CMOS; eDRAM for bulk has been available for a while (it's used in Xbox 360 and BlueGene/L for example), but now IBM has developed SOI eDRAM that can be used in IBM's future processors (and maybe AMD's).
    • by Intron ( 870560 )
      From the EE Times article:

      "The new design uses a three-transistor micro-sense amp that lets voltage current directly drive transistor gates."

      voltage current?
  • DRAM will also continue to be used off the chip.

    Oh, good! They had me worried that I could no longer keep my DRAM in the water cooler. And how could I get through my day without a bit of chipless DRAM floating in midair above my keyboard?

    Goodness. What next? They'll try to take away my off-chip flatware?
  • Well, it IS pretty late, but I read the headline as "DRM faster than SPAM". Quite a disappointment, really...
  • Seriously, what's with the price of RAM?

    Sure, we'll get 3THz RAM, and it will be $150 for a 1GB stick. That's not what I want, nor what I expect. What I expect is that I get a 2GB stick for what was the price of a 1GB stick 12-18 months ago. By now 4GB sticks should be $75.

    In the last couple of years prices have hardly dropped at all, and the new stuff is no bigger than before. That doesn't happen in IT unless someone isn't playing fair. So who is it, and how do we get them to stop?

    • Demand for RAM slacked off in recent years because of the delays in releasing Vista (seriously). Now that Vista is out, we can expect mainstream PCs will want 2GB of RAM, which should drop the price of 1GB DIMMs.
  • by thue ( 121682 ) on Wednesday February 14, 2007 @02:10PM (#18014916) Homepage
    I am in no way an expert, but I read about other upcoming types of RAM which also sound interesting:

    Z-RAM. One cell is a single transistor. Faster than SRAM, which uses 6 transistors per cell. http://en.wikipedia.org/wiki/ZRAM [wikipedia.org]

    TTRAM. One cell contains 2 transistors. As fast as SRAM, according to Wikipedia. http://en.wikipedia.org/wiki/TTRAM [wikipedia.org]
    • I think the Wikipedia article is a bit short on details for Z-RAM, so I'll provide an additional link [innovativesilicon.com]. As you can see, it leads to the company behind Z-RAM, so they may look a bit too much on the positive side. It sounds very promising, I must say. Only for SOI though, so Intel is left out a bit here, but it's very interesting for IBM or AMD. I wouldn't be surprised to see eDRAM or Z-RAM in chips pretty soon, since they don't seem to require too much rework.
  • DRAM, that's fast!
  • I'll wait for the development of SPeDRAM
  • ARAM is of course still fastest. However it's good to see DRAM get some distance from the horribly slow FRAM and GRAM.
  • I just read about an investigation into price-fixing among manufacturers of flash memory devices. It includes any USB memory stick or flash card businesses or consumers may have purchased.

    Apparently, if you purchased a memory stick or flash memory device, you could be included if the investigation leads to a class-action suit.

    The press release said go to www.hbsslaw.com for more information.
