Data Storage Hardware

New Device Puts SSD In a DIMM Slot

Posted by timothy
from the ram-disk-returns dept.
Vigile points out a new take on SSD from Viking Modular Solutions. The SATADIMM puts an SSD in the form factor of a memory module. "The unit itself actually uses a SandForce SSD controller and draws its power from the DIMM socket directly but still connects to the computer through a SATA connection — nothing fancy like using the memory bus, etc. Performance is actually identical to other SandForce-based SSDs, though the benefits for 1U servers and motherboards with dozens of DIMM slots are interesting to say the least. Priced outside the realm of average consumers, the SATADIMM will likely stay put in the enterprise market, but it is an indicator that companies are realizing SSDs don't need to be in traditional HDD form factors."
  • Disappointed (Score:3, Interesting)

    by wjh31 (1372867) on Thursday November 18, 2010 @02:46PM (#34271614) Homepage
    When I saw the headline, I was hoping that this would be a device that allowed an SSD to be connected to a RAM slot and used as RAM, rather than an SSD that takes up a RAM slot.

    Additionally, if they can squeeze 256GB into a DIMM form factor, why are even 4GB sticks of RAM still expensive?
  • by bobjr94 (1120555) on Thursday November 18, 2010 @02:46PM (#34271632) Homepage
    I remember back before computers had onboard drive controllers and there was no such thing as a standard drive interface, they sold ISA hard drive cards, it was a drive & controller all in one. I dont see to much advantage running a drive on a ram slot, you can just dedicate a drive(s) to you work, swap or temp files. I typically do that when editing video, 1 drive holds the raw videos, one drive is a temp drive and one is what the final video files are outputted to when they are rendered. Much faster then using 1 drive or even a single raid to read/write large amounts of data at the same time.
  • Speedy servers (Score:5, Interesting)

    by $RANDOMLUSER (804576) on Thursday November 18, 2010 @02:48PM (#34271668)
    Certainly putting things like swap space and database journal files on SSD would speed things up wonderfully, but how about an OS hack where an SSD drive is a sort of L3 cache between core and traditional disk for dirty disk buffers? Also, I'm wondering about the power requirements between SSD and DIMM RAM.
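    The "SSD as an L3 cache between core and disk" idea above can be sketched in a few lines. This is purely illustrative (the class and method names are made up, and a real implementation would live in the kernel's buffer cache, not userland Python):

    ```python
    # Toy two-tier read cache: a small in-"RAM" LRU backed by a larger,
    # slower "SSD" LRU, with the spinning disk as the backing store.
    # All names here are hypothetical, not from any real OS.
    from collections import OrderedDict

    class TieredCache:
        def __init__(self, ram_slots, ssd_slots):
            self.ram = OrderedDict()   # fast, tiny tier (LRU order)
            self.ssd = OrderedDict()   # slower, bigger tier (LRU order)
            self.ram_slots = ram_slots
            self.ssd_slots = ssd_slots

        def read(self, block, disk):
            if block in self.ram:          # RAM hit
                self.ram.move_to_end(block)
                return self.ram[block]
            if block in self.ssd:          # SSD hit: promote to RAM
                data = self.ssd.pop(block)
            else:                          # miss: go all the way to disk
                data = disk[block]
            self._put_ram(block, data)
            return data

        def _put_ram(self, block, data):
            self.ram[block] = data
            if len(self.ram) > self.ram_slots:
                old, olddata = self.ram.popitem(last=False)
                self.ssd[old] = olddata    # demote evicted block to SSD
                if len(self.ssd) > self.ssd_slots:
                    self.ssd.popitem(last=False)  # fall off the SSD tier
    ```

    The point of the middle tier is exactly what the parent describes: blocks evicted from RAM land on flash instead of vanishing, so a later read is an SSD hit rather than a seek.
    
    
    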
  • by Anonymous Coward on Thursday November 18, 2010 @02:57PM (#34271816)

    Hell yeah, this could save a megaton of space. It seems most of the negative comments are from people who have never seriously used racks

  • Re:Speedy servers (Score:5, Interesting)

    by m.dillon (147925) on Thursday November 18, 2010 @03:01PM (#34271868) Homepage
    Not sure I would call it an OS hack but DragonFly has precisely that, called swapcache. Swapcache Manual Page [dragonflybsd.org]. It isn't so much making standard paging work better (systems rarely have to 'page' these days) but instead its ability to cache clean data and meta-data from the much larger terrabyte+ hard drive that makes the difference. Anyone who has more than a few hundred thousand files to contend with will know what I mean. -Matt
  • Re:Disappointed (Score:4, Interesting)

    by wierd_w (1375923) on Thursday November 18, 2010 @03:01PM (#34271872)

    While the write speed would be painful compared to real DRAM, the read speed would be comparable.

    For large static arrays, and for custom data applications, it could have uses in the form the GP suggests, though it WOULD be a nasty throwback to the days of user ROMs...

    However, I could definitely see the potential in having such a thing mapped directly to system memory, then loading a special block device driver to allocate all that "memory", so that memory IO could be used for data storage. It would eliminate the SATA controller's IO bottleneck, but would impose a slight CPU penalty. For systems with multiple CPUs, that wouldn't be much of a problem. You would need to allocate that memory fast though, to prevent the OS from trying to use it like RAM.
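    The "block device over a reserved memory region" idea reads like this in miniature. A toy sketch only, with made-up names; a real version would be a kernel driver carving blocks out of the mapped flash region rather than a Python bytearray:

    ```python
    # Toy block device backed by a plain in-memory buffer, illustrating
    # the idea of exposing a memory-mapped region as block storage.
    # Hypothetical names; a real implementation lives in the kernel.
    BLOCK = 512  # classic sector size

    class MemBlockDev:
        def __init__(self, num_blocks):
            # stands in for the "allocated" memory-mapped flash region
            self.buf = bytearray(num_blocks * BLOCK)

        def write_block(self, n, data):
            if len(data) != BLOCK:
                raise ValueError("writes must be one full block")
            self.buf[n * BLOCK:(n + 1) * BLOCK] = data

        def read_block(self, n):
            return bytes(self.buf[n * BLOCK:(n + 1) * BLOCK])
    ```

    All IO is a memcpy into the mapped region, which is exactly why it would sidestep the SATA controller at the cost of some CPU time.
    
    
    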

  • Mini Options! (Score:4, Interesting)

    by Falc0n (618777) <japerry.jademicrosystems@com> on Thursday November 18, 2010 @03:06PM (#34271944) Homepage
    Actually I find this potentially quite cool. Not as much for the power source, but the size. Since most mATX boards don't come with mini PCIe slots, if you want to use an SSD drive you need a 2.5" drive or a PCIe card with a mini-slot on it. Both are much larger than a DIMM option.

    And at 50GB, this would be very useful in a media box streaming from a server. Now if only the price would come down.
  • by ElectricTurtle (1171201) on Thursday November 18, 2010 @03:27PM (#34272268)
    They should have model variants with connectors staggered relative to the DIMM length. Have one with the connector in the first quarter, another with the connector in the second quarter, etc. So you could have a bank of four with no cable/connector overlap.
  • by phyrexianshaw.ca (1265320) on Thursday November 18, 2010 @03:38PM (#34272456) Homepage
    two words for all of you:

    custom cables.

    seriously: SATA cables are cheap as hell to build, and a fan-out cable of custom length running to the controller, either on board or in the single 16x/4x slot, would kind of make sense.
  • by h4rr4r (612664) on Thursday November 18, 2010 @03:51PM (#34272652)

    In a 1U server there is no such space. The DIMM design lets you put it in a nice free space and not interrupt airflow too much.

  • by phyrexianshaw.ca (1265320) on Thursday November 18, 2010 @03:59PM (#34272786) Homepage
    the point is that instead of purchasing RAM at ~$25/GB you can buy flash at ~$10/GB and still stay dense.

    I'm sure where you are there's room for things, but in much of the world this is not the case. Try suggesting 4U storage cases to a customer wanting to host a 20TB database in Egypt. You may only get 4-6U in each building to work with (with little cooling capacity) and $25K/building in hardware budget.

    There are cases for everything. I can think of a pile of customers of mine that only filled their VMware hosts with 64GB (of the 512GB max) of RAM, leaving twenty-eight sockets free in each of the three hosts for something! That's 33.6TB of space right there! (Though personally I'd PREFER to stick RAM in there, that would only be another 1.344TB of RAM.)
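    For what it's worth, the parent's arithmetic works out if one assumes 400GB SATADIMM modules and 16GB DIMMs (the comment doesn't state either size, so both are guesses):

    ```python
    # Back-of-envelope check of the parent's numbers. 400GB/module and
    # 16GB/DIMM are assumptions; the comment doesn't state module sizes.
    free_slots = 28 * 3                    # 28 free sockets x 3 hosts
    flash_tb = free_slots * 400 / 1000     # flash option
    ram_tb = free_slots * 16 / 1000        # RAM option instead
    print(flash_tb, "TB flash vs", ram_tb, "TB RAM")
    ```
    
    
    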
