Data Storage Hardware

New Device Puts SSD In a DIMM Slot

Vigile points out a new take on SSD from Viking Modular Solutions. The SATADIMM puts an SSD in the form factor of a memory module. "The unit itself actually uses a SandForce SSD controller and draws its power from the DIMM socket directly but still connects to the computer through a SATA connection — nothing fancy like using the memory bus, etc. Performance is actually identical to other SandForce-based SSDs, though the benefits for 1U servers and motherboards with dozens of DIMM slots are interesting to say the least. Likely priced outside the realm for average consumers, the SATADIMM will probably stay put in the enterprise market, but it represents an indicator that companies are realizing SSDs don't need to be in traditional HDD form factors."
This discussion has been archived. No new comments can be posted.


  • by pwnies ( 1034518 ) <j@jjcm.org> on Thursday November 18, 2010 @01:41PM (#34271524) Homepage Journal
    Why? If it's only drawing power from the DIMM slot, what benefit does that serve? Sure, in a 1U rack it *might* save a trivial amount of space. I just don't see a market for it.
    • by h4rr4r ( 612664 )

      Add drives to machines that lack enough hard drive slots but have extra DIMM slots.

      • by sjames ( 1099 )

        Yes, but if it can be that small, just make it about that size but accepting power like a regular drive. Then it can be tucked away anywhere and the cable won't interfere with airflow.

        • by h4rr4r ( 612664 ) on Thursday November 18, 2010 @02:51PM (#34272652)

          In a 1U server there is no such space. The DIMM design lets you put it in a nice free space and not interrupt airflow too much.

          • by sjames ( 1099 )

            I work w/ 1U servers all the time, and there certainly is such space. In the long ones, there's room behind the drive bays, and in a short one, tuck it into the space between the PCI(e) card (if any) and the MB with a bit of double sticky tape. On older 1Us, put it where the floppy drive used to go.

            • I use Asus RS120-E5 series 1U boxes. Since they're all headless web servers and everything I need is already onboard, there is a considerable amount of unused space above the PCI slots, and risers are already provided for power. These boxes only have 4 memory slots though, so in my application, a device built on a PCI card form factor would be a lot more useful.
    • by Monkeedude1212 ( 1560403 ) on Thursday November 18, 2010 @01:46PM (#34271608) Journal

      Sure, in a 1U rack it *might* save a trivial amount of space. I just don't see a market for it.

      If there's anything I've learned from calculus - it's that a whole lot of trivial values can add up to something significant.

      • If there's anything I've learned from calculus - it's that a whole lot of trivial values can add up to something significant.

        That's a good summary.

      • Re: (Score:3, Interesting)

        by Anonymous Coward

        Hell yeah, this could save a megaton of space. It seems most of the negative comments are from people who have never seriously used racks.

        • by tagno25 ( 1518033 ) on Thursday November 18, 2010 @02:03PM (#34271908)

          I have, and this would have a hard time fitting in a 1U case. The data cable comes out the top, but many 1U cases have the RAM sticks at a 45-degree angle because they would be too tall. It would be OK in a 2U or larger, used as the boot disk.

          • by MBCook ( 132727 )

            That was my thought as well. In the article, they seem to have a 90-degree adapter on the SATA cable to plug into the DIMM. My immediate reaction (besides "that's kinda neat") was that RAM is stacked, so if you put 4 of these in a bank of RAM, the 2nd-4th sticks' SATA cables would hit the cable from the 1st. You'd need cables that connect at 90 degrees in one way and 45 in another.

            If you have empty RAM slots and you want to add one or two, it's not that bad. The idea of using banks of it to put terabytes in a 1U c

            • by ElectricTurtle ( 1171201 ) on Thursday November 18, 2010 @02:27PM (#34272268)
              They should have model variants with connectors staggered relative to the DIMM length. Have one with the connector in the first quarter, another with the connector in the second quarter, etc. So you could have a bank of four with no cable/connector overlap.
              • by phyrexianshaw.ca ( 1265320 ) on Thursday November 18, 2010 @02:38PM (#34272456) Homepage
                two words for all of you:

                custom cables.

                seriously: SATA cables are cheap as hell to build, and doing a fan-out cable of a custom length to match up to the controller, either onboard or in the single 16x/4x slot, would kind of make sense.
                • And also custom 1U racks filled with powered memory slots for such drives...

                  But I seriously think the point of this whole exercise is that with SSD drives we don't have to be tied to any single layout or size... they could be made to go anywhere. They could make them into stackable cubes like Legos (with sufficient cooling, of course).

                  • omfg. Lego storage drives would be awesome.

                    who wants to get on that? I'm sure we could find somebody to sell them to.
                    • The Mindstorms brick is very hackable and comes with an ARM7 processor. I agree there would be a market for aftermarket add-ons for more advanced robotics.
                • 3 letters for you: SAN
                  If you can't fit your storage into the case you ordered the wrong server or need dedicated storage.
                  Use the right tool for the right job. Memory slots are for memory. Servers have extra memory slots because they often need more.
                  *gasp* what a concept.
              • or just move the connector to the side.

    • by pushing-robot ( 1037830 ) on Thursday November 18, 2010 @01:50PM (#34271716)

      I guess it would be a quick way to add storage to a server that has a bunch of unused memory sockets. And the design uses off-the-shelf components which is always nice.

      But there was getting to be a need for a proper SSD package, as sticking them inside HDD housings was both limiting and an inefficient use of space. Viking's solution probably won't take off, though, since Apple/PhotoFast/Toshiba just stole their thunder. [arstechnica.com]

    • by alen ( 225700 )

      I've read that putting tempdb in MS SQL Server (and whatever the Oracle and DB2 equivalents are) on SSD is a huge performance boost for queries that rely on it, things like sorts and joins.

      You can easily have multi-terabyte databases on 1U/2U servers these days and, with 16GB DIMMs, enough memory in a few slots for them. But if you have idiots running select queries for hundreds of millions of rows at once, then this will be a big help. I've seen queries like this run for days.
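
      A minimal sketch of the sort of move described above, in Python for illustration; the DSN name and the S:\ssd mount point are hypothetical, and tempdb's default logical file names (tempdev, templog) are assumed. tempdb is rebuilt on every startup, so the new locations only take effect after the next service restart.

        # Sketch only: relocate SQL Server's tempdb onto an SSD volume (hypothetical paths).
        import pyodbc

        conn = pyodbc.connect("DSN=proddb;Trusted_Connection=yes", autocommit=True)
        cur = conn.cursor()
        cur.execute(r"ALTER DATABASE tempdb MODIFY FILE "
                    r"(NAME = tempdev, FILENAME = 'S:\ssd\tempdb.mdf');")
        cur.execute(r"ALTER DATABASE tempdb MODIFY FILE "
                    r"(NAME = templog, FILENAME = 'S:\ssd\templog.ldf');")
        conn.close()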

      • If that is true, wouldn't it be better to populate the DIMM slots with RAM and use a ramdisk instead of SSD for this purpose?

        • Re: (Score:3, Insightful)

          Sure, but you know what the price of RAM vs. the price of flash is?

          16GB DIMMs run me about $900 each, whereas I can get 64GB X25-E's for $700.
          And tit for tat, the performance won't be THAT bad by comparison.
          At ~$55/GB for RAM, or ~$10/GB for flash, at 1000GB quantities... that's a pretty easy call to make, personally. :P
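
          A quick back-of-the-envelope check of those figures (prices as quoted above, circa 2010, not current):

            # Cost-per-GB comparison using the prices quoted in the parent comment.
            ram_price, ram_gb = 900.0, 16    # 16GB DIMM at ~$900
            ssd_price, ssd_gb = 700.0, 64    # 64GB Intel X25-E at ~$700

            ram_per_gb = ram_price / ram_gb  # ~$56/GB
            ssd_per_gb = ssd_price / ssd_gb  # ~$11/GB
            print(f"RAM:   ${ram_per_gb:.0f}/GB, 1000GB ~ ${ram_per_gb * 1000:,.0f}")
            print(f"Flash: ${ssd_per_gb:.0f}/GB, 1000GB ~ ${ssd_per_gb * 1000:,.0f}")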
    • OK, let's assume 1U servers come in two basic flavors: the fully integrated products from Dell, HP, IBM, and the like, which rarely have any standard power connectors, let alone internal SATA ports; and the custom-built ones, which normally have free standard power connectors and free SATA ports. In the first case there is nothing to plug it into unless you add a SATA RAID card, at which point why not just get the power from the PCI-E slot? Custom servers don't need to draw power from a DIMM slot. In either case

      • by phyrexianshaw.ca ( 1265320 ) on Thursday November 18, 2010 @02:59PM (#34272786) Homepage
        The point is that instead of purchasing RAM at ~$25/GB, you can buy flash at ~$10/GB and still stay dense.

        I'm sure where you are there's room for things, but in much of the world this is not the case. Try suggesting 4U storage cases to a customer wanting to host a 20TB database in Egypt. You may only get 4-6U in each building to work with (with little cooling capacity) and $25K/building in hardware budget.

        There are cases for everything. I can think of a pile of customers of mine that only filled their VMware hosts with 64GB (of the 512GB max) of RAM, leaving twenty-eight sockets free in each of the three hosts for something! That's 33.6TB of space right there! (Though personally I'd PREFER to stick RAM in there; that would only be another 1.344TB of RAM.)
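
        For what it's worth, the arithmetic above works out as follows, assuming 400GB SATADIMM modules and 16GB DIMMs, the sizes that reproduce the quoted 33.6TB and 1.344TB totals (the module size is an assumption, not a confirmed SKU):

          # Reconstructing the parent's numbers: three hosts, 28 free DIMM sockets each.
          hosts, free_sockets = 3, 28
          total_sockets = hosts * free_sockets                      # 84 sockets

          ssd_gb, ram_gb = 400, 16                                  # assumed module sizes
          print(f"As SSD: {total_sockets * ssd_gb / 1000} TB")      # 33.6 TB
          print(f"As RAM: {total_sockets * ram_gb / 1000} TB")      # 1.344 TB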
      • by amorsen ( 7485 )

        HP's 1U servers tend to have a couple of SATA ports left over, especially if you forgo the optical drive (and with PXE or iLO, you don't need it). The actual hard drives tend to run from a SAS RAID controller, which often takes up a valuable PCI-E slot.

    • 1U servers very often lack space for enough HDDs; only 3x3½" can ultimately be had. I haven't seen 4x2½" cases either.

    • by amorsen ( 7485 )

      Blade servers. They usually have 2 HDD slots at best. The challenge is that they tend to be low on SAS sockets, so you'd need a very small SAS multiplexer as well.

      • by drsmithy ( 35869 )

        Blade servers. They usually have 2 HDD slots at best. The challenge is that they tend to be low on SAS sockets, so you'd need a very small SAS multiplexer as well.

        If you're trying to put a lot of local storage into a blade server, You're Doing It Wrong.

  • So you don't have to run a molex or other power connector to the SSD, it's easier to put in, I suppose.

    I wonder if there are significant gains to be had by inserting these in place of existing RAM?

  • If you're using a DIMM slot for power, and SATA for data transfer, why not use the power supply for power instead of losing a memory slot?

    • Are you thinking of a desktop or a server environment? Because I have only ever seen ONE server ever have every single one of its memory slots filled with the max-size sticks of the time.

      Oftentimes, it's trivial to upgrade RAM to get a spare slot.

      It's not as trivial to have to unplug absolutely everything because you switched out the power supply.

    • The power still comes from the power supply... where else would it come from? I guess it'd be useful if you have memory slots you're not using, but no extra drive bays.
      • The power still comes from the power supply... where else would it come from? I guess it'd be useful if you have memory slots you're not using, but no extra drive bays.

        The distinction the GP was making was the power -- yes, from the power supply -- delivered through the pins of the DIMM slot rather than the cable connected directly to the PSU. And I'd have to agree with both of you in asking what the point of this is.

    • by Intron ( 870560 )

      If you're using a DIMM slot for power, and SATA for data transfer, why not use the power supply for power instead of losing a memory slot?

      Power from a cable has to be regulated to be clean enough to run the flash drive. Motherboard power is already clean and the correct voltage. This saves power regulation and an unneeded drive housing.

  • Disappointed (Score:3, Interesting)

    by wjh31 ( 1372867 ) on Thursday November 18, 2010 @01:46PM (#34271614) Homepage
    When I saw the headline, I was hoping that this would be a device that allowed an SSD to be connected to a RAM slot and used as RAM, rather than an SSD that takes up a RAM slot.

    Additionally, if they can squeeze 256GB into a DIMM form factor, why are even 4GB sticks of RAM still expensive?
    • Re:Disappointed (Score:4, Informative)

      by Rob the Bold ( 788862 ) on Thursday November 18, 2010 @01:49PM (#34271686)

      Additionally, if they can squeeze 256GB into a DIMM form factor, why are even 4GB sticks of RAM still expensive?

      Because using flash memory as system RAM would be rather disappointingly slow.

    • by Sycraft-fu ( 314770 ) on Thursday November 18, 2010 @01:57PM (#34271820)

      The price of flash has nothing to do with the price of RAM. They are completely different constructions, for different tasks. Flash is faster than magnetic storage but still dog slow compared to RAM. For flash, you talk access times of two to three digits of microseconds; for RAM, single-digit nanoseconds. For flash, transfer rates are in the 100s of MB/sec, with anything over 200 being rather exceptional. For RAM, transfer rates are 10+ GB/sec.

      It's the same sort of transition again when going from DRAM (what you put in your system) to SRAM (what processor cache is made out of). Again the price goes up massively, so instead of 8GB you are talking maybe 12MB. However, again the speed goes way up and the access time way down.
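
      Putting the rough figures quoted above side by side (numbers taken straight from the comment, so treat them as 2010-era ballpark values, not a benchmark):

        # Order-of-magnitude gap between flash and DRAM, per the figures above.
        flash_latency = 100e-6   # ~100 microseconds
        dram_latency = 5e-9      # ~5 nanoseconds
        flash_bw = 200           # ~200 MB/s
        dram_bw = 10_000         # ~10 GB/s

        print(f"DRAM latency advantage:   ~{flash_latency / dram_latency:,.0f}x")  # ~20,000x
        print(f"DRAM bandwidth advantage: ~{dram_bw / flash_bw:.0f}x")             # ~50x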

      • This is kind of like comparing apples to oranges. Think of the SSD as another intermediate fully random-accessible cache layer that is slower than RAM but much faster than a hard drive. Consider the cost of placing, say, 40G of RAM in a server. That's a lot of DIMM slots, a more expensive mobo, lots of expensive high-density DRAM, versus the cost of a 40G SSD ($115 from Intel). So even though the SSD is more expensive per gigabyte than a normal HD, it is considerably less expensive per gigabyte relative t

    • by h4rr4r ( 612664 )

      4GB sticks are cheap as hell, kiddo. Even the server stuff, as a double kit (meaning 2x4GB), is $250.

      How much cheaper can it get?

      And flash as ram would be slow as hell.

      • by Mashiki ( 184564 )

        Depends on whether you're using DDR2 or DDR3; DDR2 is dirt cheap. DDR3 is expensive as hell.

      • Typically 2x2G sticks are cheaper than a 1x4G stick, particularly when it has to be ECC memory and DDR3. If you are talking about non-ECC memory then you aren't talking seriously. Non-ECC memory is just fine for a consumer desktop (though even that is arguable when one is talking about memory in excess of 4GB), but in a server environment ECC is pretty much required. As of about a year ago I've started buying only ECC memory for desktops too.

        Google did a study on memory in 2009, it raised a lot of eyebrow

    • by DarkOx ( 621550 )

      When I saw the headline, I was hoping that this would be a device that allowed an SSD to be connected to a RAM slot and used as RAM, rather than an SSD that takes up a RAM slot.

      Well I don't know why you would want to use slow flash as your primary storage rather than fast DRAM or SRAM.
      Now, this is not what I think you had in mind, but having primary storage that does not need refresh would permit you to have a machine that could be powered on and off and remain in a consistent state. Well, there are a few more things you'd need to do, like preserve the contents of the CPU registers, but there are ways to solve those problems. Such a machine also could have only primary storage be

      • by surgen ( 1145449 )

        Well I don't know why you would want to use slow flash as your primary storage rather than fast DRAM or SRAM.

        Maybe rather than just taking up the slot, use it for communication too? Appear as RAM to the computer and then create a RAM drive to mount the SSD? Though it does seem like a rather roundabout way just to avoid using a SATA cable.

  • by bobjr94 ( 1120555 ) on Thursday November 18, 2010 @01:46PM (#34271632) Homepage
    I remember back before computers had onboard drive controllers and there was no such thing as a standard drive interface, they sold ISA hard drive cards: a drive & controller all in one. I don't see too much advantage in running a drive on a RAM slot; you can just dedicate a drive (or drives) to your work, swap, or temp files. I typically do that when editing video: one drive holds the raw videos, one is a temp drive, and one is what the final video files are output to when they are rendered. Much faster than using one drive, or even a single RAID, to read/write large amounts of data at the same time.
  • Speedy servers (Score:5, Interesting)

    by $RANDOMLUSER ( 804576 ) on Thursday November 18, 2010 @01:48PM (#34271668)
    Certainly putting things like swap space and database journal files on SSD would speed things up wonderfully, but how about an OS hack where an SSD drive is a sort of L3 cache between core and traditional disk for dirty disk buffers? Also, I'm wondering about the power requirements between SSD and DIMM RAM.
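
    To make the "SSD as an L3 cache between RAM and disk" idea concrete, here is a toy Python model of a two-tier read cache; it is purely illustrative (real implementations such as ZFS's L2ARC or DragonFly's swapcache live in the kernel and handle dirty data, wear, and consistency, which this sketch ignores):

      from collections import OrderedDict

      class TieredCache:
          """RAM tier first, then an SSD tier, then the backing disk."""
          def __init__(self, ram_blocks, ssd_blocks, disk):
              self.ram = OrderedDict()   # small and fast
              self.ssd = OrderedDict()   # larger; slower than RAM, faster than disk
              self.ram_blocks, self.ssd_blocks = ram_blocks, ssd_blocks
              self.disk = disk           # dict-like backing store

          def read(self, block):
              if block in self.ram:                      # RAM hit
                  self.ram.move_to_end(block)
                  return self.ram[block]
              if block in self.ssd:                      # SSD hit
                  data = self.ssd[block]
              else:                                      # miss: fetch from disk, fill SSD tier
                  data = self.disk[block]
                  self._put(self.ssd, block, data, self.ssd_blocks)
              self._put(self.ram, block, data, self.ram_blocks)  # promote to RAM
              return data

          @staticmethod
          def _put(tier, block, data, limit):
              tier[block] = data
              tier.move_to_end(block)
              if len(tier) > limit:                      # evict least recently used
                  tier.popitem(last=False)

      cache = TieredCache(2, 8, {n: f"block-{n}" for n in range(100)})
      print(cache.read(5), cache.read(5))  # second read comes from the RAM tier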
    • by hjf ( 703092 )

      ZFS L2ARC

    • Re:Speedy servers (Score:5, Interesting)

      by m.dillon ( 147925 ) on Thursday November 18, 2010 @02:01PM (#34271868) Homepage
      Not sure I would call it an OS hack, but DragonFly has precisely that, called swapcache. Swapcache Manual Page [dragonflybsd.org]. It isn't so much making standard paging work better (systems rarely have to 'page' these days); it's the ability to cache clean data and metadata from the much larger terabyte+ hard drive that makes the difference. Anyone who has more than a few hundred thousand files to contend with will know what I mean. -Matt
    • An SSD usually uses considerably less power than RAM, even during writes. Consider that a stick of RAM has to continually refresh each of its 16 or 32 chips, while flash only powers up the ones it is currently accessing.
    • Adaptec's line of MaxIQ controllers are the cheapest I know of; Intel also has it on their high-end rebadged LSI controllers, though you have to pay extra to add the feature. The controllers use an SSD as an additional layer of cache (they also have a RAM cache) for the array to speed things up. Works quite well apparently, if a bit costly.

  • From the article:

    Final Thoughts: Taking power (and space) from free DIMM slots is certainly a novel idea, and is beneficial to overly cramped installations. I can easily see these being used for embedded and other custom systems where high storage performance is needed without the wasted space.

    So the entire purpose of this hyper-expensive convoluted creation is to save a power cable...? The whole article reads more like an advertisement + some benchmarks. I see no benefit to this thing whatsoever. Unless I am missing something, it sounds more like Viking was trying to make a non-volatile memory chip (that would be kinda cool) and realized it wasn't going to work, so they had the engineers rip out everything novel about it and just use the DIMM slot to save a power cord.

    • by Chirs ( 87576 )

      It's aimed at 1U servers that have no free drive bays or PCI slots.

    • I wouldn't be surprised if the purpose were to confuse the buyer. Imagine, you see an SSD that plugs into a DIMM slot. "Woah, that's got to be faster than a normal SSD! Or it's got to be doing something that makes it better than this other one that only connects to a little ribbon cable."
  • Mini Options! (Score:4, Interesting)

    by Falc0n ( 618777 ) <japerry@nosPam.jademicrosystems.com> on Thursday November 18, 2010 @02:06PM (#34271944) Homepage
    Actually I find this potentially quite cool. Not as much for the power source, but the size. Since most mATX boards don't come with mini PCIe slots, if you want to use an SSD drive you need a 2.5" drive or a PCIe card with a mini-slot on it. Both are much larger than a DIMM option.

    And with 50GB, this would be very useful in a media box streaming from a server. Now if only the price could come down.
    • But not more useful than actually putting RAM in that slot, since most small form-factor motherboards are also going to have a minimal number of DIMM slots anyway. One rarely sees more than 4 slots, and I don't know about everyone else, but I always populate all my DIMM slots so I don't have to purchase ultra-high-density sticks (which cost a premium).

      There is a very good reason why Apple had to use this sort of thing... a custom-fit SSD in a custom, basically non-upgradeable item (people who buy Apple stuf

  • by PatPending ( 953482 ) on Thursday November 18, 2010 @02:08PM (#34271992)

    I can't recall a /. story that has this many ignorant replies.

    Aside from the usual lack of RTFS and not reading TFA, I wonder if it's due to ignorance of hardware?

    • I'll just speak for myself, but to me it seems like a strange concept to use a memory slot for power... apparently memory slots are commonly abundant in servers, but not Molex cables? I guess a niche concept and 20 acronyms lead to mistake-making territory in my case... whatevs.
  • by ferrocene ( 203243 ) on Thursday November 18, 2010 @02:20PM (#34272144) Journal

    This device seems backwards with today's trends. With virtualization gaining ground fast, the ideal setup is to have as much RAM as possible with a SAN back end for storage - iSCSI, FC, whatever. Most local disks on servers today are RAID1 mirrors for the small hypervisor.

    So, yes, this device wastes a valuable DIMM slot to give you a less-valuable SATA drive?

    I can't think of any scenario where this would be useful unless you're talking about handheld devices - a MacBook Air or tablet of some sort.

    • Re: (Score:3, Insightful)

      by h4rr4r ( 612664 )

      DB servers in a leased rack. Doing DB IO over FC or iSCSI adds latency that local disks are not going to have. This gets you fast local storage without having to pay more each month for leased rack space.

      Virtualizing high performance DBs is a stupid move.

  • by zmooc ( 33175 )

    Rack? Who cares about racks. It's not like there's not enough room in 1U servers. What this is awesome for, though, is small form factor PCs. With video on the mobo or CPU, the only thing left that stuck out was the hard drive or SSD. Not anymore. Awesome! :-) Now I can go get myself a proper 17x17x5cm quad-core PC :-)

  • I can see this in an environment where you need to stick a lot of 1U rack systems all over the place and can't spread out over a larger footprint in any one location. But when else am I going to use this? Didn't we decide a long time ago that large amounts of internal storage weren't really a good way to handle increasing storage needs?

    I'd much rather see a big ol' SAN full of SSDs than put together something like this, unless someone else is seeing an advantage that I don't.

  • ... into a different electronic orifice of the motherboard than what is customary?

    This is exciting news, indeed!

    I will join this game-changing revolution by using file descriptor 3 for standard output!

  • We need a new standard form factor or two. Clearly making an SSD the size of a platter-based hard drive makes no sense, but this product makes no sense either. It's just a way to steal power from another sort of slot. In addition to the form factor, I'm not sure SATA even makes sense anymore, so it may be time for a higher-level rethink.

    I'm not sure of the best way to go, but there are some semi-obvious starting points. What about MiniPCI for SSDs? One or two on the motherboard could work well. Maybe a mo

    • The new form factor introduced in the MacBook Air sounds interesting, and it has already been announced to be available from a couple of different manufacturers. It's basically the same size as a DIMM, but with the pins at the end instead of along one edge.

  • It's not a very exciting use of non-volatile memory. It makes sense, though, to package non-volatile devices for vertical slots like DRAM, and have motherboards that have slots for them. But not DIMM slots - something that actually carries the drive data. The thing announced in the article still needs a drive cable; all it gets from the DIMM slot is power. This looks like an interim product until server motherboards go to that form factor and eliminate drive bays. The near future for server farms prob

  • New? (Score:4, Funny)

    by Lumpy ( 12016 ) on Thursday November 18, 2010 @02:45PM (#34272568) Homepage

    I've had such things in the embedded world for over a decade.

    What's next? NEW! small cards serve as memory devices!

    • It'll never happen. Floppies and CDs work just fine in digital cameras. Why would you even want something so small you might lose it?

  • Let me get this straight: you want us to sacrifice valuable RAM slots, and more so valuable RAM, to run an SSD device? What would make more sense would be to have a completely separate 1U unit hooked up to the server, with nothing but SSD devices (or hard drives). Wait, don't they already have those?

    Likely priced outside the realm for average consumers,

    I also doubt the average consumer will want these. With most consumer motherboards only supporting two or four slots of RAM, I REALLY don't see sacrificing RAM slots for SSD. Especially when they top out at, what,

  • Everything old is new again.
    Sun did this in the SPARC 10 & 20 line, by enabling an optional NVRAM SIMM in the primary memory slot.
    A whopping 4 megabytes of RAM max, I think :) so it was used for caching the "metadata" of things like NFS, rather than direct storage.
    But putting something directly in memory, and accessing it through the memory bus 'normally' (like a basic RAMdisk), sounds a whole lot more efficient than just sucking power from the slot but looping back around through the SATA bus, so you c
