New Device Puts SSD In a DIMM Slot 169
Vigile points out a new take on SSD from Viking Modular Solutions. The SATADIMM puts an SSD in the form factor of a memory module. "The unit itself actually uses a SandForce SSD controller and draws its power from the DIMM socket directly but still connects to the computer through a SATA connection — nothing fancy like using the memory bus, etc. Performance is actually identical to other SandForce-based SSDs, though the benefits for 1U servers and motherboards with dozens of DIMM slots are interesting, to say the least. Likely priced outside the realm of average consumers, the SATADIMM will probably stay put in the enterprise market, but it represents an indicator that companies are realizing SSDs don't need to be in traditional HDD form factors."
I suppose the real question here is... (Score:5, Insightful)
Re: (Score:2)
Add drives to machines that lack enough hard drive slots but have extra DIMM slots.
Re: (Score:2)
Yes, but if it can be that small, just make it about that size but accepting power like a regular drive. Then it can be tucked away anywhere and the cable won't interfere with airflow.
Re:I suppose the real question here is... (Score:4, Interesting)
In a 1U server there is no such space. The DIMM design lets you put it in a nice free space and not interrupt airflow too much.
Re: (Score:2)
I work w/ 1U servers all the time, and there certainly is such space. In the long ones, there's room behind the drive bays, and in a short one, tuck it in to the space between the PCI(e) card (if any) and the MB with a bit of double sticky tape. On older 1Us, put it where the floppy drive used to go.
Re: (Score:2)
Re:I suppose the real question here is... (Score:5, Funny)
You know, there are many decaffeinated coffees on the market that are nearly as good as regular coffee without all the jitters....You should drown yourself in a vat of it.
Re: (Score:2)
We've secretly replaced the Enterprise's dilithium crystals with Folgers Crystals. Let's see if they notice!
Re:I suppose the real question here is... (Score:5, Insightful)
Sure, in a 1U rack it *might* save a trivial amount of space. I just don't see a market for it.
If there's anything I've learned from calculus - it's that a whole lot of trivial values can add up to something significant.
Re: (Score:2)
If there's anything I've learned from calculus - it's that a whole lot of trivial values can add up to something significant.
That's a good summary.
Re: (Score:3, Interesting)
Hell yeah, this could save a megaton of space. It seems most of the negative comments are from people who have never seriously used racks.
Re:I suppose the real question here is... (Score:4, Insightful)
I have, and this would have a hard time fitting in a 1U case. The data cable comes out the top, but many 1U cases have the ram sticks at a 45 degree angle because they would be too tall. It would be OK in a 2U or larger and used as the boot disk.
Re: (Score:2)
That was my thought as well. In the article, they seem to have a 90-degree adapter on the SATA cable to plug into the DIMM. My immediate reaction (besides "that's kinda neat") was that RAM is stacked, so if you put 4 of these in a bank of RAM, the SATA cables on the 2nd through 4th would hit the cable from the 1st. You'd need cables that connect at 90 degrees one way and 45 degrees another.
If you have empty RAM slots and you want to add one or two, it's not that bad. The idea of using banks of it to put terabytes in a 1U c
Re:I suppose the real question here is... (Score:4, Interesting)
Re:I suppose the real question here is... (Score:4, Interesting)
custom cables.
Seriously: SATA cables are cheap as hell to build, and doing a fan-out cable of a custom length to match up to the controller, either on-board or in the single 16x/4x slot, would only kind of make sense.
Re: (Score:2)
And also custom 1U racks filled with powered memory slots for such drives...
But I seriously think the point of this whole exercise is that with SSD drives we don't have to be tied to any single layout or size... they could be made to go anywhere. They could make them into stackable cubes like Legos (with sufficient cooling, of course).
Re: (Score:2)
Who wants to get on that? I'm sure we could find somebody to sell them to.
Re: (Score:2)
Re: (Score:2)
If you can't fit your storage into the case you ordered the wrong server or need dedicated storage.
Use the right tool for the right job. Memory slots are for memory. Servers have extra memory slots because they often need more.
*gasp* what a concept.
Re: (Score:2)
Or just move the connector to the side.
Re:I suppose the real question here is... (Score:5, Informative)
I guess it would be a quick way to add storage to a server that has a bunch of unused memory sockets. And the design uses off-the-shelf components which is always nice.
But there was getting to be a need for a proper SSD package, as sticking them inside HDD housings was both limiting and an inefficient use of space. Viking's solution probably won't take off, though, since Apple/PhotoFast/Toshiba just stole their thunder. [arstechnica.com]
Re: (Score:2)
I've read that putting the tempdb in MS SQL Server (and whatever the Oracle and DB2 equivalents are) on SSD is a huge performance boost for queries that rely on it: things like sorts and joins.
You can easily have multi-terabyte databases on 1U/2U servers these days, and with 16GB DIMMs, enough memory in a few slots for them. But if you have idiots running select queries for hundreds of millions of rows at once, then this will be a big help. I've seen queries like this run for days
Re: (Score:2)
If that is true, wouldn't it be better to populate the DIMM slots with RAM and use a ramdisk instead of SSD for this purpose?
Re: (Score:3, Insightful)
16GB DIMMs run me about $900 each, whereas I can get 64GB X25-Es for $700.
And tit for tat, the performance won't be THAT bad by comparison.
At ~$55/GB for RAM, or ~$10/GB for flash, at 1000GB quantities... that's a pretty easy call to make, personally.
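The cost math in that comment can be sketched out quickly (the $/GB figures are the poster's rough numbers from the time of the thread, not current prices):

```python
# Rough cost comparison: populating slots with RAM vs. flash SSD,
# using the poster's approximate prices (~$55/GB RAM, ~$10/GB flash).
RAM_COST_PER_GB = 55.0     # 16GB ECC DIMM at ~$900 -> roughly $55/GB
FLASH_COST_PER_GB = 10.0   # 64GB Intel X25-E at ~$700 -> roughly $10/GB

def cost(gb: float, per_gb: float) -> float:
    """Total cost of `gb` gigabytes at `per_gb` dollars per GB."""
    return gb * per_gb

target_gb = 1000  # a 1TB working set
ram_total = cost(target_gb, RAM_COST_PER_GB)
flash_total = cost(target_gb, FLASH_COST_PER_GB)

print(f"1TB in RAM:   ${ram_total:,.0f}")    # $55,000
print(f"1TB in flash: ${flash_total:,.0f}")  # $10,000
```

At terabyte scale the per-GB gap dominates, which is the poster's "easy call".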
Re: (Score:2)
OK, let's assume 1Us come in two basic flavors. First, the fully integrated products from Dell, HP, IBM, and the like: these rarely have any standard power connectors, let alone internal SATA ports. Then there are the custom builds: these normally have free standard power connectors and free SATA ports. In the first case there is nothing to plug it into unless you add a SATA RAID card, at which point why not just get the power from the PCI-E slot? Custom servers don't need to draw power from a DIMM slot. In either case
Re:I suppose the real question here is... (Score:4, Interesting)
I'm sure where you are there's room for things, but in much of the world this is not the case. Try suggesting 4U storage cases to a customer wanting to host a 20TB database in Egypt. You may only get 4-6U in each building to work with (with little cooling capacity) and $25K/building in hardware budget.
There are cases for everything. I can think of a pile of customers of mine that only filled their VMware hosts with 64GB (of the 512GB max) of RAM, leaving twenty-eight sockets free in each of the three hosts for something! That's 33.6TB of space right there! (Though personally I'd PREFER to stick RAM in there, even if that would only be another 1.344TB of RAM.)
Re: (Score:2)
HP's 1U servers tend to have a couple SATA slots left over, especially if you forego the optical drive (and with PXE or iLO, you don't need it). The actual hard drives tend to run from a SAS RAID controller which often takes up a valuable PCI-E-slot.
Re: (Score:2)
1U servers very often lack space for enough HDDs; ultimately only 3x3½" drives can fit. I haven't seen 4x2½" cases either.
Re: (Score:2)
Blade servers. They usually have 2 HDD slots at best. The challenge is that they tend to be low on SAS sockets, so you'd need a very small SAS multiplexer as well.
Re: (Score:2)
Blade servers. They usually have 2 HDD slots at best. The challenge is that they tend to be low on SAS sockets, so you'd need a very small SAS multiplexer as well.
If you're trying to put a lot of local storage into a blade server, You're Doing It Wrong.
Just an SSD that uses memory slot power? (Score:2)
So you don't have to run a molex or other power connector to the SSD; it's easier to put in, I suppose.
I wonder if there are significant gains to be had by inserting these in place of existing RAM?
Re: (Score:3, Informative)
Though yes, you're right: compared to the equivalent AMOUNT of RAM, they suck. Compared to the same dollar value of RAM, that's another story.
Whats the point (Score:2)
If you're using a DIMM slot for power, and SATA for data transfer, why not use the power supply for power instead of losing a memory slot?
Re: (Score:2)
Are you thinking of a desktop or a server environment? Because I have only ever seen ONE server ever use every single one of its memory slots, filled with the maximum-size stick available at the time.
Oftentimes, it's trivial to upgrade RAM to free up a spare slot.
It's not as trivial to have to unplug absolutely everything because you switched out the power supply.
Re: (Score:2)
Interestingly, all of our ~500 servers have max RAM in them.
Re: (Score:2)
They may be using all of their slots, but is every slot filled with the biggest stick of RAM?
Re: (Score:2)
Re: (Score:2)
The power still comes from the power supply... where else would it come from? I guess it'd be useful if you have memory slots you're not using, but no extra drive bays.
The distinction the GP was making was the power -- yes, from the power supply -- delivered through the pins of the DIMM slot rather than the cable connected directly to the PSU. And I'd have to agree with both of you in asking what the point of this is.
Re: (Score:2)
If you're using a DIMM slot for power, and SATA for data transfer, why not use the power supply for power instead of losing a memory slot?
Power from a cable has to be regulated to be clean enough to run the flash drive. Motherboard power is already clean and the correct voltage. This saves power regulation and an unneeded drive housing.
Disappointed (Score:3, Interesting)
Additionally, if they can squeeze 256GB into a DIMM form factor, why are even 4GB sticks of RAM still expensive?
Re:Disappointed (Score:4, Informative)
Additionally, if they can squeeze 256GB into a DIMM form factor, why are even 4GB sticks of RAM still expensive?
Because using flash memory as system RAM would be rather disappointingly slow.
Because RAM isn't Flash (Score:4, Informative)
The price of flash has nothing to do with the price of RAM. They are completely different constructions, for different tasks. Flash is faster than magnetic storage but still dog slow compared to RAM. For flash, you talk access times in 2-3 digits of microseconds; for RAM, in single-digit nanoseconds. For flash, transfer rates are in the 100s of MB/sec, with anything over 200 being rather exceptional. For RAM, transfer rates are 10+ GB/sec.
Same sort of transition again when talking DRAM (what you put in your system) to SRAM (what processor cache is made out of). Again the price goes up massively so instead of 8GB, you are talking maybe 12MB. However again the speed goes way up and access time way down.
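The orders-of-magnitude gaps described above can be put in rough numbers (illustrative figures taken from the comment, not measurements of any particular part):

```python
# Illustrative latency/bandwidth gap between flash and DRAM,
# using the ballpark figures from the comment above.
NS = 1e-9  # nanoseconds in seconds
US = 1e-6  # microseconds in seconds

# name: (access_time_seconds, throughput_bytes_per_second)
tiers = {
    "flash SSD": (100 * US, 200e6),  # ~100s of us access, ~200 MB/s
    "DRAM":      (10 * NS,  10e9),   # ~10 ns access, 10+ GB/s
}

flash_t, flash_bw = tiers["flash SSD"]
dram_t, dram_bw = tiers["DRAM"]

print(f"Access time: flash is ~{flash_t / dram_t:,.0f}x slower than DRAM")
print(f"Throughput:  flash has ~{dram_bw / flash_bw:,.0f}x less bandwidth")
```

Even with generous numbers for flash, the access-time gap is around four orders of magnitude, which is the poster's core point.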
Re: (Score:2)
This is kind of like comparing apples to oranges. Think of the SSD as another intermediate, fully random-accessible cache layer that is slower than RAM but much faster than a hard drive. Consider the cost of placing, say, 40G of RAM in a server. That's a lot of DIMM slots, a more expensive mobo, lots of expensive high-density DRAM, versus the cost of a 40G SSD ($115 from Intel). So even though the SSD is more expensive per gigabyte than a normal HD, it is considerably less expensive per gigabyte relative t
Re: (Score:2)
4GB sticks are cheap as hell, kiddo. Even the server stuff, as a double kit (2x4GB), is $250.
How much cheaper can it get?
And flash as ram would be slow as hell.
Re: (Score:2)
Depends on whether you're using DDR2 or DDR3: DDR2 is dirt cheap, DDR3 is expensive as hell.
Re: (Score:2)
Typically 2x2G sticks are cheaper than a 1x4G stick, particularly when it has to be ECC memory and DDR3. If you are talking about non-ECC memory then you aren't talking seriously. Non-ECC memory is just fine for a consumer desktop (though even that is arguable when one is talking about storage in excess of 4GB), but in a server environment ECC is pretty much required. As of about a year ago I've started buying only ECC memory for desktops too.
Google did a study on memory in 2009; it raised a lot of eyebrow
Re: (Score:2)
When I saw the headline, I was hoping that this would be a device that allowed an SSD to be connected to a RAM slot and used as RAM, rather than an SSD that takes up a RAM slot.
Well I don't know why you would want to use slow flash as your primary storage rather than fast DRAM or SRAM.
Now this is not what I think you had in mind but having primary storage that does not need refresh would permit you to have a machine that could be powered on and off and remain in a consistent state. Well there were a few more things you'd need to do like preserve the content of CPU registers but there are ways to solve those problems. Such a machine also could have only primary storage be
Re: (Score:2)
Well I don't know why you would want to use slow flash as your primary storage rather than fast DRAM or SRAM.
Maybe rather than just taking up the slot, use it for communication too? Appear as RAM to the computer and then create a RAM drive to mount the SSD? Though it does seem like a rather roundabout way just to avoid using a SATA cable.
Re:Disappointed (Score:4, Interesting)
While the write speed would be painful compared to real DRAM, the read speed would be comparable.
For large static arrays, and for custom data applications, it could have uses in the form the GP suggests, though it WOULD be a nasty throwback to the days of user ROMs...
However, I could definitely see the potential in having such a thing mapped directly to system memory, then loading a special block device driver to allocate all that "memory", so that memory IO could be used for data storage. It would eliminate the SATA controller's IO bottleneck, but would impose a slight CPU penalty. For systems with multiple CPUs, that wouldn't be much of a problem. You would need to allocate that memory fast, though, to prevent the OS from trying to use it like RAM.
Re: (Score:2)
Access times, though much better in flash over the last few years, are still orders of magnitude slower. And just imagine writing to RAM only to find out that the process must wait while the old blocks are re-allocated due to bad sector remapping (potentially causing micro- or even millisecond access times!).
sorry, flash has a L
Re: (Score:2)
Also, while the problem of flash wearing out has been vastly exaggerated, imagine how quickly a contended lock would wear out the 100,000 write cycles. You could easily do that many in a second, and no wear levelling can cope with that.
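A back-of-envelope sketch of that wear-out claim, assuming (as the comment does) that wear levelling cannot effectively spread writes to a single hot address:

```python
# How long would a hot memory address, written at a given rate, take to
# exhaust a flash block's erase cycles? Assumes no effective wear levelling
# for that one address, per the comment above.
def seconds_to_wear_out(erase_cycles: int, writes_per_second: float) -> float:
    """Seconds until `erase_cycles` are consumed at `writes_per_second`."""
    return erase_cycles / writes_per_second

# 100,000 cycles (SLC-class) at 100,000 writes/sec: gone in one second.
print(seconds_to_wear_out(100_000, 100_000))  # 1.0
# Even at a modest 1,000 writes/sec, the block lasts under two minutes.
print(seconds_to_wear_out(100_000, 1_000))    # 100.0
```

This is why flash works as a disk tier but not as a drop-in replacement for frequently rewritten RAM.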
Re: (Score:2)
It's actually more around 10,000 cycles for consumer grade MLC flash (which is what you find in most SSDs). SLC flash runs around 100,000 cycles. There's been a lot of misinformation on this topic but the easiest way to think about it is to consider the actual write durability that people have been experiencing with SSDs. Take an Intel 40G SSD for example. The vendor-specified write durability is 35TB (I always say 40TB just to make the numbers easy), or 1000x, which assumes a very high write amplificat
Re: (Score:2)
Write amplification is basically due to the fact that a MLC flash chip uses a 128KB write/erase block. Smaller writes either have to be write-combined or otherwise eat a ton more durability due to having to write the whole block than larger writes would.
I'm fairly certain that the write block size in every SSD on the market right now is not the same as the erase block size...
...so it's not fair to consider small uncombined writes as equivalent to a future erase on a 1:1 basis; it's actually 32:1, i.e. about 3% of small writes lead to a mandatory erase.
In other words, that 128K block is segmented into 4K blocks (32 of them), and each 4K block can be written once per erase cycle.
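The 32:1 arithmetic above can be written out explicitly (block and page sizes are those given in the comment; actual geometry varies by flash part):

```python
# Pages-per-erase-block arithmetic from the comment: a 128KB erase block
# divided into 4KB write pages, each writable once per erase cycle.
ERASE_BLOCK = 128 * 1024  # bytes per erase block
PAGE = 4 * 1024           # bytes per write page

pages_per_block = ERASE_BLOCK // PAGE  # 32 pages per erase block
# If each page can be written once per cycle, only ~1 in 32 small writes
# forces an erase of the whole block.
erase_fraction = 1 / pages_per_block

print(pages_per_block)          # 32
print(f"{erase_fraction:.1%}")  # 3.1%
```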
Re: (Score:2)
While the write speed would be painful compared to real DRAM, the read speed would be comparable.
No it wouldn't. RAM speeds are measured in 10s of GB/sec. SSD speeds are measured in hundreds of MB/sec.
This is before even getting into the access times, which are similarly disparate.
Reminds me of the old hard cards (Score:3, Interesting)
Speedy servers (Score:5, Interesting)
Re: (Score:3)
ZFS L2ARC
Re:Speedy servers (Score:5, Interesting)
Re: (Score:2)
You can get RAID controllers that do that (Score:2)
Adaptec's line of MaxIQ controllers are the cheapest I know of; Intel also has it on their high-end rebadged LSI controllers, though you have to pay extra to add the feature. The controllers use an SSD as an additional layer of cache (they also have a RAM cache) for the array to speed things up. Works quite well apparently, if a bit costly.
What purpose does this serve? (Score:2)
From the article:
Final Thoughts: Taking power (and space) from free DIMM slots is certainly a novel idea, and is beneficial to overly cramped installations. I can easily see these being used for embedded and other custom systems where high storage performance is needed without the wasted space.
So the entire purpose of this hyper-expensive convoluted creation is to save a power cable...? The whole article reads more like an advertisement + some benchmarks. I see no benefit to this thing whatsoever. Unless I am missing something, it sounds more like Viking was trying to make a non-volatile memory chip (that would be kinda cool) and realized it wasn't going to work, so they had the engineers rip out everything novel about it and just use the DIMM slot to save a power cord.
saves space primarily (Score:3, Informative)
It's aimed at 1U servers that have no free drive bays or PCI slots.
Re: (Score:2)
Re: (Score:2)
But this makes even LESS sense for large-form-factor mobos. If you want to maximize density you buy a high-density SSD (they come in terabyte sizes now, after all). You don't buy a hundred custom-fit DIMMs with discrete SATA connectors. It doesn't even make any sense for 1U FF, since 2.5" drive bays trivially fit in that form factor.
-Matt
Mini Options! (Score:4, Interesting)
And with 50GB, this would be very useful in a media box streaming from a server. Now if only the price could come down.
Re: (Score:2)
But not more useful than actually putting ram in that slot, since most small form-factor motherboards are also going to have a minimal number of DIMM slots anyway. One rarely sees more than 4 slots and I don't know about everyone else but I always populate all my DIMM slots so I don't have to purchase ultra-high density sticks (which cost a premium).
There is a very good reason why Apple had to use this sort of thing... a custom-fit SSD in a custom, basically non-upgradeable item (people who buy Apple stuf
Why so many ignorant replies? (Score:3, Insightful)
I can't recall a /. story that has this many ignorant replies.
Aside from the usual lack of RTFS and not reading TFA, I wonder if it's due to ignorance of hardware?
Re: (Score:2)
Re: (Score:2)
And don't forget that oftentimes it is people such as myself modding the conversation. I try my best to moderate well, but on a subject like this I am outside of my knowledge base. I think that is a reason so few comments on this story are modded much at all, besides the fact that some of the posts are just saying wtf without bothering to read the article.
Slashdot does seem a bit desperate for moderation, I'm getting 30 mod points on some weeks, 15 at a time.
Useless with virtualization? (Score:5, Insightful)
This device seems backwards with today's trends. With virtualization gaining ground fast, the ideal setup is to have as much RAM as possible with a SAN back end for storage: iSCSI, FC, whatever. Most local disks on servers today are RAID1 mirrors for the small hypervisor.
So, yes, this device wastes a valuable DIMM slot to give you a less-valuable SATA drive.
I can't think of any scenario where this would be useful unless you're talking about handheld devices - a MacBook Air or tablet of some sort.
Re: (Score:3, Insightful)
DB servers in a leased rack. Doing DB IO over FC or iSCSI adds latency that local disks are not going to have. This gets you fast local storage without having to pay more each month for leased rack space.
Virtualizing high performance DBs is a stupid move.
Re: (Score:2)
Are you saying that a FC SAN will give you fewer IOps than this DIMM SSD?
He's talking about latency, not throughput.
Rack? (Score:2)
Rack? Who cares about racks. It's not like there's not enough room in 1U servers. What this is awesome for, though, is for small form factor PCs. With video on the mobo or cpu the only thing left that stuck out, was the harddrive or ssd. Not anymore. Awesome! :-) Now I can go get myself a proper 17x17x5cm quad core PC:-)
So... wait. (Score:2)
I can see this in an environment when you need to stick a lot of 1U rack systems all over the place, and can't spread out over a larger footprint in any one location. But when else am I going to use this? Didn't we decide a long time ago that large amounts of internal storage wasn't really a good way to handle increasing storage needs?
I'd much rather see a big ol' SAN full of SSDs than put together something like this, unless someone else is seeing an advantage that I don't.
Wow, someone put a piece of hardware ... (Score:2)
... into a different electronic orifice of the motherboard than what is customary?
This is exciting news, indeed!
I will join this game-changing revolution by using file descriptor 3 for standard output!
We need a new standard. (Score:2)
We need a new standard form factor or two. Clearly making an SSD in the size of a platter-based hard drive makes no sense, but this product makes no sense either. It's just a way to steal power from another sort of slot. In addition to the form factor, I'm not sure SATA even makes sense anymore, so it may be time for a higher-level rethink.
I'm not sure the best way to go, but there are some semi-obvious starting points. What about MiniPCI for SSDs? One or two on the motherboard could work well. Maybe a mo
Re: (Score:2)
The new formfactor introduced in the Macbook Air sounds interesting, and already announced to be available from a couple of different manufacturers. It's basically the same size as a DIMM, but with the pins at the end instead of along one edge.
Reasonable packaging (Score:2)
It's not a very exciting use of non-volatile memory. It makes sense, though, to package non-volatile devices for vertical slots like DRAM, and have motherboards that have slots for them. But not DIMM slots - something that actually carries the drive data. The thing announced in the article still needs a drive cable; all it gets from the DIMM slot is power. This looks like an interim product until server motherboards go to that form factor and eliminate drive bays. The near future for server farms prob
Re: (Score:2)
Actually, quite a few embedded platforms use flash as swap.
New? (Score:4, Funny)
I've had such things in the embedded world for over a decade.
What's next? NEW! small cards serve as memory devices!
Re: (Score:2)
It'll never happen. Floppy and CDs work just fine in digital cameras. Why would you even want something so small you might lose?
Sacrifice memory for storage space? (Score:2)
Let me get this straight: you want us to sacrifice valuable RAM slots, and more so, valuable RAM, to run an SSD device? What would make more sense would be to have a completely separate 1U unit hooked up to the server with nothing but SSD devices (or hard drives). Wait, don't they already have those?
Likely priced outside the realm for average consumers,
I also doubt the average consumer will want these. With most consumer motherboards only supporting two or four slots of RAM, I REALLY don't see sacrificing RAM slots for SSD. Especially when they top out at, what,
PrestoServe NVRAM, circa 1996 (Score:2)
Everything old is new again. :)
Sun did this in the SPARC 10 & 20 line, by enabling an optional NVRAM SIMM in the primary memory slot.
A whopping 4 megabytes of RAM max, I think, so it was used for caching the "metadata" of things like NFS, rather than direct storage.
But putting something directly in memory, and accessing it through the memory bus 'normally' (like a basic RAMdisk) sounds a whole lot more efficient than just sucking power from the slot, but looping back around through the SATA bus, so you c
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
There are 32 DIMM slots on a modern 4-socket server board. Memory comes in densities up to 16GB/DIMM, and few companies (well, few of the ones I sell to globally) have any need to max the boards. 8 slots gives you 128GB of RAM, leaving 24 slots available.
At 400GB usable density per slot with this product, the machine can then host 9.6TB of SSD storage BEFORE connecting to a drive array. Being that most cases will only allow 8x SSDs to be mounted, th
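The capacity arithmetic in that comment works out as follows (the 400GB-per-slot figure is the poster's assumption about this product line, not a confirmed spec):

```python
# Capacity sketch from the comment: a 4-socket board with 32 DIMM slots,
# 8 used for RAM, the remaining 24 filled with hypothetical 400GB SATADIMMs.
TOTAL_SLOTS = 32
RAM_SLOTS = 8
SSD_GB_PER_SLOT = 400  # assumed usable density per slot

ssd_slots = TOTAL_SLOTS - RAM_SLOTS
total_ssd_tb = ssd_slots * SSD_GB_PER_SLOT / 1000

print(ssd_slots)     # 24
print(total_ssd_tb)  # 9.6
```

That is, 24 free slots at 400GB each gives the 9.6TB the poster quotes, before any external drive array is attached.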
Re: (Score:2)
If you have a molex connector you can easily split it into as many as you want. Flash drives use negligible power.
Re: (Score:2)
Re: (Score:2)
I try not to post about bad moderation but how the fuck is that a troll?
Do, or do not. There is no 'try.'
Re: (Score:2)
His OP was (to use your words) "so blatantly stupid" that if he can't figure out why he got a bad mod, well, too bad. He should realize the moderation system is imperfect, as explained here [slashdot.org]. Specifically:
I found a comment that was unfairly moderated!
Lemme know and I'll look at it. Sometimes I might agree and revoke access to a moderator. Usually I disagree and let it go. Its difficult to be the judge on this stuff since it is so subjective.
His OP was (to use your words) "so blatantly stupid" that my response was too. Tough.
Your last paragraph is undeserving of comment.
Re: (Score:2)
You are either a clueless idiot with no social comprehension, or you are a pedantic idiot looking for something to be a pedant about and feel superior. Neither is very likable. I am going to go out on a limb and assume that you don't have many friends (if any) and most of your co-workers find you difficult to talk to and unapproachable.
1. Are you assuming that there is a direct correlation between my on-line, anonymous posts on an open forum and my face-to-face, interpersonal relationships? If so, citation needed. (Regardless, it isn't true in my case.)
2. (Directed to the OP) It's the Internet (in general; Slashdot in particular); if you don't have a sense of humor or are easily offended you shouldn't be on open forums.
3. I think your diatribe (and--ooooh, you used "pedanti
Re: (Score:2)
Think Mac Minis or Nano-ITX boards. You could make a damn small box which for many (most?) people is more desirable than expansion room. The case could also be dead simple with the most complicated thing being the holes to attach the board.
Re: (Score:2)
I think it's a bit silly too. For one thing, there is ALREADY a suitable form factor: mini-PCIe. And for another, DIMM slots change every year. Anyone buying this DIMM-based SSD is basically buying a custom part with no resale value (because its form factor will become obsolete very quickly) and wasting a memory slot that they might actually want to use in the future. Bad news all around.
-Matt
Re: (Score:2)
I guess you think a brain-dead one-liner comment like that is meaningful. Try again. I'm sure if you actually spend more than 5 seconds thinking about it you can come up with something better.
-Matt
Re: (Score:2)
Think about the server room: that's what he was trying to tell you. 1U servers just about always have only two PCIe slots but tons of extra RAM slots. Not everything is sold to the consumer market.
Re: (Score:2)
I think you are misapprehending the density argument. These DIMM slot SSDs are NOT ultra-high-density items. A standard form factor 2.5" or 3.5" SSD is much higher density when you are talking about more than a few gigabytes worth of SSD. They can fit 1TB+ into a 3.5" form factor already. Also, 1U rack mount boxes have no trouble fitting a whole crapload of 2.5" or 3.5" front-loaded slots into the box. It is after all a fairly deep form factor.
In a server room these DIMM SSDs are the last thing you wou
Re: (Score:2, Informative)
Re: (Score:2)
Not sure why you think it would be viable. Small server racks still have very deep footprints and already have plenty of front-loaded 2.5" and/or 3.5" hot-swappable slots. You are advocating that this DIMM thingy would somehow be an improvement? It isn't hot-swappable, it still needs a separate mobo SATA connector (and cable), you actually have to PULL the freaking server out of the rack to change it out. It essentially can't be upgraded. It isn't commodity hardware. AND it is low density compared to
Re: (Score:2)
You're an idiot. It's in the summary AND the article, and if you looked even briefly at the actual photo of the device, you'd have seen that it has a SATA port on it.
This isn't aimed at desktops, dumbass; this is aimed at servers where iops/m^3 is important.
Is it impossible for you to interact with others without insulting them?
Re: (Score:2)
This isn't aimed at desktops, dumbass; this is aimed at servers where iops/m^3 is important.
In those environments your storage isn't in the local machine anyway, it's in the SAN.