Intel Intros 310 Series Mini SSDs
crookedvulture writes "Intel has added a couple of tiny 310 Series solid-state drives to its storage lineup. Measuring just 51 x 30 x 5.8mm, the mini-SATA SSDs are about a tenth the size of a standard notebook hard drive. Impressively, their performance ratings track with full-sized SSDs. Intel is pushing the 310 Series as a solution for dual-drive notebooks that combine solid-state and mechanical storage to give users the best of both worlds. Next-gen notebooks just got a little more interesting."
"Mini Series" (Score:1)
Drat (Score:5, Interesting)
Re: (Score:1)
Re: (Score:3)
Why is SATA a disappointment?
Because slightly older laptops might have an empty Mini PCIe slot but not an extra SATA connector? To me it's not a disappointment, but perhaps to the poster you replied to it is.
Re: (Score:2)
The MiniPCIe standard includes SATA lines, as well as USB. So if you have an open full MiniPCIe connector, it probably has SATA capability. What you have to watch for, though, are slots that are physically MiniPCIe but wired for USB only (many notebooks and netbooks with WWAN connectors), or that use non-standard pinouts for PATA (the Dell Mini 9, for example).
What is not clear, for the add-on user, is whether the SATA lines are visible to the chipset. Mobile chipsets usually have only one or maybe two SATA ports.
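One rough way to check, if you can boot a Linux live CD with the drive installed (a sketch only; behavior varies by machine):

    dmesg | grep -i sata       # look for SATA link-up / AHCI messages
    cat /proc/partitions       # did the drive enumerate as a block device?

If nothing enumerates, the slot is probably USB-only or non-standard.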
Re:Drat (Score:5, Insightful)
SATA 2.0 (3.0 Gb/s) is currently keeping the industry down.
SATA 3.0 (6.0 Gb/s) isn't widely adopted yet, but even when it's finally popular enough, that too will just keep the industry down.
SATA-IO should be ashamed of itself for implementing 3.0 with such bullshit specs given the obvious reality of the situation.
That's why many people want PCIe to become a standard interface for SSDs. That won't happen until low-cost/capacity SSDs use it.
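For the back-of-the-envelope numbers (both links use 8b/10b line coding, so usable throughput is about 80% of the raw rate):

    awk 'BEGIN {
      printf "SATA 3.0 (6.0 Gb/s):       %.0f MB/s usable\n", 6000 * 0.8 / 8
      printf "PCIe 2.0 x4 (5 GT/s/lane): %.0f MB/s usable\n", 4 * 5000 * 0.8 / 8
    }'

That's roughly 600 MB/s for SATA 3.0 against 2 GB/s for even a modest four-lane PCIe 2.0 slot.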
Re: (Score:3)
Re: (Score:2)
Or pack the pci slots
Re: (Score:2)
RAID
Good SandForce
As long as it's not on Sandy Bridge, with its gimped PCIe, maybe you've got a point.
Re: (Score:2)
Re:Drat (Score:4, Informative)
For cards in the price range you are talking about, OCZ delivers 1400MB/s on its 512GB card.
You seem to be less informed than you realize.
Re: (Score:3)
Re:Drat (Score:4, Informative)
Yes, we've been evaluating the OCZ cards, and they are much slower in real life than the benchmarks suggest. Note that FusionIO has a FusionIO Duo, which pulls 1.5 GB/s. This seems to be the holy grail of speed at the moment.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Nothing wrong with that, but it's not realistic to expect the market as a whole to also think that way. The market is more concerned with the average case.
Re:Drat (Score:4, Informative)
Neither card met its published performance numbers, but the Fusion I/O card came closer to its numbers than the OCZ card in basic benchmarks, making the Fusion I/O card quite a bit faster for raw throughput. Both cards were blazingly fast, though, pushing MB/s and IOPS like there's no tomorrow.
Real-world performance suffered greatly with the Fusion I/O cards due to their software-driven architecture. The CPU overhead was significant, even on a powerful multi-CPU Xeon server. The OCZ cards did not have this problem.
The real-world price/performance ratio made OCZ the winner overall. The competition was closest when excluding CPU overhead, but once you include CPU overhead, the OCZ cards win hands down.
Support from Fusion I/O was highly disappointing. With OCZ you expect minimal support, but I expected something better from the "premium" Fusion I/O brand (and price point). Unfortunately, their support was no better than OCZ's.
We originally evaluated the original Z-Drive model, which was kind of a rough implementation of the technology. If you are going to buy one now, avoid the old Z-Drives; there are several problems with their design. The new R2 Z-Drives have fixed these problems and are sold at basically the same price point for similar specs.
We eventually returned the Fusion I/O cards due to their ridiculous CPU penalty. We still have the OCZ cards, but have stopped using them in favor of normal SAS controllers with hot-swap SSDs. It's just not convenient to shut down a server and crack open the case just to replace a failed SSD, and SSDs do fail. :) At this point, PCIe SSD cards seem better suited to high-end workstation applications, where it's not as big of a deal to crack open the box for maintenance.
Re: (Score:2)
Re: (Score:2)
Are those best-case numbers or worst-case? OCZ has a history of claiming huge numbers and terribly under-delivering. Oh, and at least for my use case MLC is a non-starter, so the only OCZ card I'd be interested in is the Z-Drive e88 R2, which is ~$10k, so 30% more for a two-card solution (RAID1), and I only need ~120GB for the OLTP tables.
Here's another data point that agrees with you: I recently specc'd a machine for a client with a Core i7, 12GB RAM, etc., including an OCZ RevoDrive. This machine is a massive beast, easily the most powerful I've worked with, yet it doesn't feel noticeably faster than a machine equipped with a simple SSD such as one of Intel's offerings or even an OCZ Vertex 2.
My experience with OCZ mirrors yours exactly; all mouth and no trousers. The reviews of the product don't help much because although the numbers look
Re: (Score:2)
No SATA SSD pushes even 250MB/sec for continuous reads in the real world, even when connected to a 6Gbps SATA controller. See the latest comparison benchmarks [techreport.com].
This is because the entire SATA controller typically gets a single PCI Express lane, which maxes out at 500MB/s. The OCZ cards use 4 lanes, so the 550MB/sec or so they actually benchmark at [guru3d.com] is pretty poor use of a 2GB/sec maximum bandwidth.
Re: (Score:2)
As for your second link: since they're benchmarking the slowest OCZ card (and showing that the benchmarks agree with the advertised speed), why are you declaring that it's making poor use of PCIe x4, given that fact?
It's the slowest card that OCZ offers. Think about it.
Don't be so dishonest with your presentation.
Re: (Score:2)
Re: (Score:2)
Hardware review site benchmark porn is less than useless.
Re: (Score:2)
It's as if you think that other drives don't suffer the same degradations in 'real world scenarios'.
Do you think there is something magical about SSDs that makes their real-world performance degradation significantly worse than other technologies, or is your invocation of unknown magic specific to OCZ?
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
And why don't you think SSDs for PCIe (or indeed just PCI for standard desktops) have caught on yet?
Re: (Score:2)
Re: (Score:3)
For the moment
Re:Drat (Score:4, Insightful)
For reasons that I can only imagine had something to do with "somebody pinching pennies until their pecuniary ichor flows", the trend somehow started of using the mini-PCIe connector, without so much as the decency of different keying or anything, to handle what are, electrically, SATA signal lines plus power. There would be nothing wrong with this if these things were actually storage-oriented mini-PCIe cards (like the HDD PCI cards of yore, with a controller chip plus flash, capable of acting like a normal PCIe device), or if they were just using some 'sub-mini SATA' connector; but using a straight mini-PCIe connector for something electrically and logically completely different is plain hostile.
I get this sense that users aren't really supposed to touch these things, or the innards of the devices in which they will end up, or such a confusing and potentially damaging misuse of the connector would likely not have taken place...
Re: (Score:2)
So that's a Mini PCIe connector, not SATA
Re:Drat (Score:5, Insightful)
Re: (Score:2)
Yes ... won't plug in to virtually any SATA connectors, apart from the mini-SATA connectors. Just like mini-USB doesn't connect with virtually any USB connectors (apart from mini-USB), right?
And it's not like mini-SATA is a new connection either.
September 21, 2009, 08:00 AM Eastern Time: SATA-IO to Develop Specification for Mini Interface Connector; mSATA Extends Benefits of SATA Interface for Small Form Factor Applications [businesswire.com]
Obviously Intel is trying their best to screw over people by dreaming up some complet
Re:Drat (Score:4, Insightful)
I just strongly object to the use of an identical connector for two completely different, non-interoperable protocols. Were it some chintzy one-off by a bottom-feeding netbook monger, trying to pinch every last nickel off production costs, it would be understandable, if distasteful; but the fact that they've gone and made a standard out of it, without adding so much as a cheap keying change to the mSATA version of the mini-PCIe connector, pisses me off.
My displeasure isn't Intel-specific, but aimed at the unmodified reuse of a connector intended for a completely different protocol. It's sloppy and user-hostile.
Re: (Score:2)
Are you entirely sure about that?
Intel themselves write this in their product brief [intel.com]:
Re: (Score:1)
Re: (Score:3)
Well, that should at least allow notebook manufacturers to use the same physical design if they decide to switch to a PCIe interface. For the current generation (and probably the next SATA-3 generation as well), the SATA standard is fast enough. End users won't notice and more importantly, it won't influence the BIOS or operating system at all.
Re: (Score:1)
This appears to be a "PCI Express Mini Card"
http://en.wikipedia.org/wiki/Pci_express
This form factor has 1 PCI-e lane, so it's either 2 Gb/sec or 4 Gb/sec
From the article:
"pipes Serial ATA signaling over a mini PCI Express connector."
Neither is all that shabby for such a tiny card
The 200 MB/sec read bandwidth is probably limited by this bus.
Really, this is QUITE a nice number if you compare it to a standard notebook rotating-media drive; I would not complain.
As others have posted, this thing is tiny! Put two
raid? (Score:1)
10 of them in a RAID in a laptop?
Re: (Score:2)
Try finding a laptop with 10 PCIe links. Maybe if you add a PCIe switch to the x16 lanes for the video chip.
Re: (Score:3)
Performance vs size (Score:4, Interesting)
Why is it impressive that a smaller solid state drive performs as well as a standard size one? What does the size have to do with anything relating to these performance benchmarks?
Re: (Score:1)
Why is it impressive that a smaller solid state drive performs as well as a standard size one? What does the size have to do with anything relating to these performance benchmarks?
Because the bigger it is the more smoke it can hold and we all know that letting the smoke out totally kills performance.
Re: (Score:3)
What does the size have to do with anything relating to these performance benchmarks?
Perhaps because of the whole decades of history related to rotating bulk storage? Without increases in spindle speed (and, thus, price), larger storage has always been faster.
Don't you remember the Quantum Bigfoot?
Get off of my lawn!
Re:Performance vs size (Score:5, Interesting)
Given Intel's formidable fab expertise and capital resources, it would not surprise me if two and three are at play here...
Re: (Score:3)
Why is it impressive that a smaller solid state drive performs as well as a standard size one?
Is it the size of the ship or the motion of the ocean? (Sorry couldn't help myself.)
Otherwise, good point!
Re:Performance vs size (Score:5, Interesting)
Why is it impressive that a smaller solid state drive performs as well as a standard size one? What does the size have to do with anything relating to these performance benchmarks?
The speed of SSDs is linearly correlated with the number of flash chips they contain, because the flash chips are operated in parallel (think RAID 0, only it's implicit in the design).
Smaller would usually mean fewer flash chips, so less parallelism.
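As a toy model (the per-chip figure here is invented purely for illustration):

    for chips in 4 8 10 16; do
      echo "$chips chips x 25 MB/s each = $((chips * 25)) MB/s aggregate"
    done

Halve the chip count and, to a first approximation, you halve the bandwidth, which is why small form factors usually hurt.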
Re: (Score:2)
It does seem from the picture that they have used new packaging for the chips. If I remember correctly, there are more chips on my Intel SSD than there are in the picture, so they've probably paired them. That is quite a bit of effort to go through just for introducing a smaller form factor. This may also be a drawback for competitors that don't have direct influence over the production facilities. Note that this is pure speculation from what I see in the picture.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Why is it impressive that a smaller solid state drive performs as well as a standard size one? What does the size have to do with anything relating to these performance benchmarks?
Ask a woman and they might be able to tell...
Re: Performance vs. size (Score:1)
the mini-SATA SSDs are about a tenth the size... (Score:1)
Hard drive caddy (Score:2)
Just a quick note for you guys who like to fiddle with miniature screwdrivers and such: you can always replace your optical drive with an SSD or HDD. It seems that newmodeus has had this market cornered for a while, restricting you to a higher-priced product, but it is certainly a viable option. I've left my HDD where it is because of possible heat issues (although there is quite a lot of spare room in the caddy) and possible problems with warranty. The only drawback is that you have to put your movies on
The real benefit is an SSD and HDD in a laptop (Score:2)
Re: (Score:2)
Re:Windows (Score:5, Interesting)
Re:Windows (Score:4, Informative)
Now, just to get back to the bigotry and one-upsmanship, any setup that forces the user to think about how best to allocate filesystem stuff between block devices, or forces them to commit to one inflexible configuration, is arguably underutilizing the capabilities of this sort of technology.
Machines are supposed to handle the grunt work, unless the human really wants to do it (not to mention that keeping accurate track of file accesses, speed, and latency of multiple devices properly is really beyond the capabilities of a human, at least in realtime).
What you really want is an FS arrangement that can seamlessly present you with a single logical volume, silently handling the details of what to commit to flash and what to platter, for optimal performance and responsiveness without the cost of going all Flash.
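ZFS already gets close to this: you can hang an SSD off a rotating-disk pool as an L2ARC read cache and let the filesystem decide what lives where. A sketch, with hypothetical device names:

    zpool create tank /dev/sdb       # big, slow rotating disk
    zpool add tank cache /dev/sdc    # small, fast SSD as an L2ARC read cache

Reads that the cache can serve come off the SSD; everything still lives durably on the platter.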
Re: (Score:2)
Re: (Score:3)
Re: (Score:2)
Re: (Score:1)
The "join" command in DOS could do this too... way back in the 90's...
Re: (Score:2)
Re:Windows (Score:4, Insightful)
Directory linking goes back to Windows 2000, but mapping C:\Users to it is a bit more difficult, as the currently logged-in user's profile is always in use, which locks the folder.
There are quite a few ways to deal with this issue:
There are also tools from Microsoft designed to automate installs that will allow the mapping to be set at install time.
Re: (Score:3)
Re: (Score:2)
You don't really need tools for it. MS allows you to use the WINNT.SIF file for that purpose.
I don't think you can do the job with just WINNT.SIF, since you need a disk with two partitions.
The config options in WINNT.SIF allow you to tell Windows Setup to either wipe the disk completely and use the whole drive, or to use the first empty space, but you can't do anything else. If you want a custom partition layout, that has to happen before WINNT.SIF is parsed.
Re: (Score:2)
You should be able to do what you're describing with group policy [tech-faq.com]. It's designed more for roaming profiles, but it should work for moving c:\users off of an SSD.
Re: (Score:1)
This has been a feature since Microsoft introduced NTFS, which is long before Vista.
Re: (Score:2)
Correct. They are called Volume Mount Points [wikipedia.org], and they were introduced in Windows 2000 (ten years ago). You can mount non-NTFS drives as a folder on an NTFS drive. It even works on USB drives and CD/DVD drives (so you could have /dev/cdrom).
I have a feeling that it may have been possible to do with the filesystem in NT 6, but there was no user interface for it.
Re: (Score:3)
Only thing is, it doesn't work that way in Windows, simply because the damn / folder is part of the /user data folder. Due to that, you have to map each user's /home folder on an individual basis, and mapping more than a couple of folders/drives will slow boot/shutdown times considerably. Another issue is that, unlike *nix, MS didn't see fit to isolate the damn /root ("/admin") folder from the /home ("/user data") folder, meaning you simply can't relocate /home ("/user data") to another drive.
Another issue is that the fo
Re: (Score:2)
There are limitations to mounting drives in a separate directory under XP and Server 2003; I'm not sure if they also affect Vista/7/Server 2008. Some software insists that the destination directory only correlates to the boot drive and not the physical disk, forcing a duplicate mount as another drive letter so it sees that I have enough space. Overall, it's better than having many drive letters.
Re: (Score:2)
It's funny, because years ago MSFT briefed us DEC guys on their brand-new WNT OS, which had so much DEC technology in it. And I did ask why the disk device names had to follow Windows 95 (and DOS). I didn't get a good answer, but I suppose the reason was backwards compatibility.
Re: (Score:1)
I can't believe it either ... but there is a whole industry dedicated to dealing with windows. But it's the way our world works, sadly.
We create artificial scarcity, force people to use an inferior and limited technology, that has ridiculous drawbacks, and requires a tremendous workforce around it just to keep it functional. And we keep people using it even when there are cheaper, infinitely better, more reliable and future-proof technologies. The reason is simple: Through artificial scarcity, we keep the m
Re: (Score:2)
With grown-up OSes that aren't stupid enough to map the physical drive layout to the logical file layout, these hybrid drives are a no-brainer: just change the fstab to point /home (/Users for Mac heads :P) to the HD and / to the SSD. Done! However, in Windows you would now have to contend with your drive being divided amongst two drive letters and all the registry hell that goes along with it.
Except that your / is full of small files, and your /home/[user]/Documents is also full of small files that'd be much better off on an SSD, while all the help files on / that I hardly ever use and the media files in your home folder should go on the HDD.
P.S. While the TARDIS tricks you can pull off on "grown up" OSs can be useful, they're hell to make sense of and make very simple questions have very complicated answers. Like for example, do I have the space to copy in these 30 GB of files? Well that depends, yo
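For reference, the fstab split the quoted poster describes is just two lines (device names hypothetical):

    /dev/sda1  /      ext4  defaults  0  1    # SSD: root filesystem
    /dev/sdb1  /home  ext4  defaults  0  2    # HDD: user data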
Re: (Score:2)
Figuring out free space inside a *NIX system isn't that hard. Just because you lack algorithmic imagination doesn't mean it's difficult.
Re: (Score:2)
Like for example, do I have the space to copy in these 30 GB of files? Well that depends, you only have 10 GB free on / but it's bigger on the inside and there may even be more disks being mounted somewhere under /home again.
The df command will tell you how much space is available on each block device and lists the mount point for the device. If you pass it the "-h" argument it conveniently gives you the sizes in the more human readable MB, GB, etc abbreviations instead of listing the number of 1k blocks.
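Illustrative output (these numbers are invented) for the 30 GB question above:

    $ df -h
    Filesystem      Size  Used Avail Use% Mounted on
    /dev/sda1        60G   50G   10G  84% /
    /dev/sdb1       1.8T  600G  1.2T  34% /home

One line per mount point: / only has 10G free, but anything landing under /home has 1.2T to play with.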
Re:Windows (Score:4, Insightful)
Sorry to repost this, but I accidentally posted it as AC, and nobody is going to see it at -1.
I can't believe it either ... but there is a whole industry dedicated to dealing with windows. But it's the way our world works, sadly.
We create artificial scarcity, force people to use an inferior and limited technology, that has ridiculous drawbacks, and requires a tremendous workforce around it just to keep it functional. And we keep people using it even when there are cheaper, infinitely better, more reliable and future-proof technologies. The reason is simple: Through artificial scarcity, we keep the money flowing in a certain direction, we keep control in the same hands, and we create hugely profitable but completely pointless industries.
Think about it, we could be running 100% on clean, future-proof, secure and cheap nuclear energy. Instead, we rely on oil. The infrastructure that oil demands is huge, the drawbacks are incredible, we are polluting the environment, drilling the oceans to get some more black juice out of the earth at a huge risk.
We could also have moved all of our communications to ip-based networks, cutting down costs, and removing the need for so many different networks. We could have a single infrastructure that would provide us with high-bandwidth, low-latency internet everywhere, and put everything from phone calls to TV through that network. Instead, we are running different networks for each purpose, and within each purpose different networks for each provider. If we re-purposed all cellphone towers from all providers to give us just internet access, we could have 100% coverage everywhere in the world. Instead, we have huge overlapping (areas serviced by several providers), and huge areas with no coverage at all.
We could also be using just Free Software. It's open, transparent, reliable, cheap, and ethical. Instead, most people use windows. That means triplicating new hardware purchases, cutting 70% on hardware's lifespan, spending incredible resources in pointless activities like antivirus production/sale/deployment, and an IT structure several times bigger than required, not to mention all the lost time and profit due to preventable downtime.
But it's the way the economy works. It's the way the usual people keep getting richer, while keeping the majority of the world in line, quiet, and productive.
It's absolutely sad, but it's not just something that happens only in software, and it's certainly no accident.
Re: (Score:2)
Re: (Score:1)
Are you sure you can't get the printer to work on GNU/Linux?
Maybe there's no exact driver for that printer, but a similar one might work.
I've yet to come across a printer that won't work right with GNU/Linux and CUPS.
Try connecting it to a GNU/Linux install and use similar drivers for it, odds are, it'll work just fine.
Re: (Score:2)
Actually, what we really need is an OS that maps all storage into one contiguous map, from fastest to slowest, and puts rarely used files on the slowest media and frequently used ones toward the fastest. But it should also include knowledge of memory that is temporary and fast versus slow (tape) and/or even unavailable (network shares), seamlessly, as one huge pile.
Windows supported TRIM before anyone else (Score:4, Interesting)
I wonder how much that primitive joke of an "operating system" will derail the widespread adoption of these hybrid technologies.
The primitive joke of an operating system that introduced USB-flash-based application acceleration (no similar feature exists for any free operating system), and that supported SSD TRIM commands before any other operating system? (OS X still doesn't, and there are no announced plans to; Linux 2.6.32+, I believe, does, but only at the kernel level, and support amongst the various filesystems seems inconsistent or not present; it's hard to tell. hdparm supports manually running TRIM on areas reported by the filesystem as free, but that's hardly equivalent to Windows, where it "just works".)
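For what it's worth, the Linux side can be made automatic on a 2.6.33+ kernel: ext4 will issue TRIM if you mount with the discard option (a sketch; whether a given kernel/filesystem combination honors it is exactly the inconsistency described above):

    /dev/sda1  /  ext4  defaults,discard  0  1

hdparm also ships a wiper.sh script, I believe, for doing the same job by hand.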
Re: (Score:2)
You can tell just about every operating system in use today where to put a swap file. Your "new feature" is as relevant as "it also comes in pink". It was also a horrible kludge to get around memory usage issues and disk space limitations, since most USB flash disks at the time (and many now) are horribly slow. I've turned "stupidfetch" off on some Vista
Re: (Score:2)
You can tell just about every operating system in use today where to put a swap file.
Not the same thing. At all.
Your "new feature" is as relevent as "it also comes in pink". It was also a horrible kludge to get around memory usage issues and disk space limitations since most USB flash disks at the time (and many now) are horribly slow.
No, it was adding a caching layer to improve performance. Exactly the same principle used by NetApp, EMC, Sun, et al. *Exactly* how flash/SSD disk _should_ be being us
Re: (Score:2)
The only difference is the default behaviour was changed (I suppose avoiding "horrendous manual hacks" such as, OMFG, ticking a box). There were some other caching changes that made a mess of Vista for a while but they should have been patched out by now. We can argue about this all day without even taking a step off the Microsoft platforms, I'm pretty sure even including Windows CE and flash devices there.
Re: (Score:3, Insightful)
I know you're replying to a rather trollish parent, but still I'd like to remind you not to let facts get in the way of your biased presentation.
Presumably you refer to ReadyBoost (which was introduced in Windows only around 2006): isn't that about the fastest way to trash your USB drive? Further, assuming you are inclined to do so on a UNIX-like system, say Ubuntu:
- unmount the USB volume
- sudo mkswap /dev/sdX1
- sudo swapon -p 32767 /dev/sdX1
- increase swappiness to be on Windows levels so your disk gets ag
Re: (Score:2)
For the record, recent Windows versions won't even install on FAT32, so it doesn't really count. In fact, it's becoming increasingly difficult to even format a drive as FAT32 from the GUI, which makes sense, as removable drives are becoming large enough that it's worth using a more advanced filesystem.
Also, the main difference between ReadyBoost and just using flash storage for swap is SuperFetch, the (admittedly marketing-named) feature that caches data before you're expected to need it. Most caching sche
Re: (Score:2)
I'm not sure exactly what you're talking about with regard to "application acceleration". Why would I need (want) that? My OS already starts from a cold boot to a loaded desktop in about 3 seconds, and my hardware is nothing to write home about (no flash memory/SSD).
In all fairness, TRIM is only needed because/when filesystems suck. That's hardly a boasting point.
ZFS doesn't need TRIM.
Re: (Score:1)
Re: (Score:2)
I've had plenty on many machines (Score:2)
Re:Performance (Score:5, Insightful)
Re: (Score:2)
You could mount them flat in a 1U configured for five tall and fifteen wide, for a grand total of 75 hot-swappable units. Using the current 80 GB unit you'd have 6000 GB. With a simple RAID 5 array and, let's say, two hot spares for good measure, you'd have 5760 super-redundant usable GB at a theoretical sustained write speed of 5760 MB/s. If one dies, it's replaced by a hot spare in 17 minutes (80 GB at 80 MB/s), and you replace the failed one with a new 'chip' at your earliest convenience.
I've never been i
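As a quick sanity check on those numbers (shell arithmetic, decimal units):

    units=75; spares=2
    echo "usable: $(( (units - spares - 1) * 80 )) GB"     # 5760 GB after spares + parity
    echo "rebuild: $(( 80 * 1000 / 80 )) seconds"          # 80 GB at 80 MB/s, ~17 minutes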
Re: (Score:2)
Even if you were shooting for '2 post' friendly depth you could
Re: (Score:2)
What really makes me excited about that is the smaller chunks of RAIDed disk. Recovery by hot spare has always made me nervous due to the length of time it takes to repair a 1-2 TB hole in your redundancy. As long as the failure rate of the individual drives isn't such that you're incurring multiple failures or encountering them very frequently, this faster return-to-full-health time would be a real boon.
Re: (Score:3)
Depends on the RAID type.
RAID 5 (and 6?) rebuild/recovery windows tend to scale linearly with the number of drives in the array, so a very large array with very large drives can take hours or days to rebuild.
RAID 1 and RAID 10 rebuild/recovery windows scale with the size of an individual spindle, not
Re: (Score:2)
I doubt these units could be hot-swapped; they don't have staggered power pins, for a start. They'd need to be loaded into some sort of caddy with the proper connectors, at which point you may as well just use the 1.8" form factor.
Re: (Score:2)
OK, I'm sorry. I just hadn't heard that one in a while. I'll be on my way now.