Are SSD Accelerators Any Good? 331
MrSeb writes "When solid-state drives first broke into the consumer market, there were those who predicted the new storage format would supplant hard drives in a matter of years thanks to radically improved performance. In reality, the shift from hard drives (HDDs) to SSDs has thus far been confined to the upper end of the PC market. For cost-conscious buyers and OEMs, the higher performance SSDs offer is still too expensive and their total capacity is insufficient. SSD cache drives have emerged as a means of addressing this situation. They are small, typically containing between 20 and 60GB of NAND flash, and are paired with a standard hard drive. Once installed, drivers monitor which applications and files are accessed most often, then cache those files on the SSD. It can take the software 1-2 runs to start caching data, but once this process is complete, future access and boot times are significantly improved. This article compares the effect of SSD cache solutions — Intel Smart Response Technology and Nvelo Dataplex — on the performance of a VelociRaptor and a slow WD Caviar drive. The results are surprisingly positive."
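The caching behavior the summary describes (watch accesses, promote hot data after a run or two) can be sketched with a toy model. This is purely illustrative: the real drivers use proprietary block-level heuristics, and every name and threshold below is invented.

```python
from collections import Counter

class ToyCacheModel:
    """Toy model of an SSD cache driver: once a block has been read
    HOT_THRESHOLD times, it is promoted to the size-limited SSD.
    Illustrative only -- Intel SRT and Dataplex use their own heuristics."""
    HOT_THRESHOLD = 2  # roughly matches the "1-2 runs" warm-up in the summary

    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.counts = Counter()
        self.cached = set()

    def read(self, block):
        self.counts[block] += 1
        if block in self.cached:
            return "ssd"   # cache hit: served from flash
        if self.counts[block] >= self.HOT_THRESHOLD and len(self.cached) < self.capacity:
            self.cached.add(block)   # promote after repeated access
        return "hdd"       # cold read still comes off the platter

cache = ToyCacheModel(capacity_blocks=2)
first_run  = [cache.read(b) for b in ("boot", "app", "data")]   # all "hdd"
second_run = [cache.read(b) for b in ("boot", "app", "data")]   # still "hdd"; hot blocks promoted
```

After the second run, "boot" and "app" occupy the cache and subsequent reads of them come back as "ssd", while "data" keeps missing because the toy cache is full.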
bcache (Score:5, Informative)
For Linux users: http://bcache.evilpiepirate.org/
Lets you use any SSD as a cache in front of another filesystem.
Re:bcache (Score:5, Informative)
Lets you use any SSD as a cache in front of another filesystem.
It would be niftier if it would let you use it as a block cache in front of any filesystem, instead of just one located on a specially-prepared partition. dm-cache will do this but isn't up to date.
Re: (Score:3)
Lets you use any SSD as a cache in front of another filesystem.
It would be niftier if it would let you use it as a block cache in front of any filesystem, instead of just one located on a specially-prepared partition. dm-cache will do this but isn't up to date.
Maybe Flashcache [github.com] would be a better choice for some. I use this as a read-cache on several VPS nodes and the results are impressive.
Re:bcache (Score:5, Informative)
Re:bcache (Score:4, Informative)
For Linux users: http://bcache.evilpiepirate.org/
Lets you use any SSD as a cache in front of another filesystem.
For ZFS users:
* read cache: zpool add {pool} cache {device}
* write cache: zpool add {pool} log {device}
Re: (Score:2)
well this is great.
Re:bcache (Score:5, Interesting)
Why bother? I have an HDD mounted as /, and an SSD mounted as /usr on my Gentoo system. Using atop I consistently see the HDD receive 10-20 times the writes the SSD receives but only about 2x the reads. In other words, on Linux the SSD is already serving primarily as a read-only caching filesystem just by mounting it correctly.
Re: (Score:3)
Because then you'll waste writes on /tmp and /var.
Re:bcache (Score:4, Interesting)
For Linux users: http://bcache.evilpiepirate.org/ [evilpiepirate.org]
Lets you use any SSD as a cache in front of another filesystem.
Solaris and Windows have been shipping with production-ready L2 filesystem caches for years already: L2ARC and ReadyBoost. I'll give Apple a pass because their systems are mostly not designed for adding drives, and they were apparently betting on high-capacity SSDs coming down in price by now. Desktops have less of a need for caches in the tens of GB anyway. Linux, as a server OS, doesn't have much of a good excuse: why wasn't L2 caching worked out years ago, when everyone was racing for TRIM support? Using smaller, cheaper SSDs as an L2 cache almost makes too much sense. It covers up the short write-cycle lifetime and poor sequential read performance. 60-odd GB of cache starts to look pretty dang good for a lot of server workloads.
I feel I should point this out because these cheesy Linux +1 MeToo posts are _really_ aggravating to people who use it professionally. It's a tool. We're not in love with it.
http://arstechnica.com/civis/viewtopic.php?f=21&t=1114013 [arstechnica.com]
The developer apparently didn't even know what the ARC algorithm is... which is just bizarre, like developing a race car without knowing what variable valve timing is. Not saying it is needed, but what level of quality do you expect out of this?
Re: (Score:3)
Mac OS X is basically a highly modified FreeBSD/NetBSD, so it might actually already have ZFS support, and therefore L2ARC.
Knowing Apple, though, they have probably disabled it and give you no means to even try using ZFS.
Besides that, they've probably locked SSD support down to a few select drives as well.
Linux does not have these options by default, but several are available. I bet some vendors do include this support.
For ZFS you don't need kernel mods in many distros.
Comment removed (Score:4, Interesting)
Re: (Score:3)
Likewise I'm using a 16GB USB3 stick for Readyboost. It's certainly speeding up Saints Row the Third's otherwise atrocious loading times: something like 5 seconds to load my campaign versus a minute or so without.
Re: (Score:3)
You should not expect much speedup from using a 10 MB/s memory card in front of a standard drive with 150 MB/s sustained transfer.
If the small files are stored in your cache, you might save some seek time. But you can't compare some ultra-slow USB / SDHC card to a 200-300 MB/s SSD.
I tried the SDHC; it did not work well. A fast USB 3.0 stick in a USB 2.0 port was way better, but still does not compare to an SSD.
Comment removed (Score:5, Informative)
Re:bcache (Score:5, Informative)
USB latency is actually rather high. In fact, very high.
The absolute minimum latency for a fetch over a USB port is 16ms. It seems this has had some work done on it, the polling rate now being 125Hz by default instead of 90.
Still: 8ms to send the request for a file, the device gets it, and let's assume it's ultra fast and takes just 3ms to find, fetch, and prep the reply packet (and that it fits in one packet). That means 16ms have been spent BEFORE the data can be sent back, and 24ms for the whole round trip.
HDDs seek faster than this, so if your HDD has no other activity, for a single fetch your HDD is faster. Unless it's a Caviar Green.
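The parent's arithmetic can be reproduced with a quick back-of-envelope script, assuming latency accrues in whole polling intervals; the 3 ms device-side figure is the parent's assumption, not a measurement.

```python
import math

poll_hz = 125
frame_ms = 1000 / poll_hz      # 8 ms per polling interval, per the parent
device_ms = 3                  # assumed device time to find and prep the reply

# The request occupies one interval; the 3 ms of device work pushes the
# reply into the interval after next, so 16 ms pass before data starts back.
before_data_ms = math.ceil((frame_ms + device_ms) / frame_ms) * frame_ms
round_trip_ms = before_data_ms + frame_ms   # the reply occupies one more interval
```

That yields the parent's 16 ms before data moves and 24 ms for the full round trip; the reply below disputes the 8 ms interval itself for Hi-Speed devices.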
Re: (Score:3)
While I don't know the technical details as much, readyboost does help in some situations - so there's got to be something else happening on a hard drive that is even higher latency than what you're describing about one link in the chain of USB. I have an older 4gb CF card in my machine at home - no other use, so I cranked it into a readyboost drive. It can peak out USB sustained speeds (it's a UDMA5 capable drive) and I do have it going through a USB controller, but once the OS is loaded things are just
Say what? (Score:3)
Absolute minimum latency for a fetch is 16ms on USB port
Where are you getting this number? I develop USB devices (Cypress FX2 Hi-Speed and a PIC for Full-Speed), and a USB Hi-Speed microframe is transmitted every 125 microseconds. When I initiate transfers, they almost always go out during the next microframe. I can and have sent two packets back and forth in a single millisecond, and that's without sending multiple packets per microframe (I believe I've seen up to 17 bulk packet transfers in one microframe).
No. (Score:5, Insightful)
Hybrid drives or mixed-mode setups kinda suck ass now that actual SSDs are getting to a reasonable price/size.
SSD for os/programs.
Giant TB+ drive for storage and media files.
Re:No. (Score:5, Insightful)
Re: (Score:3)
Bingo. I've been manually managing the shifting of files back and forth between my HDD and SSD for a couple years now, and while it's not particularly hard, it's not something I'd want to guide a non-techie through. Getting the OS on one drive and the user folders (my documents, videos, music, etc.) on the other isn't particularly well documented, and moving individual Steam games seems to require console commands, a rarity in Windows.
Even though I can manage it all myself, I would absolutely switch to ha
Re: (Score:2)
I have an SSD in my new work machine, with a big disc drive as D:. For this, though, it was pretty simple to install data files (p4 sync, PCB projects) on the HDD and use the SSD for OS and applications.
Re: (Score:2)
After getting it initially set up (which was, I will concede, a pain in three hundred asses), my own dual-drive system has required me to think about which drive to put something on precisely once, when installing the Unreal Anthology off CD (which was a case of picking "D:\Unreal Anthology" instead of "C:\Unreal Anthology" when installing).
Maybe it's because I could afford an SSD big enough that I don't worry *too* much about space, and having a hard drive faster than normal (it's a 7200rpm drive in a lapt
Re: (Score:3, Informative)
Not just for power users ...
In Linux it's simple enough to, say, mount your root (OS data) on an SSD and /home (user data) on an HDD, but Windows 7 isn't so flexible.
What most people (power users) end up doing under Windows 7 is to install the OS on an SSD, then use a "junction point" (cf. a Linux symlink) to redirect the \Users folder to an HDD (and reconfigure the Windows TEMP directory to be on the HDD, to avoid killing the SSD with excessive temp-file create/delete cycles). The trouble with this is t
Re: (Score:3)
Odd, the hybrid drive I just bought doesn't suck ass. It in fact is faster than ANY hard drive you can buy that has a 750gb size. Unless "suck ass" is the new hipster slang for "really fast". It made my old out of date quad core i5 laptop a whole lot faster.
Re:No. (Score:4, Interesting)
I assure you that there are MANY 800 GB SAS/SATA SSD's that can beat your hybrid drive, they just cost more than most people will spend on their entire computer =)
Re: (Score:2)
Also, a Quad Core i5 isn't out of date, last time I checked. :)
BTW, a pair of 512GB SSDs are no longer crazy expensive, they can be purchased for less than $1,000 total, which is less than many gamers spend on their computers.
Re: (Score:3)
I can see why read-caching would be a lot simpler to implement, but I bet decent write-caching will really make hybrid drives as fast as SSDs for most desktop use.
Copying files from one location to another at SSD speeds till the write-cache fills up. Then while you do something else the drive flushes the cache to disk.
Re: (Score:2, Interesting)
I'd rather spend my money on RAM. Up the system memory from 8 GB to 32GB, and you eliminate the slowdown caused by hard drive accesses.
Re:No. (Score:4, Interesting)
That was true back when you were talking about MB numbers. It's definitely not true when talking in that range of GBs.
To use windows as an example, it will try and cache what it thinks you're going to load based on I think a fairly simple algorithm. Which means it's usually wrong. If you never access more than 32 GB worth of data from your HDD then sure, 32 GB of Ram will do the trick. But, for example, if you play WoW or SWTOR (both of which flutter around 20GB), any other game + web browser + windows you could fairly easily waltz past 32 GB of data, at which point you're into 'cache misses'. And yes, this is conceptually the exact same problem as cache hit ratios, just working at a different level (logical files or directories rather than lines of memory).
I had virtually no performance increase going from 12 to 24 GB of RAM on general disk use.
You can get a big boost from an SSD, and especially, getting something that will actually work a SATAIII connection at full speed. My x58 board is lucky to pull more than 200MB/s from even a very good SSD, whereas the same drive on a sandy bridge board will do 450-550 range.
Now keep in mind, a regular HDD is about 70MB/s for sustained data. Put that in a raid 1 (mobo hardware, or software) and you can see 120-130, so an SSD on a bad connection may not be that much better than much less expensive RAID.
Re:No. (Score:4, Informative)
You're ignoring a very important fact.
An SSD is at least an order of magnitude faster still because the seek time is in the microsecond range. So even an SSD on a bad interface can easily peg the interface for random I/O, while the super RAID array doing the same accesses can bog down to a halt.
The reason? Seek times. A typical hard drive is around 7ms or so, which means if you're doing lots of seeks, you'll never get that sustained transfer rate. Worst case, you can easily get less than 1MB/sec if the hard drive is reading 4kB blocks at random locations.
A hard drive is great for long reads and writes. An SSD excels at random I/O, and OS/application usage tends to be random I/O. It just makes the whole system feel "snappier", purely because read requests are fulfilled immediately versus skittering the head over the platters.
In fact, Windows 7 does a quick test to determine if it's running on an SSD - it does a bunch of random I/O. If the drive is capable of more than 50MB/sec, it's an SSD because no spinning rust can meet that requirement due to seek time.
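The seek-limited throughput claim is easy to check: if every random 4 kB read pays a full seek, throughput is just block size over seek time. A sketch, where the 7 ms HDD figure is from the comment above and the 0.1 ms SSD access time is an assumed ballpark rather than a measurement:

```python
# Throughput when every 4 kB random read pays a full access-time penalty;
# the transfer time of the 4 kB itself is ignored as negligible.
def random_read_mb_per_s(seek_ms, block_kb=4):
    reads_per_sec = 1000 / seek_ms
    return reads_per_sec * block_kb / 1024

hdd_mb_s = random_read_mb_per_s(7.0)    # ~0.56 MB/s: well under 1 MB/s, as claimed
ssd_mb_s = random_read_mb_per_s(0.1)    # ~39 MB/s even with a conservative access time
```

This is why a seek-bound HDD can't clear the 50 MB/s random I/O bar mentioned above, while even a modest SSD can.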
Re:No. (Score:4, Interesting)
I'm well aware of seek times. And honestly, they don't matter all that much. For a very small file they take you from 1-2 seconds to effectively instant yes, but for a significant file you're throughput limiting yourself anyway.
The problem comes when you try to read a small file while reading the large file. If you want to preview 1,000 files (say thumbnails for a directory full of pictures), that's 1,000 reads. At 10ms each (HDD), that's 10 seconds. At 10µs each (SSD), that's 10ms.
Sure, keep your read-only media on a large hard drive, as you'll tend to be pulling it off as a single stream, with only one open file, but keep the majority of your files on an SSD.
Re: (Score:3)
Comment removed (Score:4, Informative)
Re: (Score:2)
I built a system two weeks ago, and I put in a 1TB drive and a 60GB cache SSD. The SSD definitely makes certain things nice--Windows boot time is down to ~7 seconds (I don't use Intel Rapid Start), and loading games (primary purpose of this machine) is also very quick, particularly on games with long load times, such as SWTOR.
That said, for common, everyday use, SSD cache drives are kind of meh. Shaving off a second from Chrome launch time, or even 20 seconds off of Windows boot time, isn't that big a dea
Re: (Score:2)
Agreed. The hybrid drives arrived at the market too late. What's more, the 'caching' mechanism is a bit of a joke; a smarter, but more technically complicated, approach would have been to implement two drives in one package, one flash, one standard mechanical; however, I do not know if the SATA spec is cool with that. One drive, two partitions then?
As it is right now, the lack of control over what get put on that all too small cache is killing the market for these things. Yes, your most often accessed files a
Re: (Score:3)
Hybrid drives or mixed mode setups kinda suck ass now that actual ss drives are getting to a reasonable price/size.
SSD for os/programs.
Giant TB+ drive for storage and media files.
I have a laptop with a single drive bay, you insensitive clod
Re: (Score:2)
I may have to look into that.
I'm using a 60GB SSD for my boot drive with a 1TB sata drive for storage.
I have anything requiring mass storage mapped via symlink to the sata drive.
steamapps, origin games, certain large program files too.
Works decently enough, and faster than the SATA drive alone. Windows boots in about 15 seconds.
Re:No. (Score:5, Insightful)
I'm sick of this myth. The math I've done indicates that, presuming that the drive is doing a halfway decent job of spreading the writes around, most cheap SSDs are rated to allow you to write the entire volume of the drive every day for about 30 years. Now personally I don't even come close to doing that, and your average physical HDD is rated for about 5 years, with 10 being a seriously long life.
If you buy a reasonable quality SSD at present your drive will not last long enough to see a significant level of NAND failure and what will kill it will be one of the million things that kills HDDs on a regular basis.
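The endurance math is worth showing, since it hinges entirely on the assumed program/erase cycle rating, which the parent does not state: "30 years" roughly corresponds to a 10,000-cycle part, while a typical cheap MLC rating of around 3,000 cycles gives closer to 8 years. Both cycle counts below are assumptions for illustration.

```python
# With ideal wear leveling, writing the full drive once per day consumes one
# program/erase cycle per day, so lifetime in years is simply cycles / 365.
def years_at_one_full_write_per_day(pe_cycles):
    return pe_cycles / 365

mlc_years = years_at_one_full_write_per_day(3000)     # ~8 years at an assumed MLC rating
best_case = years_at_one_full_write_per_day(10000)    # ~27 years, near the parent's "30"
```

Either way, the conclusion stands: at a full-drive write per day, the flash outlives the 5-10 year mechanical lifetime the parent cites for HDDs.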
Comment removed (Score:4, Interesting)
Re:No. (Score:4, Informative)
> SSDs have a propensity to just die like a normal HDD.
No, normal drives tend to become flaky, then get super-slow, then start to make grinding noises and die outright a short time later. SSDs just commit data-suicide, then go into "panic" mode and lock the entire drive if they sense that you're trying to do data recovery on them. Ask anybody unfortunate enough to own a drive based on the Sandforce SF-1200 controller, like the OCZ Vertex 2. OCZ's forums are *littered* with post after post after post (continuing to the present) from people who've had the drive just spontaneously decide to fail.
The problem isn't flash-wear... the problem is a perfect storm of buggy firmware, drive-level encryption, and paranoid firmware that views aggressive attempts to recover data lost due to that buggy firmware as a hacking attempt & locks out the entire drive in a way that can't be fixed by end users (mostly, because Sandforce won't allow the recovery/repair tools to be released to end users). IMHO, it's completely inexcusable. At the VERY least, they should have made the encryption and protection something that can be disabled by end users (probably requiring complete reformatting, but at least present as an option). Then, they could have made a recovery mode that allows drives that had the encryption disabled to just sequentially rip the raw bits from the flash for offline recovery. But no. They have to protect their shit IP that nobody who's been burned by them will EVER purchase again anyway, and casually write off petabytes of lost user data due to their brittle embedded firmware and protection as "not our problem".
OCZ and Sandforce are the best poster children for a class-action lawsuit since the day HP decided to sell CD writers without cache (that their engineers GUARANTEED would turn at least a quarter of the discs they touched into coasters). The sad part is that such a suit could only have things like piddling amounts of money as the penalty, instead of compelling Sandforce to furnish all source, signing keys, and in-house utilities relevant to the SF-1200 to anybody who's ever had the misfortune of purchasing a drive based on it.
Re: (Score:3)
No way. Too late. SSDs already cheap enough (Score:3, Insightful)
240GB SSDs are bouncing around $200. Two bills for the boot SSD, your old drive gets the data partition, and you're beating these hybrids on performance AND price.
Re: (Score:2)
For $200 you can get yourself a 3TB drive and have change left over.
Re: (Score:2)
That's a major shift of the goalposts there.
Can't we just have an honest discussion without idiots throwing in extra conditions just so they can win some silly little mass debate game?
Re: (Score:3)
This guy is saying: for $200k I can buy a Ferrari. You are saying: hey, you can get a perfectly good combine harvester for less money.
Hard disk price per gigabyte vs SSD price per gigabyte is not the issue discussed here - performance is.
Re:No way. Too late. SSDs already cheap enough (Score:4, Informative)
Exactly. I have a 240GB SSD for my laptop and desktop's main drives, with oodles of secondary storage (7200 RPM, of course). The difference is magnificent. If you've never used a SSD before, you simply do not understand -> Adobe Photoshop CS5 loads in only 3 or 4 seconds. Try doing that on a mechanical hard drive, and it's just PAIN.
All software is not created equal (Score:4, Informative)
It seems that SSD accelerators can be hit/miss. If you take a look at http://www.theregister.co.uk/2012/07/12/velobit_demartek/ [theregister.co.uk] for example, some of these products don't seem to do anything - while some seem to actually work.
Like any young industry, it'll probably take a while for the field to shake out until only a few decent contenders remain.
Surprisingly why? (Score:4, Informative)
It would be surprising if it weren't the case. We've been doing the same thing with memory for years. Our CPUs need memory that can perform in the realm of 100GB/sec or more with extremely low latency, but we can't deliver that with DRAM. So we cache. When you have multiple levels of proper caching you can get like 95%+ of the theoretical performance you'd get having all the RAM be the faster cache, but at a fraction of the price.
This is just that taken to HDDs. Doesn't scale quite as well but similar idea. Have some high speed SSD for cache and slower HDD for storage and you can get some pretty good performance.
I love Seagate's little H-HDDs for laptops. I have an SSD in my laptop, but only 256GB. Fine for apps, but I can't hold all my data on there (music, virtual instruments, etc). They are just too pricey to get all the storage I'd need. So I also have an H-HDD (laptop has two drive bays). Its performance is very good, quite above what you'd expect from a laptop drive, but it was only $150 for 750GB instead of $900 for 600GB (the closest I can find in SSDs).
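The "95%+ of the theoretical performance" point above is the standard average-access-time calculation: a small fast tier with a high hit rate dominates the average. A sketch with illustrative millisecond figures, not measurements:

```python
# Average access time for a two-tier store: requests hit the fast tier with
# probability hit_ratio and fall through to the slow tier otherwise.
def avg_access_ms(hit_ratio, fast_ms, slow_ms):
    return hit_ratio * fast_ms + (1 - hit_ratio) * slow_ms

ssd_ms, hdd_ms = 0.1, 10.0
cached = avg_access_ms(0.95, ssd_ms, hdd_ms)    # 0.595 ms at a 95% hit rate
uncached = avg_access_ms(0.0, ssd_ms, hdd_ms)   # 10 ms: every request hits the HDD
speedup = uncached / cached                     # ~17x from a cache a fraction of the size
```

The same formula explains why cache hit rate, not raw SSD speed, is what hybrid setups live or die by: drop the hit rate to 50% and the average barely improves.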
Re: (Score:2)
I have had laptops with up to 750GB HDD. My current laptop has a 256GB SSD, and my next will have at least 500GB. I hate to be buzzword compliant, but I don't need all the video and music on my computer all the time. I can leave that in external storage, the cloud, and dow
Re: (Score:3)
A decent OS (back down MS fanboys, Win7 fits sometimes too) uses RAM to cache a lot of stuff anyway without the hassle of a ramdisk. That file you opened yesterday may still be in RAM and open very quickly if you have enough spare memory.
Sometimes they work well though. It's nice to have a 20GB ramdisk for scratch space for software that is too braindead to use memory when it's available and just hammers the disk instead. I've got two orders
Re: (Score:2)
Pointless (Score:5, Informative)
SSD's were recently @ $1/Gig. That's when I upgraded everything.
I've seen them as low as 55-65c a gig now. Yeah... gotta love how
tech drops in price RIGHT AFTER you decide to adopt.
Buy a WHOLE SSD drive. Put all the programs you use daily on it.
120G ~ $70
That is all.
FWIW, except for bulk storage, I will NEVER buy a spinning HD again.
I experienced a RIDICULOUS speed up, going from a 7200rpm drive.
-AI
Re: (Score:3)
To be honest, despite the dire doom and gloom warnings of people that the "end times will come" to first generation adopters of SSD's, I've got a first generation OCZ drive that's still chugging along and working like the day it was new. Heck, my page file is on it. It hasn't even used any of the backup blocks yet, 3 years on now and no complaints yet. I have a second 60GB drive that I transfer stuff onto if I'm using it a lot, like MMOs and some programs that use large textures (Shogun 2, Skyrim and so o
Re: (Score:2)
To be honest, despite the dire doom and gloom warnings of people that the "end times will come" to first generation adopters of SSD's, I've got a first generation OCZ drive that's still chugging along and working like the day it was new.
Thanks for that.
I did jump both feet into the SSD thing, knowing I could hit that switch
one day and be met with the same silence, just less screen =)
It's good to hear from a person that their SSD hasn't died on them.
FWIW, I hope you didn't jinx yourself.
-AI
Re: (Score:2)
All too possible, I do have a backup in place just in case. But if it dies, it dies. 3 years won't be bad, it's hard enough to find mechanical drives with a 3 year warranty on them anymore.
Not confined to high-end. (Score:3)
> In reality, the shift from hard drives (HDDs) to SSDs has thus far been confined to the upper end of the PC market.
Not entirely. I have the cheapest netbook I could find, and I replaced its hard drive with a cheap low-capacity SSD. I don't keep much big stuff on it so the capacity isn't a problem. In terms of performance and power usage and not having to worry about my data when I drop my computer, it's been entirely worth it.
Re: (Score:2)
Indeed. I seriously doubt that it's the 'upper end' of the market. In reality, it's probably the more technically literate part of the market, which understands the difference between an SSD and an HDD.
Only new for the consumer... (Score:3)
This is exactly what has been going on in the enterprise storage space for a while. I only know much about two vendors, but they both have a solution like this. High-end IBM storage has Easy Tier, which, while originally for mixing FC-AL/SAS and SATA, works with SSDs too, and in the latest revs all 3 tiers at the same time. NetApp used to have a PAM card which is now called... FlashCache? FlexCache? F-Something-Cache anyway, which is essentially an SSD drive on a PCI card.
Good to see the high end tech being applied to consumer level workloads.
No mention of OS requirements (Score:5, Informative)
These accelerators can cache at one of two levels:
1. On the file level, which isn't very resource-intensive (there aren't nearly as many files as blocks on a disk), but requires that the accelerator be able to read file system metadata (and of course be able to intercept OS calls), which severely restricts what kind of file system, and really even which operating systems, you can use with the accelerator,
or
2. Block-level caching. Much more generic; it can be used with any file system, since the blocks, not any file system metadata, are the only thing used. However, managing all that block information comes at a cost, either in main memory or in more expensive hardware. For instance, Flashcache requires about 500 megs of memory to manage a 300GB disk. Depending on your usage this may be acceptable (though is memory really that much cheaper than SSDs nowadays?), but for most it isn't.
From the article I can assume that they only tested Windows, and that really limits its usefulness.
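The Flashcache figure quoted above implies only a handful of metadata bytes per cached block, which is about what a block-level map needs. A quick sanity check, with the 4 kB block size assumed (it is a common default, not stated in the comment):

```python
# Sanity-checking "~500 MB of RAM to manage a 300 GB disk" for block-level
# caching: RAM cost scales with the number of blocks being tracked.
disk_bytes = 300 * 1024**3
block_bytes = 4 * 1024                      # assumed cache block size
blocks = disk_bytes // block_bytes          # 78,643,200 blocks to track
metadata_bytes = 500 * 1024**2
per_block = metadata_bytes / blocks         # ~6.7 bytes of metadata per block
```

A few bytes per 4 kB block is plausible for a hash-table entry, which is why the RAM overhead grows linearly with the size of the disk being cached.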
In reality? (Score:3)
In reality, the shift from hard drives (HDDs) to SSDs has thus far been confined to the upper end of the PC market.
In reality, 100% of the smartphones, tablets, many/all? of the ultrabooks, and many notebooks now ship with SSDs. In a short time, virtually all laptops will ship with SSDs.
Disks will go the way of tapes. You'll be able to get them, but the practical uses will be few.
In reality, I imagine that more computers (yeah, I count smart phones and tablets) are now sold with SSDs than disks.
As to your actual question about accelerators - I have no idea. I went solid state a couple of years ago and won't be going back.
Simpler solutions tend to be superior. (Score:3)
SSD prices have fallen quickly, while hard drives have gone up. If you don't need large amounts of storage it's better to just go SSD. But what if large amounts of storage are needed?
I would recommend buying an SSD, putting the OS and all applications on it, and then using a magnetic drive as the "users" volume. Any sanely laid out OS makes this very easy. The OS and Apps will load quickly, the large items (like video) will be stored on the cheaper, larger disk storage. No "hybrid" algorithm to worry about working. Two separate parts that can be upgraded independently. No OS support required. Perhaps some acceleration of some small data files will be missed, but the large ones would have never fit in the accelerated flash anyway.
I do think that file systems need to evolve in a new direction. ZFS is a step in the right direction, but it would be nice to have a file system where you could add a RAM disk or flash disk and tell it to be used as a "cache" for the underlying disk, write-through or write-back. Easy to do in software. Plus better backup and replication support. I'd really like to configure my laptop with a 2TB spinning disk, a 256GB super-fast SSD, and give 1GB of RAM to the file system. Tell the file system to write everything through to magnetic, cache frequently used data in SSD and RAM. When I'm on the home network, replicate the spinning disk to my NAS bit for bit. Perform incremental backups to my cloud backup service when connected to a fast enough network, using compressed incrementals to save space. Give me all that with ZFS's other features and it would be sysadmin filesystem nirvana...
ZFS caching (Score:2)
ZFS already supports flash devices for caches. For read caching (L2 ARC), you can create striped cache volumes. You get better speed that way, and if one of the devices fails, ZFS knows it and just goes straight to the main storage volume (the one being cached). Meanwhile, the other drive continues. For write caching (ZIL), since the data is "worth" more, you can create a mirror of flash devices. The benefit of the ZIL is realized even if the cache is small, but unfortunately SSD write speed can be worse th
Re:Simpler solutions tend to be superior. (Score:5, Informative)
I would recommend buying an SSD, putting the OS and all applications on it, and then using a magnetic drive as the "users" volume. Any sanely laid out OS makes this very easy.
Just an FYI for everyone, Windows does not count as a sane OS for this purpose. I managed to render a Windows install unusable trying to do that.
The best trick is to move just specific user's folders, not the whole Users directory, over, and then symlink it back to the original location. Trying to move the entire \Users folder almost always breaks something, often rendering it impossible to log in. Other methods either require setting up your own unattended install disks with odd config files, or do not work completely.
The general process:
1) Install Windows to the SSD as normal
2) Create a user account and a backup account. For this demonstration, their original, default home folders will be C:\Users\GMan and C:\Users\Admin
3) Reboot (to log both out completely)
4) Log into the backup account, otherwise the system will choke while copying your registry files
5) Use Robocopy to copy the user folder to the hard drive (robocopy C:\Users\GMan D:\Users\GMan /COPYALL /E)
6) Delete the user folder (rmdir C:\Users\GMan)
7) Symlink the folder on the hard drive back to the SSD (mklink /J C:\Users\GMan D:\Users\GMan)
8) Repeat for any other users, but note that you only need one "backup" account
This gives a few advantages:
1) It works transparently with programs that assume you are at C:\Users\[username]
2) It copies all user data, not just documents/images/videos
3) If the hard drive fails, you don't break the OS - you can log in using the alternate account (Admin in my example) to try to recover things
4) If you really wanted to, you could try to set some specific files in your user directory to be on the SSD
5) If you have a Windows install, or at least recovery partition, on the hard drive, either drive can fail without rendering the system unusable.
Re: (Score:3)
Uh, Windows is sane for this purpose, and this is commonly done in corporate environments, keeping all user data on a CIFS share.
http://www.pcworld.com/article/190286/move_your_data_to_a_safer_separate_partition_in_windows_7.html
Basically install windows to C:, the SSD.
Spin up D:, the magnetic storage.
Create D:\Users\
Change the users home directory (My Documents) in the user properties to D:\Users\ (corp environments would be something like \\userserver\Users\).
It's more or less the same thing as changing
what an ugly bandaid (Score:2)
I don't know about Windows, but wouldn't a more elegant way to accomplish this be paging? Having a very large swap on the SSD portion and a very high swappiness value would sort of do what this intends to do, without such an end-run around the entire cache architecture of the OS.
Re: (Score:2)
Here is a hint: never use swap on an SSD. With more than 4-8GB you really should not need swap on a modern Linux desktop at all.
really?? (Score:2)
Plug your SSD into SATA port one, and a large magnetic disk into port two.
Install the bootloader and OS onto the SSD, and use it for
Problem solved. Of course this works for any pair of "large but slow" and "small but fast" disks.
complete BS (Score:5, Interesting)
I have an i5 (sandy) ridiculous gaming computer with a GTS450, 8GB of CL7 RAM, P67 chipset, and a pretty fast 7200 RPM 1TB Seagate main drive. It's custom built and would be around $1000 retail at my shop (at the time at least). It takes over a minute to log in and it takes forever to load games.
I also built a system I'm selling for $520 with a Pentium B950, 4GB of pretty standard RAM, and a Maplecrest 60GB SSD. It logs into Windows in 4 seconds. The glowing balls don't even touch while loading the Windows 7 logo.
SSDs are not for high-end systems only! If anything, it's exactly the opposite: they're the best way to make a really cheap budget PC feel extremely fast.
Worst of both worlds (Score:3)
Hybrid drives have been on the market for years. It seems to me your risk exposure is only increased by combining the two: you now have to worry about the perils of spinning platters, write operations wearing out the flash, and new, not-widely-deployed management technology gluing the two systems together.
The last time I checked, about a year ago, there were overwhelmingly negative comments about the reliability of hybrid drives. Even assuming all the bugs have since been worked out, it seems like such a fleeting and pointless stop-gap measure as to not be worthwhile.
I have enough memory that most applications load instantly from the operating system's cache. 32GB of DDR3 is readily available for less than $200 ... nothing involving a SATA bus can be faster than the operating system's main-memory disk cache.
Hopefully memristors or other technologies will pan out soon and we can be done with slow, power-hungry, inherently unreliable storage media once and for all.
Have tested Intel's solution a bit.. (Score:3)
When I got my new Z77 board last week, I managed to slice my 120GB SSD into two parts: 18GB for Intel's cache system, and the rest for "Data" - aka the Windows install.
I configured the SSD to cache my 2TB spinner a bit, and it generally worked as expected. Performance ranged from clean SSD speed in some places to, in the worst case, old HDD speed.
In other words, worst-case scenario was same as not having cache, and best-case scenario made it look like a 2TB SSD at no extra cost :)
I've currently disabled it, since I'm re-installing my Steam games (over 300 in total..), but will re-enable it when the data is a bit more static again.
So far I consider the experiment a huge success, even though it complicated the install somewhat (SSD cache can only be configured from Windows, and only if the SSD is not running the OS).
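The "worst case no slower than the bare HDD, best case full SSD speed" observation above is just the standard effective-access-time formula for any cache. A minimal sketch, where the 0.1ms/10ms latency figures are illustrative assumptions, not measurements from the article:

```python
# Effective latency of an SSD-cached HDD as a function of cache hit rate.
# The latency constants below are rough, assumed ballpark figures.
def effective_latency_ms(hit_rate, ssd_ms=0.1, hdd_ms=10.0):
    """Average access latency given a cache hit rate between 0.0 and 1.0."""
    return hit_rate * ssd_ms + (1.0 - hit_rate) * hdd_ms

for hit_rate in (0.0, 0.5, 0.9, 1.0):
    print(f"hit rate {hit_rate:.0%}: {effective_latency_ms(hit_rate):.2f} ms")
```

At a 0% hit rate you get plain HDD latency, at 100% you get plain SSD latency, and a realistic 90% hit rate already lands close to the SSD end, which matches the "huge success" experience described above.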
SSD makers got greedy (Score:3)
The cost-benefit of SSDs barely makes sense - the makers got greedy and decided to tack absurd price premiums on their gear, far in excess of their benefit. And they've stayed there.
Re:I have seen SSDs used just to load the OS (Score:5, Informative)
First of all, putting the OS on a disk by itself doesn't only mean that Windows runs faster - the OS reads and writes its files on a near-continuous basis. For years before SSDs, we've known that simply segregating that activity onto its own disk, away from "real" file activity, gives a decent performance boost across the board; moving it to an ultra-fast random-access medium helps even more (and even if you don't care about boot time, how about "responsiveness"? Every time Windows needs to wait for some stupid little icon to load, you need to wait for Windows to wait for some stupid little icon to load).
Second, SSDs have gotten a lot bigger and a lot cheaper. You no longer need to decide between spending a fortune or segregating your apps out; a $60 SSD will hold the OS and every app you could ever possibly run, with plenty of room to spare. Yes, you'll still want that second big-slow-and-cheap HDD for general purpose storage, but you haven't needed to carefully weigh "on which disk should I install this program" for at least a year.
Flash RAM is not a permanent solution and will die due to the limited number of writes.
And you think a drive with actual moving parts will live forever?
Make no mistake, SSDs have their flaws, and cost definitely still counts as one of them. But once you really use a system set up with SSD system / HDD data, you'll never even consider going back. And mere boot time has nothing to do with it.
Re: (Score:3, Interesting)
> And you think a drive with actual moving parts will live forever?
Compared to how long SSDs have been in wide use, there are plenty of hard drives with "actual moving parts" that have lived forever.
However, the key thing is that you get some warning with a hard drive rather than it being sudden death.
Some SSD brands make Seagate seem reliable in comparison.
Re: (Score:3)
...there are plenty of hard drives with "actual moving parts" that have lived forever.
Hmmm, I've been using 'puters since 1984 and still haven't found a hard drive that a) lived forever, or b) gave me a warning before it died a horrible death.
Seriously awful technology that is long overdue for an overhaul.
Re: (Score:2)
Just had to replace a failed runcore ssd after only 2 years, rather disappointed.
Re: (Score:2)
Or buy an Intel 310 (I think? it's the one I have in my home server) SSD which would randomly boot up claiming to be 8MB and require a secure wipe to recover.
Hopefully the firmware update fixed that one.
Re: (Score:2)
Re: (Score:3)
last year is years ago?
Indeed. The drive didn't even exist 'years ago'.
Someone below posted a similar bug with a different model of SSD. 'Update the firmware' seems to be a regular occurrence once you start using SSDs; so far I've never had to update the firmware on a hard drive.
Re:I have seen SSDs used just to load the OS (Score:5, Informative)
There are a variety of different ongoing tests to look at how long drives actually last [xtremesystems.org]. Looking at a fairly standard older Intel 320 40GB drive, it went 190TB written before the MWI threshold was reached, and continued on until 685TB. That means it completely rewrote the drive 17,000+ times.
No, it won't last forever. And it's not ideally suited to every single industry and use. But a typical user is more likely to need a larger drive, or to otherwise upgrade, than to wear out the drive.
Re: (Score:3)
256GB drive
assume 20% over-provisioning and ~3,000 P/E cycles per cell
307.2GB * 3000 writes = 921,600GB of total write endurance
921,600GB * 1024 = 943,718,400MB
at a sustained 1MB/sec, that's 943,718,400 seconds / 3600 (hours) / 24 (days) / 365 (years) = 29.925 years
With all programs opened, HD IO is closer to low 10s of KB/sec, not MB. Most of my IO is network traffic.
After a year of randomly benchmarking my SSD, having to reinstall Windows and 100GB of games a few times due to mistakes, it still is at 0% worn. At this relatively heavy usage rate, it will take more than 1
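The endurance arithmetic above can be sketched in a few lines; the ~3,000 P/E cycles, 20% over-provisioning, and sustained 1MB/s write rate are the parent's assumptions, not a spec for any particular drive:

```python
# Back-of-the-envelope SSD endurance estimate (assumed figures throughout).
capacity_gb = 256
overprovision = 0.20      # extra NAND the controller can spread writes over
pe_cycles = 3000          # assumed program/erase cycles per cell
write_rate_mb_s = 1.0     # assumed sustained write rate

total_writes_gb = capacity_gb * (1 + overprovision) * pe_cycles  # ~921,600 GB
total_writes_mb = total_writes_gb * 1024                         # ~943,718,400 MB
seconds = total_writes_mb / write_rate_mb_s                      # at 1 MB/s
years = seconds / 3600 / 24 / 365

print(f"{total_writes_gb:,.0f} GB of endurance -> {years:.3f} years at 1 MB/s")
```

The point of the sketch is that even with conservative cycle counts, wearing out a consumer drive takes decades at desktop write rates.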
Re: (Score:2)
All of them, once they get full.
Re: (Score:2)
Most drives have some number of cells reserved for wear-leveling and idle garbage collection that are not available to the OS. Mine has 8GB. If yours doesn't, just leave a couple GB of unpartitioned space.
Re: (Score:2)
Re: (Score:2)
It's truly depressing that 35 seconds to a usable desktop is considered fast on any system. Solaris got the nickname "Slowaris" for that sort of behaviour. I don't work with Win7 much, but I'm pretty sure I've never set up a Win7 system that took that long to start up - turn off Bonzi Buddy or whatever crapware is doing stupid time-wasting shit on startup.
The second thing is that those "few years" have already happened, and the answer was wear levelling on SSDs, which is more than just a "trick".
Re: (Score:2)
On older Win systems I found using erunt with ntregopt along with pagedefrag from sysinternals generally worked well, and erunt was one of the first things I installed on my Win7 box.
Several times over the years erunt registry backups saved me some grief as well (but backing up hives on boot adds a wee bit to the boot time.)
Re: (Score:2)
I do OS + most programs.
I have a mostly-vanilla Windows install on my SSD (I have a partition waiting for Linux, but I haven't found the time for it yet). The only change was moving \Users\GMan and \Steam over to a hard drive. C:\Users\GMan is now a symlink to the same path on D: - I tried moving all of \Users, but that borked Windows so badly I had to reinstall. And now there's a backup user account, so if my HDD fails I can still at least log in.
Should I ever get Linux installed, I'll probably make /home/
Re: (Score:2)
I believe this is the solution you're looking for. The SSD is only used as a cache, which means that only the files that are read a lot get copied over to the SSD, and are read from the SSD automatically. Ideally, this means that the number of writes is minimal (if the files stored are the ones you constantly use, then they should only be written to the disk the first time and read from it subsequently). A database would never be in your cache, unless it's primarily used for retrieving data (reporting), rat
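The behaviour described here - frequently read blocks get promoted to a small fast tier, while writes go straight to the backing store - can be sketched as a toy LRU read cache. This is an illustration of the general idea only, not Intel's or Nvelo's actual design, and the names and capacity are made up:

```python
# Toy read-only cache: a small fast tier (the "SSD") in front of a big slow
# backing store (the "HDD"). Reads are promoted and LRU-evicted; writes
# bypass the cache and invalidate any stale cached copy.
from collections import OrderedDict

class ReadCache:
    def __init__(self, backing, capacity=4):
        self.backing = backing        # dict standing in for the HDD
        self.cache = OrderedDict()    # stands in for the small SSD
        self.capacity = capacity
        self.hits = self.misses = 0

    def read(self, key):
        if key in self.cache:
            self.hits += 1
            self.cache.move_to_end(key)      # mark as recently used
            return self.cache[key]
        self.misses += 1
        value = self.backing[key]            # slow path: hit the HDD
        self.cache[key] = value              # promote to the fast tier
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)   # evict least recently read
        return value

    def write(self, key, value):
        self.backing[key] = value            # writes go to the backing store
        self.cache.pop(key, None)            # and invalidate the cached copy
```

With a write-around policy like this, data that is only written (a busy database, say) never occupies cache space, which matches the point above about reporting-style workloads being the ones that benefit.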
Re: (Score:3)
Re: (Score:2)
Oh come on, dead in a manner of months?
What crevice of your personal biology are you pulling that number out of?
Presumably you haven't even used one yourself based on what you're writing.
If you're writing enough to your SSD to make any significant dent in its lifespan you're probably running some sort of heavy server and should probably get a more expensive SSD that's suited for that purpose.
SSDs come with extra space (I think like 7-10%) so wear leveling still has some safety room when the drive is full (t
Re: (Score:2)
You're missing out, then. I picked up a 240GB OCZ Vertex 3 for 180 bucks, and in the right mobo (one that can actually do full speed) it makes a huge difference to the user experience.
I had a 120GB SSD, and yeah: between development tools, Windows, office productivity and one game (whichever game I was playing the most) I was chronically pushing my luck on space. 240 is enough that it behaves pretty well. I don't use it for a 'data' drive; for that I use a RAID 1 traditional drive, which is a combinat
Re: (Score:3)
Under a simple OS like Windows? Yes. I added one to my work PC and it flies. I then got another for my MacBook Pro and it flipped out, causing problems; the same goes for using it under Linux. It seems that more advanced filesystems and OSes that do a lot of housekeeping on the drive will freak these drives out.
Luckily I was able to sell my second drive to a friend who could use it in his Windows laptop.
Re: (Score:2)
With pretty much any modern drive (whether platter-based or solid-state), if you are paranoid about data security you have two options: either encrypt everything that may ever hit the drive, or physically destroy the drive when you are done with it. Both platter-based and solid-state drives have remapping systems in place (though solid-state drives do a lot more remapping), such that it is pretty much impossible to guarantee an external overwrite will hit anything. Both have built-in "secure erase" commands b
Single Article - Multiple Pages (Score:5, Interesting)
TFA at ExtremeTech isn't that feature-rich, nor does it embark on some brand-new frontier none of us has ever seen.
TFA could have been made into ONE PAGE, but no, ExtremeTech ain't gonna let us readers enjoy it in one shot - we had to click through all 5 pages.
Please, Slashdot !
Next time you give us a link to a single TFA with multiple pages, please indicate it right upfront
Thank you !
Re:Single Article - Multiple Pages (Score:5, Informative)
Not only this, but the claims are at least 12 months out of date. SSDs are now less than $1/GB, and the average drive sold now is 120 or 128GB.
I upgraded my desktop with a cheap solution (AMD A8, 990FX, 16GB RAM, 128GB SSD, 2TB HDD), all for less than my last upgrade cost 3 years ago ($669 vs $955). SSDs are definitely part of the norm now; we order many machines with dual 128GB SSDs in them, both laptops and desktops. The price difference is negligible, so this article reads more like a cry from someone trying to hold onto the old way of doing things.
Re:Single Article - Multiple Pages (Score:5, Informative)
Not all of us are gamerz ?
Comment removed (Score:4, Informative)
Re: (Score:3, Informative)
The AMD A8 (and A6 and A4) have a graphics core on board. Those graphics cores are even quite sufficient for non-hardcore gaming. I'm a fan of the FM1 platform and think it is way underrated. It has decent processing power, decent graphics, is very quiet (even with the stock cooler) and not expensive at all. Example: A6-3650 (86.90€), 2x8GB kit of DDR3-1333 RAM from ADATA (67.98€), GIGABYTE GA-A75-D3H motherboard (99.90€). That's quite some power for 254.78€ (including taxes, excluding shipping
Re: (Score:2)
That is how much of a difference SSDs are over HDDs.
This shouldn't be the case unless your operating system sucks at caching, and I am speaking as an early adopter (have had one SSD or another for 3+ years). The GP's point is valid: SSDs are great for improving bootup and application startup time, but unless you plan to put all your files on SSD (or, like I said, your OS sucks at caching), the returns are definitely diminishing. Better to max out the DRAM.
That said, I generally do recommend SSDs; just get a small, cheap one for the OS. You don't need t
Re: (Score:2, Informative)
Dude, that is the 5,000 hour bug.
Update the firmware.
http://bit.ly/O1Fvzj [bit.ly]
http://hothardware.com/News/Crucial-Acknowledges-Weird-5000-Hour-M4-SSD-Bug-Promises-Firmware-Fix-in-MidJanuary/ [hothardware.com]
http://www.crucial.com/support/firmware.aspx [crucial.com]