Seagate's Shingled Magnetic Recording Tech Boosts HDD Capacities to 5TB and Up
crookedvulture writes "Seagate has begun shipping hard drives based on a new technology dubbed Shingled Magnetic Recording. SMR, as it's called, preserves the perpendicular bit orientation of current HDDs but changes the way that tracks are organized. Instead of laying out the tracks individually, SMR stacks them on top of each other in a staggered fashion that resembles the shingles on a roof. Although this overlap enables higher bit densities, it comes with a penalty. Rewrites compromise the data on the following track, which must be read and rewritten, which in turn compromises the data on the following track, and so on. SMR distributes the layered tracks in narrow bands to mitigate the performance penalty associated with rewrites. The makeup of those bands will vary based on the drive's intended application. We should see the first examples of SMR next year, when Seagate intends to introduce a 5TB drive with 1.25TB per platter. Traditional hard drives top out at 4TB and 1TB per platter right now."
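The read-modify-write cascade described in the summary can be sketched with a toy model. The three-track band size below is a hypothetical illustration, not Seagate's actual geometry:

```python
# Toy model of an SMR band. Tracks within a band overlap like shingles,
# so updating a track in place clobbers every later track in the band,
# which must all be read back and re-written. Tracks in other bands
# are unaffected. BAND_SIZE = 3 is a made-up illustrative figure.

BAND_SIZE = 3

def rewrite_cost(track_in_band, band_size=BAND_SIZE):
    """Tracks that must be written when updating track `track_in_band`
    (0-indexed) in place: the target plus every track after it."""
    return band_size - track_in_band

def sequential_band_write_cost(band_size=BAND_SIZE):
    """A well-formed sequential write covering a whole band needs no
    read-back at all: each track is written exactly once, in order."""
    return band_size
```

Updating the first track of a three-track band costs three track writes, the last track only one, and a full-band sequential write (like a whole-disk clone) pays no read-modify-write penalty at all.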
25% improvement in space ... (Score:5, Informative)
No thanks.
Re:25% improvement in space ... (Score:5, Informative)
Re: (Score:2)
You still have to get the data on there. If that process is too bothersome when compared to alternatives, then we data hoarders may just pass on these drives.
Plus, these drives will likely go for a hefty premium above and beyond smaller drives (like 4TB ones) that also perform better.
They really don't need any additional reasons to dissuade potential buyers.
Re: (Score:2, Insightful)
Pretty sure these will be marketed towards the write-rarely "backup/media dump" segment. At a lower $/GB than a non-shingled 5400 RPM drive.
Re: (Score:2)
It's not how frequently you're going to be writing to the drive but how much data you want to put on it when you do. Being unable to clone a drive WILL be a problem.
You are trying to argue with precisely the sort of user you're trying to speak for.
Re:25% improvement in space ... (Score:4, Insightful)
You would be able to clone a drive, just not as quickly.
But from the sound of it, it is probable that well-formed sequential writes (such as cloning a whole disk) might run at full speed; there's no need to read and rewrite a track if you can hint that it will be overwritten anyway.
Re: (Score:2)
TRIM could do it, or even big write requests that span the triplet.
5 1/4 HD's (Score:2)
This is all well and good, but couldn't just one manufacturer afford to set aside one measly manufacturing line for making 5 1/4 inch Hard Drives again?
Here me out. Now that they are up to 1TB per platter with current tech on 3.5 inch drives just imagine what they could fit into a 5 1/4 inch drive now!!
I know I wouldn't be the only one willing to shell out bux for one of those, providing they used all that space intelligently: With Data Spaces that large it would pretty much be a requirement to include buil
Re: (Score:2)
Umm, around 9.1 TB? ((Simply did 5.25" drive area / 3.5" drive area) * 4 TB)
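The back-of-the-envelope scaling here is just platter area, which goes with the square of the diameter (ignoring the spindle hub, and pretending the nominal form-factor sizes are the actual platter diameters):

```python
# Capacity scales roughly with platter area, i.e. diameter squared.
# Nominal form-factor diameters are used here as a simplification;
# real platters are smaller than the drive's nominal size.
def scaled_capacity(base_tb, base_diam, new_diam):
    return base_tb * (new_diam / base_diam) ** 2

estimate = scaled_capacity(4.0, 3.5, 5.25)   # a 4 TB 3.5" drive scaled to 5.25"
```

(5.25/3.5)² = 2.25, so a 4 TB 3.5" drive scales to about 9 TB, close to the figure above.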
Re:5 1/4 HD's (Score:4, Informative)
There was once a "Bigfoot" brand of HDDs that did just that. It was a disaster. It's unlikely anyone will try that again. You can just put two 3.5" drives in about the same volume in your case, so why not do that?
Re: (Score:3)
One of the Compaq mid-tower lines used those drives. Quantum Bigfoot. I worked at Computer City at the time, and every time one of those towers came in for service, it was for a bad drive. It got to the point where we would see a customer carrying one up to the counter and we would tell him/her what the problem was before they even set it down.
The really sad part was that for the first few months, we had to replace the defective drive with the same type because that's what the warranty dictated. After those
Both size and manufacturing (Score:4)
Re:5 1/4 HD's (Score:5, Informative)
As I understand it, one of the big reasons for moving away from 5.25" toward 3.5" and smaller was the need for ever faster seek and read/write times. The bus path from head to CPU was already pretty fast, so after that the low-hanging fruit for further gains was simply to spin the disk(s) faster. After all, you can't send bits down the wire faster than they spin past the read head. Problem is, spinning the larger 5.25" platters faster a) sucks back a lot more power than their smaller brethren, b) more power means more heat, which means shorter MTBF, and c) increased vibration increases read/write errors (a problem exacerbated by ever-smaller magnetic domains).
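The "bits can't pass the head faster than the platter spins" argument puts a hard ceiling on sequential throughput. The bits-per-track figure below is a round hypothetical number for illustration, not a real drive spec:

```python
def max_sequential_mbps(bits_per_track, rpm):
    """Upper bound on sequential throughput: at most one track's worth
    of bits passes under the read head per revolution."""
    revs_per_sec = rpm / 60.0
    return bits_per_track * revs_per_sec / 8 / 1e6  # bits/s -> MB/s

ceiling = max_sequential_mbps(10e6, 7200)  # hypothetical 10 Mbit/track drive
```

At a hypothetical 10 Mbit per track, a 7200 RPM spindle caps sequential reads at 150 MB/s; the only ways past that ceiling are spinning faster or packing more bits per track.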
Another reason of course is that the smaller package just makes so much sense at the end user level as well. Smaller portable consumer devices, more drives per rack etc
Finally, selling 5.25" drives in a world of 3.5" and smaller has been tried. "Quantum Bigfoot" [wikipedia.org]
Re: (Score:3)
That's why we say fuck magnetic media and cram SSD tech into that 5.25 form factor.
And make it a hot-swap bay.
Re: (Score:2)
Here me out.
Wear you out?
Re: (Score:2)
Here me out.
Wear you out?
Well, that depends. Send me a pic. ;) (Yes, I know I misspelled "Hear". :P)
Re: (Score:2)
Re: (Score:2)
It could be fine for some applications, especially with an SSD cache. For other uses, not so good.
Re: (Score:2)
For something like DVR recordings, I don't need the speed and just want the space, so it could be a reasonable tradeoff. I didn't see how much of a difference it would make in cost.
clearly I don't understand something... (Score:2)
rewriting data compromises data on the next track, which needs to be read/written, which compromises...
So you need to rewrite the whole damn 5TB disk?
"higher bit densities come with a penalty"
That sounds like an understatement.
Re: (Score:2)
TFA says they've limited the overlap to prevent the need to rewrite the whole disk. Only the three-track segments are rewritten, which does not affect the tracks outside the trio.
That said, I won't be an early adopter on this one. We'll see how it pans out in the real world before I consider deploying this.
Re: (Score:2)
That is where they got the name... it runs down your platter like shingles down your torso. What, you pictured a roof?
Re: (Score:2)
So you need to rewrite the whole damn 5TB disk?
You failed to read even the summary?
Re: (Score:2)
DRAM refreshes and rewriting data are two different things.
leaked map (Score:3)
Blame Microsoft (Score:5, Funny)
People will just blame Windows for the sluggishness.
Re: (Score:2)
I wonder if they could make 50tb drives today? (Score:2, Interesting)
Sometimes I wonder if they already have the technology to make 50 or 100 TB drives and they are just trying to keep their profit margins up by incrementally increasing storage at a fixed rate every year.
Yes, all twelve agreed to go out of business (Score:5, Insightful)
Yep, they've had it since 2004, when all twelve of the drive manufacturers agreed to just sit on it while Western Digital kicked their butt in the marketplace. Nine of them went out of business rather than reveal their secret.
Re: (Score:2)
Citation? Last time I checked, the largest WD disk available is "only" 3 TB.
Re: (Score:3)
http://www.wdc.com/en/products/catalog/ [wdc.com]
You must have checked last several months ago.
because they are ALL hiding their 50TB, AC says (Score:2)
4TB, I believe. AC's conspiracy theory is that all the drive companies have had 50TB drives they've been hiding. Since most of them have been driven out of the hard drive business, I guess they were so committed to the conspiracy that they'd rather fold than get rich selling huge drives.
Such is the logic of the left-wing nutjob conspiracy theorists. Damn the NSA for making them right about something. Even a broken clock is right twice a day.
Re: (Score:2)
Re: (Score:2)
...they already have the technology to make 50 or 100 tb drives
Stored in a secret Area 51 bunker staffed by Brent Spiner.
Re: (Score:3)
They have the technology, but it's limited to write-only drives.
Re: (Score:3)
Seriously, I don't think so.... In fact, from every indication, they're all really struggling to find increasingly creative ways to cram more magnetic data on a given amount of platter space, and reliability is probably suffering.
I don't have proof, but MANY people I know who are in I.T. and work with large capacity drives every day will tell you it's their observation that SATA drives became less reliable when capacities went over the 1 to 1.5TB mark. The 2TB drives all started using the newer "perpendic
Re: (Score:2)
Meanwhile the only 1TB drives I've lost were getting a bit old anyway and had a pile of ECC errors on the way out as a bit of a warning.
I'm impressed with BSD and ZFS - I'm getting surprisingly high performance out of even IDE drives on 32 bit "n
Not going back (Score:3)
Re: Not going back (Score:2)
Re:Not going back (Score:4, Insightful)
Yeah. Buy storage in 256G chunks.
That makes as much sense as someone getting giddy over how large of an array they can make out of 10 year old hard drives. It will be unnecessarily complex and resemble some sort of Rube Goldberg machine.
Large drives are hardly a "niche" use case.
On the other hand, there is a very wide gap between what expensive SSD can reasonably deliver and what much cheaper spinning rust can manage. Spinning rust can manage a wide range of use cases.
It's SSD that represents the niche: small data for very casual users that don't do much of anything.
Re: (Score:2)
Re: (Score:2, Insightful)
I know how thin SSD drives are. I have some. Although I realize their limitations. I just don't swim in the kool-aid or act like some sort of tech fashionista.
It's good that you mention drive failures because spinning rust gives you some warning. It makes it easier to prepare rather than just being surprised suddenly.
The cost difference also makes it more likely that you have some degree of protection either from array redundancy or extra copies of the data.
Not going out of your way to waste as much money a
Re: (Score:2)
Re:Not going back (Score:4, Insightful)
512GB hits the use case for probably 95% of consumers (based anecdotally on backup sizes and hard drive capacities for ~300-400 friends, customers, family, etc).
Re: (Score:2)
If this were even close to true, large corporations would not use NVRAM technologies to back their incredibly critical data stores. That "spinning rust" in a mid-sized 8-drive RAID-10 array can deliver roughly 2000 oper
Re: (Score:2)
Which actually does make sense if you want a freebsd zfs test rig and actually want a few disk failures to see how you can handle them instead of finding out the hard way on the important stuff. I get that your point is about daily use though.
Re: (Score:2)
Well if you're happy with 256GB storage total, but you can get a 128GB SSD + 2TB HDD for the same money. If streaming covers all your needs then good for you but heck my Steam directory full of 10GB+ games alone would give it breathing issues. I think they do damn well in pairs, just checked and I'm still looking at a 16:1 price advantage (250 GB SSD ~= 4 TB HDD). Personally I just love the ability to have near infinite space for a few bucks, you can be a digital hoarder and still have it fit a mid size cab
Re: (Score:2)
How do you store the last ten years' of photos you've taken, or your music collection?
The first two days of my child's life is enough raw video data to seriously dent an SSD all on its own.
I use an SSD for booting, but my primary filesystem is all spinning platters. With modern caching options (on Linux and Windows at least), you can get very near SSD speeds on frequently accessed files using a huge hard drive as your data store anyway.
I wonder what the retention reliability is like (Score:2)
I have to seriously question the retention reliability of a device that requires re-writing multiple tracks at one time.
The performance impact is obvious as well.
I think I'll avoid these like the plague until they're proven to be as reliable as older technology.
Does it (still) make sense ? (Score:3, Interesting)
Re: (Score:3)
Spinning disks are only dead if you have no bulk storage needs, unless you think prices are going to fall through the floor out of the kindness of NAND Flash manufacturers' hearts.
There's a single chassis in my closet that has 96TB of disks in it. That kind of density is utterly unthinkable on flash memory.
Re: (Score:3)
Traditional disks are STILL about 10x the capacity and 1/10th the price-per-capacity of SSDs, as they have been since SSDs arrived. Price-per-GB for SSDs has come down, but so has price-per-GB of mechanical drives-- currently you can get a 3TB drive for ~$100, while a 256GB SSD costs around $200-- that's 8x the cost for the SSD.
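Running the quoted list prices through the arithmetic is a useful sanity check; note that the per-gigabyte multiple the numbers actually give is quite a bit wider than the headline figure:

```python
def dollars_per_gb(price, capacity_gb):
    return price / capacity_gb

# Prices as quoted in the comment above (circa 2013 list prices).
hdd = dollars_per_gb(100, 3 * 1024)   # ~3 TB drive at ~$100
ssd = dollars_per_gb(200, 256)        # 256 GB SSD at ~$200
ratio = ssd / hdd                     # per-GB multiple for the SSD
```

With these inputs the SSD works out to roughly 24x the cost per gigabyte, since the SSD is about double the price for about a twelfth of the capacity.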
Re: (Score:2)
8x the cost, but 100x the performance. There's a reason so many system builders are using an SSD for the boot/OS drive and critical applications, and a regular HD for long-term storage.
That hybridization is the first step to totally deprecating hard drives, or relegating them to the same fate tape faced so many years ago.
Re: (Score:2)
Reality check. Tape died because it stopped growing in capacity anywhere near fast enough to keep up with disks. Not because it was too slow. That in no way whatever bears on the disk situation now. When and if it ever does, and it just might (since this story makes it clear just how they are scraping the bottom of the barrel and not coming up with anything worthwhile for advancing disk tech), then at that time we can talk about disks dying. Disks are going to far outperform ssd's in GB/$ for a long time to
Re: (Score:3)
Tape didn't die at all, it's right where we left it (in the server room).
Call me when HDDs come anywhere close to the price/capacity of an LTO5 cartridge (~$30 / ~3TB), or their archival life, or their durability; or have anything resembling a modern tape library in terms of media management.
I don't think tape is going anywhere in terms of archival storage, any time in the near future.
Re: (Score:2)
We don't have 10 years of history of SSDs, but we do have it for flash, which is obviously closely related. 10 years back:
Slashdot comment system for Drupal [mukhsim.com]
Flash at around 128MB @$50. [archive.org]
And HDD at around 160GB @150. [archive.org]
Today it's flash at around 128GB @$55. [newegg.com]
And HDD at 3TB @$150. [newegg.com]
A 1024x increase in flash capacity for the same price point. An 18.75x increase in HDD capacity for the same price point.
I decided t
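The growth factors quoted in the parent can be checked directly from the listed price points (using the decimal 3000 GB figure for the 3 TB drive, as the poster apparently did):

```python
# Capacity growth at a roughly constant price point, per the figures above.
flash_growth = (128 * 1024) / 128   # 128 MB -> 128 GB, ~$50 then, ~$55 now
hdd_growth = 3000 / 160             # 160 GB -> 3 TB (decimal), ~$150 both times
```

Flash capacity at a fixed price grew about 55x faster than HDD capacity over that decade, which is the core of the argument.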
Re: (Score:2)
There are several well-funded start-ups in the valley making pure-SSD enterprise arrays now, plus several making SSD-fronting-HDD arrays.
Given that the cost of the physical disks in big box storage is a small fraction of the cost of the unit, SSD-based storage will only be more expensive because EMC/NetApp can get away with charging even more.
Another mess (Score:2)
For some workloads these things will create nothing but problems. And all that for a 20% density increase? Sounds quite stupid to me.
Re: (Score:2)
The current "Green" drives from WD and Seagate already create nothing but problems for certain workloads, but they're extremely appealing for one-off modest-density needs that are probably appropriate to most consumer applications.
Shipping? (Score:2)
>Seagate has begun shipping hard drives based on a new technology dubbed Shingled Magnetic Recording....We should see the first examples of SMR next year, when Seagate intends to introduce a 5TB drive with 1.25TB per platter.
These two things don't match.
Poor compromise for only small capacity increase (Score:5, Insightful)
Re: (Score:2)
The read-modify-write penalty for overwriting existing data in-place is huge (even with attempts to minimize it with smart block mapping) and not worth the very minor increase in areal density. It's a bad sign that the storage industry was forced to adopt this because it means better encoding technologies are further off in the future than originally anticipated. Brick wall.
If it means that rotating media no longer has a write performance advantage over flash, then it is a very poor compromise indeed.
Re: (Score:2)
If it means that rotating media no longer has a write performance advantage over flash, then it is a very poor compromise indeed.
What is this, 2008?
Rotating media hasn't been competitive in write performance for quite a while now.
Re: (Score:2)
First things First (Score:2)
I'm glad to see that unlike some other well-known technical blogs, Slashdot has pushed aside new revelations about our Police State to pass along important product roll-out press releases from the biggest tech companies.
The summary doesn't mention (Score:2)
But if you edit a single point in the first tracks of the drive, do you have to re-write the entire hard drive below it? What happens when the power goes out before you're done writing? Does it rewrite entire tracks or just the magnetic domains that are compromised? How does it even know which tracks have valid data, or will it require a proprietary driver to make it work?
Re: (Score:2)
Or ZFS
Wait, I'm not sure I want this... (Score:2)
I may not want this because I see TV commercials all the time that say that I can get Shingles already if I've had the Chicken Pox. Supposedly there's a med that I can use to get rid of it or prevent it.
Re: (Score:2)
You mean like everyone running Windows, as well as anything using an ext filesystem?
From e2fsck:
-D Optimize directories in filesystem. This option causes e2fsck
to try to optimize all directories, either by reindexing them if
the filesystem supports directory indexing, or by sorting and
Re: (Score:2)
Right there in the help it says you won't normally use -D. Besides, is there anything that hasn't updated to ext4 that would also likely have a 5TB drive?
Re: (Score:3)
http://askubuntu.com/questions/9306/do-i-need-to-defrag-ext-file-systems [askubuntu.com]
http://en.wikipedia.org/wiki/Ext3#Disadvantages [wikipedia.org]
There is no online ext3 defragmentation tool that works on the filesystem level.... While ext3 is more resistant to file fragmentation than the FAT filesystem, ext3 can get fragmented over time or for specific usage patterns, like slowly-writing large files.[23][24] Consequently, ext4, the successor to ext3, is planned to eventually include an online filesystem defragmentation
All filesystems running on magnetic media require defragmentation. Those that "do not" are defragging. Fragmentation is a fact of life with any filesystem. And before you start up with the "well ext requires less", so does NTFS: comparisons between ext and "the Windows world" are invariably referring to FAT, not NTFS, which is by all accounts a strong competitor to the ext family.
So a
Re: (Score:2)
Why do they "require" defragmentation?
It sounds to me like you're saying they require it *for performance reasons*. Not for technical reasons. As long as random access is as fast as you need it to be, who cares how fragmented things get?
Re: (Score:2)
They require defragmentation because when two 10MB files are allocated, and then data is removed from the first to bring it down to 5MB, there is a 5MB hole in between the files. Over the course of time, there will be many small holes in your filesystem, and eventually you will need to write a file that is bigger than any single "hole". At that point, the data will be fragmented across several holes.
This is simply a reality of using filesystems, and cannot be avoided without a precognitive, omnipotent filesys
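The hole-creation scenario described above can be simulated with a toy allocator; sizes are in MB and files are assumed to be laid out contiguously from offset 0:

```python
# Toy free-space model: files allocated back to back from offset 0.
# Shrinking one file leaves a free extent (a "hole") behind it, which
# only a file small enough to fit can later use contiguously.

def holes_after_shrink(file_sizes, shrink_index, new_size):
    """Return the free extents (start, length) created by shrinking
    one file in a contiguously allocated run of files."""
    starts, offset = [], 0
    for size in file_sizes:
        starts.append(offset)
        offset += size
    freed_start = starts[shrink_index] + new_size
    freed_len = file_sizes[shrink_index] - new_size
    return [(freed_start, freed_len)]

holes = holes_after_shrink([10, 10], 0, 5)  # shrink the first 10MB file to 5MB
```

The result is a single 5 MB hole starting at offset 5: a later 7 MB write cannot fit there contiguously and must be split across this hole and free space elsewhere, which is exactly the fragmentation being described.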
Re: (Score:2)
To answer your second question, the "why" is that every magnetic media we have has seek times that are roughly 3 orders of magnitude slower than the timescales RAM and CPU work on, making every seek a massive performance hit.
Until we completely eliminate those seek hits, defragmentation will be necessary, and at the moment SSDs are nowhere near the majority of data storage.
Re: (Score:2)
Since you obviously know that a *file* can be fragmented, obviously you already know that a file doesn't have to be contiguously written.
Thus, you don't need to defragment it. The directory structure knows that the 'file' is in blocks 1-5, 8, 14.
Re: (Score:2)
But you are wrong; fragmentation can severely impact performance in the real world. Some people *do* defrag files under Linux when it becomes a problem.
Re: (Score:2)
Yes, as I originally said, FOR PERFORMANCE. Not for ACTUAL USAGE, IF THE RANDOM ACCESS TIME is within someone's needs.
Re: (Score:2)
But not on a SSD.
Re: (Score:2)
On a related note, I don't have to change the oil in my car, not for ACTUAL USAGE, since the SLUDGE BUILDUP happens over a long time.
Re: (Score:2)
Yes, and then instead of the magnetic head seeking to one track, and grabbing the entire file, it has to seek 3 times, and wait for 3 disk rotations (worst-case scenario), imposing a penalty of ~30ms before the CPU can get back to work on that data it was waiting for.
You don't HAVE to defrag, but the excess seeks will destroy your performance. Defragging essentially tries to mitigate the very problem that SSDs solve (since they have essentially 0 seek time). Seek time accounts for 90% of the time your comp
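The ballpark penalty quoted above falls out of per-fragment seeks plus rotational waits. The seek and RPM figures below are typical round numbers for illustration, not measurements of any particular drive:

```python
def worst_case_delay_ms(fragments, seek_ms=5.0, rpm=7200):
    """Worst case for a file split into `fragments` pieces: every piece
    costs one full seek plus one full rotational wait before its data
    passes under the head."""
    full_rotation_ms = 60_000.0 / rpm   # ~8.33 ms per revolution at 7200 RPM
    return fragments * (seek_ms + full_rotation_ms)
```

With a hypothetical 2 ms seek, three fragments at 7200 RPM cost about 31 ms in the worst case, which lands right around the ~30 ms figure in the comment; a contiguous file pays that toll only once.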
Re: (Score:2)
BTW Don't allow an NTFS disk to exceed 50 percent full or you will suffer data loss/corruption - don't believe me? Check the MS Knowledge base and yes it still applies to Win8 since it's using NTFS.
Source or bull. The only space limitation on NTFS is that you are "supposed" to leave at least 15% of space for defrag-- though I suspect, like the "swap=2xRAM" metric of old, that it is only a rough guide and horribly outdated.
Re: (Score:2)
Re: (Score:2)
BTW Don't allow an NTFS disk to exceed 50 percent full or you will suffer data loss/corruption - don't believe me? Check the MS Knowledge base and yes it still applies to Win8 since it's using NTFS.
Ok, I don't believe you. I will check the Knowledge Base as you suggest. Now can you provide the KB article page in question?
Re: (Score:2)
Technically on mechanical drives, ext4 benefits from defragging... So does btrfs.
Re: (Score:2)
So do all filesystems operating on media with higher sequential throughput than random throughput.
Fragmentation will occur on any filesystem which allows files to be modified without being re-written in full.
Re: (Score:2)
Technically on mechanical drives, ext4 benefits from defragging... So does btrfs.
I don't think there is a file system which would be completely immune to fragmentation. The reason why ext4 is often touted to not fragment badly is because it tries to spread the files across the disk instead of storing them sequentially (like NTFS and many other file systems). I don't know what kind of strategy btrfs uses.
Re: (Score:2)
Technically same thing on NAND based SSDs (opening a page takes a few us for MLC).
This is what I've been thinking too. Defragmentation is usually not recommended for SSDs, but as you say, one would think that reading full pages would actually provide a slight performance improvement. Then again, the contents of the SSD might be remapped to arbitrary places due to wear leveling, so the OS might not see the physical linear structure of the disk anyway. Thus the defragmentation would have to be done with some kind of cooperation between the OS and the disk, for which there exists no impleme
Re: (Score:2)
I have had drives from every manufacturer but one fail in warranty or just out of it, and I've had drives from all manufacturers last for years and years without issues.
All the hard drive companies have made some good drives and some bad ones.
(The one that never died prematurely on me was Micropolis. I had their server-grade drives.)
Re: (Score:2)
Never had a Micropolis SCSI 5.25" or 3.5" fail either. They were built to last forever. OTOH, I distinctly remember paying in the neighborhood of $2000 for 300's and then again for 1000's. That's MB, not GB, BTW.
Re: (Score:2)
Re: (Score:3)
I buy several hundred drives a year and I've consistently had more problems with all non-Enterprise Western Digital product lines than I had with Seagate, Hitachi or Samsung models. By rough order of preference, I found WD "Blue" drives least reliable, followed by WD Green, followed by Seagate Eco models, followed by WD Black. The most trouble-free drives over the last five years or so? Samsung's F-series and Hitachi DeskStars. Goddammitsomuch.
Re: (Score:2)
I have a 1.5TB drive that's been in service for nearly 5 years.
I usually retire drives not because of failures or disk errors but due to capacity. I've seen drives from 500G up to 4TB and hammered drives of all sizes.
Capacity doesn't really impact longevity.
Re: (Score:2)
I can't match your duration individually, but I have a huge failure-free aggregate duration of 2 and 3TB drives.
I have a total of 22 Samsung HD204UI 2TB 5400rpm (the last 4 were actually Seagate rebrands, but same design). Power-on hours: 7574, 8090, 8098, 8592, 8609, 8690, 8691, 9330, 10041, 10105, 11612, 11612, 11676, 11676, 16730, 17270, 17276, 17769, 17769, 18663, 18663, 19650.
Also 7 Hitachi/Toshiba (another buyout) 3TB 7200rpm. POH 2133, 2761, 2766, 2925, 3533, 3598, the 7th one is not online
Re: (Score:2)
I have had car air conditioners sit for periods of up to a YEAR without being turned on, and never had one fail yet. I had one car for 18 years and never once even started the engine from December through April of any year. The car, including the AC, still worked fine until it was rear-ended and totaled in the 18th year. Another car sat from 1999 to this year in the driveway. Finally somebody bought it, and he says the AC still works fine. So much for that old wives' tale.
I don't think hard drives degrade in stora
Re: (Score:2)
2.5" drives seem to come in three sizes: 7mm, 9.5mm and 15mm.
You're pretty much guaranteed that the 7mm drives are single-platter. There just isn't enough room in there for a 2nd platter along with the requisite spacing.
Harder to say for the 9.5mm units, but they're probably a mix of single and double platter.
The 15mm units are going to be almost all double (or triple?) platter.
Power usage is also a hint. The more platters, the more p
Re: (Score:2)
You're probably right about 7mm. HGST has a 1.5TB 3 platter 9.5mm. Samsung has or at least used to have a 500GB 3 platter 9.5mm. You left out 12.5mm; not sure about the platter count there. WD Passport 2TB's have 4 platters in 15mm.
Re: (Score:2)
Re: (Score:2)
Bullshit on "done safely": instead of loss of a single file, a power loss or unexpected interruption will be devastating to the integrity of an enormous amount of data.
Re: (Score:2)
Do not want and WILL NOT BUY EVAR.
It's way, way, way too much of a crippling performance and reliability hit for a laughably, miserably tiny capacity gain. 25%? Are you kidding? I'll buy two 4's to get 8. I'll never buy a 5 to replace a 4. Maybe, just possibly, if it gave a 300% capacity gain I might consider it for data where speed and reliability do not matter at all. Hmm, come to think of it, I guess that covers a big fat ZERO percent of my needs. So, no. Just no.
Re: (Score:3)
So I guess HAMR [wikipedia.org] is still in the labs.
Stop - HAMR time. Isn't this basically what was used in Minidisc [wikipedia.org]?