Storage Vendors Are Quietly Slipping SMR Disks Into Consumer Hard Drives (arstechnica.com)
"Storage vendors, including but reportedly not limited to Western Digital, have quietly begun shipping SMR (Shingled Magnetic Recording) disks in place of earlier CMR (Conventional Magnetic Recording) disks..." writes Ars Technica.
"In addition to higher capacities, SMR is associated with much lower random I/O performance than CMR disks offer."
Long-time Slashdot reader castrox shares their detailed report: Shingled Magnetic Recording is a technology that allows vendors to eke out higher storage densities, netting more TB capacity on the same number of platters — or fewer platters, for the same amount of TB. Until recently, the technology has only been seen in very large disks, which were typically clearly marked as "archival"...
Storage vendors appear to be getting much bolder about deploying the new technology into ever-smaller formats, presumably to save a bit on manufacturing costs... [S]everal users have reported that these disks cannot be successfully used in their NAS systems — despite the fact that the name of the actual product is WD Red NAS Hard Drive.
Citing a statement from Western Digital, the article concludes that "The writing on the wall here seems clear. Yes, Western Digital slid SMR drives into traditional, non-enterprise channels — and no, the company doesn't feel bad about it, and you shouldn't expect it to stop...
"Western Digital doesn't appear to be the only hard drive manufacturer doing this," they write, noting that the storage-news web site Blocksandfiles.com "has confirmed quiet, undocumented use of SMR in small retail drives from Seagate and Toshiba as well."
SMR disks can die RAPIDLY in RAID arrays (Score:5, Informative)
Another problem with SMR is that some RAID systems can kill these drives in a matter of days. Read the fine print of your RAID system's supported-disks section to see if SMR is supported.
Drobo systems, for example, don't support SMR.
So, for example, recently we purchased three new Seagate 8TB Barracuda drives to add to our Drobo system. Within three days two of the drives had completely failed and the third was flashing up warning lights leading to an urgent purchase of non-Seagate drives to replace it.
Checking with Drobo confirmed that SMR drives aren't supported. But are these SMR drives? I checked the online support sheet for the drive: no mention of SMR.
I emailed Seagate and they claim they don't even know!
"Thank you for contacting Seagate Support. 'Typically' the new Barracuda over 8 TB uses SMR; we do not have access to this kind of information, so I can't confirm this."
But more digging revealed they are SMR.
https://www.reddit.com/r/DataH... [reddit.com]
All three drives were returned to the dealer and a full refund received.
Since then I've been using Toshiba disks without any problems.
Re:SMR disks can die RAPIDLY in RAID arrays (Score:4, Informative)
So, as a follow-up, it appears all major disk vendors are now using SMR on their higher capacity budget drives with various levels of openness about it.
As far as I know WD are the only ones shipping NAS certified drives with SMR, which is REALLY shady.
Re: (Score:2)
That sounds more like a bad batch to me. That would mean that a little snippet of code simulating RAID write patterns could kill your disks that fast too. What may be happening is that your work pattern fills the buffers, so the drive micro-stalls and the RAID controller drops it. Beyond that, I do see people report very bad write performance, including rebuild performance, so these are not good for RAID 5/6. If you want to use them anyway, stick to RAID 0, 1, or 1+0.
Re: (Score:3)
No,
see: https://drobocommunity.m-ize.c... [m-ize.com]
Re: (Score:2)
recently we purchased three new Seagate
That was the beginning of your mistakes. Don't you people read Backblaze's drive stats?? HGST, definitely, WD maybe, Seagate, fucking never.
Re: (Score:2)
So explain how continued writes can kill a platter drive? Worst case scenario seems that writes would just be incredibly slow.
Re: (Score:3)
According to Backblaze (not blackblaze), whatever you buy right now is probably not representative of the reliability figures they have, because their numbers seem to point to a new reliability champion every year, but you need at least a year or so of data to make a relevant assessment.
Re: (Score:3)
The great thing about anecdotes is that everyone has one, ... about other products. I swore by WD because of multiple Seagate failures. Now I have a mix of WD and HGST (also WD) drives, and in my experience using my massive datatrove of a whole 15 drives I can say that WD and HGST are 100% reliable* whereas I've a 50% failure rate of Seagate drives in 3 years.
But that's the problem with small datasets. My experience is not realistic and my failures do not reflect the reliability or quality control of any of these manufacturers.
Re: (Score:2)
I remember when Seagate was the responsible manufacturer. That certainly has changed. When WD drives go, at least they give you some notice. The recent Seagate drives I purchased, died with no notice. I will never use them again.
Further Info from german wiki about SMR spread (Score:5, Informative)
The German Wikipedia page for SMR has a section captioned "Verbreitung" (market spread of SMR disks), which cites three links to "computerbase.de", where the research was done.
https://de.wikipedia.org/wiki/... [wikipedia.org]
Daneben konnte durch Recherchen von Fachzeitschriften im April 2020 gezeigt werden, dass alle großen Hersteller Festplatten mit SMR vertreiben, ohne dies zu kennzeichnen.
Translation:
Besides that, research by computer magazines conducted in April 2020 showed that all big hard drive manufacturers ship disks with SMR without labeling it.
Also listed are the following disks/manufacturers for which research has shown SMR is in use:
Toshiba
DT02: 4 TB and 6 TB
P300: 4 TB and 6 TB
MQ04: 1 TB and 2 TB
L200: 1 TB and 2 TB
Western Digital
WD*0EFAX: 2 TB, 3 TB, 4 TB and 6 TB
Seagate
Barracuda ST2000DM008 (2 TB)
Barracuda ST4000DM004 (4 TB)
Barracuda ST8000DM004 (8 TB)
Desktop HDD ST5000DM000 (5 TB)
Re:Further Info from german wiki about SMR spread (Score:4, Insightful)
Very frustrating for all users that bought these drives. With a built-in CMR-like cache they can and will fail when you need them most: in RAID usage with lots of written data. Let's hope the list does not grow and that these drives will be replaced by the manufacturer...
Re: (Score:2)
Unlikely. SMR drives are orders of magnitude slower at writing than reading. The healthy drives in the array won't time out on reads while waiting for slower writers. The likely read delays are on the order of a seek time, not the 7 seconds or so at which many "RAID" drives default to reporting read errors to controllers (yes, that's a different mechanism, but it shows that if your disk timed out at 7 seconds, your entire RAID would degrade with every unrecoverable I/O error, and that's just not the case).
Re: (Score:2)
Very frustrating for all users that bought these drives.
Well that depends on why you bought them. :-)
And when the drive wakes up next morning (Score:3, Funny)
it remembers nothing, its cache is empty and its plug hurts like hell.
How can I tell? (Score:2)
Is there some kind of test I can run or a product number I can check to see if I have one of these drives?
Re: (Score:3)
Open it? Or, if you look at it in gparted or some other tool (hwinfo on Windows), it should show you the drive serial number, which you should be able to look up online (or in the manufacturer's database, usually accessible via RMA). There's probably a serial number format by device type.
I'm guessing the label will say what it is inside, though.
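For what it's worth, a check can be scripted on Linux. This is only a sketch, and there's a big caveat: only host-managed and host-aware SMR announce themselves in sysfs, while drive-managed SMR (the kind in these retail drives) reports nothing special, so the absence of a flag proves nothing. The device name `sda` is an assumption.

```python
# Sketch: look for SMR hints in Linux sysfs. Drive-managed SMR hides itself,
# so "none" does NOT prove a drive is CMR.
from pathlib import Path

def classify(kind: str) -> str:
    # Map the contents of /sys/block/<dev>/queue/zoned to a verdict.
    if kind in ("host-managed", "host-aware"):
        return f"SMR ({kind})"
    return "CMR or drive-managed SMR (sysfs cannot tell these apart)"

def check(dev: str = "sda") -> str:
    try:
        kind = Path(f"/sys/block/{dev}/queue/zoned").read_text().strip()
    except OSError:
        return "unknown (device not present)"
    return classify(kind)

result = check()  # verdict string; varies by machine
```

So for retail drives the reliable route is still the model number (e.g. from `smartctl -i /dev/sda`) checked against a community-maintained SMR list.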
Re: (Score:3)
WD says their current 8-14 TB "WD RED" drives are CMR, not SMR. So you personally should be safe.
The 10 TB hard drive you buy next month may be different. It looks like WD is aggressively trying to hide that parameter from consumers. They are more forthcoming with identifying "host-managed" SMR drives that are aimed at sequential-write workloads in data centers (such as log-structured files and databases).
Re: (Score:3)
You cannot use the drive model number to check whether it is SMR or not - WD use the same model numbers for SMR and CMR models.
WD is misleading (Score:2)
Western Digital has been shameless lately in their marketing. The 10TB drive I purchased is in reality a 9.1TB drive.
Re: (Score:2)
9.1 TiB? If so, that's not skimping, just unit conversion: 10 TB == 9.09 TiB.
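The decimal-vs-binary conversion behind that is plain arithmetic and easy to check:

```python
# 1 TB (decimal) = 10**12 bytes; 1 TiB (binary) = 2**40 bytes.
def tb_to_tib(tb: float) -> float:
    return tb * 10**12 / 2**40

capacity = tb_to_tib(10)  # ~9.09 TiB for a "10 TB" drive
```

So a "10 TB" drive showing about 9.1 TiB is the units, not missing platter space.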
Failure modes (Score:3)
This was back when SSDs were sensitive to power loss, making SMR even worse.
Why? (Score:3)
The only reason left to use spinning platters on less than gigantic drives is certain types of reliability. Lower read error rates, offline persistence, avoidance of power fail during remap lotteries.
If vendors won't even say what you are buying, so you can't know the risks, then what's the point of spinning disks at all?
This is the trend for Western Digital (Score:3, Informative)
Being a former employee of Western Digital (originally with Hitachi before it was sold to WD), I'm not surprised. Western Digital has been deceitful for a while.
First, their 'official' response to submarining SMR drives into client channels:
"In a typical small business/home NAS environment, workloads tend to be bursty in nature, leaving sufficient idle time for garbage collection and other maintenance operations."
and
"All our WD Red drives are designed to meet or exceed the performance requirements and specifications for common and intended small business/home NAS workloads."
So, in other words, Western Digital claims they know better than you what you want an HDD to do. Perhaps that's true in some cases, but clearly not all.
Their real deceit is their next generation HDD. Microwave assisted magnetic recording (MAMR):
https://hardware.slashdot.org/... [slashdot.org]
https://www.youtube.com/watch?... [youtube.com]
If you watch their announcement, you see the only 'proof' that MAMR works is modeling results from a professor at CMU. Turns out this model didn't include all the relevant physics. If you include it, the 'resonant' condition MAMR requires is lost. Their 'prototype' MAMR drive was not MAMR but ePMR.
ePMR drives an electrical current through the write pole to help sweep out magnetic domains for faster write field switching that improves jitter (allowing for more bits down the track). Driving an electrical current through the write head pole tip should heat the head which may reduce the reliability. They have chosen not to mention this publicly.
Finally, I will point out their CTO and CEO have both left the company along with their VP of recording heads. I expect their COO will be next.
Re: (Score:2)
If you're buying that you were probably getting SMR anyway.
Re: Who is buying hard disks (Score:5, Insightful)
It's like who would ever need more than 640k? I have a config.sys file that will get you 637.5k free. Just say the word and it's yours!
Re: (Score:2)
I wouldn't say that needing 72TB of storage is the same thing as "needing more than 640k".
Re: (Score:2)
"Who is buying hard disks these days??"
"People with special storage needs."
Point is, most people can get by with 512GB / 1TB. And you can get that in SSD (even in NVMe) for a reasonable price.
Re: (Score:2)
My NAS has 4 x 10TB Ironwolves. The limiting factor in data transfer speed is the gigabit LAN, not the drives. Access time is not a problem for the data I store on it. I have a separate SSD pool for data that does need faster access time.
Re: (Score:2)
That's because you're probably not doing any random I/O. Where I work, we have 60+ drives in RAID10 to provide up to 40Gbps to 300TB, but we still need ~2TB of SSD and ~512GB of RAM cache for burst capacity.
Random I/O on any drive at 100% usage is still 10MB/s with proper caching techniques.
Re: (Score:2)
What's a "proper caching technique" for random I/O?
Re:Who is buying hard disks (Score:5, Informative)
SSDs are definitely less expensive than before, but if you run a NAS at the SMB or enterprise level, you don't necessarily want to spend the extra $/EUR/what-have-you for a SSD. For example, a 10 TB Western Digital Red is $285 (USD). For comparison, a 7.6TB SSD (biggest single drive I could find) is over $1300. That is definitely not cheap enough.
The thing to remember is a NAS is often bay-limited, too. For example, the Synology line maxes out at 8 bays without an expansion chassis. That means you could fit 80 TB of WD Red drives or at most 60 TB of SSD storage. At the prices above, that means $2,280 for HDD vs $10,400 for less SSD-based storage. That is a massive difference.
So, the TL;DR for your question is people who need volume storage or high-density/high-volume use HDD.
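Running the comment's numbers as cost per TB makes the gap concrete (prices are the quoted spot checks and will drift):

```python
# Eight bays, using the quoted prices: $285 per 10 TB WD Red,
# $1300 per 7.6 TB SSD (the largest single SSD the poster found).
hdd_cost, hdd_tb = 8 * 285, 8 * 10     # $2280 for 80 TB
ssd_cost, ssd_tb = 8 * 1300, 8 * 7.6   # $10400 for 60.8 TB

hdd_per_tb = hdd_cost / hdd_tb         # 28.5 USD per TB
ssd_per_tb = ssd_cost / ssd_tb         # ~171 USD per TB, roughly 6x the HDD
```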
Re:Who is buying hard disks (Score:5, Interesting)
It helps to read the datasheets on some of the consumer grade devices out there.
I have not tried it yet (because I haven't had a need to do it, but will be more than willing to try it later), but the SoC used inside, say, a WD MyCloud EX2 Ultra has hardware support for SATA port replicators, at least according to the Marvell datasheet for that chip, along with support for SATA hotswap and eSATA on the physical port you slap the replicator on. (Meaning some clever DIY with a 3D printer and a replicator board would give you an eSATA port on the top of the unit you could plug additional enclosures into.)
This is probably true with other "low cost" solutions for SOHO applications, but you would have to do some aggressive market research. The issue is making sure that such cheap solutions have enough processor and RAM to handle that kind of expanded array size.
At some point in the future, what I want to do is integrate a small port replicator and a laptop sized drive holder into a desktop sized drop-in module (3D printed) that can live inside my EX2 ultra, then try adding 2 eSATA enclosures. Should be an amusing exercise. That's far off in the future though.
Re: (Score:2)
If you use your Synology mostly for sequential reads and writes (e.g. your video collection), you won't notice any difference between mechanical and SSD, because the LAN is the limiting factor.
Re: Who is buying hard disks (Score:3)
If you're setting up a NAS, you should be using disks for the backing and a (relatively) small SSD for cache to speed up the random access times. Stick your 6x10TB disks in, and add 2TB of SSD to speed up random access to recent files.
The SSD cache is going to be cheaper than one of the drives anyway, it's not really significant overhead.
Re:Who is buying hard disks (Score:5, Informative)
People who need bulk storage.
Re:Who is buying hard disks (Score:5, Insightful)
And don't want to trust everything to the Cloud. Stop paying and poof.... your data is history.
Re: (Score:3, Informative)
I have a friend who ripped all his legitimately bought bluray discs to HDDs. I think he has some kind of ZFS array from memory, it's pretty big anyway.
Re:Who is buying hard disks (Score:5, Insightful)
Wait till you hear that people still buy tape drives. (gasp)
Re: (Score:3)
Wait till you hear that people still buy tape drives. (gasp)
Consumers? I doubt that. I actually had one long ago but they've gone off to enterprise land for people who have 100+ TB to back up. The problem is that the tape drive is super expensive and also a single point of failure. Compared to "expand as you go" disks that can be hooked up to any off the shelf system it's extremely inflexible. And also only for real archiving in that you have to find the right tape and pull it to get it back, no tape robots at a reasonable price.
Re: (Score:2)
isn't tape used mostly for backup and not for online storage
Many, if not most, consumer HDDs are also used for backups and archives.
Re: (Score:3)
You are wrong. CERN is using tape as cheap storage. It uses a few large (10-100TB) replicated disk and SSD caches (dCache) but the bulk of its objects are on tape.
Google does something similar as does AWS to reduce cost on less-frequently used objects. The latency of tape is still within seconds in many cases due to some smart algorithms and load spreading.
Re: (Score:2)
Correct me if I'm wrong, but isn't tape used mostly for backup and not for online storage? So you are talking about a completely different set of onerns,
Yes tape is used for different purposes, but clearly a device that has 100x the performance, and 10x the cost of another device are both used for the same purpose... *rolleyes*.
I'm not entirely sure what an onerns is but I'm sure the ones for SSDs and HDDs are different too.
Re: Who is buying hard disks (Score:4, Informative)
SSDs still aren't quite reliable enough in the long term for environments in which massive random-access I/O occurs, e.g. data centre or cloud.
Re: (Score:2)
Use a hybrid drive with a large cache. Just have your storage controllers issue 'sync' every so often so that volatile data in the cache gets committed safely.
I think that is WD Black?
I don't think those are really datacenter-grade for reliability, but the large cache size should allow them to do pretty good IOPS with stripe writes.
Re: (Score:2)
This is true, but there is a tossup on power consumption and cooling costs when using a spinny disk vs an SSD array.
You might spend more up front on a pure SSD array, and spend more when a hotspare goes into service on a unit failure, but save more than the difference on power and cooling costs.
Re: (Score:3)
Use the right tool for the right job. I tinker with a few Raspberry Pi's (I daren't specify how many, it'd be kind of embarrassing), and constantly backing up/restoring microSD's, flashing new images (uncompressing a 400 meg archive that balloons up to a 2.something gig image) is not well suited to an SSD. I use my old, dime a dozen, spinny hard drives.
I actually own a pair of 6TB WD Red drives. They were not dime a dozen :P. I'll look into this. From what I can tell the higher density...
Re: (Score:3)
I'm talking about many, many 2gig writes, over what I hope to be many years. Also each back up of my "main" Pi is running about 30 gigs (gzipped) right now. To say nothing of the others.
Suffice it to say, yes.
Re:Who is buying hard disks (Score:5, Insightful)
>"SSD's are cheap enough."
Are you kidding? Have you priced 100TB of SSD? If what you need is bulk, inexpensive, reliable storage and space/power/speed isn't a concern, then SSD can't even remotely compete with spinning hard drives. I am guessing almost everyone with considerable storage requirements is using SSD for the boot and operational drives and spinning drives for bulk, archival, and backup. I have done this at home and work for many years now.
The point of this whole article is that we still need *reliable*. If shingle-based storage is much less reliable, then it doesn't fit anyone's needs. Thus, the magnetic drive vendors are digging their own graves, because SSD is nowhere near ready to take over all the roles of spinning drives.
My main concern is the deception. There is no valid reason that vendors should conceal what technology is being used in their models, unless there is something wrong with the technology. I know the next time I am ready to buy a spinning drive, I will have to now research this matter thoroughly and I hope the information is out there.
Re: Who is buying hard disks (Score:2)
If space, speed and power aren't a concern then why are you complaining about the HDD being slow?
Re: (Score:2)
>"If space, speed and power aren't a concern then why are you complaining about the HDD being slow?"
I am not. I am complaining if the reliability is poor, as many others in the thread are saying. And complaining that the manufacturers should be up-front with what technologies are used in each of their models.
I have no experience with them, yet. Perhaps the only reliability problem is using them in RAIDs, due to the timing issues. If that is the only case, perhaps it isn't so bad.
Re: (Score:3)
Not if you are doing video production. You may need to keep hours and hours of video stored. It really does depend on what you need to do. My wife has many TBs of digital photos and other images because she does digital scrapbooking and everyone uses no less than 300dpi for everything as well as 12"x12" pages. 4TB worth of SSDs would be a bit expensive.
Even a lot of gamers still add a spinning platter to keep part of their game library.
Re: (Score:2)
There are literally video cameras with an SSD slot because HDD recording is too slow. You might have to replace them more often, but not as often as SD cards.
Re:IS this a bad thing? (Score:5, Informative)
The article also notes that the only reason for using this tech in smaller drives is to reduce the number of platters, a cost saving which doesn't appear to be passed on to the consumer. This is one of those instances where new technology degrades performance, but in a way that probably won't matter to most users. So, suppose WD sells a 2TB HDD for $100. Along comes this new tech. The honest way to deploy it would be to sell a new 2TB "budget" HDD for $80 (nicely undercutting the competition at the same time) and continue to sell the old one at the old price. But I fully expect them to follow the usual pattern: you replace the old product with the new one but keep the price at $100. Then you start selling the older model under a "performance" label, at $150.
It's odd that they would do this to their Red line, as it has a decent reputation for use in NAS storage.
Re: (Score:2, Informative)
Do you think RAID does "funny things to drives" because you don't understand how RAID works?
RAID does not alter the distinction between reads and writes in any way.
Can you explain why "the difference will be noticeable" with RAID where it otherwise wouldn't be for "regular users"?
Re: (Score:3)
Because the drive will be dropped out of the array if it's not as responsive as it's expected to be.
Re: (Score:2)
That's a potential issue with RAID CONTROLLERS, not with RAID, and it does not represent doing "funny things TO drives".
I do not challenge the idea that there is a potential issue, I challenge the OP's ignorant explanation of what it is.
Any controller could time out and drop a drive, not just RAID controllers.
Re: (Score:2)
A dropped drive on a non-RAID controller just means a reboot at worst. You're arguing semantics without a good reason.
Re: (Score:3)
No I am not, and you're getting desperate. The claim is that RAID does "funny things to drives". What "funny things" does "RAID" do?
A dropped drive can result in more than just a reboot, it can result in lost data. A similar lost drive in a RAID system is less likely to result in the same, but at least you seem to acknowledge that you've changed the subject from RAID to controllers.
How about you spend more effort staying on subject and less time flexing?
Again, one can be wrong, or be rude, not both (Score:5, Insightful)
> you don't understand how RAID works?
> RAID does not alter the distinction between reads and writes in any way.
Actually, RAID 2, 3, 4, 5, and 6 turn a write into multiple reads, followed by a write or two. So yes, most RAID levels do change the distinction between reads and writes.
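The RAID-5 "small write" case can be sketched in a few lines. This is illustrative only, not any vendor's controller code: updating one data block costs two reads (old data, old parity) plus two writes (new data, new parity).

```python
# RAID-5 read-modify-write: P_new = P_old XOR D_old XOR D_new, byte by byte.
def updated_parity(old_data: bytes, new_data: bytes, old_parity: bytes) -> bytes:
    return bytes(p ^ o ^ n for p, o, n in zip(old_parity, old_data, new_data))

# Three data blocks in a stripe, plus parity computed over the full stripe.
d0, d1, d2 = b"\x0f", b"\xf0", b"\x33"
parity = bytes(a ^ b ^ c for a, b, c in zip(d0, d1, d2))

# Rewrite only d1: the controller reads d1 and parity, then writes both back.
parity = updated_parity(d1, b"\xaa", parity)
```

The updated parity matches what a full-stripe recomputation over (d0, new d1, d2) would give, which is why the shortcut works.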
Everybody makes mistakes at times. We all say things that turn out to be not quite right.
On Slashdot, you can often get away with being a bit of a jerk.
You chose to be a jerk WHILE being wrong. You chose to be rude while "correcting" a true statement with your mistaken understanding. That's not a good look. Next time you talk about a subject you don't know anything about, you might want to phrase your ignorance in a more diplomatic manner.
Re: (Score:3)
>> RAID does not alter the distinction between reads and writes in any way.
If you don't understand how changing a write into multiple reads alters read vs write, yet are unwilling to learn, I don't see how I can help you.
Well, there's the kernel :) (Score:5, Funny)
> Do you have any relevant experience implementing RAID products ray? Any intellectual property you've contributed to?
Run rpm -q --changelog kernel and find out.
Or if you're a Windows user, try ctrl-f
http://ftp.isu.edu.tw/pub/Linu... [isu.edu.tw]
Re: (Score:3)
I was wondering about the same thing.
OK, so SMR is worse than CMR. By how much? If there's a 20% difference, I couldn't care less. If it's a 70% difference, there's a problem.
Furthermore, it really depends what you use your NAS for. If you have mostly large files which you read slowly and incrementally (movies, TV series), all HDDs, even the old SATA1 drives, would yield enough performance.
My NAS uses the cheapest large capacity HDDs I could buy. I found the Seagate Backup Plus Hub 8TB to be a sweet spot, I...
Re:IS this a bad thing? (Score:5, Informative)
The issue with using SMR drives in RAID is that once the writeback cache is exhausted, write service times can be so long that the RAID controller treats this as a failed drive and evicts it from the array. This is an issue mainly when rebuilding the array after a drive failure, which can result in high drive load for a prolonged period, which can exhaust the writeback cache on the drive; drive eviction during a rebuild is also a high-consequence failure, as the array is by definition already degraded.
Yes it is a bad thing (Score:3)
The issue isn't (typically) normal use.
The problem is when a drive is replaced and the new one is being resilvered. That means the drive is written continuously for a long time. Any size of cache or CMR sector buffer will fill up. It can take seconds to a minute to flush the buffer, and a SAS/SATA controller sees that stall as a drive fault.
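A toy model shows why the buffer is guaranteed to fill during a resilver. All numbers below are invented for illustration, not vendor specs:

```python
# Drive-managed SMR: writes land in a CMR staging area, then get rewritten
# into shingled zones in the background at a much lower sustained rate.
def seconds_until_stall(cache_gb=20.0, write_mbps=150.0, drain_mbps=30.0):
    fill_rate = write_mbps - drain_mbps   # MB/s the staging area gains
    return cache_gb * 1000.0 / fill_rate  # seconds of sustained writing

t = seconds_until_stall()  # under 3 minutes into a resilver, writes stall
# After the stall, service time is bounded by the slow shingle-rewrite path;
# a controller with a command timeout of a few seconds may then drop the drive.
```

A resilver writes for hours, so under any plausible numbers the staging area fills long before the rebuild finishes.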
Re:IS this a bad thing? (Score:5, Informative)
But if the drive happens to be re-writing that CMR data to SMR when you make your read request, then read performance is impacted. SSDs do the same thing (MLC, TLC, and QLC drives do the initial write in SLC mode). But the difference is SSDs are hellaciously fast at random I/O, so other R/W operations are not noticeably impacted if they happen to occur right when the SSD is re-writing SLC data in MLC/TLC/QLC mode.
Re: (Score:2)
One could argue that "RAID arrays" and "enterprise models" (whatever that is) are the types of devices that clearly SHOULD be using SMR, as they can be tailored to the specific usage model required. It is not clear why desktops should use them at all.
Problem isn't average performance (Score:5, Informative)
The problem is when you try to use them in a NAS or RAID setting (especially the WD Red drives since they're marketed specifically for NASes). In that usage, all it takes is one slow read or write (yes, reads are affected too if you happen to try to read from the drive right when it's re-writing data to SMR) that exceeds the timeout threshold for your array, and the drive gets dropped from the array. If you've got multiple SMR drives which happen to be doing this at the same time (because they had the same amount of data written to them at the same time), they all drop from your array at the same time and you've lost all your data.
Re: (Score:3)
Nobody else but this guy should be talking.
Re: (Score:3)
1 - The least knowledgeable person about the topic posts a strongly worded counter to the summary that seems logical on the surface but is incorrect
2 - Others that are also not knowledgeable on the topic moderate the post as "Interesting" or "Informative"
3 - Soon there are multiple posts by the people that don't know anything, all modded to a 5
4 - Slowly the people with actual knowledge start modding up good posts
Re: (Score:2)
And if you are resilvering after replacing a failed drive, this will happen, because you need to pretty much fill up the entire drive as quickly as possible.
Re: (Score:3)
The assertion that a drive will fail within the time it takes to reconstruct redundancy data
The drive won't fail as a drive, it will fail to respond quickly once its cache is full, reverting to a mode so slow that no RAID controller will put up with it, and the RAID controller marks it failed. Probably a dozen people have explained this to you, but you seem pretty slow to get it.
SMR is unsuitable for RAID, simple as that. It's a tool for a different job. That makes it particularly dirty that WD snuck it into drives marketed as RAID-suitable.
Re: Problem isn't average performance (Score:4, Insightful)
The WD Reds (NAS-targeted drives) were vetted by vendors, and were fine. As PMR drives. Then WD changed to SMR *without disclosure*, so there's no caveat emptor issue here.
For those who say "SMR should be fine, controller issue blah blah", I point out that none of the higher capacity, high cost, enterprise level NAS drives are shingled. Because they aren't suitable for whole-disk-write resilvering.
Re: (Score:3)
If I ask for a specific product and they give me the product but with a slightly different feature than what is listed, they have committed fraud and false advertising, whether or not it's detrimental.
Re: (Score:3)
Yes, but do they advertise the drive as having a specific recording type, and if so, have they failed to advertise it correctly? IME SOP is to simply not advertise such specs. Maybe it's buried in a data sheet somewhere, maybe not, and caveat emptor.
Re: (Score:3)
From the way the article describes it, those SMR drives preform better.
So why complain?
There is plenty to complain about. SMR drives are fine for write-once, read-often data. The problems occur the moment you need to modify existing data on the drive. SMR acts very much like SSD in that to write data, you actually have to write a large superset of it: not just the block(s) you are modifying, but surrounding blocks as well. In SSDs this is not too much of an issue due to how fast and easy it is to read the blocks into memory, update the portion that ne...
Re: (Score:3)
Dunno what point in history you lived in when companies always set price based on cost-of-production. IBM has a golden wrench they'd like to sell you. Actually they'll just charge you to send one of their techs in to turn the stupid thing. You don't get to have the wrench yourself, no sir.
Companies lower prices when competitors put pressure on them to do so. If that means cutting costs and passing the savings along to customers to keep them happy and compete, then so be it. No competition means no reductions.
Re: (Score:3)
Prices drop when the lower price causes enough increase in sales to make up for the revenue lost by the drop. Direct competition is the most obvious reason why that would happen, but it is not the only reason.
Re: (Score:3)
No, direct competition is the ONLY reason why it happens. If there's no competition on a cost basis, prices can stay high indefinitely and nobody can do anything to stop it.
The HDD manufacturers are looking at a market with declining market share. NAND has severely reduced demand for magnetic storage of any kind. We're seeing similar price shifts in declining tech markets (dGPUs are an excellent example). The push is to keep the remaining customer base paying high prices in light of the fact that increa...
Re: (Score:3)
No. Did not know that. I assumed it was tuberculosis, because it was an article on difficulty driving.