
Seagate's 30TB HAMR Drives Hit Market for $600 (arstechnica.com) 44
Seagate has released its first heat-assisted magnetic recording hard drives for individual buyers, marking the commercial debut of technology the company has developed for more than two decades. The 30TB IronWolf Pro and Exos M drives cost $600, while 28TB models are priced at $570.
The drives rely on HAMR technology, which uses tiny lasers to heat and expand sections of the drive platters within nanoseconds so data can be written at higher densities. Seagate announced delivery of HAMR drives up to 36TB to datacenter customers in late 2024. The consumer models use conventional (rather than shingled) magnetic recording and are built on Seagate's Mosaic 3+ platform, achieving areal densities of 3TB per disk.
Western Digital plans to release its first HAMR drives in 2027, though it has reached 32TB capacity using shingled magnetic recording. Toshiba will sample HAMR drives for testing in 2025 but has not announced public availability dates.
coercivity (Score:5, Informative)
TFA is wrong. The laser in the disk head strikes a gold target on the head, which in turn heats up the disk.
The disk is heated to change (lower) the coercivity of the material. Any thermal expansion is an UNDESIRABLE side-effect.
Re: (Score:3)
Any thermal expansion is an UNDESIRABLE side-effect.
Thanks - when I read TFS it sounded bad in my head. Thermal expansion would be very difficult to repeat precisely over extended intervals, which would make me very leery of trusting any storage technology that relied on something constantly thermally expanding and contracting at set rates over a period of years.
Re: (Score:2)
NAS and enterprise (Score:1)
Please be aware that, while available for all to buy, these drives are for NAS and enterprise use.
Put 'em in a normal case for a DIY NAS, in a portable enclosure, or in your normal rig as a lone "media drive" at your own peril.
They need certain mounting and ventilation standards to work reliably without shortening their life.
Re: (Score:2)
Re: (Score:3)
Yarrr
Re: (Score:2)
Re: (Score:2)
I am still rocking 2TB drives in my NAS and I am nowhere near filling it up
Re: (Score:3)
I've been creating YouTube videos since 2006 or so and I have a cupboard with about 20TB of archived video and project files on external USB hard drives -- most of that has been created since I switched to recording in 4K about four years ago. With the move to 6K or even 8K raw footage, 30TB is *not* a lot of storage -- although I would be nervous about committing so much data to a single drive in a non-enterprise environment. At the very least you'd want a redundant RAID setup which would mean buying m
Re: (Score:2)
A redundant RAID setup? Surely you'd want a redundant RAID array of discs.
Re: (Score:2)
With VR porn videos ranging in size from 50-150GB per video, 30TB won't get you very far in a big collection. But porn jokes aside, if you do any kind of actual video work you will quickly fill this up. Though I suspect it won't be useful as a working drive, since you will want fast read/write rates for that.
But on the consumer side, a 10-minute video shot on an iPhone 13 in pro mode will take up 50GB. Install Magic Lantern on your DSLR and shoot in 4K RAW and you'll quickly be over 100GB for 10 minutes, over 400GB if
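For a sense of scale, here's a quick sketch (plain Python; the clip sizes are just the rough figures above):

    # How far 30TB goes at the rough clip sizes mentioned above.
    capacity_gb = 30_000
    formats = {
        "iPhone 13 pro mode (per 10 min)": 50,
        "4K RAW via Magic Lantern (per 10 min)": 100,
        "VR video (per title, high end of the range)": 150,
    }
    for name, size_gb in formats.items():
        print(f"{name}: ~{capacity_gb // size_gb} on one 30TB drive")
    # ~600 iPhone clips (~100 hours), ~300 RAW clips (~50 hours), ~200 VR titles.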
NAS/ZFS rebuilding (Score:3)
Please be aware that, at these capacities, rebuild times will be measured in weeks. Use at least N+2-equivalent redundancy, and in the case of RAID6 do not exceed ~12 drives total per volume.
If you are concerned about IOPS, either for normal use or for rebuild scenarios, get HAMR + MACH.2 (dual-actuator) HDDs.
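Rough back-of-the-envelope numbers (a quick Python sketch; the sustained rate and rebuild efficiency are assumptions, not Seagate specs):

    # Rough rebuild-time estimate for resilvering one full 30TB drive.
    capacity_tb = 30
    sustained_mb_s = 270     # assumed average sequential rate across the platter
    rebuild_efficiency = 0.5 # assumed share of bandwidth spent on the rebuild,
                             # since the array usually keeps serving normal I/O

    seconds = capacity_tb * 1e12 / (sustained_mb_s * 1e6 * rebuild_efficiency)
    print(f"~{seconds / 3600:.0f} hours (~{seconds / 86400:.1f} days) per rebuilt drive")
    # ~62 hours (~2.6 days) for an otherwise idle array; production load and
    # fragmented, small-file resilvers can stretch this into a week or more.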
Re:NAS/ZFS rebuilding (Score:4, Interesting)
At these capacities I wouldn't use RAID 5 or 6, or a hypothetical RAID 7+ (i.e. others that use the same matrixing-type approach), anyway, partly for the same reason we stopped using RAID 5 after capacities went over a terabyte. RAID 1 with three or more disks seems like a much more solid option.
At some point you have to ask why you're using RAID at all. If it's for always-on, avoiding data loss due to hardware failures, and speed, then RAID 6 isn't really a great solution for avoiding data loss when disks get to these kinds of sizes, the chances of getting more than one disk fail simultaneously is approaching one, and obviously it was never great for speed.
It's annoying because in some ways it undermines the point of having disks with these capacities in the first place. But... eight 10G disks in a RAID 6 configuration give you 60Gb of usable capacity, as opposed to six 30Gb disks in RAID 1 with three disks per set. So there's a saving in terms of power usage and hardware complexity, but it's not ideal.
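To put numbers on that trade-off, a minimal sketch (Python; drive counts are the ones above, sizes in TB):

    # Usable capacity and raw overhead for the two layouts above (sizes in TB).
    def raid6_usable(drives, size_tb):
        return (drives - 2) * size_tb          # RAID 6: two drives' worth of parity

    def mirror_usable(drives, size_tb, copies):
        return drives * size_tb / copies       # N-way mirrors: raw divided by copies

    r6 = raid6_usable(8, 10)                   # eight 10TB drives in RAID 6
    m3 = mirror_usable(6, 30, copies=3)        # six 30TB drives as two 3-way mirror sets

    print(f"RAID 6:       {r6:.0f} TB usable from {8 * 10} TB raw")
    print(f"3-way mirror: {m3:.0f} TB usable from {6 * 30} TB raw")
    # Both give 60 TB usable; the mirrors burn much more raw capacity but
    # survive any two failures in a set and rebuild by a straight copy.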
Re: (Score:2)
> the chances of getting more than one disk fail simultaneously is approaching one,
Was meant to read
> the chances of getting more than two disks fail simultaneously is approaching one
But as usual I didn't proof read...
Anyway, the point is that the scenario RAID 6 was created to handle and RAID 5 couldn't (multiple disk failures) is one that RAID 6 itself is going to be inadequate for in the near future. 30TB drives are around 3X beyond the limit of RAID 6's usefulness.
Re: (Score:2)
You seem to be saying that larger sizes increase the probability of multi-disk failures. Are you saying that because larger disks have higher probability of single-disk failure, or just because large disks increase reconstruction time, increasing the probability of another failure during reconstruction?
If it's really true that the chances of more than two simultaneous disk failures is approaching one... these disks must be extremely unreliable.
Re: (Score:2)
> Are you saying that because larger disks have higher probability of single-disk failure, or just because large disks increase reconstruction time, increasing the probability of another failure during reconstruction?
Yes ;-)
It's exactly the same issue that made us all switch from RAID5 to RAID6. The larger capacities increase the chances of failure, and the longer rebuild times (which are getting worse) are also likely to exacerbate that.
> If it's really true that the chances of more than two simultan
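To put rough numbers on that risk, a quick sketch (Python; the failure rate and rebuild window are assumptions for illustration, not measured values):

    # Probability that another drive fails while a degraded array is rebuilding.
    afr = 0.015          # assumed ~1.5% annualized failure rate per drive
    rebuild_days = 3.0   # assumed rebuild window for one 30TB drive
    survivors = 7        # drives left in an 8-drive array after one failure

    p_one = 1 - (1 - afr) ** (rebuild_days / 365)   # per-drive risk in the window
    p_any = 1 - (1 - p_one) ** survivors            # at least one survivor fails

    print(f"per-drive risk during rebuild: {p_one:.5%}")
    print(f"risk of a further failure:     {p_any:.3%}")
    # The per-incident risk scales roughly linearly with failure rate, rebuild
    # time and array width, which is the trend that pushed people from RAID5
    # to RAID6 and keeps squeezing RAID6 as drives get bigger.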
Re: (Score:2)
Also as usual I wrote GB when I meant TB throughout the above. Hopefully you all understood that...
Too early in the morning...
Eh? (Score:4, Interesting)
Eh?
> At some point you have to ask why you're using RAID at all. If it's for always-on, avoiding data loss due to hardware failures, and speed, then RAID 6 isn't really a great solution for avoiding data loss when disks get to these kinds of sizes, the chances of getting more than one disk fail simultaneously is approaching one, and obviously it was never great for speed.
If you're at this point, then using drives at all is probably already off the table. But I think this position is probably ridiculous.
I have many years of experience managing file clusters in scopes ranging from SOHO to serving up to 15,000 people at a time in a single cluster. In a cluster of 24 drives under these constant, enterprise-level loads, I saw maybe 1 drive fail in a year.
I've heard this trope about "failure rate approaching 1" since 500GB drives were new. From my own experience, it wasn't really true then, any more than it's true now.
Yes, HDDs have failure rates to keep in mind, but outside the occasional "bad batch", they are still shockingly reliable. Failure rates per unit haven't changed much, even though rising capacities mean each failure now puts far more data at risk. It still doesn't matter as much as you think.
You can have a great time if you follow a few rules, in my experience:
1) Engineer your system so that any drive cluster going truly offline is survivable. AKA "DR" or "Disaster Recovery". What happens if your data center gets flooded or burns to the ground? And once you have solid DR plans, TRUMPET THE HECK OUT OF IT and tell all your customers. Let them know that they really are safe! It can be a HUGE selling point.
2) Engineer your system so that likely failures are casually survivable. For me, this was ZFS/RAIDZ2, with 6- or 8-drive vdevs, on "white box" 24-bay SuperMicro servers with redundant power (rough capacity math in the sketch after this list).
3) If 24x7x36* uptime is really critical, have 3 levels of redundancy, so even in a failure condition, you fail to a redundant state. For me engineering at "enterprise" level, we used application-layer logic so there were always at least 2 independent drive clusters containing full copies of all data. We had 3 drive clusters using different filesystem technologies (ZFS, XFS/LVM) and sometimes we chose to take one offline to do filesystem level processing or analysis.
4) Backups: You *do* have backups, and you do adhere to the 3-2-1 rule, right? In our case, we used ZFS replication and merged backups and DR. This combined with automated monitoring ensured that we were ready for emergencies, which did happen and were always managed in a satisfactory way.
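As a rough illustration of the 24-bay RAIDZ2 layouts in point 2 (a Python sketch; the 30TB drive size is just an example, not a recommendation):

    # Usable capacity and fault tolerance for a 24-bay box split into RAIDZ2 vdevs.
    def raidz2_pool(bays, vdev_width, size_tb):
        vdevs = bays // vdev_width
        usable = vdevs * (vdev_width - 2) * size_tb   # RAIDZ2: 2 parity drives per vdev
        return vdevs, usable

    for width in (6, 8):
        vdevs, usable = raidz2_pool(24, width, 30)
        print(f"{vdevs} x {width}-wide RAIDZ2: {usable} TB usable of {24 * 30} TB raw, "
              f"survives 2 failures per vdev")
    # 6-wide: 4 vdevs, 480 TB usable; 8-wide: 3 vdevs, 540 TB usable.
    # Narrower vdevs give up capacity but resilver faster and spread the risk.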
Re: (Score:2)
I'm not going to argue for and against the first part of your comment - all I can say is RAID controllers tend to be fussier than you're giving them credit for. Disks don't have to fully crash for there to be problems - If disk 1 in a RAID5 of 5 disks fails, and there's one sector that's unreadable on disk 3, that can be enough for many RAID controllers to crash out and mark the entire thing as unrecoverable. And... it's not necessarily a bad thing they do, as that does mean there's unrecoverable data and t
Re: (Score:2)
And... it's not necessarily a bad thing they do, as that does mean there's unrecoverable data and the file system no longer has integrity.
Marking an entire array as bad due to one bad block is definitely a "bad thing they do". Checksumming will help identify bad blocks. There's no need to nuke the entire file system over a flipped bit.
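A toy sketch of the idea (Python; the block size and hash are arbitrary choices for illustration), showing how per-block checksums pinpoint the one bad block instead of condemning the whole device:

    import hashlib

    BLOCK_SIZE = 4096  # arbitrary block size for the example

    def checksum(block: bytes) -> bytes:
        return hashlib.sha256(block).digest()

    def find_bad_blocks(blocks, stored_sums):
        """Indices of blocks whose data no longer matches the stored checksum."""
        return [i for i, (blk, s) in enumerate(zip(blocks, stored_sums))
                if checksum(blk) != s]

    # Simulate a device with 4 blocks and one flipped bit in block 2.
    blocks = [bytes(BLOCK_SIZE) for _ in range(4)]
    sums = [checksum(b) for b in blocks]
    damaged = bytearray(blocks[2])
    damaged[100] ^= 0x01                 # flip a single bit
    blocks[2] = bytes(damaged)

    print(find_bad_blocks(blocks, sums)) # [2] -- repair or remap just that block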
Do Not Want (Score:2)
I don't want bigger magnetic drives. THEY'RE TOO SLOW!
I want 30TB, or even 15TB, NVMe drives for $600.
Re: (Score:2)
These are for mass storage in enterprise settings, not for home users. It's for applications that demand huge storage volume but speed is not critical.
Re: (Score:3)
It's for applications that demand huge storage volume but speed is not critical.
Like my porn collection.
Re: (Score:2)
640k was good enough until AI
now i can generate 30TB per day
Re: (Score:2)
I know what they are for. Here's a tip: enterprises want massive storage AND speed AND reduced power consumption. Enterprises also want to stop paying "enterprise" prices. A 15TB NVMe U.3 drive currently costs me north of $3,000 per drive.
Re: (Score:2)
And I want a unicorn, but for $500. $600 is waay too expensive.
Re: (Score:2)
I want 30TB, or even 15TB, NVMe drives for $600.
I want a unicorn.
Re:Do Not Want (Score:5, Informative)
Well, *duh*, if you could have SSD for the same cost as magnetic drives then of course everyone would want them.
The joke could be on you, though: you didn't say SSD, you said NVMe. So there is a concept of a spinning disc with an NVMe interface, since it's increasingly weird to bother with SAS/SATA when PCIe interfaces are more and more prolific, including switch chips taking over the role of things like SAS expanders. So it may well be that you can get slow disks that are, technically, NVMe drives.
Re: (Score:2)
So there is a concept of a spinning disc with an NVMe interface
The worst of both worlds.
My referenced desire is for U.2/U.3 interface SSDs. They exist, but cost thousands.
Re: (Score:2)
The HDDs under discussion will come down in price, probably to under $200 within two years. They'll also drive down the prices of smaller disks. So basically you're looking at much cheaper backup media (who uses tape these days when you can hotswap a disk?).
A $600 SSD will eventually come down in price (in terms of price per terabyte) but you're looking at it taking years to even halve the price, as Moore's law doesn't apply any more. And it really is based upon general tech advancements, on foundries findi
Re: (Score:2)
The HDDs under discussion will come down in price, probably to under $200 within two years.
I'll make you a bet: if these drives are less than $300 in two years, I'll buy you one. Otherwise you buy me one. That threshold is 50% above the price you predict. Deal?
Enterprise HDDs on the retail market (vs premium system integration costs) have been at just about $20/TB for years. I doubt they'll drop by 66% in the next two years.
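The arithmetic behind that, as a quick sketch (Python; the prices are the ones quoted above):

    # Price-per-TB math behind the bet.
    capacity_tb = 30
    launch_price = 600
    predicted_price = 200    # the "under $200 in two years" prediction
    bet_threshold = 300      # 50% above that prediction

    print(f"launch:     ${launch_price / capacity_tb:.2f}/TB")    # $20.00/TB
    print(f"prediction: ${predicted_price / capacity_tb:.2f}/TB") # $6.67/TB, a ~67% drop
    print(f"bet line:   ${bet_threshold / capacity_tb:.2f}/TB")   # $10.00/TB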
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Don't be ridiculous. Even in a 16 spindle RAID 10 they won't come even remotely close to 7,000+MB/sec.
Re: (Score:2)
The fastest hard disks can now hit 550MB per second, so 16 of them ought to be able to hit about 7,000MB/s in aggregate. Of course that's streaming transfer. Latency, and hence random access, won't be comparable.
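A back-of-the-envelope sketch (Python; the per-disk rate and efficiency factor are assumptions):

    # Rough aggregate streaming throughput for a 16-spindle RAID 10.
    per_disk_mb_s = 550   # assumed peak sequential rate of a fast (dual-actuator) HDD
    spindles = 16
    efficiency = 0.8      # assumed controller/striping overhead

    # Sequential reads can be striped across every spindle; writes hit both
    # halves of each mirror, so only 8 spindles' worth of bandwidth counts.
    read_mb_s = spindles * per_disk_mb_s * efficiency
    write_mb_s = (spindles // 2) * per_disk_mb_s * efficiency

    print(f"streaming reads:  ~{read_mb_s:.0f} MB/s")   # ~7040 MB/s
    print(f"streaming writes: ~{write_mb_s:.0f} MB/s")  # ~3520 MB/s
    # Random I/O is another story: seek latency doesn't stripe away, which is
    # why NVMe SSDs still win decisively there.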
Ha! ...that's nothing (Score:2)
Durability (Score:2)
I would be concerned about the durability of these drives. Repeated heating and cooling is bound to take its toll.