The Billion-Dollar Bet on the Future of Magnetic Storage (ieee.org) 200
For several decades, the areal density of hard disks increased by an average of nearly 40 percent each year. But in recent years, that rate has slowed to around 10 percent. Seagate and Western Digital, the leading manufacturers of hard drives, disagree on how to get around this. From a report: In back-to-back announcements in October 2017, Western Digital pledged to begin shipping drives based on what is known as microwave-assisted magnetic recording (MAMR) in 2019, and Seagate said it would have drives that incorporate heat-assisted magnetic recording (HAMR) on the market by 2020. If one company's solution proves superior, it will reshape a US $24 billion industry and set the course for a decade of advances in magnetic storage. Companies that wish to store huge amounts of data do have other options, but hard drives are still the go-to choice for enterprise storage needs that fall somewhere between faster, more expensive solid-state drives built on flash memory, and slower, cheaper magnetic tape.
Seagate now aims to debut a 20+ terabyte drive based on HAMR in 2020, and Western Digital promises MAMR drives that will hold roughly 16 TB later this year. Western Digital expects to quickly scale up to MAMR drives with 40 TB of capacity by 2025, while Seagate believes it can achieve similar capacities through HAMR, though it has not publicly stated a target date. Both companies are essentially starting from the same place, with hard drives that share a few key components. The disk, for example, is a thin platter that has been coated with some form of magnetic material made up of countless individual grains, each of which is magnetized in one particular direction. Ten or so grains in a cluster, all with magnetization pointing in the same direction, represent a bit.
20-40 terabytes? (Score:2)
Once they get down to about $100, I guess a few drives would be enough to backup all the devices in my home, and maybe store some other media.
Until then rsync diffs to a remote NAS will be good enough.
There isn't much on my systems that can't be downloaded again after installing extra apps and my personal data.
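For what it's worth, a minimal sketch of that kind of overnight rsync-to-NAS routine (the source directories and NAS path below are made-up placeholders, not anything from the post; assumes rsync and SSH access to the NAS):

    #!/usr/bin/env python3
    # Nightly incremental sync of personal data to a remote NAS.
    # Hypothetical paths; adjust to taste.
    import subprocess

    SOURCES = ["/home/me/Documents", "/home/me/Photos"]    # assumed local data
    DEST = "backup@nas.local:/volume1/backups/desktop/"    # assumed NAS target

    for src in SOURCES:
        # -a preserves permissions/timestamps, -z compresses over the wire,
        # --delete mirrors removals so the NAS copy tracks the source.
        subprocess.run(["rsync", "-az", "--delete", src, DEST], check=True)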
Re: (Score:3, Insightful)
Re:20-40 terabytes? (Score:5, Insightful)
People on Slashdot have been posting the equivalent of "640k should be enough for anybody" for decades now.
Something always comes along to fill that space. For a while I thought streaming/cloud might slow it down as people stop keeping stuff locally, but no it's carried on growing as fast as the R&D can manage.
Re:20-40 terabytes? (Score:5, Interesting)
You are 100% accurate. My take however, is slightly different.
As Spindle Drives increase their density (and capacity), so will SSD technology, which, based on my subjective (not empirical) opinion, should be able to more than keep up with Spindle Tech.
The issue is that as these competing forces work, eventually one will win out. We are already starting to see how this is playing out, and it doesn't look good for spinning drives. One of the reasons is that there is a bunch of competing but related techs being hammered out in just the Solid State arena. So in addition to competing with Spinning drives, Solid State tech is competing with itself.
Does this mean that spinning drives are going away completely? Not any time in the next 5 years. There will be a steady decline in use, but I'm fairly certain that Spinning drives will go the way of tapes (which still exist somewhere). They are too old, too bulky, too slow, too much anything to be useful in the very long run.
I am 100% sure that there are use cases today for Cheap Dense Slow Storage. Mostly for long term /archival storage. Anything that needs access to a processor will want / require Solid State.
Re:20-40 terabytes? (Score:4, Interesting)
That's a fair analysis.
If the demand for mass archival storage drops too low, then the drive manufacturers won't be able to amortize development costs over enough units, and prices will go up. That's the scenario that will most likely be the final death of spinning drives, as it will lead to solid state mass storage being cheaper.
While this is likely to be a slow process, I remember when memory prices had wild swings, and it's possible we may see the same with solid-state memory in the coming years. A sudden drop in SSD prices could kill the hard drive market overnight.
Re: (Score:2)
The failure mode of SSD may be very gentle, like seeing an increase in correctable block errors until they become uncorrectable block errors. You only have to hope the controller's firmware is not buggy and does not crash hard.
Re: (Score:2, Insightful)
Re: (Score:3)
I have noticed that SSDs fail a lot less than HDDs... but when they fail, they fail hard. However, since the beginning of time in computers, one always was supposed to have backups and never trust that they could ever get their data back from spinning rust. SSDs only drive this point home. Once the electrons are out of the gates, there is no going back.
Re:20-40 terabytes? (Score:4, Informative)
I am 100% sure that there are use cases today for Cheap Dense Slow Storage. Mostly for long term /archival storage. Anything that needs access to a processor will want / require Solid State.
Systems and applications that require really fast storage will require DRAM. Flash is way too slow compared to DRAM. On the other hand, many data centers are extremely cost sensitive. These data centers account for many tens (hundreds?) of millions of annual HDD unit sales. Many large internet companies require massive cold storage, i.e., data that is needed maybe a few times a year or less but which needs to be retrieved in a few seconds when needed (e.g., think about the tail end of the distribution for Facebook browsing or Google search queries). For cold storage, flash is too expensive, and tape is too slow.
Even though flash prices have been dropping rapidly, they still have not gotten close to HDD prices. As a point of comparison, take a look at average price charts [pcpartpicker.com] for various capacities of HDDs and SSD. Based on this webpage, the average large-capacity SSD price is around $250/TB, while the average large-capacity HDD price is around $40/TB. This roughly 5x price difference has held steady for many years. More importantly, HDDs have held this price advantage in the last decade without the usual historical once-per-decade technology disruptor. PMR was the last mini-disruptor ten years ago. HAMR/MAMR/bit-pattern has been promised for a very long time, and the price difference relative to flash will only increase when these new disruptors are commercially ready.
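To make that gap concrete, a quick back-of-the-envelope script using the ballpark averages quoted above (the figures are the ones from this comment, not live prices):

    # Rough $/TB comparison using the averages quoted above.
    hdd_usd_per_tb = 40     # average large-capacity HDD (figure from the comment)
    ssd_usd_per_tb = 250    # average large-capacity SSD (figure from the comment)
    capacity_tb = 100       # e.g. a 100 TB cold-storage pool

    hdd_cost = capacity_tb * hdd_usd_per_tb
    ssd_cost = capacity_tb * ssd_usd_per_tb
    print(f"HDD: ${hdd_cost:,}  SSD: ${ssd_cost:,}  ratio: {ssd_cost / hdd_cost:.1f}x")
    # HDD: $4,000  SSD: $25,000  ratio: 6.2x -- in the same ballpark as the ~5x gap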
Re: (Score:2)
Even though flash prices have been dropping rapidly, they still have not gotten close to HDD prices.
A quick search through Amazon for the not-cheap Samsung 860 EVOs shows a price of roughly $160/TB. WD Red prices are roughly $30/TB. Still a 5X difference for these specific drives, but the prices are far lower than indicated. And note that other lines or brands of drives are even cheaper. I saw a 2TB ADATA drive on sale a few weeks ago for roughly $80/TB. Not that I'd want one in my system, but the prices can get much lower for SSDs today.
Yes, the numbers I originally quoted were average prices. Shopping around will yield lower prices. For example, Backblaze [backblaze.com] was paying about $20/TB two years ago. The ~5x difference has surprisingly held steady for many years.
My main point is that HAMR/MAMR aren't going to save hard drives in the next couple of years if history is any indication. In fact, this may mark the switchover for spinning HDDs from main storage to mass storage, replacing tape except where true longevity is needed. And even that may change, because it's rather trivial to clone 100TB from HDD to HDD. With tape - I hope it's better than it used to be, as it was easier to just rotate backups than clone a tape.
Well, it depends on what "saving" means. HAMR/MAMR are still unproven in a commercial setting. They have been working in the lab for many years, but storage devices have very stringent reliability requirements, so it remains to be seen what the actual reliability of the eventual rel
Re: (Score:2)
Tape stretches
Re: (Score:2)
Tape stretches
Yes, tape can break. However, only those few bits at the break are potentially lost, and only if the ECC is overwhelmed. The rest can be recovered. This is in contrast to HDDs and SSDs where the entire device is lost. Being able to completely separate the media and the head is huge. This will likely never be possible for HDDs. Maybe someone smart will find a way to do it for SSDs.
Re: (Score:2)
I agree it will keep large-capacity HDDs afloat, but the question is how many people will need that capacity at that point. I consider myself a relatively heavy storage user for a consumer, likely in the top 5%.
Think data centers, like what Google, Amazon, Microsoft, etc. have. The number of HDDs that these companies buy each year is staggering.
BTW, cloning full 100 TB HDDs is not trivial. Assuming a theoretical max transfer speed of 200 MB/s, it would take almost 6 days to copy the data, and that theoretical speed will likely be hard to achieve.
Well, considering there are only 14TB drives today, with 10TB being common, a 100TB "drive" is actually a RAID array, and will push an average of roughly 1000MB/s, provided you have the proper controllers in play. But that's probably unfair. I would hope the next iteration of drives does better than raising the average transfer rate by less than 50%. My current external 8TB HDDs are pushing over 120MB/s on copies between them. And that's through a common USB controller (USB 3, so nowhere near max) with the receiving HDD being the bottleneck. They're pretty cheap drives with slower spindle speeds, hence the slower write transfers.
Even reading a 14 TB HDD at max theoretical speeds will take a day, and no aged drive is going to be anywhere close to those speeds. If you're getting 120MB/s then you're probably archiving a lot of large files.
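The copy-time figures being thrown around here are easy to reproduce; a quick sketch using the thread's own assumed speeds:

    def copy_time_days(capacity_tb: float, rate_mb_s: float) -> float:
        """Time to read or write a full drive at a sustained rate, in days."""
        seconds = capacity_tb * 1e12 / (rate_mb_s * 1e6)
        return seconds / 86400

    print(f"{copy_time_days(100, 200):.1f}")    # ~5.8 days: 100 TB at 200 MB/s
    print(f"{copy_time_days(14, 200):.1f}")     # ~0.8 days: one 14 TB drive
    print(f"{copy_time_days(100, 1000):.1f}")   # ~1.2 days: 100 TB across a RAID at 1000 MB/s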
Re: (Score:2)
I think we will see different types of storage appear. Most machines will use SSD because it is better in almost every way except for capacity, while HDD will be useful for large capacity arrays where I/O speed isn't as critical, but storage is.
HDDs also can be useful in desktops, provided they have a good amount of SSD built in, which functions both as a "landing zone" for data (where the drive can tell the OS that it is complete once it finishes to the SSD, and then move the data to the spinning platters
Re: (Score:2)
The thing with SSD is that in many cases the form factor (like 2.5" disks) is big enough that you could have several terabytes of flash chips in them.
The effective size of a flash chip isn't really a huge obstacle unless you insist on the M.2 or similar form factor.
Re: (Score:2)
Re: 20-40 terabytes? (Score:2)
Wikipedia in its entirety is less than 200 GB compressed. Even uncompressed it would fit no problem on my home NAS.
Re: 20-40 terabytes? (Score:2)
The images and videos aren't technically wikipedia; they're part of the "wikimedia commons". You're right; including all the multimedia content significantly increases the size. I don't know the exact size currently but back in 2014 it was "over 23 terabytes"; could be double that by now.
It gets quite a bit smaller if you only want to archive the English language content though.
Re: (Score:3)
My 8TB NAS is filling up (3 4TB drives in RAID-5). When we were thinking of building an entertainment center in our weird TV space, we realized that it was mostly for storing our DVDs, and it was cheaper to build a file server in the basement instead. Now we're starting to collect Blu-Rays, which I haven't ripped because I want to keep the full menus and special features, but I don't have a good works-on-Linux system for that yet. (The best suggestion so far has been to do a raw copy of the disc and then
Re: (Score:2)
that's about the solution I came up with when I wanted all the menus and stuff for DVDs, but that was a number of years ago. Is there a good solution for playing an ISO image nowadays, especially across a network?
Re: (Score:2)
I use Xine for DVD iso files, and it works just fine. I use MythTV, and I have it set up as the player for .iso video files, and it is essentially flawless, regardless of whether the storage is local or on the LAN. I did have this fail when the file server had network problems and dropped down to 100Mb instead of 1000Mb, even though that should have been sufficient. I haven't tried it with WiFi.
Blu Ray has a completely separate system for everything, and I'm not aware of any open source software for proc
Re: (Score:2)
For most of my purposes DVD image/audio quality is fine, but I really like having the menus available for things like language/subtitle selection. Thanks for the pointers!
Re: (Score:2)
HandBrake has menu options to select subtitles and audio tracks when ripping a DVD. I haven't personally tried to include them, but it should work, and then you can just use VLC's menu to select the desired subtitles or audio track?
Re: (Score:3)
MakeMKV for ripping.
I also use Media Center Master to rename the files, metadata tag them from a couple of sources (IMDB being primary), download artwork, etc. It makes using kodi actuall
Re: (Score:2)
I store all my data on a 2 TB NAS, with plenty of headroom. I really can't think of too many ways I could take advantage of a 20 TB HDD. I'm guessing the market for this stuff will be mostly the people who are collecting and storing data ON you, not FOR you.
I have about 3tb used on the NAS I have. Most of it is replaceable in the case of a failure. I keep my photos and other stuff in a couple places, including a local backup drive and remote NAS I setup. With 50Mb/s up, overnight diffs don't take very long.
I still have about 100 Blu-rays and 1K DVDs I'd like to rip at full size with menus and extras. Mostly because I'm too lazy to walk across the room to pick one out to watch. Maybe I just need a jukebox robot to pick them...
I'd consider getting rid of
Re: (Score:2)
Re: (Score:2)
I have fairly narrow taste, so when I find
Re: (Score:2)
I guess a few drives would be enough to backup all the devices in my home
Huh? You mean you're not on the 6K porn bandwagon yet?
Re: (Score:2)
I hate this as much as you do, but we all know they use a decimal definition of TB, and you need to get them to use TiB if you want the one true binary definition.
Re: (Score:2)
At least they are still talking bits and are not like old game cartridges bragging bits as "8 MEGA" in an exploding balloon burst zap pow!
Re: (Score:2)
They can do math quite fine. It is a long-standing practice in the hard drive arena to count MB/TB using powers of 1,000 rather than 1,024. One is base 10, the other is binary.
That being said, your point is noted as being a stupid "marketing" ploy that no longer works, because nobody cares about the differences at that level. And ... if you need that extra 1 TB, you should already be adding another drive to your system ;-)
Re: (Score:2)
Thing is, for at least two decades hard drives were sold in binary capacities rather than decimal. Then they decided to change, and the cynical among us know this is simply to inflate the perceived capacity.
I would contrast this to RAM, which is still sold in binary capacities. The 3TB servers we bought last year at work come with 3*2^40 bytes of RAM, not 3e12 bytes of RAM.
Re: 20-40 terabytes? (Score:2)
What I do remember is the difference at lower capacities being so negligible that it was usually blamed on FAT and MFT tables and such, which also reduced the storage capacities. Not much difference between 1MB and 1.02MB...
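That's because the gap between powers of 1,000 and powers of 1,024 compounds at each prefix step; a quick illustration:

    # Decimal (drive-marketing) vs. binary (OS-reported) units at each prefix.
    prefixes = ["KB", "MB", "GB", "TB", "PB"]
    for i, p in enumerate(prefixes, start=1):
        shortfall = 1 - (1000 ** i) / (1024 ** i)
        print(f"1 {p} is {shortfall:.1%} smaller than 1 {p[0]}iB")
    # 2.3%, 4.6%, 6.9%, 9.1%, 11.2% -- negligible at MB scale, ~10% at TB scale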
How Big Is Too Big? (Score:2)
Every time I see posts about hard drives getting bigger, I wonder: how long until they're no longer practical due to concerns about data safety? Backing up a large drive is already difficult.
Then again, I would really like to see them make this kind of progress with SSD... A 10TB SSD would be a wonderful thing. :)
Re:How Big Is Too Big? (Score:4, Insightful)
Simple answer: Always keep more than one backup.
Backup yesterday's drives onto today's bigger drives and keep both generations around. Repeat every couple of years or so, discarding the grandparents. This way your total storage keeps growing to keep up with your accumulated data and you always have two copies of it around in case a drive dies.
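A tiny sketch of that rotation scheme, purely illustrative (the drive labels are made up):

    # Keep the current backup set plus one older generation; when a new,
    # bigger drive arrives it becomes current and the grandparent is retired.
    generations = []  # oldest first, newest last

    def rotate_in(new_drive):
        generations.append(new_drive)
        while len(generations) > 2:          # keep only parent + current
            print("retiring", generations.pop(0))

    rotate_in("4TB set (2015)")
    rotate_in("8TB set (2018)")
    rotate_in("16TB set (2021)")   # -> retiring 4TB set (2015)
    print(generations)             # ['8TB set (2018)', '16TB set (2021)']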
Re: (Score:2)
This is what RAID is for. And in the simpler cases, mirroring (RAID-1). The question is how long will it take to copy over all the data from your old drives. Depending on the situation, you might have to do it online, which slows down the process while impacting performance during the migration. Once you've migrated over, the question is not one of size, but of data rates. If you aren't generating data much faster than before, then your old system for incremental backups or offsite copies will work the
Re: (Score:2)
RAID does not a backup make.
That said, they're also impractical for a lot of RAID levels these days. It takes a while to rebuild a disk when you have to rebuild 10+TB of data. When "a while" starts translating into 24 hours or more, you end up at serious risk. RAID6 will only get you so far. Full-out mirroring is expensive.
ceph with more smaller disks can be better than (Score:2)
Ceph with more, smaller disks can be better than 3-6 super big disks with high rebuild times.
Re: (Score:2)
Right. That's why I talked about backups, and the fact that drive size has no impact on incrementals.
You're right that RAID rebuild time is important. Also, the rebuild puts more stress on the surviving drives, which are probably almost identical to the failing drive, so failures aren't independent. But the issues are similar with SSDs. Regardless of how you store your data, there are complicated management issues, and that's why we have companies like DellEMC that specialize in enterprise storage.
For my
Re: (Score:2)
Nice how you can't read what I wrote and just toss out a slogan.
Re: (Score:2)
RAID is not backup.
Re: (Score:2)
I never said it was, but it doesn't mean RAID isn't part of a data reliability strategy. RAID is useful. So are backups. So is reading comprehension.
Re: (Score:2)
ZenShadow was talking about how long it takes to make a backup. And then you said "that's what RAID is for" because you clearly still don't understand that RAID is not backup.
RAID will in no way reduce the time it takes to make a backup, because, get this, RAID is not backup.
Re: (Score:2)
Backup is only part of what the original comment was about. He also used the phrase "data safety." And get this, RAID is about data safety. I also talked about backup as a separate factor in data safety. I don't get the instinct to jump all over people for talking about RAID. It's an important part of keeping data safe. So are backups. If you have a drive crash, backups are useless for recently-written data, so you need RAID. If you have a power hit that takes out the whole system, you need a backup
Nothing changed (Score:2)
Every time I see posts about hard drives getting bigger, I wonder: how long until they're no longer practical due to concerns about data safety? Backing up a large drive is already difficult.
Backing up a large drive has ALWAYS been difficult. The only thing that changes is the size of the number. Some of my early machines had 40MB hard drives and I had no practical means to back up that much data at the time. Now it might be 40TB but the problem is the same and so are many of the solutions. Back then we had tape, second hard drives, removable discs. Today we have... tape, hard drives and removable disks (solid state or optical instead of floppies). The more things change the more they st
Re: (Score:2)
A 10TB SSD would be a wonderful thing. :)
Ever hear of RAID?
Re: (Score:2)
When it's too big to fail.
Oh wait...
Community opinion? (Score:2)
So what does /. think about the below?
I am uneasy about 2TB+ drives. The way I see it, that is a lot of data running on a single/small set of failure points. At about 2TB, that's about all the corporate data I generate over 2-3 years.
Each year I trim out a lot of stuff and zip it down to ~50GB of important, must-keep stuff. With 2TB drives, we tend to just keep everything.
And I just feel more is at risk with few protections. One stolen laptop, bad disk jolt, head jitter, etc., and so much is gone. I j
Re: (Score:3)
The advantage of 2TB drives is you can back up everything, twice. And you don't have to waste time sorting out the bits you really need to keep/archive, just do the whole lot.
Someone posted a break-down of the cost/benefit ratio once. As I recall there was a photographer who had a fair amount of photos to store, and it was suggested he sort out the good ones and discard the rest. Turns out that paying someone minimum wage to do that was far more expensive than just adding more and more storage to keep it al
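A purely illustrative version of that trade-off; every number here is a made-up assumption, not from the story being recalled:

    # Sort-and-discard vs. just-buy-more-storage, with invented example numbers.
    photos = 50_000
    avg_photo_gb = 0.01           # ~10 MB per photo (assumed)
    sort_rate_per_hour = 300      # photos triaged per hour (assumed)
    wage_per_hour = 15.0          # $/hour (assumed)
    hdd_usd_per_tb = 40.0         # ballpark figure from elsewhere in the thread

    labor_cost = photos / sort_rate_per_hour * wage_per_hour
    storage_cost = photos * avg_photo_gb / 1000 * hdd_usd_per_tb
    print(f"sorting: ${labor_cost:,.0f}  vs  keeping it all: ${storage_cost:,.0f}")
    # sorting: $2,500  vs  keeping it all: $20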
Re: (Score:2)
Back in the early aughts I realized I was the biggest danger to my data and sat down to create a sustainable backup system. At the time, duplicate storage was a bit spendier than I wanted to pay, but it was clear that we weren't far from a point where it would be cheaper than organizing my way to using smaller volumes, so I just committed to backing it all up, and mirroring that twice (I swap on-site and off-site mirrors once a month).
That isn't to say that I don't trim and organize - in the end, keeping b
Re: (Score:2)
I am uneasy about 2TB+ drives. The way I see it, that is a lot of data running on a single/small set of failure points. At about 2TB, that's about all the corporate data I generate over 2-3 years.
2TB disks are cheap. Was a time I never thought I'd say that, like back when used MFM disks were $1/MB in Santa Cruz county, home of Seagate. But you can get two of them, and back everything up twice. I use two pogoplug v4s running debian to connect them to the network, because gig is fast enough for my purposes.
How about we stop already? (Score:2, Offtopic)
Have you bought an SSD lately? They are mostly air as it is. There is absolutely no reason these couldn't be packed with newer chips to the same degree as the solid bricks these drives were a few years back, to make 200+TB SSDs. There is no particular reason that we need magnetic drives, or that similar-capacity SSDs should cost significantly more. The drive manufacturers just have a common interest in maximizing return on every bit of infrastructure they own and have formed a consensus around it.
This isn't much
Re: (Score:2)
Price.
Get me an SSD that I can use to store my DVD collection on at the same price I'm paying for the hard drives that do it today, and I'm there. I'll even pay a premium for the power savings. But I don't really care about performance for this application, as the current solution is good enough.
As long as the drive manufacturers can make magnetic media significantly cheaper than flash, they'll stay in business.
Re: (Score:2)
Price is a value being set by the manufacturer. SSDs are a superior technology to spinning magnetic storage. That is my point: the actual silicon used in that SSD doesn't justify the price being charged. Instead the price is based on the "premium" capabilities SSD brings to the table. If you simply eliminate the magnetic tier and price SSD options accordingly, you dramatically cut the margins on those chips since it won't be premium but rather standard performance at that point.
I'm sure the argument that man
Re: (Score:2)
"And no, it's not the cost of sand, idiot."
You might have got me there if I'd said it was the cost of sand.
"Creating silicon wafers isn't free"
That is certainly true, and while the heavy energy input is part of it, most of it is just that the equipment is made in small quantities with ridiculously high per-unit cost.
"The price on SSDs reflect a global game of chicken, as each vendor is trying to maximize the price while still undercutting their competition."
Yet they publish roadmaps far in advance. You'd never
Re: (Score:2)
Semiconductor manufacturing is different to cell phone manufacturing. The cost of entry is orders of magnitude more expensive and each new generation has an investment cost that makes nuclear power look easy. You can count the number of contenders in the market now on one hand because of this. Intel having trouble moving on to another generation shows how difficult it is.
Re: (Score:2)
In the paragraph "It's no different than cell phones" I was referring to the collaborative/competitive dynamic between the existing players and not the engineering. By syncing up for the most part and competing on marginal adjustments they make more profit than by releasing a market upset increment to try to steal the market outright.
Although, for that side of it I could have said "it is little different than the chips in cell phones."
I absolutely do not dispute the need for very deep pockets to enter this
Re: (Score:2)
Re: (Score:3)
This is kind of hard to follow, but I agree with your initial point. The reason these guys are trying to sell platters is that they have factories which make platters.
It's gonna be way cheaper in a decade to just do it all solid-state, but until that time, they're going to milk their investment.
Re: (Score:2)
need more pci-e lanes / bigger pch link to make ss (Score:2)
Need more PCIe lanes / a bigger PCH link to make sure a few SSDs really don't get speed capped.
Even a few SATA ones can saturate a PCH link and/or an SAS backplane.
Re: (Score:2)
Sure, the performance would hit bottlenecks (it does now; it isn't like there aren't SANs populated with SSDs already)... but how is that not a better problem to have than having slow and low-capacity magnetic storage? Solid state scales to well over 200TB per disk with existing technology if they package it that way, and the silicon isn't any more expensive than in the past when the chips were lower capacity; it is just more efficiently utilized.
I'm speaking to a technical crowd, so technically yes, some of the new
Re: (Score:2)
"The main reasons for magnetic hard drives are cost, reliability, and scale. Yes you and anyone can build a huge raid of SSDs--if money was no object. However price per GB is $0.025 for a HDD and $0.25 for a SSD so sometimes 10x as much."
Yes, but that isn't because SSDs cost 10x as much to make. The price is where it is so they can maximize margin on SSDs by selling them as premium technology.
Re: (Score:2)
"Maybe not, but they are still MORE expensive to make"
More expensive to make than previous generation chips not necessarily more expensive than magnetic storage. At least not after initial costs for expanding manufacturing capacity are accounted for. Realistically we are talking about 100TB SSDs for a couple hundred dollars to replace 10TB magnetic disks selling for $180 now.
"my only option is to pay the consumer price, which is radically more than spinning storage"
Yes, I'm not proposing everyone go out and
Re: (Score:2)
First of all, props for making this argument without being rabid and foaming at the mouth like most who advocate for the interests of businesses.
"People's Price Adjustment Council - East Coast Bureau"
In some of the places they are doing the manufacturing, there is something like that. But consumer sentiment still has power in the market. If the small number of people like me who are aware of what is happening are vocal and spread the sentiment a growing unrest on the subject spreads along with awareness. Ev
Clearly this is already decided (Score:5, Funny)
It's HAMR. HAMR will beat MAMR.
I don't mean that HAMR will succeed -- it might not come to anything, and/or some new thing might appear that is even more successful -- but between the two, it's HAMR over MAMR.
Because it's not mostly about manufacturing costs or speed or reliability. It's about sales. Guys will buy HAMMER tech and avoid the clearly breast-referencing MAMR, and non-tech folks are NOT going to want obviously cancer-causing microwaves in their laptop.
It's not about logic, it's not about technical merit, it's obvious which one can sell and which one cannot.
They should rename MAMR Wave Assist Recording, because WAR would stand a marketing chance against HAMR.
Re:Clearly this is already decided (Score:5, Funny)
Combine them and up the drive speed and you could have WARHAMR 40K drives. (in red, obviously.)
Re: (Score:2)
Bingo. You win a job in advertising.
Re: (Score:2)
I'd agree with you, except that there's a popular song claiming that WAR is good for "absolutely nothing".
I suspect the two do the same thing (Score:2)
Microwaves have a wavelength of roughly 12 cm, which is 150,000 times bigger than 800nm. There's no way you could aim microwaves precisely enough to heat up the surface area that represents a single bit on a disk platter. Both HAMR and MAMR probably just rely on injecting a sm
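The ratio quoted there does check out, using the wavelengths given in the comment:

    microwave_wavelength_m = 0.12   # ~12 cm, as quoted above
    laser_wavelength_m = 800e-9     # the 800 nm figure from the comment
    print(microwave_wavelength_m / laser_wavelength_m)   # 150000.0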
Re: (Score:2)
And MAMR will beat ROCK.
Re: (Score:2)
I think the country is too homophobic for that to work.
Re: (Score:2)
They'd also be enclosed in a metal box which is enclosed in a larger metal box, much more sealed than your microwave.
Who needs bigger disks? (Score:2)
"640K ought to be enough for anybody."
Or in the case of hard disks, a few terabytes.
I'm actually semi-serious: it seems to me that the days of mechanical storage are numbered. With SSDs, and now Intel's XPoint, one can seriously hope that hard disks will be phased out just as floppies were. Fewer mechanical parts ought to mean more reliability, not to mention the obvious speed advantages. Granted, I did buy two hard-disks last year, but only to replace disks in an existing NAS. Those might well be the last
Re: (Score:2)
"640K ought to be enough for anybody."
Or in the case of hard disks, a few terabytes.
Or, "640K ought to be enough for anybody, for a sufficiently large value of K."
Not even interested anymore (Score:2)
If you work in a datacenter I'm guessing this is good news everyone!, but my own storage needs have (somehow, for completely unrelated reasons *cough*) gone down since the advent of streaming services such as Netflix.
Sure, games are getting bigger but I'm not a teenager anymore, so I buy maybe a dozen games per year at the most. Last year I only bought seven and that number is inflated because I bought a cheap bundle of five games on Steam.
Just save it in the cloud. (Score:2)
OK, yes, if you save it in the cloud then your data will at some point be saved on some sort of storage medium, most likely magnetic hard drives.
However, for normal consumer usage, 1TB is more than enough, and it has been that way for a long time, because most of the data that we consume is on the cloud, and in general while it is on the cloud the data is more optimized. For example, if you were to back up all your applications, on the cloud normally there would be one copy for thousands of users, and re
HAMR vs MAMR (Score:2)
Hammer
Mammer (ies)
Makes me think of Thor's hammer, vs nice tits
Western Digital Promises (Score:2)
"Western Digital promises MAMR drives that will hold roughly 16 TB later this year", Western Digital has been so far behind in delivering things it promises I wouldn't count on this in any way. Seagate has been shipping 8tb desktop drives for a while now and Western Digital still doesn't have a 8TB Blue or Black drive listed on their website.
Technically superior means nothing (Score:2)
Beta was far superior to VHS. Guess who won that battle. L-1011 vs. DC-10, ugh! Then we have Mac vs. PC, uh oh!
Re: (Score:2)
And NOW where is VHS? It's in the attic with the cassette and 8-track tapes.
Re:This Is Interesting (Score:5, Funny)
Maybe you should call them and point this out.
I don't like to think of them fumbling in the dark, wasting time trying to make these drives without really knowing what they're doing.
Re: (Score:2)
Re: (Score:2)
Wish I had mod points. Beautifully said.
Re: (Score:2)
Considering that Seagate started work on HAMR in the 90s and has missed (by years) every potential release date they've ever set, and that Western Digital was working on HAMR before abandoning it in favour of MAMR, it has certainly been far more difficult than either of them ever anticipated.
What's changed at this point is that Seagate claims to now be "shipping production drives" (actually validation articles for a limited number of large customers), though they still don't expect to enter mass production
What happened to optical storage? (Score:2)
Re: (Score:2)
Re:What happened to optical storage? (Score:4, Informative)
Sony and Panasonic are the only ones working on it with a goal of shipping products, as far as I can tell. They've got Archival Disc, an extension of Blu-ray that currently holds 300GB per disc (two sides, three layers per side), with plans for up to 1TB per disc on the roadmap. It's basically an extension of BDXL.
They missed their key ship dates, and at present the discs are only commercially available inside of Sony's Optical Disc Archive format, which is basically a cartridge containing many double-sided discs. Current capacities top out at 1.5TB for read/write cartridges, and 3.3TB for write-once cartridges.
Major reason for optical is dying (Score:2)
The major selling point of optical was cheap distribution of read-only data like music and video CDs, DVDs, and Blu-ray.
Streaming makes this much less important. Sure, it would be nice to have a consumer-priced "super Blu-ray" reader that could store a full-length 3D 8K movie, but when most people would rather stream it, do I really want to spend the money to develop such a device?
Yes, there are still two important reasons for optical media that will keep the market alive for at least another decade or tw
Re: (Score:2)
Re: (Score:2)
Bluetooth is both microwave and UHF, since they overlap.
Bluetooth operates in 2.400 to 2.485 GHz.
Microwave covers 300 MHz to 300 GHz.
UHF covers 300 MHz to 3 GHz.
Re: (Score:2)
The ITU and IEEE don't define the microwave range, they define the UHF band. Regardless of which definition you use for UHF (Bluetooth is not UHF by the IEEE standard), Bluetooth falls into the microwave band.
Re: (Score:2)
Yep. SSDs are currently about four times the price of HDDs per byte. That's a massive drop over the last few years and the difference is only going to get smaller with time.
These new HDDs will have to be really cheap to keep HDD alive for more than a year or two. If they cost more than current HDDs then there won't be much point to them.
Re:NAND prices dropping (Score:5, Interesting)
Re: (Score:2)
Enterprise is ditching HDDs in favour of SSDs as fast as possible for everything except bulk, low speed storage. The performance gap between SSDs and HDDs is vast, and RAM for cache is relatively expensive per GB. For some applications, particularly anything database related (including mail servers, often one of the biggest and most business critical operations) SSDs are impossible to beat.
Reliability isn't a major issue, accounted for with RAID and backups.
Re: (Score:2)
Consumer NAND is only 4x the price. Professional-grade/enterprise SSD are still up there at 10-20x the cost of spinning rust. Even at 4x the price, you're still looking at an average investment of $1M vs $250k.
Re: (Score:2)
Re: (Score:2)
And so will movie rips: VideoCD, DVD, Blu-ray... 4K, 8K, etc.
As for me, Netflix in standard definition is good enough for my tiny 27" display. If I really like a movie I'll buy the DVD/Blu-Ray.
Re: (Score:2)
I need a lot less today than one or two decades ago.
BRILLIANT!!!!!! (Score:2)
This is a great idea. Exactly the same width as the SATA+POWER connector -- 4.5cm. Honestly it doesn't buy you much space, but you can lock out the platter drives. The companies which ONLY make SSDs should definitely be doing this.
Put a couple of notches into them, so they snap in -- no vibration issues demanding screws. You don't need a metal frame around them. Hot-swappable in a sexy way without having to have expensive extra carriers. They should have a standardized hole in the plastic at the front