Microwave Tech Could Produce 40TB Hard Drives In the Near Future (gizmodo.com) 151
Western Digital has announced a potential game changer that promises to expand the limits of traditional HDDs to 40TB using a microwave-based write head, and the company says it will be able to the public in 2019. Gizmodo reports: Western Digital's new approach, microwave-assisted magnetic recording (MAMR), can utilize the company's existing production chain to cram a lot more storage onto a 3.5-inch disk. In a technical overview, Western Digital says it has managed to overcome the biggest issue with traditional HDD storage -- the size of the write head. These days, an average hard drive maxes out in the 10-14TB range. But by integrating a new write head, "a spin torque oscillator," microwaves can create the energy levels necessary for writing data within a lower magnetic field than was ever previously possible. There's a more thorough white paper for those who want to dive in. According to Western Digital, MAMR has "the capability to extend areal density gains up to 4 Terabits per square inch." By the year 2025, it hopes to be packing 40TB into the same size drive it offers today.
"the company says it will be able to the public" (Score:5, Informative)
C'mon, guys, can't you even be troubled to proofread the very first sentence?
Re:"the company says it will be able to the public (Score:5, Funny)
Give them a break. Every so often everyone accidentally a whole word.
Re: (Score:2)
Doesn't time just fly? Whippersnappers these days have no idea how valuable time is.
Re: "the company says it will be able to the publi (Score:2)
Re: Can't they go back to the 5-1/4 inch disk form (Score:2)
There are various other issues why 5-1/4" drives don't work; it has to do with physics. I think the last 5-1/4" drives I used were an IBM DeskStar series on a SCSI bus, and they had a rotational speed of 3600 RPM.
Re: (Score:2)
"There are various other issues why 5-1/4â donâ(TM)t work, it has to do with physics."
No, it has to do with material construction. We've got plenty of materials nowadays that could allow a 5-1/4" platter to rotate at 10K RPM without failing. My 6" trim saw runs faster than that with a less-than-1mm-thick blade.
Re: (Score:2)
Does your trim saw run with less than a micrometer of flex on the blade and have less than 10W power consumption? 10K and 15K RPM disks exist in 2.5" and 3.5", but they become significantly more expensive to produce and increase the power draw, and thus heat production, to the point that SSD is simply cheaper. SSD is already becoming cheap enough for many datacenter purposes when you take power costs into account.
Re: (Score:2)
"Does your trim saw run with less than a micrometer of flex on the blade and have less than 10W power consumption?"
Yes, it has to for high-precision lapidary work, especially on precious opal; in fact I can put my entire weight on the blade and it barely flexes. Power consumption under 10W? Yes, actually well under, since it runs on 5V at 400mA, which works out to 2W.
Welcome to like 1980, old timer. This sort of tech and alloy has been around for DECADES, you're only now catching up to what us faggy jewelry types h
Re: (Score:2)
I do remember having stored a 5.25" Quantum Bigfoot somewhere. It's no longer in service, but it operated for about a decade in my room in the student dorm I lived in, around the turn of the millennium... a Slackware-based 486 router PC (I was the unofficial local ISP at that time) with an experimental cable modem connection, 2 NE2000-compatible ISA NICs (coax was more robust when squeezed between the door and post of the dorm rooms...) and a single 'luxurious' 3Com 3c509 with 10base-T for the room-local net.
Re: (Score:2)
Give us the 5-1/4 inch format disc and we could at least HALVE the number of physical HDDs we are forced to use now.
Why stop there?
The very first hard drive, the IBM 350 RAMAC, had fifty 24-inch platters. If we went back to that form factor, with this new technology you could pack over 20,000 TB into a single drive!
Re: (Score:2)
No, the RAMAC was more of a walk-in freezer. You're thinking of the IBM 2311 a few years later, which was a top-loading washing machine that gave you over seven megabytes of storage. Nobody knew what to do with all that capacity.
I'm feeling the shaft vibrations (Score:2)
Sigh.
When you're trying to recover a spinning-rust ZFS volume, what you really care about is independent head-servos per terabyte, if you want your mean-time-to-recover to be a smaller number than your mean-time-to-cascading-failure.
Sure, you could build a viable ZFS storage system using multi-petabyte hard drives, so long
Re: (Score:2)
This is good news for anyone who wants to work with uncompressed video. It might not amount to a ton of people, but let's be happy for those youtubers who aspire to be something better than disneycartoys, ok?
Re: (Score:2)
A serious answer to an inflammatory question:
I currently have well over 20TB of total storage in my closet accessible to me. Roughly 10TB is already allocated to just the photo collection from my DSLR camera. Camera RAW files are in the area of 25-35MB each. Plus, once Photoshop gets involved in the editing process, each of those images lands in the 500MB-4GB range (yeeeuep, requiring the switch from PSD to PSB files)
As noted elsewhere, too. People who do video recordings will need this level of storage. Video reco
Re: (Score:2)
I don't think that was what the OP was getting at. It sounds like they were suggesting putting multiple backups on the same media and that is a VERY BAD idea. The real advantage of spinning rust right now is that it is CHEAP.
You can buy several external spinny drives for the cost of a single SSD. That means multiple physical copies of your backups.
Once you are getting into backups or a NAS, the speed advantage of SSD doesn't matter so much anymore.
The further away from the CPU the storage is, the slower it
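A minimal sketch of the multiple-copies approach, assuming two hypothetical cheap externals mounted at /mnt/backup-a and /mnt/backup-b:

    # Same backup, repeated onto each rotated external drive
    rsync -a --delete ~/data/ /mnt/backup-a/data/
    rsync -a --delete ~/data/ /mnt/backup-b/data/

Rotate one of them offsite and those cheap spinny drives buy you geographic redundancy as well.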
Re: (Score:1)
... ever done any traveling? Data-roaming charges can bankrupt you.... Ever traveled to the countryside where you have a really crappy connection?
I have music in FLAC format that I cannot stream online... I have quite a few pictures taken over the last 25 years that I want to save... I have old videos I want to save... I have a bunch of movies ripped to disk so I can do playback without having to switch discs.....
Sure there are many things that can be streamed, but all those streaming-services continuousl
Re: (Score:2)
> Who needs local storage beyond the OS and apps in this day and age when anyone can store their data on the cloud
Anyone that wants it in a timely fashion.
Cloud storage absolutely SUCKS for performance because performance of the network sucks. That's not even getting into the reliability, portability, and cost issues.
Even local gigabit NAS storage is hopelessly out of date in terms of performance.
Few people care (Score:4, Interesting)
Re: (Score:2)
On the plus side it'll let companies host video and backup services much more cheaply and reliably. On the minus side, it'll let authoritarian governments maintain databases of effectively infinite size for every single citizen cheaply and reliably.
One way or the other, it does impact everyone on the planet.
Re: (Score:2)
Anyone that does more than mindlessly tap vaporous bubbles cares.
Re: (Score:1)
Consumers who care are using SSDs already.
Higher performance -- until you have to seek. Or if you aren't the only one using the drive. Interface speeds do not keep up with capacity increases.
Re: (Score:2)
Above 1 TB only geeks and IT companies care. Just like it was the case for CCDs above 10 megapixels. At 1 TB, disk is a "solved" problem for the general audience, and the things that matter now are performance and durability.
Not for HDDs; the "high performance" market has all but disappeared except for a few hybrid SSHDs for laptops. Even failure rate is, for consumers, just "it can fail"; they don't have enough drives in a RAID setup or whatever to care about the statistics. At most they have an SSD for performance and an HDD for bulk storage. And the enterprise typically has this in some kind of SAN or storage server to handle failures; unless it's so bad that the failure rate works out to a $/TB difference, they don't care. So the HDD market
Re: (Score:2)
On an operating system with working symlinks, you can install part of a game on ssd, and part on HDD. It's only amateur hour windows where this is still a problem.
Re: (Score:3)
On an operating system with working symlinks, you can install part of a game on ssd, and part on HDD. It's only amateur hour windows where this is still a problem.
Actually, Windows 7+ has symbolic links. But while that's technically true, you'd still have to identify which files go where, redo it every time you install the game, and if an update changes any file paths or folder structures it might not stick. Steam could push a standard split where publishers could put up to, say, 10% of the installation files in a folder marked for acceleration. Make a 20GB game? Max 2GB goes in that special folder. Of course you could pick that both folders are the same, all 20GB on
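For the manual version on Linux, a minimal sketch with a hypothetical game directory (Windows 7+ can do the equivalent with "mklink /D"):

    # Move the bulk assets to the HDD and leave a symlink behind on the SSD
    mv ~/games/BigGame/assets /mnt/hdd/BigGame-assets
    ln -s /mnt/hdd/BigGame-assets ~/games/BigGame/assets

The executables and hot files stay on the SSD while the bulky, mostly sequentially-read assets live on spinning rust.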
Re: (Score:2)
Also, one has to be _very_ careful how new data is incorporated to avoid breaking symlinks. Tools like "cp" or "tar" in the Linux and UNIX world will normally copy content on top of symlinks, and write changes to the target of the symlink. "rsync" will not, nor will any operation that copies a temporary file in place and moves it to replace the previously symlinked file.
I've had extensive difficulty with people using symlinks carelessly to move bulky content elsewhere, then wondering why they discovered dif
man rsync. Three different options to combine (Score:2)
Rsync can do pretty much whatever you want regarding symlinks. There are three different command line options. Since symlinks may point outside of the directory you're copying, and may be either relative paths or absolute paths, the "right" behavior is situation dependent. Rsync lets you choose what is right for your situation.
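For illustration, a sketch of three of them (rsync has more symlink knobs than these):

    rsync -a src/ dest/                  # -a implies -l: recreate symlinks as symlinks
    rsync -aL src/ dest/                 # -L: follow symlinks, copy what they point to
    rsync -a --safe-links src/ dest/     # ignore symlinks pointing outside the tree

Which one is "right" depends, as above, on whether the links are relative or absolute and where they point.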
Re: (Score:2)
You've a very good point that "rsync" provides options to handle symlinks differently. But those options are aimed at the correct replication of a symlink on the source side. The transfer of content on top of an existing symlink normally breaks the symlink on the receiving side, unless it is a matching symlink.
If you've seen a way to get it to consistently transfer plain file content from the source side, on top of an existing symlink, leave the symlink untouched, and publish the content to the target of t
If I'm understanding, consider hard or bind mount (Score:2)
So the source is NOT a symlink. You want the destination to be a symlink, and you want it to copy from src/a/file to destination/b/somewhere/file? So the file contents end up somewhere totally different than where they are on the source, based on a previously existing symlink on the destination?
If I'm understanding you correctly, you can probably achieve the same goal by trading the symlink for either a hard link or a bind mount. If the symlink points to a directory, use a bind mount instead. If the syml
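Concretely, a sketch with hypothetical paths:

    # Directory case: replace the symlink with a bind mount
    sudo mount --bind /mnt/hdd/bulky /srv/data/bulky
    # File case: replace the symlink with a hard link (same filesystem only)
    ln /srv/data/big.iso /srv/incoming/big.iso

Note the same-filesystem restriction on the hard link, and that the bind mount has to be re-established after a reboot (or added to /etc/fstab).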
Re: (Score:2)
I'm afraid that rsync normally deletes the local file and replaces it with the new file, doing the replacement either before the complete delivery (for certain options) or after completed delivery to a temp file. That breaks the hard link. Rsyncing hardlinks is quite tricky if the hardlink exists on the _target_ filesystem, and not on the source filesystem, and does not happen if the hardlink leads outside the target rsync directory. "bind mounts" do not work for individual files in Linux filesystems, only for
Hard links point to inodes, not files. Still man p (Score:2)
> I'm afraid that rsync normally deletes the local file and replaces it with the new file
--inplace
>. "bind mounts" do not work for individual files in Linux filesystems
That's what I said. I said "if you're symlinking to a directory, consider bind mounts"
> and does not happen if the hardlink leads outside the target rsync directory.
Hard links don't lead to any directory. Hard links (aka file names) lead to inodes. We call them "hard links" when two or more different file names happen to lead
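One quick way to test the disputed behavior yourself, with throwaway file names:

    echo v1 > a; ln a b      # a and b now share one inode
    echo v2 > src
    rsync --inplace src a
    stat -c '%i %n' a b      # if the inode numbers still match, the link survived
    cat b                    # and "v2" here means the update reached both names

Without --inplace, rsync's default is to write a temp file and rename it over "a", which is exactly the link-breaking behavior being described.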
Re: (Score:2)
"--inplace" breaks symlinks and hardlinks. I just tested it under CygWin and under a current Linux.
"--copy-dest" simply replicates unchanged files. That is not part of the "copying content from elsewhere on toop of symlinks and making sure the conent is copied to the target of the symlink that I was trying to explain.
"--backup" doesn't help the issue. The backup exists and you can derive the old symlink to script around the issue. You can also do a "--dry-run" and parse that to deduce what needs to be synce
Re: (Score:2)
Goodness, my typing was horrible in my response. It detracted. The core of the message should stand, but I do need some sleep. If the danger of relying on rsync to consistently copy non-symlinked source content to local symlinked content, and expecting the changes to propagate reliably to the symlink target, is unclear, I'll revisit the issue.
Using symlinks to place content on another filesystem is useful. It just needs to be handled with caution.
Do you want the *exact* solution, or keep arguing? (Score:2)
I pretty much told you how to do it, and marked that with "(this is a big hint)". Do you want to solve your problem, do you want to know how we do it, or do you want to keep arguing that it can't be done?
Re: (Score:2)
Some games have ridiculously long load times regardless of whether you put them on HDD or SSD; whatever they're waiting for, it's not the disk. It's a bit annoying that games that take 50GB+ can't split their assets up over two disks for fast/slow access, but it's not going to change, so whatever: if you play it a lot, make room on your SSD, and put the more rarely played games on HDD if you run out of space.
...because most times it's not the disk. It's the fucking "always online" bullshit, together with horrible optimization.
Game loads and grinds to a halt waiting for some crap server to respond to a shitty security check. When that happens 100 times during load... there's your performance bottleneck right there. That's why many pirated games load faster than their "official" variants.
Re: (Score:3)
Demand for increased storage will build; the question is how fast. People collecting 4K content can use up that space easily. 40TB is speculated by 2025, which will be well into the 4K transition. 10Gb Ethernet is getting cheaper. If that hits consumer price levels, it'll be easier for average people to handle higher storage volumes. Faster video cards and high-rez target game resolutio
Re:Few people care (Score:4, Insightful)
Above 1 TB only geeks and IT companies care.
And 640K of memory should be enough for anybody.
Re: (Score:2)
Well, it is certainly more than enough storage space to dedicate to fake quotes.
Re: (Score:2)
Above 1 TB only geeks and IT companies care.
Even fewer people don't use GMail, Dropbox, and YouTube.
Re: (Score:3)
1TB is surprisingly small these days. Most people I know have ~500GB in home videos and pictures alone, let alone media they may still have for iPods and similar devices.
2-4TB is right now the range in which most "consumers" buy hard drives. Not because of availability, but because of necessity.
Re: (Score:2)
Yeah sure. No one needs 1TB.
Except for those people who are taking 1080p/60 videos on their mobile phones, cool kids in the street videoing themselves doing cool shit on their BMX bikes in 2160p with their knock-off GoPros, people who buy either the cheapest DSLR to snap photos of their babies at 30MB per button press or a more expensive SLR whose shots can weigh in at closer to 100MB, or maybe just someone who has more than a couple of games installed (Doom weighs in at 80GB minus DLC, as does Deus Ex, hell
Re: (Score:2)
While you’re right about consumers generally not caring about or needing this, you shouldn’t be so dismissive. Anandtech has a nice graph showing the projected split between SSDs and HDDs in enterprise over the next few years [slashdot.org], and it makes it pretty clear that this sort of technology will not only remain relevant, but will, in fact, remain dominant for the foreseeable future.
Of course, for those of us building NASes or otherwise working with large files at home, cheap storage is still important.
Re: (Score:2)
Dang link got broken. Here’s a fixed link to the graph [anandtech.com].
Re: (Score:2)
Issue #1:
The game called Shadow of War has a disk footprint of 97 GB. Doom occupies 45-50 GB. The latest Gears of War occupies ca. 100 GB, and Ark: Survival Evolved can balloon to over 150 GB with a few mods and maps installed. Battlefield 1: 50 GB. GTA5: 65 GB. Mafia 3: 50 GB. Far Cry 4: 34 GB.
Get 15 such games and your 1 TB drive explodes.
You could argue "gamers are not general audience"; maybe, but if you ever want to start playing new games, you'd appreciate larger HDDs.
Issue #2: If you get married with a we
Re: (Score:2)
Considering that video games are now in the 50GB+ range, 1TB doesn't seem like so much total storage anymore.
Re: (Score:2)
> Above 1 TB only geeks and IT companies care.
So only "geeks" do home video? You sound like fan of that company that wanted to "enable" people but really only seek to limit them terribly now.
Just my games take up 400G and I'm not even a Windows user.
Re: (Score:2)
Media projects and even short 4K and 8K movie clips need more space just to work on.
Re: (Score:2)
That is because most laptop drives today aren't "hard drives". They're flash. Flash is much better for laptops, partly because of the lack of mechanical components, and partly because it retains state so much better without using battery power in sleep mode. It's also generally much faster to search and retrieve randomly stored or ordered data, without spending extensive time "optimizing" the filesystem. But it is much, much more expensive per GB of storage space.
Re: (Score:2)
"because it retains state so much better without using batter power in sleep mode."
Well I'd sure fucking hope so given sleep mode is a goddamned suspend to RAM and not to disk like HIBERNATE.
Re: (Score:2)
I do the same. I've got a whole stack of old 2.5" laptop HDDs from upgrading my laptops through the years; everything from 6GB to 40/60/80/250GB. It was always cheaper just to buy a basic laptop, then buy a pair of spare HDDs, rather than pay the markup for the more expensive model. Just keep everything sorted, from Linux ISO downloads to PDF manuals and funny cat images.
Re: (Score:1)
4+ TB USB 3 drives are below $150 these days.
That is my solution for lack of bulk storage on the laptop.
Re: (Score:2)
Last time I said that, someone replied "Oh, you're using consumer grade", which was true, as that's all I have access to. And it didn't cause me to trust SSDs any more than I had.
A bit later someone else, in another thread, said that he was still having problems with enterprise grade SSDs.
I'm not sure whether the problem is that the technology is unreliable or that the manufacturers don't care, but I see no reason to trust SSDs for anything archival, or for any application where the power might suddenly fail
Re: (Score:2)
Interesting. That was one of the assertions that I made that was derided.
I agree with you that it's true. It's just not the only reason not to trust them. OTOH, if there are some that always fail into "read only" mode, they aren't as bad as I've been thinking.
Re: (Score:2)
Interesting. That was one of the assertions that I made that was derided.
I agree with you that it's true. It's just not the only reason not to trust them. OTOH, if there are some that always fail into "read only" mode, they aren't as bad as I've been thinking.
SSD firmware guy, here. You guys are correct, SSDs are not for archives. Data in NAND decays after a while, and the design is to read and refresh the data while it is still recoverable through ECC or parity.
Also, please understand that no SSD can guarantee to always go into read-only mode in the event of failure. No more space to write, because the drive has had too many grown defects? Goes to read-only mode. Drive loses one NAND die more than the design can recover from, thus likely losing some/all lo
Re: (Score:2)
I'm sure there are limits as to how much storage is desirable. But I'm not sure what those are. For me a couple of 350GB disks and a few 2TB USB backup drives have sufficed for a few years, and there's no immediate sign that I'll run over. OTOH, I'm working on a program that may well eat it all up and beg for more.
Re: (Score:2)
Turn in your nerd care
Nerds don't need care, they have robotic souls and uptime.
Was 640 KB enough for you back in the day?
Most of my work today I have to fit into 32K, sometimes less. It is plenty of room for a lot of things. Sometimes I need a few terabytes, sure. Depends on the use case. There is no guarantee that I have any use cases requiring more than 640KB. Isn't that even more true for common folks using mostly remote resources? Do they really need much more than a display buffer?
Re: (Score:2)
Some of you are missing the ramifications of this. Even though this is magnetic media, it will drive down the cost of cloud storage. Right now it is cheap, but not that cheap. This could make it feasible for everyone to store all of their data in the cloud for pennies a year... encrypted, of course.
Archive storage is simple enough, but is there any such service that has an open source client and lets you set an AES key so it can be sorta like a remote mounted encrypted container? Or are the sync tools so smart you can create a big container in a sync'd folder and it'll sync just the bits that change? Because I don't trust anything that's in the hands of Apple/Google/Dropbox to be truly private, in fact we know many people want the cloud provider to provide integration with apps or sharing or whatever
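One way to get pretty much that with open-source parts is a LUKS container file inside the synced folder; a sketch with a hypothetical Dropbox path (recent cryptsetup attaches the loop device itself, older versions need losetup first):

    truncate -s 10G ~/Dropbox/vault.img        # sparse container file
    sudo cryptsetup luksFormat ~/Dropbox/vault.img
    sudo cryptsetup open ~/Dropbox/vault.img vault
    sudo mkfs.ext4 /dev/mapper/vault
    sudo mount /dev/mapper/vault /mnt/vault

Whether only the changed blocks get re-uploaded depends on the client's delta sync, so test that before trusting it with a really big container.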
Re: (Score:2)
Some of you are missing the ramifications of this. Even though this is magnetic media, it will drive down the cost of cloud storage. Right now it is cheap, but not that cheap. This could make it feasible for everyone to store all of their data in the cloud for pennies a year... encrypted, of course.
I don't think cost is the issue here. First, I don't think people are eschewing cloud storage for cost reasons alone; with Backblaze being $0.005/GB/month and Amazon and Microsoft both under a nickel a gig even for their highest tier, few people are saying "too expensive". Most of the issues have more to do with either principle (i.e. not wanting their data on someone else's hard drive), bandwidth (10TB transferred over a 10Mbit/sec upload pipe... grab a Snickers...), or latency (video editing in The Clou
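The Snickers is no joke; a shell back-of-envelope, assuming the uplink stays fully saturated:

    # 10TB over a 10Mbit/s uplink, in days
    echo $(( 10 * 10**12 * 8 / (10 * 10**6) / 86400 ))   # ~92

Three months of continuous uploading for a single full drive, before any protocol overhead.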
Expenses are a function of number of drives (Score:2)
> Further, I'd argue that I doubt most cloud storage companies list "disk storage" as their primary expense
40TB drives would mean the cloud providers need 90% fewer servers than a fleet built on today's common 4TB drives. A tenth as many servers means the datacenter can be 1/10th the size. 90% fewer servers means 90% less tech time of employees going around swapping out bad drives, running cables to new servers, etc. Basically ALL the costs other than marketing and internet bandwidth are a function of the number of drives.
Looking at it another way,
Re: It does matter (Score:2)
Actually you raise an interesting question here that I hadn't really thought about until now: how much of the cost of cloud storage is for the actual storage? That is to say, if the cost of HDDs miraculously were $0, how much would Dropbox, Amazon cloud, etc. cost? How much goes into maintenance/labor, encryption/security, server upgrading, etc.?
Those are all a function of number of disks (Score:2)
Larger drives mean fewer servers per terabyte. Fewer servers mean all costs drop. The dollar cost of the bare drives themselves is a fairly small portion of the overall costs, so *cheaper* drives don't make a huge difference, but *bigger* drives make a big difference.
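The same point as shell arithmetic, assuming a hypothetical 60-drive storage server and an exabyte (10^6 TB) of raw capacity:

    echo $(( 10**6 / (4 * 60) ))     # 4TB drives:  ~4166 servers
    echo $(( 10**6 / (40 * 60) ))    # 40TB drives:  ~416 servers

Every per-server cost, like chassis, power, rack space, and admin time, scales with that second number.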
High capacity HDDs terrify me (Score:2)
Just imagine a 40TB HDD starting to develop bad sectors... and then RAID arrays crashing... I really can't handle even thinking about it. I think that the use of tape backup systems for those pesky 10-14 TB disks will soon become mandatory in every home.
Re: (Score:2)
You have a brand new array of 40TB disks and are going to use RAID tech from the '80s? Extent/file-based mirroring and parity is mature at this point: ZFS/Btrfs/others for local, and things like Gluster for scale-out. With 12 drives in a 1RU or 90 in a 4RU, that's an awful lot of raw storage.
Re: (Score:2)
ZFS pretty much can be a single solution for backups though. In my particular organization, we have servers in multiple physical locations. On each storage server, we're using ZFS RAID-Z for local resilience. Then we're using ZFS snapshots for historical backups locally. Then finally we use ZFS SEND/RECV to mirror all of the snapshots across the multiple data centers. 40TB drives backing these pools would be an absolute DREAM!
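A minimal sketch of that layered setup, assuming root and hypothetical pool, dataset, and host names:

    # Local resilience: a RAID-Z2 pool
    zpool create tank raidz2 sda sdb sdc sdd sde sdf
    # Historical backups: rolling snapshots
    zfs snapshot tank/data@2017-10-21
    # Off-site mirror: incremental send to the other datacenter
    zfs send -i tank/data@2017-10-20 tank/data@2017-10-21 | \
        ssh backup-dc zfs recv -F tank/data

The incremental send ships only the blocks that changed between the two snapshots, which is what keeps cross-datacenter mirroring of big pools practical.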
Ultimate Porn Cache (Score:1)
How much porn can I get on that?
Wonderful, but when will 10, 12 and 14TB drop $$$? (Score:2)
Curious to know -- prices seem to stagnate, and drops have really slowed of late.
Picked up 5TB disks for $200 US nearly 3 years ago. There are occasionally deals which beat that, but overall it's still the normal price, sadly.
Re: (Score:3)
As always, the drops will come when the new tech arrives. 10TB drives are about half the price (~$350) now as they were when they came out (~$600-800). A 5TB drive is now at the $100-150 price point, so it's dropped by 25-50%.
Re: (Score:2)
No, even with MAMR they'll still need to evacuate the drive case and fill it with helium because you can't risk water molecules in the enclosure attenuating the microwave emissions from the heads and causing bad data writes.
Does drive performance scale up? (Score:2)
This is an impressive density, but my question is whether drive performance increases, and what kind of bus the drive will have.
SATA 3.2 claims 16 Gbits/second, but my guess from glancing at the other 3.2 features is that this is mostly a flash-oriented spec tied to M.2 slots, and that ordinary SATA ports would still be SATA III @ 6 Gbps.
Without a bus and drive combination capable of moving 40 TB in a reasonable amount of time (4-6 hours), these drives will be a novelty and not useful i
Re: (Score:2)
The bus is fast enough, but there is a limit to the mechanical motion. You can make the drive spin faster (10K, 15K) and the arm move faster, but at some point the forces involved become an issue (so you end up with 10K, 15K drives only being in 2.5" packages). So to answer your questions: no, there won't be any significant upgrade in speed, and yes, the rebuild times for these will be tremendous and you can't really think about your data as "RAID sets" anymore, that model is quickly becoming outdated. You h
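To put a number on "tremendous", assuming an optimistic 250MB/s sustained and no competing load:

    # Hours to write 40TB sequentially
    echo $(( 40 * 10**12 / (250 * 10**6) / 3600 ))   # ~44

Nearly two days of flat-out writing for one rebuild, during which a second failure is free to ruin your week.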
Re: (Score:2)
"so you end up with 10k, 15k drives only being in 2.5" packages"
You say that as I look at all these 3.5" 15KRPM Ultra-Wide SCSI drives sitting in my drawer.
Re: (Score:2)
and yes, the rebuild times for these will be tremendous and you can't really think about your data as "RAID sets" anymore, that model is quickly becoming outdated. You have to think about "storage nodes" as a single, really large hard drive.
A distinction without a difference. Whether the data is made device-redundant by some kind of RAID system or some kind of network copying, drive performance figures significantly in restoring redundancy after a node failure.
I think the value of these drives is dubious if their redundancy rebuild rate is measured in days due to read/write rates not scaling. They may have niche use cases (i.e., containing a complete copy of some large storage quantity on a single medium), but I'd also worry that the density w
Re: (Score:2)
Most storage systems I've worked with either strongly advise or outright require drive sizes > 1 TB to use a double parity system like RAID-6 due to the lengthy rebuild times associated with large individual members.
Flash performance is so far ahead of its bulk capacity, though, that capacity can scale without performance appearing to be a factor in restoring redundancy.
Business as usual? (Score:2)
So about 3x the capacity in 7 years?
Doesn't sound particularly ambitious, nor 'in the near future'.
The problem is file systems (Score:2)
Re: (Score:2)
>> Once you have 40 million files trying to find those 10 JPG files you took of your son's birthday party can take an hour or more.
That's what you get for using a crap OS like Windows.
Don't just dump everything in the top level of your "My Pictures" folder and expect some photo app to sort it all out. Organize things hierarchically in the file system in sub directories (or as you Windows users call them, folders).
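For example, a minimal sketch that files photos into Year/Month directories by modification time (GNU date; dedicated tools can do the same off EXIF data instead):

    for f in ~/Pictures/*.jpg; do
        d=$(date -r "$f" +%Y/%m)
        mkdir -p ~/Pictures/"$d" && mv "$f" ~/Pictures/"$d"/
    done

Those 10 birthday JPGs then live somewhere like Pictures/2017/10/, no photo app required.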
40TB of data using microwaves, but (Score:2)
How many Hot Pockets Snack Bytes [hotpockets.com] can it heat up? Programmers get hungry ya know.
Wharton MBA fail whale (Score:2)
Jeff Frick interviews Brendan Collins on MAMR [youtube.com] — 12 October 2017
15 seconds logo rotation.
30 seconds lip-gloss application.
30 seconds applied lip gloss.
At the 2m mark there's a flaccid PMR confession.
2m30 Finally, a useful question. The answer mentions the head process "Damascene".
3m30 The three enabling technologies of the last three to four years: helium, microactuation, the Damascene process.
Then, effectively "this new thing today makes things better blah blah blah track density blah blah blah linear density blah blah bl
actual technical disclosure? (Score:2)
I was extremely snarky in another post about an MBA-level chit chat on YouTube that revealed basically nothing.
Now I've tracked down a 1.5 hour technical talk featuring Mike Cordano, Dave Tang, Janet George, Brendan Collins, and Jimmy Zhu.
Technology of the Future: Western Digital Announces MAMR for Next Generation HDDs [youtube.com] — 12 October 2017
The first 6m30 are disposable, that's as far as I made it so far.
This was via a failure forum where Seagate employees gather to trash Seagate management. Wow, what a
their first taste of low tech interferometry (Score:1)
The military has been using this tech in space for decades. Next up they'll get rid of the spinning disks and switch to holography, using interferometry to read and write to the matter.
https://www.trumpsweapon.com/ [trumpsweapon.com]
By the year 2025... (Score:2)
Re: (Score:2)
It's fascinating that so-called IT geeks and professionals can get so religious about this stuff. Any kind of system can use a variety of tools and technologies as are appropriate.
Right tool for the job and all of that.
My boot drive is a 1TB SSD. Cost a pretty penny too but it's useful. That doesn't mean I ignore spinning rust for the bulky stuff or portability.
Re: (Score:1)
For a white paper, it's pretty weak. No data. Not even any theory. Mostly just wishful thinking.