Hitachi Promises 4-TB Hard Drives By 2011
zhang1983 writes "Hitachi says its researchers have successfully shrunk read heads in hard drives to the range of 30-50 nanometers. This will pave the way for quadrupling today's storage limits to 4 terabytes for desktop computers and 1 terabyte on laptops in 2011." Update: 10/15 10:39 GMT by KD: News.com has put up a writeup and a diagram of Hitachi's CPP-GMR head.
Waiting for... (Score:5, Insightful)
Re: (Score:3, Funny)
Re:Waiting for... (Score:4, Insightful)
Even my non-geek friends and family are starting to feel the pain as working with video and BitTorrent becomes more common. Multi-TB usage won't be that uncommon, I think. What we really need now, though, is RAID-5 for the average Joe.
Re: (Score:3, Interesting)
They've squeezed enough space into tha
Re: (Score:3, Interesting)
2-port PCI SATA cards can be had for like $19 these days, and they DO make eSATA -> SATA cables for ~$7.
Re: (Score:3, Interesting)
Bzzzz. Next contestant!
Linux's software RAID is often on par with, or faster than, high-end hardware RAID solutions, and it doesn't tie you to a specific hardware vendor. It's also generally far, far better than the low-end commodity RAID that comes with various motherboards/chipsets these days. The downside of software RAID is that it takes more CPU, but in a day when multi-core CPUs are common and CPUs are faster than ever, almost everyone can spare the cycles.
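For anyone who hasn't tried it, here's a minimal sketch of what a 4-disk software RAID-5 looks like under Linux with mdadm (the device names and mount point are placeholders; point it at your own disks):

    # Build a RAID-5 array from four whole disks (hypothetical device names)
    mdadm --create /dev/md0 --level=5 --raid-devices=4 \
        /dev/sdb /dev/sdc /dev/sdd /dev/sde
    # Put a filesystem on it and mount it
    mkfs.ext3 /dev/md0
    mount /dev/md0 /srv/storage
    # Watch the initial sync progress
    cat /proc/mdstat

A few commands and you're done; compare that to fighting some vendor's BIOS RAID utility.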
Re: (Score:3, Informative)
It may suck, but somebody's got benchmarks saying that it's faster...
Link [wustl.edu] from 2004, but still relevant, I'd think.
Re: (Score:3, Informative)
Re: (Score:3, Interesting)
Re: (Score:3, Interesting)
I have a toy, which I keep considering turning into a business, that would make it easy for users to backup their files to a central server farm that'd keep redundant copies in different locations, make fi
Re: (Score:2)
Re: (Score:2)
I use this approach at work, rather than spending colossal amounts of money on expensive tape libraries and backup software. It works quite well, although it does require a bit of thought to use effectively. (Don't back up live MySQL databases; write them out to a backup file first!)
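For the MySQL point, something like this is all it takes (the database name and paths are made up for the example, and it assumes your credentials are already configured):

    # Dump a consistent snapshot of the database to a file...
    mysqldump --single-transaction mydb > /backup/mydb-$(date +%F).sql
    # ...then back up the dump, never the live data files
    rsync -a /backup/ backuphost:/srv/backups/

(--single-transaction gives you a consistent view of InnoDB tables without locking everything while the dump runs.)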
Re: (Score:2)
It has confirmed my belief that all new large hard drives will fill up in 4 months.
Sad part is 900GB seems kinda small by today's standards.
Re: (Score:2)
Hard drives cannot keep up with space demands these days.
RAID 5 is the only way to go if you want a lot of cheap redundant space.
Re: (Score:3, Informative)
It's not the size of your RAID mate, it's how you use it.
Re:Waiting for... (Score:5, Insightful)
Actually, my sickened mind went in a completely different direction... remember when we were going to have 8 GHz Pentium 4s with 6 GB of RAM to run Windows Vista?
Heck, it's still common to see computers sold with 256 MB of RAM, which wasn't a particularly large amount 5 years ago... that it's even salable today speaks volumes. I have an "end of life" Pentium 4 2.4 GHz that I picked up this weekend for like $50. 20 GB HDD, 512 MB DDR RAM, CD, sound, etc.
Other than the small-ish HD and the CD instead of the DVD, this system is not significantly different than a low-end new system. And, when it was first sold 3-4 years ago, its specs weren't particularly exciting.
Point being, there's a "we don't talk about it" stagnation going on in the computer industry. I honestly think that most new purchases are driven by the expectation of EOL and the spread of viruses. It's gotten to the point where it's actually cheaper to buy a new computer than to reload your old one. Part of that is the fact that it takes a full business day of rebooting to update Windows from whatever came on the CD.
This part just floors me. I have the original install disk for the aforementioned $50 Dell 2.4 GHz system, and am reloading from scratch so it's all clean. It takes ALL FREAKIN DAY simply to update Windows to the latest release, with a 1.5 Mbps Internet connection (not high end, but still no particular slouch).
Yet it takes about an hour and just ONE short line to update CentOS (RHEL) to current:

# yum -y update; shutdown -r now
My point to all this?
The computer industry has (finally) reached a stable point. Performance increases are flat-lining to incremental, rather than exponential, and there's little incentive to change this, since a 4-year-old computer still does most anything anybody needs a computer to do. There will always be a high-performance niche, but it's a niche. The money has moved from computing power to connectivity.
People no longer pay for processing power, they pay for connections. Thus the Intarweb...
The small thing you neglected (Score:5, Insightful)
Yes, indeed, we've reached the point where any computer, even if 4 years old, is good enough for most day-to-day activities (hanging around on the web, writing some stuff in a word processor, e-mail, and ROFL/LMAOing on AIM/MSN/GMail/Facebook or whatever is the social norm du jour).
Case in point, my current home PC is still Intel Tualatin / 440BX based.
*BUT*...
As you said (and that's something I can confirm around here too), Joe Sixpack buys a new computer every other year, just because his current machine is crawling with viruses and running too slow (and spitting out pop-ups by the dozen). He either pays wads of cash to some repair service that may or may not fix his problems and may or may not lose his data in the process, and waits without a machine for a couple of days, or he gets a new machine. And...
Those outrageous configurations never showed up. Nevertheless, it seems like Vista was still designed with them in mind.
So in the end, the new machine Joe Sixpack buys *WILL* have to be better/faster/stronger, simply because the latest Windows-du-jour has tripled its hardware requirements for no apparent reason.
OS makers will continue to put out new versions on a regular basis, mostly because that's their business and they have to keep the cash flowing in. Also, there are security issues to fix (by adding additional layers of garbage over something that was initially broken by design), legal stuff (adding whatever new DRM / Trusted Computing stupidity the **AA lobby most recently got voted in), and lots of dubious features that barely 0.1% of the user base will need (built-in tools to sort / upload photos, a built-in tool to edit home-made movies, or whatever; modern OSes tend to get confused with distributions and go the Emacs way of bloat).
All this results in newer OSes that take twice the horsepower to perform the exact same tasks as the older ones.
And thus, each time Joe Sixpack changes his computer, he gets a newer one, which will obviously have the latest OS on it, and thus will *need* 4x the computing power. Just to keep hanging out on IM, sending e-mail, writing things, and browsing porn.
Emacs is no longer bloated by today's standards. (Score:2)
Actually, the complete Emacs "operating system" takes up less than 75 MB, uncompressed and including all documentation and LISP source code. The main emacs package is just 25 MB uncompressed. By today's standards, that's positively tiny. Damn Small Linux claims to fit a complete OS in only 50 MB, but like many Live CDs, it "cheats" by storing everything in compressed form and decompressing it on the fly.
Emacs in terms of functionality (Score:2)
I meant Emacs from the point of view of functionality. Initially, Emacs was supposed to be an editor with some extension capability.
This extension capability has been abused over time: Emacs can now be used as an e-mail client and a browser, features interactive chatbots, and has pretty much everything else, probably including a kitchen sink (indeed: there's a Nethack extension [nongnu.org] for Emacs, and Nethack does featur
Re:Waiting for... (Score:5, Interesting)
Regarding the second part (reinstalling XP) - you should really look at Acronis True Image - it's what we use.
Basically, you install WinXP+patches and whatever programs you need once, make an image, and store it on a DVD, the network, or a hidden partition on the HDD. At boot, you can press F11 to start Acronis instead of Windows from the hidden partition (it's a lightweight Linux distro), and you can restore your image in 5-10 minutes. Even if the image is 6 months old, you still only need to download a few patches and software updates (e.g. update from FF 2.0.0.0 to 2.0.0.7).
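If you'd rather not pay for Acronis, the same idea works with stock tools from any Linux live CD; a rough sketch (the device and paths are placeholders, and it assumes the Windows partition is unmounted):

    # Capture the freshly-installed Windows partition to a compressed image
    dd if=/dev/sda1 bs=4M | gzip > /mnt/backup/winxp-base.img.gz
    # Restoring is just the reverse
    gunzip -c /mnt/backup/winxp-base.img.gz | dd of=/dev/sda1 bs=4M

It's cruder (it copies free space too, and the image isn't browsable), but the price is right.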
Re: (Score:3, Funny)
Re: (Score:2)
Re: (Score:3, Informative)
Next time do "# yum -y update && shutdown -r now". The && means it will only run shutdown if yum reports successful completion, so if yum breaks you can see the errors.
Re: (Score:3, Insightful)
As someone with close to 300 DVDs (yeah, yeah...I know, MPAA evil...but I try to buy as many of them used as I can), I'm going to wait until HD technology starts catching up with disc technology before upgrading to HD. So any breakthroughs tha
Actually, that's the scary part (Score:5, Insightful)
I started my programming experience almost directly with assembly. Well, I had about a year of BASIC on my parents' ZX-81 first. But that was a damn slow machine (80% or so of the CPU was busy just doing the screen refresh), and Sinclair BASIC was one of the slowest BASICs too. So with that and 1K RAM (you read that right: one kilobyte), you just couldn't do much, you know. So my dad took the sink-or-swim approach and gave me a stack of Intel and Zilog manuals. Anyway, you had to be particularly thrifty on that machine, because your budget of CPU cycles and bytes makes the average wristwatch or fridge of today look like a supercomputer.
I say that only to contrast it with the first time I saw a stacktrace (Java, obviously) of an exception in a particularly bloated Cocoon application running in WebSphere. If you printed it, it would run to more than two pages. There were layers upon layers upon layers that the flow had to go through, just to call a method which, here's the best part, didn't even do much. That nested call, with all the extra code for reusability's sake, the checks, and some reflection thrown in for good measure, obviously took more time than the method body itself did.
It hurt. Looking at that stacktrace was enough to cause physical pain.
Now I'm not necessarily saying you should throw Cocoon and J2EE away; obviously there are better ways to do it even with them. Like, for a start, making sure your EJB calls are coarse-grained so you don't go back and forth over RMI/IIOP just to check one flag.
But how many people do?
The second instance when it caused me pain was when I was testing a particularly bloated XML-based framework, and it took 1.1 seconds on a 2.26 GHz Pentium 4 just for a call to a method that did nothing at all. It just logged the call and returned. That's it. That's 2.5 _billion_ CPU cycles wasted on a single method call - more than 30 years' worth of Moore's Law. Worse yet, someone had used it between methods in the same program, because apparently going through XML layers is so much cooler than plain old method calls. A whole 30 years' worth of Moore's Law wasted for the sake of a buzzword. The realization hurt. Literally.
Again, I'm not saying throw XML away in general, though I would say: "bloody use it for what it was meant for, not as a buzzword, and not internally between classes in the same program, indeed the same module." It just isn't a replacement for data objects (what Java calls "beans"), nor for a database, nor just a buzzword to have on the resume.
Each iteration of Moore's Law is taken as yet another invitation to write crappier code, with less skilled monkeys, and don't bother optimizing... or even designing it well in the first place. Why bother? The next generation of CPUs will run it anyway.
And the same applies to RAM and HDD, more or less. I've seen more than one web application that had ballooned to several tens of megabytes (zipped!) by linking in every framework in sight. One had 3 different versions of Xerces inside, plus some classloader magic, just because that beat sorting out which module needed which version. Better yet, they were mostly just GUIs to an EJB-based application. They didn't actually _do_ more than display results and accept input in some forms. Tens of MB just for that.
So now look at your hard drive, especially if you have Vista, and take a wild guess whether those huge executables and DLLs were absolutely needed, or whether they're there mostly because RAM and HDD space are cheap.
At this rate and given 4TB HDDs, how long until you'll install a word processor or spreadsheet off a full HD DVD?
So? (Score:4, Insightful)
Re: (Score:2)
How about working for it instead of praying for it?
Sincerely, an atheist.
Re: (Score:2, Informative)
2007: $40000
2008: $12000
2009: $3600
2010: $1080
2011: $324
If this works out, 2011 might be about the time solid state disks overtake hard disks.
Re: (Score:3, Informative)
Re: (Score:2)
Flash storage has been growing faster than HD for the past few years. About 6-7 years ago, a big HD would be 80 GB, while a big flash card would be 32 MB, i.e. a ratio of about 2500. Now, a big HD is 500 GB and a big flash card is 16 GB, which puts the ratio at more like 30. Basically, flash has been growing nearly 100 times faster. If it keeps doing that (I've no idea whether it will), flash storage will be bigger than HD in about 5 years.
Nice historical observation, however the flash price curve has now about settled down to something more resembling Moore's law, as opposed to the nigh-on miraculous rate of the previous few years. Hitachi's prediction is also in line with Moore's law. If nothing dramatic happens to change those relative rates then the current factor of 25-50 price difference will remain for quite a few years yet. Put it another way, I won't be putting my rotating media optimization skillz out to pasture just yet.
Re: (Score:2)
Full circle... (Score:2)
"But GMR-based heads maxed out, and the industry replaced the technology in recent years with an entirely different kind of head. Yet researchers are predicting that technology will soon run into capacity problems, and now GMR is making a comeback as the next-generation successor."
*Scotty sets down mouse- looks at keyboard and replies:"How quaint."*
Having seen all of the referenced articles and links on my own, this just ties it all together nicely.
On the downside, if you haven't been subjected or hunt
Base 2 or Base 10? (Score:3, Funny)
I have a need right now... (Score:5, Interesting)
A simple SATA RAID controller interfaced with 4 such drives can give me 12TB of cheap, fast storage. At 1TB of growth per year, that should be good enough for my needs. H/w vendors currently recommend expensive SAN boxes, which I don't like... no useful value for the application at hand.
Re: (Score:2)
Re:I have a need right now... (Score:5, Insightful)
I sincerely hope you do backups anyway. RAID is simply there to let you keep a service running under specific failure conditions that would otherwise take it down while hardware is replaced and backups restored. It is not a substitute for backups; RAID and backups do different jobs.
Some examples of failure conditions where RAID won't save you but backups will:
- Some monkey does rm -rf / (or some rogue bit of software buggers the file system).
- The power supply blows up and sends a power spike to all the hard drives in your array (I've personally seen this happen to a business that didn't take backups because they believed RAID did the same job - they lost everything, since every drive in the array blew up).
- The building bursts into flames and guts your server room.
In all these conditions, having a regular off-site backup would save you whereas just using a RAID will not.
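A minimal sketch of the kind of nightly off-site job that covers these cases (the host and paths are invented for the example):

    # Nightly cron job: push a mirror of the data to a machine in another building
    rsync -az --delete /srv/data/ backup@offsite.example.com:/srv/mirror/
    # Keep dated hard-link snapshots so an rm -rf doesn't propagate into your only copy
    ssh backup@offsite.example.com \
        "cp -al /srv/mirror /srv/snapshots/$(date +%F)"

The cp -al trick makes each day's snapshot cost only the space of what actually changed.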
Re: (Score:2)
Sure, you can recover from 1 disk loss, but what about 2? Murphy is a cruel bastard who enjoys eating fools like you for breakfast.
Re: (Score:2)
RAID 6 is your friend.
Re: (Score:2)
Even raid6 in this configuration is scary. I'd want a SAN, if for no other reason than the backend management. On top of the fact that you slam 16 drives in the bloody thing ( minimum for this kind of data ), and have half as hot spares to a raid6 array. On top of this, you have a support contract with the vendor, so if a drive dies you have an exact replacement in under 24 hours. You dum
Re: (Score:2)
Actually, the setup includes an off-site Disaster recovery setup that will have identical storage size, in an external drive cage, attached to vanilla hardware. So in the event of a major crash, I just need to transport the DR box and rebuild the RAID.
Re: (Score:3, Insightful)
Because, you see, you've just spent your budget on hardware that will likely never be used and gets you no visible day-to-day advantage, while still leaving you vulnerable to multiple simultaneous drive failures. (These are surprisingly likely: go read the Google paper on drive failure rates.)
Instead, you use a second system with snapshot backups, possibly using a syste
Re: (Score:2)
On top of the fact that you slam 16 drives in the bloody thing ( minimum for this kind of data ), and have half as hot spares to a raid6 array.
Holy shit, dude. There's responsible redundancy, then there's paranoia, then there's overkill, then, far off in the distance, there's having half a shelf dedicated to hot spares.
One hot spare per shelf is heaps. Consider a 7*750G RAID6 that suffers a disk failure. An array rebuild will take ca. 20 hours (assuming it's not offlined during the rebuild). Even a c
Re: (Score:2)
Just because you can't be bothered to make a reliable system that actually meets the requirements at hand does not mean everyone needs to spend an order of magnitude more for features that provide no value for the problem.
Re: (Score:3, Interesting)
Studies are archived daily, with an automated script simply carving an appropriately-sized LV out of the VG,
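For anyone who hasn't played with LVM, that kind of daily carve-out is only a few lines of shell; a minimal sketch (the volume group name, size, and paths are invented for the example):

    # Carve a fresh logical volume out of the volume group for today's studies
    lvcreate -L 50G -n study_$(date +%F) vg_archive
    mkfs.ext3 /dev/vg_archive/study_$(date +%F)
    mkdir -p /archive/$(date +%F)
    mount /dev/vg_archive/study_$(date +%F) /archive/$(date +%F)

One filesystem per day also means older volumes can be remounted read-only so nothing disturbs them.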
Will we even use magnetic HDs in laptops in 2011? (Score:4, Interesting)
Re:Will we even use magnetic HDs in laptops in 201 (Score:3, Insightful)
That's a lot of porn! (Score:5, Funny)
I don't want more space... (Score:4, Informative)
Re:I don't want more space... (Score:4, Interesting)
I see comments like this all the time, and really don't understand them.
I have personally bought an average of one HDD per six months over the past decade, and, except for ones outright DOA, I have only had one fail, ever (and that after it had served for a good many years). And I include both DiamondMaxes and the legendary DeathStars in that list, both considered some of the most prone-to-failure out there.
Considering my work environment, I can expand that sample to more like 100+ HDDs; of those, only two have failed, both laptop drives.
I have to suspect that the people experiencing the flakiness of HDDs either fail to adequately cool them (I put ALL my HDDs loosely packed in 5.25" bays with a front-mounted 120mm low-RPM fan cooling them) or somehow subject them to mechanical stresses they weren't intended for (car PC? portable gaming rig? screws tight against the drive's board?).
Re: (Score:2)
I don't have time to cool all my hard drives. In fact, I'm sure the one in this computer is covered in dust. It's a Deskstar, and it's been making odd rattles for a while, so I know this system is headed south. Could I have babied it to where that wasn't going to happen? Yeah, but I don'
Re: (Score:2)
Fair enough - I can accept that interpretation... But ignoring the reality that HDDs have rapidly moving parts that must never touch, mere nanometers apart, combined with a high sensitivity to heat, well, that just asks for trouble. Ideally, we'd have better. Practically, we have what we have.
I don't have time to cool all my hard drives.
I didn't mean to imply that I have some complicated setup... Ju
Re: (Score:2)
This kind of constant use is apparently too hard on consumer h
Re: (Score:2)
What you really want is a hard drive that is big enough to contain all your data, while cheap enough that you can buy a few without going over budget. That way it's easier to make backups, as well as to build a redundant RAID.
Re: (Score:3, Funny)
Re: (Score:2)
Re: (Score:2)
What happened to PMR? (Score:5, Interesting)
Prior to the rise of perpendicular recording [wikipedia.org], we had cheap and plentiful 200-400GB HDDs using plain ol' longitudinal recording. Suddenly PMR hits the market, promising 10x the storage density at up to 1Tb/in^2 (which Seagate claims they actually achieve), and two years later we have only two real models (with a few variations for SATA/PATA) of 1TB drives available.
Call me crazy, but a few really trivial calculations show that at 6.25 in^2 (of usable area) per platter surface, times two surfaces per platter, times three platters, we should have, using today's technology, 4.5TB (note the change in case of the "B"; no confusing units here) 3.5" HDDs.
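Spelling out the arithmetic (taking the claimed 1 Tb/in^2 at face value):

    # 6.25 in^2/surface * 2 surfaces/platter * 3 platters = 37.5 in^2
    # at 1 Tbit/in^2, divided by 8 bits per byte:
    echo "6.25 * 2 * 3 / 8" | bc -l    # => 4.6875 TB raw

Call it ~4.5TB once you knock off a bit for formatting and servo overhead.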
So forgive me for not wetting my pants in excitement over an "announcement" that something realistically achievable today won't actually ship for another half a decade.
The bigger problem (Score:5, Interesting)
Yes, there are some cases where 4TB truly isn't enough without the problem being poor data management (large datacenter, huge DVD-quality media collection, etc). But far too often we see the reason for more space being poorly managed mail servers, tons of WIP that has not been properly archived or disposed of, huge amounts of unhandled spam, work-related casual conversations that really don't need to be stored after the work they relate to has been completed, outdated and obsolete software not being uninstalled, inflated registry (or any other overhead data) that keeps being backed up and restored without any cleanup involved...
A lot of people, when challenged with the problem of this vast array of useless junk data will just respond "well we have space, and if we run out we can always buy more, and the purchase price is way cheaper than the manhours needed to clean up this mess, so why bother". Another common excuse is "it doesn't bother me, so why not keep it just in the potential case I'll ever need it again, even if the chance is extremely small".
It does not occur to these people that proper data management is an extremely important procedure, and must be ingrained in the business process. It's much the same way you clean up physical garbage, remove obsolete physical equipment, and empty the contents of that blue recycle bin under your desk, all on a regular basis to keep the garbage from getting out of hand. Trash not worth keeping in real life does not become valuable when stored online, even if it can be stored for free or for less than the disposal price.
Properly disposing of data as a business process will take time, but this time will be saved many times over: people don't have to dig through junk to find what they need, important things are not buried in crap, all data worth storing is clean and polished and free of rust, your OS is not cluttered with crap processes or temporary files, your DB engine doesn't have to wade through zillions of junk rows to find the one it needs, and you do the cleanup as you go, rather than waiting for things to get completely out of hand and then doing a half-assed job because by that point it is really hard to tell the good from the junk.
The problem is spiraling - the longer people don't properly clean up data, the harder it is to clean it, especially as files grow larger and more complex as hardware and applications evolve. In turn, it motivates people to just invest in extra drive space, processing power, memory, etc, because by that time it's cheaper than the cleanup. And of course, once the resources have been invested into, they are filled with even more crap until they are full too.
But the biggest problem of poor data management is actually not technical, it's business-related. As we are faced with an increasing information overload, it is very easy to make poor decisions based on data that is not necessarily wrong, but is outdated, matched with incompatible other data, or just not put in the right perspective. The whole "data warehousing" principle absolutely REQUIRES proper and timely maintenance and cleanup of data. It is so important that, as has been proven over and over again, large corporations with proper data management gain a substantial strategic advantage over those that don't.
It's not just about a slightly slower response time, or some more work to find what you need on the server. It's about right business decisions vs. wrong business decisions. And it's also about not being taken advantage of - contractors and business partners can easily manipulate data to present it in the light most favorable to them, and if you are a private business, this kind of crap can bankrupt you. Of course, it happens day after day in the government with the taxpayers footing the bill, but that's another story altog
Re:The bigger problem (Score:5, Interesting)
I for one am getting sick of having to navigate between endless stacks of DVD-spindles every time I'm in a house!
Re: (Score:2)
So with 200 DVD's, at roughl
Re:The bigger problem (Score:4, Insightful)
I'm sorry, but this is just fantasy-world 101. I almost never have to look through old mail, but when I do, it's because some client is trying to dredge up something that's just not how it happened. Often it's important that I have all the "useless" mails as well, so I can say with confidence, "No, you first brought this up two months before the project deadline, and it wasn't in any of the workshop summaries [which are in project directories, not mail] before that either."
When I do, it's far more efficient to search for what I need than to comb through old junk. What you're saying would imply that the Internet is useless since it's full of so much redundant, unorganized information. That's quite simply not true, and even though you should extract the vital bits into organized systems, keeping the primary source around is very useful.
Extracting experience from current communication to improve business systems (or, for that matter, technical routines) should be an ongoing process; it's vital going forward. Going back through old junk to figure out what's deletable, just to run a "clean ship", is a big timesink and a waste of money. Maybe you'd have an argument if there were a good system going unused because everything is kept in unorganized mailboxes. In my experience, usually the problem is that there's no such system, and doing a clean-up would do nothing to change that.
Re: (Score:2)
Re: (Score:2)
Proper data o
So where is the speed? (Score:5, Insightful)
I imagine some of you out there, like myself, are starting to see problems with data integrity as the mountain of data you are sitting on climbs into the petabytes. All I can say is: bit flips suck! Do you KNOW your data is intact? Do you REALLY believe your dozens of 750GB-1TB SATA drives are keeping your data safe? Do you think your RAID card knows what to do if your parity doesn't match on read - does it even CHECK? I hope your backup didn't copy over the silent corruption. I further hope you have the several days it will take to copy your data back onto your super big - super slow - hard drive.
Is anyone thinking optical? Or how about just straight flash? I have a whole stack of 2GB USB flash drives - should I put them in a RAID array?
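On the silent-corruption point: even a dumb periodic checksum sweep will catch bit rot before it propagates into your backups. A minimal sketch (the paths are placeholders; files you legitimately modify will flag too, so aim it at archival data):

    # Record checksums of everything once...
    find /srv/archive -type f -print0 | xargs -0 sha1sum > /var/lib/archive.sha1
    # ...then periodically re-verify and list anything that silently changed
    sha1sum -c /var/lib/archive.sha1 | grep -v ': OK$'

ZFS does essentially this on every read, which is a big part of why people get excited about it.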
Re: (Score:2, Informative)
Also, if Hitachi manages to get 4 TB onto a single- or two-platter arrangement, the data density will be much higher, which should mean quite a bump in read/write speed (about 4 times, no?).
Re: (Score:3, Informative)
Re: (Score:3, Informative)
Re: (Score:3, Informative)
Okay, that's great. Hard drives will get bigger. The problem is they aren't getting any faster.
Not true. Media transfer speed increases as the data capacity increases (though less than linearly) and seek "rate" improves in terms of number of tracks the head passes over in the same time. What doesn't increase much at all is rotation speed, which means that average seek time gets worse and worse over time in relation to transfer speed. It's still very fast though, currently about 6-7 ms for commodity drives. If you're unhappy with the overall performance of your disk system, it isn't the fault of
Ugh, no. (Score:2, Insightful)
Re: (Score:2)
I think we can all agree that the benefits clearly outweigh the disadvantages.
More About the TMR head (Score:3, Interesting)
TMR head: Tunnel Magneto-Resistance head. A tunnel magneto-resistance device is composed of a three-layer structure: an insulating film sandwiched between ferromagnetic films. The change in electrical resistance which occurs when the magnetization directions of the upper and lower ferromagnetic layers change (parallel or anti-parallel) is known as the TMR effect, and the ratio of electrical resistance between the two states is known as the magneto-resistance ratio.
Source: Official Press Release [hitachigst.com]
Shame (Score:2)
sweet (Score:2)
Re: (Score:2)
Re: (Score:2)
Eventually, sure, but at the moment the largest 2.5" SAS drive anyone'll sell you is 150GB
Re: (Score:2)
Re: (Score:2)
notice the use of the words "multi-terabyte disks" and not "multi-terabyte logical volumes"
Re:4 Terabytes? (Score:5, Interesting)
Sounds nuts? Yes... but they do. Large clusters of many inexpensive machines set up in a redundant manner...
Re:4 Terabytes? (Score:4, Funny)
i very rarely use the preview button; there's a good chance i know about my typos, don't bother pointing them out.
(I couldn't help it, the Devnul made me do it!)
Re: (Score:2)
I know where I'd spend my money: I'd buy two of the SATA units and have a much more flexible system with redundancy.
Re: (Score:2, Insightful)
Re: (Score:3, Insightful)
Re: (Score:2, Interesting)
They're talking about having this capacity available in another four years, and yet, 4 TB isn't even that much now. I have four drives in my computer totaling a little over 1 TB, and since the start of the year, it's mostly gone. A few uncompressed videos, a decent music collection, and a handful of the latest games... suddenly you're trying to decide what you need to delete before grabbing the camera and starting a new project.
(My work and hobbies all revolve a
Re: (Score:2, Funny)
Parkinson's Law (Score:2)
I think the data version of Parkinson's Law applies here: "Data expands to fill the space available for storage". [wikipedia.org]
Re: (Score:2)
Let's say you download 25GB a month, which is not that much compared to a hardcore pirate like me, and probably quite common among young people. 25*12 = 300, and 4000/300 is roughly 13, so that 4TB disk will hold about 13 years of your downloads. Sounds like a lot? Well, humans in general love to keep things, and 13 years isn't that long compared to the human life span.
Easy: Porn! (Score:2)
They would erase the data and use the free space to store porn.
The fact that there was already porn there beforehand won't even cross their minds.
Re: (Score:2)
I keed, I keed!
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Not only is 4TB not that big, but there are uses that I'm not even bothering to consider because disk storag
Re: (Score:2)
Re:Man (Score:5, Funny)
Re: (Score:2)
Where "TB" and "GB" refer to the SI/marketing quantifiers, 4TB ~= 3.6TiB:
As we expect "1 terabyte" to mean exactly 1024**4 bytes, the disk manufacturers would be short-changing us by about 370.7GiB.
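The arithmetic, for anyone who wants to check it:

    # 4 "marketing" terabytes expressed in binary TiB
    echo "scale=3; 4*10^12 / 2^40" | bc -l    # => 3.637 (i.e. ~3.6TiB)
    # the shortfall, in GiB
    echo "scale=1; (4*2^40 - 4*10^12) / 2^30" | bc -l    # => 370.7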
Re: (Score:2)
No it isn't. Four terabytes is 4,000,000,000,000 bytes.
Re: (Score:2)
Only in the hard drive marketing world. The rest of the computing world uses the nearest power of 2.
Re: (Score:2)
Riiiiight. So how many calculations are there in a teraflop again? How many watts in a terawatt?
Re: (Score:2)
You might as well apply the law of gravity to storage, it makes much more sense.
Re: (Score:2)
That's a useful law.
Moore's Law and storage (Score:2)