Scientist Uses Nanodots To Create 4Tb Storage Chip 207
arcticstoat writes "Solid state disks could soon catch up with mechanical hard drives in terms of cost and capacity, thanks to a new data-packed chip developed by a scientist at the University of North Carolina. Using a uniform array of 10nm nanodots, each of which represents a single bit, Dr. Jay Narayan created a data density of 1 terabit per square centimeter. The end result was a 4cm2 chip that holds 4Tb of data (512GB), but the university says that the nanodots could have a diameter of just 6nm, enabling an even greater data density. The university explains that the nanodots are 'made of single, defect-free crystals, creating magnetic sensors that are integrated directly into a silicon electronic chip.' Dr. Narayan says he expects the technology to overtake traditional solid state disk technology within the next five years."
How long... (Score:5, Interesting)
...until I can get a decent (120GB+) sized SSD that doesn't cost as much as a new video card?
3 ... 2 ... 1 ... (Score:5, Informative)
Re: (Score:2)
Re:3 ... 2 ... 1 ... (Score:4, Insightful)
The problem with SSDs is:
1. cheap
2. big
3. reliable
Choose two!
But even then, you can only be sure of number 3 after some years have passed, for the obvious reason that there is no test data for years of use until years of use have passed. ^^
Re: (Score:2)
1. cheap
2. big
3. reliable
Choose two!
Please tell me where I can get cheap big SSDs.
The advantages I've seen:
1. Performance
2. Silent
3. Light
4. Higher shock resistance
Undecided/even:
1. More reliable
2. Smaller
3. Lower power
Disadvantages:
1. Cost
2. Size
Unfortunately, one of the things often touted about SSDs doesn't quite seem true: they don't really scale down. Big SSDs = more parallel channels = better read/write performance. Which is a shame, because if they could really scale down to 10-20 GB I'd see a huge market for dual storage laptops, SSD fo
Re: (Score:2)
But even then, you can only be sure of number 3 after some years have passed, for the obvious reason that there is no test data for years of use until years of use have passed. ^^
BEGIN PEDANTIC
In fact, if you assume a Poisson failure process (where two identical working chips are equally likely to fail, no matter that one of them is brand new and the other has had years of use), you could just put 10,000 on test and, if 100 of them failed after a year, rightfully claim that they have an MTBF of 100 years.
Of course this distribution does not fit other components (mainly those with mechanical parts: CDs, HDs) because there is progressive wear, so an older unit is more likel
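The grandparent's point about memoryless failure can be sketched in a few lines. This is only an illustration of the statistics; the unit counts and MTBF are the hypothetical figures from the comment above, not real SSD data:

```python
import math
import random

random.seed(42)

# Hypothetical figures from the comment above: 10,000 units on test for
# one year, with a true MTBF of 100 years.
UNITS = 10_000
TRUE_MTBF_YEARS = 100.0
TEST_YEARS = 1.0

# Under a memoryless (exponential) failure model, each unit fails within
# the test window with probability 1 - exp(-t / MTBF), old or brand new.
p_fail = 1 - math.exp(-TEST_YEARS / TRUE_MTBF_YEARS)
failures = sum(1 for _ in range(UNITS) if random.random() < p_fail)

# Estimate MTBF as total unit-years on test divided by observed failures
# (ignoring the slight truncation of the failed units' run time).
est_mtbf = UNITS * TEST_YEARS / failures
print(f"{failures} failures -> estimated MTBF ~ {est_mtbf:.0f} years")
```

Roughly 100 of the 10,000 simulated units fail in the year, recovering the 100-year MTBF without waiting 100 years, exactly as the comment argues.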
Re: (Score:2)
Re: (Score:2)
1) new (ish)
I hope they are newer than the Quantum ESP5000 SSD that has been attached to my Sparc 10 for the past fifteen years.
Wow (Score:3, Insightful)
My first PC had 4k of RAM. I should be used to this type of growth by now... but it still makes my heart race a bit when I see ever increasing memory capacity in an ever decreasing form size.
I'll tell my grandkids about my first PC and they will roll their eyes as they leave my retirement home...
Re:Wow (Score:4, Insightful)
Problem is, most software developers and OS makers also race to consume that memory. Honestly, all the software today is a bloated blob that is horribly unoptimized for speed and efficiency.
It's disgusting how bloated most stuff is because we have 4 gigs of RAM and two 2.5GHz processors... why make it lean and mean? It compiles, ship it.
Re:Wow (Score:5, Insightful)
why make it lean and mean? It compiles, ship it.
And what's the answer to your question?
If it works, why optimize it? To save on storage space? How much would I be saving? $10 in storage space for every hour of optimization?
It's not art, it's a business. You could as well ask why we don't replace steel with titanium in cars.
Re: (Score:2)
Indeed it is business, but it's also marketing. Sure, at home, I like digital photos, watching movies on my computer, etc, but the sad reality is that in your traditional every-day book-keeping style business, we're doing a lot of the same stuff that we were doing on computer 20+ years ago (evidenced by the fact that in a lot of cases the same COBOL programs running the servers back then are STILL running the servers now). It's just that marketing and increasing software bloat have convinced everyone that
Re: (Score:3, Insightful)
We should get benefits from newer, faster hardware. Instead we get increasingly lazy programmers and zero net benefit in speed, but with all the negative costs of new equipment purchases.
We do get benefits from newer, faster hardware. The possibility of hiring cheaper, less prepared, programmers.
Re: (Score:2)
Re: (Score:2)
You could as well ask why we don't replace steel with titanium in cars.
that's simple: they don't do that, or anything like it, because then your car might last you a long time, and that would cost them money because they wouldn't have you as a recurring revenue stream.
Sorry but the accelerated use of plastics and cheap alloys isn't an accident or an improvement in cars..
Re:Wow (Score:4, Insightful)
that's simple: they don't do that, or anything like it, because then your car might last you a long time, and that would cost them money because they wouldn't have you as a recurring revenue stream.
What does using titanium instead of steel have to do with cars lasting a long time? As long as you don't let salt corrode them away, steel-bodied cars will last pretty much forever. Here in the southwest, we don't have any problems with corrosion.
Besides, automakers wouldn't bother to apply undercoating if they wanted their customers' cars to rust away.
Sorry but the accelerated use of plastics and cheap alloys isn't an accident or an improvement in cars..
Now this is just plain stupid. Aluminum alloys improve performance in cars greatly by reducing weight, and also by making engines that perform far better. Most plastics are also a giant improvement; again, weight savings.
Re: (Score:2)
the titanium/steel was the other person's comment - i was going off of the idea
as for the alloys and plastics - sorry, sure they save weight.. but the plastics ALL break down.. they all age poorly.. compared to actual metal parts - and a lot of the newer alloys i've run into working on cars do not last as long..
sure saving weight is important - but you know .. we can save weight in a lot of other places than under the hood - when you start adding up the weight increase from the cosmetic parts of cars it is am
Re: (Score:2)
as for the alloys and plastics - sorry, sure they save weight.. but the plastics ALL break down.. they all age poorly.. compared to actual metal parts
I haven't noticed any breaking down at all in my 15-year-old car.
Yes, most plastics will break down over time with UV exposure, but there are ways to mitigate this: certain additives, keeping them out of the sun, etc. Plastic parts in junkyards, for instance, break down and generally look like crap pretty quickly, because they're out in the sun all day (usual
Re: (Score:2)
Sorry but the accelerated use of plastics and cheap alloys isn't an accident or an improvement in cars.
There is the benefit that a largely plastic car that deforms on impact absorbs a lot of the energy that would otherwise be transferred to the occupants during a collision. I know I'd much rather be in a squishy modern car than a solid steel behemoth if I'm going to crash into something.
Re: (Score:2)
Re: (Score:2)
This makes me wonder: why are you loading all of your software on boot? Also, you could probably make use of hibernation to speed things up.
My machine does in fact boot up to the login screen in 10 seconds, Ubuntu 10.10 from an SSD :) Another 10 or so seconds (including loading up some extra apps I've installed) to a usable desktop, and that's with only a 1.6Ghz Atom..
Re: (Score:2)
Re: (Score:2)
To make it quicker. You may think that you have 1 GB of RAM available and a 2GHz CPU, but the L2 cache in a modern processor will only hold about 4MB; fetching data or instructions from main memory might cost as much as 100 CPU cycles. And that's before considering virtual memory: paging to disk costs millions of cycles.
Making it quicker is not a good enough result to pay the extra expenses of having to hire better programmers.
The current situation (bloated apps that only work on expensive hardware) is the optimal one, in terms of development cost vs results, with the current technology.
Re: (Score:2)
Your problem is that you are expecting AOL to write software that is not broken. Why are you using AIM instead of one of the many free and/or open source alternatives that offer equal or greater features on the same network, in addition to other networks like GTalk, Yahoo, and MSN?
A few examples:
-Pidgin (Windows/*nix using GTK, OSS)
-Adium (OSX using native Cocoa and based on Pidgin, OSS)
-Miranda-IM (Windows, OSS)
-Trillian (Windows/Mac/iPhone, Basic=Freeware/Pro=Payware)
And there are plenty of others [wikipedia.org].
Re: (Score:2)
I kind of agree, for some things anyway. Microsoft's Office is one of those. Word, Excel, Powerpoint -- they haven't significantly changed since Office 97. I mean, they are what they are. More wizards now, different toolbar, prettier graphs. But Office 97 was enough for 99% of the users. Email is the same way -- why does Thunderbird take 113 MB of memory to run, when it doesn't do much more than the 500K of Pegasus mail back in 1994. Web sites are definitely more complex, but Firefox is running at 35
Re: (Score:2)
Re:Wow (Score:4, Informative)
Problem is, most software developers and OS makers also race to consume that memory. Honestly, all the software today is a bloated blob that is horribly unoptimized for speed and efficiency.
It's disgusting how bloated most stuff is because we have 4 gigs of RAM and two 2.5GHz processors... why make it lean and mean? It compiles, ship it.
Sounds like a reasonable outcome of a cost/benefit analysis. Since when is efficiency an end in itself?
Re: (Score:3, Funny)
Market forces dictate software bloat, not some centralized cabal of scheming plotters designing an optimal return on investment. As long as people buy more and more bloated crap-ahem-itunes-ahem-ware, as long as managers get rewarded for their bloat factors, as long as developers get specs incorporating bloat, the trend will continue.
Re: (Score:2)
It's called betterment: "sometimes, you have to make sacrifices for the betterment of the entire group" - really? Yeah, nobody has been able to show what has been gained since we had, for example, 512KB of memory for 2000+ online users. Processing is much faster today, yet response times are 10+ times slower?? Same processing - the business hasn't changed. Nice pictures(?) - actually the nice graphical (and expensive) terminals we used were about the same! Yes, the price of hardware has gone down, and a lot, but do we really have to w
Re: (Score:2)
That's really an urban legend. Most software today is well optimized. It just does much, much more than software did in the past. It uses more memory because many algorithms are trading memory for cpu because memory is cheaper, or memory for disk access because disk accesses didn't keep up with the pace of advancement in cpu and memory (by a couple of orders of magnitude over the last couple of decades).
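The "trading memory for CPU" pattern in that comment is easy to show concretely. A toy illustration using Python's functools.lru_cache (not a claim about any particular product): caching every intermediate result makes an exponential-time recursion linear, at the cost of storing every value computed.

```python
from functools import lru_cache

def fib_naive(n: int) -> int:
    """Recomputes every subproblem: cheap on memory, brutal on CPU."""
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_cached(n: int) -> int:
    """Caches every result: O(n) time at the cost of O(n) stored values."""
    return n if n < 2 else fib_cached(n - 1) + fib_cached(n - 2)

# fib_cached(90) returns instantly; fib_naive(90) would effectively
# never finish, since it redoes the same subproblems ~2^n times.
print(fib_cached(90))
```

Same answer, vastly less CPU, a little more memory: exactly the trade the parent describes modern software making deliberately.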
Re: (Score:3, Funny)
I agree... (Score:2)
With your complaints. So let's start a list of UNbloated software:
I'll start:
MicroXP.
Why software bloats (Score:2)
Bad software managers are rewarded for producing a lot of software. The more software, the more reward. As a result, you get increasingly useless or downright harmful crap rammed down your throat whenever you buy a commercial software product or a piece of hardware with bundled software. The latter is the worst, because in the case of commercial software there is at least a reality check which comes from the need to prevent the product from becoming so odious that no one will buy it.
Re: (Score:2)
Yes and no - I agree with Joel if the goal is to benefit (money?) one person or a small group. Now - if the benefits would be for larger group or even nations, whatever - per / use, etc cost structure is totally different. An interesting question, will never be solved, LOL!
On the other hand - I think it also has something to do with laziness and ignorance? Using ready made packages, objects, APIs, etc doesn't require even near the same skills as creating something yourself. Nothing (much) wrong in that, tha
Re: (Score:2)
You can't do that on today's hardware because of a couple of reasons:
1) the algorithms that are efficient are now well known, and taught at the university level. This means there is less room for improvement.
2) compilers are really good now. So you can't squeeze much out of careful assembly. In fact, there are only a handful of people in the world who can output better machine code than a modern compiler, and only with many hours of investment. So again, there is a lot less room for improvement.
3) the h
Re:Wow (Score:5, Insightful)
Increasing speed of chips and RAM could help relieve that pressure, mind you, as programmers can trade off more processing for less drive usage, or count on faster RAM and compress everything. This will give us a bit more time. Beyond that we will simply have to get more inventive about how we use computers.
Very very fast internet could become important, if users feel they need access to 10million TB of data personally. That may not be physically feasible on a personal computer. So 'cloud' type services would be important. Having a few duplicates rather than 1million duplicates of any given song is clearly a big improvement. This of course feeding into the idea that when we made the internet we stopped making machines, we just started making components for the one ultimate computer. And when you think of it from that perspective there is tons of room for improvement even if some of the parts are nearing the useful maximums.
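The "a few duplicates rather than a million duplicates" idea is usually implemented as content-addressed storage: key each blob by a hash of its bytes, so identical uploads collapse into one stored copy. A minimal sketch (the file names and byte strings are made up):

```python
import hashlib

# Hypothetical user "libraries": three uploads, two of which are the
# exact same song bytes. Keying blobs by their content hash stores each
# unique blob only once, however many users reference it.
uploads = {
    "alice/song.mp3": b"these bytes stand in for an mp3",
    "bob/track01.mp3": b"these bytes stand in for an mp3",   # duplicate
    "carol/demo.mp3": b"a different recording entirely",
}

blob_store = {}   # content hash -> bytes (each unique blob stored once)
catalog = {}      # user path -> content hash (tiny per-user metadata)

for path, data in uploads.items():
    digest = hashlib.sha256(data).hexdigest()
    blob_store.setdefault(digest, data)
    catalog[path] = digest

print(f"{len(uploads)} uploads, {len(blob_store)} blobs actually stored")
```

Each user still sees "their" file via the catalog, but the cloud only holds one copy per unique song, which is the saving the parent comment is pointing at.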
Re: (Score:2)
Would you need the 10 million TB with you at all times?
Maybe having fridge-sized data storage at home will become standard.
No need for such incredibly high-speed communications if it's just for the volume that gets sent between your home computer and your "personal" computer (the one you carry).
Lame Research? (Score:5, Informative)
It may be peaking soon though. 6nm is getting close to physical maximums for most techniques due to the casimir effect.
Not quite sure what the Casimir effect has to do with magnetic dots, but I should mention that 6nm is below the superparamagnetic limit (which is typically tens of nanometers). That means your magnetic nanodot probably isn't magnetic.
... Which brings me to my second point: This article says nothing about what this researcher actually did. It sounds like he just fabricated an array of nanodots, which is nothing particularly groundbreaking.
Does anyone have a link to the original abstract for the conference presentation? The dots must have been multilayer "stacks", otherwise there's a good chance they won't be ferromagnetic (there's a "superparamagnetic limit" that stops ferromagnetic particles from being ferromagnetic when they get around this size.)
Lastly, the article says they'll look at housing and using "laser technology" to read back from these nanodots. They mention that as a sidenote, but it's really the most important problem if you want to make something useful. The problem with most nanomagnetic memory techniques is that reading/writing is either impractical or not yet possible.
Re: (Score:2)
Exactly. And I had to wade through about 10 pages of silly off-topic comments to find confirmation of that. Thank you Game, thank you lame moderators.
Re: (Score:2)
It's true that eventually, we will reach a plateau, and in a sense, I
Re: (Score:3, Funny)
Techniques that push chips from 2d into 3d will be the next useful improvement. But after that point we have run out of easy options.
Just keep adding more dimensions... Duh.
Not a "chip", merely a "chip". (Score:5, Informative)
They have nanodots at a 1Tb-per-sq-cm storage density (4Tb per chip); they don't have any controller that can read and write them.
This has been "accomplished" numerous times with holographic storage media before. They just never made the read-writers...
Re:Not a "chip", merely a "chip". (Score:5, Insightful)
Correct.
They have a storage medium with nothing to read or write it... yet.
Although they seem confident that this will come with time, it’s a bit early to be celebrating. Interesting technology, but time will tell whether it’ll ever be usable.
Re:Not a "chip", merely a "chip". (Score:5, Funny)
They have a storage medium with nothing to read or write it...
The perfect DRM! They'll make billions!
Re: (Score:2)
> They have a storage medium with nothing to read or write it... yet.
Put the dots on a "disk" in rings. Call them "tracks". Spin the "disk" and access the dots by scanning a laser radially so that it can read and write the dots in each "track" sequentially. There just might be some existing technology that could be adapted for this...
Re: (Score:2)
That would suck. Spindle drives are already too slow. Let's use something a tad faster...please?
Re: (Score:3, Interesting)
> Let's use something a tad faster...please?
They'll put a transistor over each dot and couple it to the dot in some way so that it can be read and written. Then they'll add a matrix of metallization and logic to multiplex access to the transistors. Add decoding logic and drivers and you've got nonvolatile RAM. And your bit density has gone down by an order of magnitude or so. Still very useful, though, if it's fast enough. Nonvolatile RAM with densities and speeds similar to those of DRAM woul
Re: (Score:2)
Right now, anything with a decent $/GB that is on par with the better SSDs currently out will change things up pretty well.
I have 2 Dell D6400 laptops here, both with Win7 Pro, with identical CPU/RAM/GPU. One has an SSD, the other a spindle drive (7200 RPM).
Without a doubt, the SSD boots faster, opens apps faster, and reboots faster. "Faster" is actually a poor word to describe it. My jaw hit the floor.
With a few *minor* tweaks, we got that baby to boot, complete to an interactive desktop, in
Re: (Score:2)
Re: (Score:2)
Magnetic discs (hard and floppy) are recorded in rings (aka cylinders, IIRC).
Re: (Score:2)
> That's idiotic.
No it isn't. It's simple, robust, leverages existing technology, and is capable of transfer rates of 1000 Gb/sec.
> A pair of micromirrors will be able to point a laser at any point on the
> chip with far smaller seek times than moving the entire chip.
I guess that's why CDs, DVDs, and BluRays aren't spun.
Yours is an interesting approach, but there may be a reason why it has not been implemented for any of the existing optical technologies. The latency would be better than that of s
Re: (Score:2)
As I recently commented [slashdot.org], the hard-drive industry is having a hard time shrinking the magnetic domains on conventional hard drive platters, which use a magnetic thin film. (You can make domains smaller, but they start interacting with one another and not maintaining their magnetization properly.) One proposed
I hate this... (Score:2)
This sounds really cool, but the article that it links to is really short on details.
Things like speed? Storage life? How many read/write cycles before it wears out? Addressing? Is it byte level or page level?
I mean, is this only a replacement for flash, or is it a replacement for RAM?
Cool, but it just ticks me off. It is just a tease.
Yes, they may not have those answers, but it would be nice to know what they don't have answers for yet!
Re:I hate this... (Score:5, Insightful)
They don't have any of that information because they don't know any of it. They only have a way IN THE LAB to put a shitload of nanodots onto a medium. They mentioned that they have no packaging (way to read or even really write data into the dots) for an actual product.
It's like Ben Franklin saying, "Okay, I've discovered electricity. Computers should be along in about five years."
Okay, it's not that bad, but I hate that five year timeline that is rarely questioned but is thrown out to lure in investors and grant money.
Slashdot should have an automatic filter that looks for the five-year estimate and flags it with some "fat chance" special color.
Re: (Score:3, Informative)
That is why I hate this.
It reminds me of those Popular Science stories.
Or even better the one that sticks in my mind. The THOR drive from Radio Shack.
http://en.wikipedia.org/wiki/Thor-CD [wikipedia.org]
I was so hyped by this in 1988; it sounded so cool and it was only a few years away...
It never came.
On the bright side we did eventually get CD-Rs and even CD-RWs but not for a good long time after the THOR drive was announced.
Re: (Score:3, Informative)
Yeah, that Thor drive was some great vapor. My painful promise memory was hologram storage. Back in 1992, I remember holding on to a hologram storage article from MacWeek that described what was supposed to be a consumer product in a year or so.
The media was the size of a credit card and they promised it would hold 100x as much as the current best hard drives of the day. It's a real shame because you just know that there was some fairly fraudulent monkey business going on to motivate guys like that to h
Re: (Score:2)
Oh yea I remember that as well.
Good times.
read write speeds? # of r/w times? (Score:2)
Oh, great... (Score:2)
... means I'll have to buy 'The White Album' again...
NCSU != UNC (Score:2, Informative)
Disks? (Score:3, Interesting)
Solid state disks could soon catch up with mechanical hard drives[...]Dr Narayan says he expects the technology to overtake traditional solid state disk technology within the next five years.
Is shape important in Solid State? It almost seems as if the article is confusing Hard Disk Drives with Solid State Drives.
Re: (Score:2)
The third letter in "HDD" is only there because it has a motor. SSDs should really be called Solid State Storage, or SSS for short.
Another five-years-out technology (Score:3, Insightful)
Re: (Score:2)
What's depressing is the way that the press and /. alike eat up stories like this.
Sure, writing this density of nanodots is an impressive feat. But as you point out, it could be completely nonviable for creating an actual consumer product.
Why can't Slashdot's front page be the kind of place where bullshit is called on researchers putting out this kind of nonsense? These guys should be shamed into putting out factual press releases. Whoring it up to get coverage from the general media while seeking in
Re: (Score:2)
Single Defect Free Crystal (Score:3, Insightful)
Ok, my knee-jerk Six Sigma reflex has just kicked in. On the manufacturing of those defect-free crystals... and about the cost effect and scaling for "overtaking ... in 5 years..."
Ok, here is a tip:
Anytime a politician or scientist talks about 5-, 8-, and 12-year targets, there is a reason:
Two 4-year terms = 8 years; when the project falls out, they can blame the candidate currently in office.
5 years = a single term, but just a touch beyond, to provide an incentive for re-election, because if you don't re-elect them they might cancel the project
12 years = two terms for candidate A and a term for his/her heir... "Don't let the evil Democrats/Republicans kill the project!"
Now, last I checked, more than a few grants come in at 3-, 5-, 8- and 12-year durations... I never hear of things coming to fruition in 7 years, or 6 years, or 9 years, or 11 years, or 18 years, 6 months, and 3 days.
There is just something about 5, 8, and 12 they love. The frequency with which they cite those values implies either some weird cosmic alignment that causes innovations to pop at those figures... or I smell "4 out of 5 dentists approve" BS.
Another one is the 20 years from now number. What is the maturity on that investment I made...
I would honestly have a lot more respect for senior scientists if they didn't spend every waking hour chasing grant money, leaving the actual work to low-paid interns and students, then claiming the work as their own, offering nothing more than a secondhand "my team and I" comment...
Then there's the "Friedman unit" (Score:2)
Access speed? Throughput? (Score:2)
The article fails to mention this slightly important topic. Also, tapes can store a lot of data (well, not really that much) with ridiculous performance.
Delicious (Score:2)
If nanodots are anything like Dippin' Dots, they sound delicious.
Re:4Tb of data (512GB) (Score:5, Insightful)
4 Terabits = 512 Gigabytes.
Somewhat misleading? Yes. Inherently false? No.
Re: (Score:2)
4 Terabits = 512 Gigabytes.
How much is that in Terribibits or Gibibybytes?
It's 500GB (Score:2, Insightful)
4 Terabits = 512 Gigabytes
Except it doesn't.
4 Tb = 4 000 000 000 000 / 8 B = 500 000 000 000 B = 500 GB ~= 466 GiB
Did they mean 4 Tib?
4 398 046 511 104 / 8 B = 549 755 813 888 B = 512 GiB ~= 550 GB.
According to the scientist, it's the former:
"at 10nm per bit, 1cm square stores one terabit."
That would be (1cm / 10nm)^2 b = (1e-2 / 1e-8)^2 b = 1e12 b = 1 Tb.
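The conversions and the density claim in this comment are easy to sanity-check; a quick sketch, using only the figures quoted above:

```python
# Checking the arithmetic in the parent comment: SI (powers of ten)
# prefixes for bits and GB, binary (powers of two) prefixes for GiB.
bits = 4 * 10**12                     # 4 Tb
bytes_ = bits // 8                    # 500 000 000 000 B

print(bytes_ / 10**9)                 # 500.0   -> 500 GB (decimal)
print(round(bytes_ / 2**30, 2))       # 465.66  -> ~466 GiB (binary)

# And the density claim quoted above: 10nm dots over a 1cm x 1cm area.
nm_per_cm = 10**7                     # 1 cm = 10^7 nm
dots_per_side = nm_per_cm // 10       # one dot every 10 nm
print(dots_per_side ** 2)             # 10^12 dots = 1 Tb per square cm
```

So the comment's numbers check out: 4 Tb is 500 GB decimal (~466 GiB), and 10nm dots really do give a terabit per square centimeter.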
Re: (Score:3, Informative)
I like the French word for bytes: octet. So there is no confusion between 4 Tb and 4 To.
It's probably too late to change bytes to another word in english ;)
Either French changed recently, or it didn't have a word for byte until recently. A byte is not strictly defined as 8 bits; it's just that the dominant architectures used 8-bit bytes. Other, older architectures used other byte sizes.
Re: (Score:2)
I had a cow when I first saw variables named $clef, $valeur and $chaine.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
I'm not aware of any other word being used, but then again I don't know any French oldtimers or specialists who might have worked with non-8-bit bytes.
Re: (Score:2)
Octet has the same meaning in English: it means a group of 8 things. However, the word byte does not mean 8 bits (though that is by far the most common); it means a sequence of bits processed as one unit of information. It can also mean a unit of information in a computer that stands for a letter, number, or symbol. There are some architectures that use a 10-bit byte, including measurements for SATA bandwidth. Are those called "octets" in France? That might cause even more confusion.
Re: (Score:3, Funny)
Nah. If it were the hard drive industry, 4 Tb would be 500 GB.
Re: (Score:2)
Re: (Score:2)
This is, in fact, the calculation used in the summary.
Bits are conventionally counted in powers of 10, whereas bytes are often counted in powers of 2, hence:
4 Tb = 500 billion bytes, which is more like 466 GiB.
But practically speaking, with error correction and other overhead, 4Tb is closer to 400GB of usable space.
Re: (Score:2)
B (big "B") == "Bytes"
The hard drive industry isn't out to screw you. AFAIK, HDD storage has always been quoted in base 10 instead of base 2 (K = 1,000 instead of 1,024, etc.), but the difference was never really obvious until lately, as the numbers got huge.
Re: (Score:2)
Talking about the capacities of single memory/storage chips, using lowercase b (bit) figures has been the standard for years. Since only techies who care about the capacity of the actual chips read this, it's not that much of an issue.
As soon as you're talking about an assembled product (be it a RAM module, SSD or even a smartphone), it'll be B (for bytes) again.
Re: (Score:2)
Yes, and all it takes is one uninformed journalist, editor, or marketing agent to turn b into B, just like they turned 65536 into 65K back in the 1980s. You can't trust anything like this on the web unless it comes directly from the manufacturer, or they explicitly write out the words bits and bytes.
Re: (Score:2)
Hard drive industry? This is research... Given that each crystal is a bit, one terabit (a tera of crystals) can be packed onto one square centimeter.
What's the problem?
Re: (Score:2)
It appears that the article does not even say 4Tb, just that the device can hold 512GB
4Tb == 512GB. Terabits, not terabytes.
Why the hell they would measure in Tb instead of GB is beyond me though.
Re: (Score:2)
Re:4tb != 512gb (Score:4, Informative)
> Why the hell they would measure in Tb instead of GB is beyond me though.
Because each dot stores one bit. They are building chips with arrays of dots, not complete hard drives.
Re: (Score:2)
Because memory is always measured in bits? Not for the modules you buy for your PC, but if you ever bought it by the chip, you bet your ass.
Well, I just learned something new. Thanks. :)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
No, it doesn't matter whether you use SI or the silly gibi/mebi prefixes. 4 Tbit is not 512 GB or 512 GiB. It's 500 GB, or about 466 GiB by my calculations. 4 000 000 000 000 / 8 is 500 000 000 000 bytes.
Re: (Score:2)
When you’re talking about storing (or transmitting) data bit-by-bit, it’s pretty common to see the rates being expressed in terms of bits. Terabits, gigabits per second, etc.
It’s slightly confusing at times, I’ll admit.
Re: (Score:3, Funny)
Re: (Score:2)
512GB big enough for an entire porn collection?
You must be new here...
Re: (Score:2)
would a beowulf cluster of these be enough ?
Re: (Score:2)
I think some folks at the SEC may be interested in this technology. Less obvious than boxes of CDs/DVDs labeled "Hot babes" lying around the office.
Does anyone "collect" porn anymore? (Score:3, Funny)
Given that there's an infinite supply of ever-changing pr0n on the internet available for free, I have to wonder why anyone would bother stashing it on a local disk. It's like saving bottles of air.
Re: (Score:2)
I save time in a bottle - you never want to run out of time.