Exploring Advanced Format Hard Drive Technology 165
MojoKid writes "Hard drive capacities are sometimes broken down by the number of platters and the size of each. The first 1TB drives, for example, used five 200GB platters; current-generation 1TB drives use two 500GB platters. These values, however, only refer to the accessible storage capacity, not the total size of the platter itself. Invisible to the end-user, additional capacity is used to store positional information and for ECC. The latest Advanced Format hard drive technology changes a hard drive's sector size from 512 bytes to 4096 bytes. This allows the ECC data to be stored more efficiently. Advanced Format drives emulate a 512 byte sector size, to keep backwards compatibility intact, by mapping eight logical 512 byte sectors to a single physical sector. Unfortunately, this creates a problem for Windows XP users. The good news is, Western Digital has already solved the problem and HotHardware offers some insight into the technology and how it performs."
Large sector size good? (Score:2, Interesting)
I thought the point was to have a small sector size. With large sectors, say 4096K, a 1K file will actually take up the full 4096K. A 4097K file will take up 8194K. A thousand 1K files will end up taking up 4096000K. I understand that with larger HDD's that this becomes less of an issue, but unless you are dealing with a fewer number of large files, I don't see how this can be more efficient when the size of every file is rounded up to the next 4096K.
Re: (Score:2)
I thought the point was to have a small sector size. With large sectors, say 4096K, a 1K file will actually take up the full 4096K. A 4097K file will take up 8194K. A thousand 1K files will end up taking up 4096000K. I understand that with larger HDD's that this becomes less of an issue, but unless you are dealing with a fewer number of large files, I don't see how this can be more efficient when the size of every file is rounded up to the next 4096K.
OK, it's 4K (4096 bytes), not 4096K. I guess that's a bit more doable when we're talking about sizes greater than 1TB.
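Taking the corrected figure (4096-byte sectors, not 4096K), the worst case the grandparent describes can be sketched; the block and file sizes here are just illustrative:

```python
def allocated_size(file_size, block_size=4096):
    """Round a file's on-disk footprint up to the next allocation unit."""
    if file_size == 0:
        return 0
    return -(-file_size // block_size) * block_size  # ceiling division

print(allocated_size(1024))          # a 1 KB file still occupies 4096 bytes
print(1000 * allocated_size(1024))   # a thousand of them: 4096000 bytes (~4 MB)
```

So the worst case for a thousand tiny files is a few megabytes, not gigabytes.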
Re: (Score:3, Insightful)
You want the sector size to be smaller than the average file size or you're going to waste a lot of space. If your average file size is large, and writes are sequential, you want the largest possible sector sizes.
Re: (Score:2)
You rewrote it down to null? Did every term cancel out?
Re: (Score:2)
This doesn't matter with the new Advanced Format Slashdot, which rounds all posts up to 4K.
Re: (Score:2)
On topic, Witty. Hell, it deserves an Oscar!
Re: (Score:2)
By average file, I meant mean [wikipedia.org], not median. If average must mean median [wikipedia.org], then I guess I'd have to write
My main point was that a size is a magnitude; it has no size (or weight or temperature) itself. When I read smaller size, I picture a size printed in a smaller font. If you mean smaller, just write smaller.
Re: (Score:2)
Most file systems work by clusters, not sectors.
NTFS partitions use 4k clusters by default so you already have this problem.
Re: (Score:2)
Indeed, that is why they are doing this at 4K. Most current filesystems use 4K as their base cluster size. By updating the sector size to match the typical cluster size, they literally cut the amount of required ECC by a factor of eight. Take two drives with the same physical characteristics: by increasing the sector size to 4K you gain hundreds of megabytes on the average 100 gigabyte drive.
Re: (Score:2)
You can take two drives of the same physical characteristics and by increasing the sector size to 4k you gain hundreds of megabytes on the average 100 gigabyte drive.
For the sake of argument, let's assume "hundreds of megabytes" equates to 500MB. That works out to be a saving of 0.5% of the capacity, which isn't really all that useful. If you are using your 100GB drive at peak capacity where 500MB will allow you to store a worthwhile amount of data, you're going to run into other issues such as considerable file fragmentation as there isn't enough free space to defrag it properly.
Re:Large sector size good? (Score:5, Insightful)
The filesystem's minimum allocation unit size doesn't necessarily need to have a strong relationship with the physical sector size. Some filesystems don't have the behavior of rounding up the consumed space for small files because they will store multiple small files inside a single allocation unit. (IIRC, Reiser is such an FS.)
Also, we are actually talking about 4 kilobyte sectors. TFS refers to it as 4096k, which would be a 4 megabyte sector. (Which is wildly wrong.) So, worst case for your example of a thousand 1k files is actually 4 megabytes, not 4 gigabytes as you suggest. And, really, if my 2 terabyte drive gets an extra 11% from the more efficient ECC with the 4k sectors, that gives me a free 220000 megabytes, which pretty adequately compensates for the 3 MB I theoretically lose in a worst case filesystem from your example thousand files.
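The tradeoff above can be sketched with the parent's own numbers (a 2 TB drive, an 11% ECC gain, a thousand 1 KB files); the 11% figure is the parent's claim, not a vendor-confirmed number:

```python
def rounding_loss(file_sizes, block=4096):
    """Total slack bytes from rounding each file up to a block boundary."""
    total = 0
    for size in file_sizes:
        allocated = -(-size // block) * block   # ceiling to the next block
        total += allocated - size
    return total

loss = rounding_loss([1024] * 1000)   # worst case for a thousand 1 KB files
gain = int(2 * 10**12 * 0.11)         # 11% of 2 TB reclaimed from ECC overhead
print(loss)   # 3072000 bytes, about 3 MB lost to slack
print(gain)   # 220000000000 bytes, about 220 GB gained
```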
Re: (Score:2)
Some filesystems don't have the behavior of rounding up the consumed space for small files because they will store multiple small files inside a single allocation unit. (IIRC, Reiser is such an FS.)
True, block suballocation [wikipedia.org] is a killer feature. But other than archive formats such as zip, are there any maintained file systems for Windows or Linux with this feature?
Re:Large sector size good? (Score:4, Informative)
IBM's GPFS is one; though it isn't free, it does support Linux and Windows both mounting the same file system at the same time. They reckon the optimum block size for the file system is 1MB. I'm not convinced of that myself, but I always give my GPFS file systems 1MB block sizes.
Then there is XFS, which for small files will put the data in with the metadata to save space. However, unless you have millions of files, forget about it: with modern drive sizes the loss of space is not important. If you have millions of files, stop using the file system as a database.
Re: (Score:3, Interesting)
GPFS is a ridiculously fast file system, probably the fastest in the world, when set up correctly. We used to use it for our cluster of 2000 cores.
Re: (Score:2)
OJFS (Score:3, Funny)
From the wikipedia page you linked to: Btrfs
Thanks.
ReiserFS and UFS2 are stable
I was looking for file systems with killer features, not a killer maintainer ;-)
Re: (Score:2)
Re: (Score:3, Interesting)
NTFS uses a limited form of block suballocation: if the file is small enough, the file data can share a block with the metadata.
Re: (Score:2)
Also, we are actually talking about 4 kilobyte sectors. TFS refers to it as 4096k, which would be a 4 megabyte sector. (Which is wildly wrong.)
Wanna bet TFS was written by a Verizon employee? ;)
Re: (Score:2)
Unless you use a clever filesystem which doesn't force file size to be a multiple of sector size.
Re: (Score:2)
I thought the point was to have a small sector size. With large sectors, say 4096K, a 1K file will actually take up the full 4096K. A 4097K file will take up 8194K. A thousand 1K files will end up taking up 4096000K. I understand that with larger HDD's that this becomes less of an issue, but unless you are dealing with a fewer number of large files, I don't see how this can be more efficient when the size of every file is rounded up to the next 4096K.
You had me worried for a while there so I did a quick check. Turns out NONE of my movies or MP3's are less than 4096 bytes so it looks like I dodged a bullet there. However, when Hollywood perfects its movie industry down to 512 different possible re-hashes of the same plot they might be able to store a movie with better space efficiency on a 512 byte/sector drive again.
Header files are a big one (Score:2)
Turns out NONE of my movies or MP3's are less than 4096 bytes so it looks like I dodged a bullet there.
But how big are script files and source code files and PNG icons?
Re: (Score:2)
Extract from ls -l /etc
-rw-r--r-- 1 root root 10788 2009-07-31 23:55 login.defs
-rw-r--r-- 1 root root 599 2008-10-09 18:11 logrotate.conf
-rw-r--r-- 1 root root 3844 2009-10-09 01:36 lsb-base-logging.sh
-rw-r--r-- 1 root root 97 2009-10-20 10:44 lsb-release
Re: (Score:3, Interesting)
You didn't dodge any bullet. Any file whose size is slightly over a 4096-byte boundary will take more space. For large numbers of larger files (such as an MP3 collection), you will, on average, have 2048 bytes of empty space per file in your drive's sectors. Let's say you have an archive which also uses some small files (e.g. playlists, small pictures), and say that the overhead is about 3 KB per file, with an average file size of about 3 MB. Since 3000 / 3000000 is about 1/1000, you could have a whopping one per mille loss.
Re: (Score:2)
Sorry to reply to my own post here: FS block size should be the minimum allocation size, which may be smaller than the physical sector size. So for your MP3 collection the overhead may be even lower...
Re: (Score:2)
It isn't that great for the OS's partition but it works out great for my Media partition
Re: (Score:2, Interesting)
Re: (Score:2)
I see what you mean but will it be like other parts of the computer? I do computation on CPUs, GPUs or FPGAs depending on what hardware is appropriate for the work that needs to be done. Is this similar?
You have data with certain attributes and store it appropriately.
Re: (Score:2)
NetWare has been doing block suballocation for a while now [novell.com]. Not a bad way to make use of a larger block size, and it was crucial when early 'large' drives had to tolerate large blocks, at least before LBA was common. Novell tackled a lot of these problems fairly early, as they led the way in PC servers and had to deal with big volumes fairly quickly. Today we take a lot of this for granted, and we are swimming in disk space, so it's not a big deal. But once upon a time, this was not so. 80MB was pricel
Re: (Score:2)
On the other hand, with your logic, 512 byte sectors are too big too, because I have lots of files that are much smaller than that...
Cluster Size (Score:3, Interesting)
I thought the point was to have a small sector size. With large sectors, say 4096K, a 1K file will actually take up the full 4096K.
Most file systems already use a cluster size of 4096 (clustering 8 sectors). The only file system I know of which used sector = cluster size was IBM's HPFS.
So no, we don't lose space. Still, I am wary of this emulation stuff. First the 4096-byte sector is broken down into 8 512-byte "virtual" sectors, and then those 8 virtual sectors are clustered back into one cluster. Would it not be better to use an intelligent file system which can handle 4096-byte sectors natively? Any file system which can be formatted onto a DVD-RAM should
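The emulation being complained about ("512e") is just a fixed arithmetic mapping; a sketch, assuming the standard eight logical sectors per physical sector:

```python
def logical_to_physical(lba, logical=512, physical=4096):
    """Map a 512-byte logical sector number to (physical sector, byte offset)."""
    ratio = physical // logical            # 8 logical sectors per physical one
    return lba // ratio, (lba % ratio) * logical

print(logical_to_physical(0))   # (0, 0)    - starts a physical sector
print(logical_to_physical(7))   # (0, 3584) - last slice of the same sector
print(logical_to_physical(8))   # (1, 0)    - first slice of the next one
```

A write that starts at a logical sector with a nonzero offset forces the drive into a read-modify-write of the whole physical sector, which is where the performance penalty comes from.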
Re: (Score:2)
If you read the article carefully, the new size is only 4K, not 4096K. The 4K size actually matches very well with most common files ystems. The 4096K is an error in the article.
Here is a quote from the article:
Advanced Format changes a hard drive's sector size from 512 bytes (the standard for the past three decades) to 4096K
However, it seems correct in other places, like a graphic for example.
Re:Large sector size good? (Score:5, Funny)
If you read the article carefully, the new size is only 4K, not 4096K. The 4K size actually matches very well with most common files ystems.
Looks like they're not the only ones who miscalculated their block boundary.
Re: (Score:2)
If you read the article carefully ...
Well, if you read the article very carefully you'll note that it lists the WD AF drive as 5400 RPM. If true then they'll really see some performance gains from a 7200 RPM version. If it's just another typo/mistake/ooopsy then we should tag this article as "needs editor".
XP users (Score:4, Funny)
XP users do not need big hard drives to have problems.
...laura
Re: (Score:2)
But mac users go wild over big hard drives.
Re: (Score:2)
Mac users would go wild over spoiled moldy cheese as long as Stevie announced it on a stage while wearing a black turtleneck.
Re: (Score:2)
Like the 1TB drive in my Macbook? :)
Re: (Score:2)
XP users do not need big hard drives to have problems.
Tee hee giggle snort! So, besides porn, what do Linux and Mac users fill their hard drives with? Games?
VMWare (Score:2)
Virtual XP machines perhaps ;-)
What About Linux Systems? (Score:5, Interesting)
This place [slashdot.org] had something about it.
Re: (Score:2)
Some distro installers do it right and some do it wrong. Give it a few years and I'm sure it will all be sorted out.
Re:What About Linux Systems? (Score:5, Informative)
If Advanced Format drives were true 4k drives (i.e. they didn't lie to the OS and claim they were 512 byte drives), they'd work great on Linux (and not at all on XP). Since they lie, Linux tools will have to be updated to assume the drive lies and default to 4k alignment. Anyway, you can already use manual/advanced settings in most Linux partitioning tools to manually work around the issue.
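The alignment rule the tools need is simple; a sketch of the check (63 is the old DOS/XP default partition start, 2048 the 1 MiB convention newer partitioners use):

```python
def is_aligned(start_lba, logical=512, physical=4096):
    """True if a partition's first logical sector begins a physical sector."""
    return (start_lba * logical) % physical == 0

print(is_aligned(63))    # False - the classic XP-era partition start
print(is_aligned(2048))  # True  - 1 MiB alignment
```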
Re: (Score:2, Informative)
Sector size (Score:2)
Actually, XFS supports a true 4096-byte sector size as well. For example, you can format XFS onto a DVD-RAM (sector size = 2048) without trouble. So the best thing for Linux would be if you could tell the drive not to lie about the sector size.
XFS (Score:2)
Actually, it makes me wonder if the virtual 512-byte sector stuff can be switched off. XFS, for example, handles larger sector sizes gracefully.
Typo in Article (Score:2)
It says 4096K, they mean 4096 bytes (4K). Error is in the original.
Typo in Correction (Score:2)
Speed is irrelevant (Score:4, Interesting)
I can't grasp why all these benchmarks (these specifically, and most in general) are so obsessed with speed. Regarding HDs, I'd like to see results relevant to:
1. Number of Read/Write operations per task: Does the new format result in fewer head movements, therefore less wear on the hardware, thus increasing HD's life expectancy and MTBF?
2. Energy efficiency: Does the new format have lower power consumption, leading to lower operating temperature and better laptop/netbook battery autonomy?
3. Are there differences in sustained read/write performance? E.g. is the new format more suitable for video editing than the old one?
For me, the first issue is the most important of all, given that owning huge 2TB disks is in fact like playing Russian roulette: without proper backup strategies, you risk all your data at once.
Re: (Score:2)
I think the answer is that:
#1: only an idiot relies on the MTBF statistic as their backup strategy, so speed matters more (and helps you perform your routine backups faster).
#2: for energy efficiency, you don't buy a big spinning disk for your laptop, you use a solid state device.
#3: wait, I thought you didn't want them to talk about performance? Since you asked, though: this format should indeed perform better for video editing.
Re: (Score:2)
Yes. By packing the bits more efficiently, each cylinder will have more capacity, thus requiring fewer cylinders and fewer head movements for any given disk capacity.
Probably sl
Re: (Score:2)
> I can't grasp why all (these specific and most) benchmarks are so much obsessed with speed. Regarding HDs, I'd like to see results relevant to:
You really want to be able to copy your stuff. If your stuff is 2TB, then it makes sense that you would want to copy that 2TB in a timely manner.
So yeah... speed does matter. Sooner or later you will want that drive to be able to keep up with how big it is.
Re: (Score:2)
This is what raid, mirroring and script backups are for. If you can't write a batch file to copy shit to a USB/Firewire drive, or simply have another cheap blank 2TB disk in the same PC to copy to, you are failing at backup.
Hard drives are so cheap now that you should merely have massive redundancy, also flash USB sticks are good for one time files like documents and smaller stuff you want to keep.
And for an overview that knows how to do math... (Score:5, Informative)
Re:And for an overview that knows how to do math.. (Score:5, Funny)
Oh noez! (Score:2)
Unfortunately, this creates a problem for Windows XP users. The good news is, Western Digital has already solved the problem
Is there a particular reason that we should care that a new technology isn't backwards compatible with an obsolete technology? Especially in light that it actually is compatible?
Partitioning the right way fixes it (Score:2)
Partitioning the right way deals with it. You can use fdisk in Linux to do the partitioning for both Linux and Windows.
First, find out exactly how large the drive is in units of 512 byte sectors. Divide that number by 8192 and round any fractions up. Remember that as the number of cylinders. In fdisk, use the "x" command to enter expert commands. Do "s" to enter the number of sectors per track as 32. Do "h" to enter the number of heads (tracks) per cylinder as 256 (not 255). Do "c" to enter the numbe
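The geometry chosen above is not arbitrary: 32 sectors x 256 heads makes every cylinder 8192 sectors (4 MiB), so cylinder-aligned partitions are automatically 4K-aligned. A sketch of the arithmetic (the 2 TB sector count is just an example drive size):

```python
SECTORS_PER_TRACK = 32
HEADS = 256
SECTORS_PER_CYL = SECTORS_PER_TRACK * HEADS   # 8192 sectors = 4 MiB

def cylinders(total_512_sectors):
    """Number of cylinders to enter in fdisk, rounding any fraction up."""
    return -(-total_512_sectors // SECTORS_PER_CYL)

# every cylinder boundary falls on a 4096-byte physical sector boundary
assert (SECTORS_PER_CYL * 512) % 4096 == 0
print(cylinders(3907029168))   # a 2 TB drive: 476933 cylinders
```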
Not just for hard drives (Score:2)
Those of us who work with RAID arrays have cared about partition alignment for a long time. If a write spans two RAID-5 stripes, the RAID controller has to work twice as hard to correctly update the parity information. Aligning partitions and filesystem structures on stripe boundaries is essential to obtaining good performance on certain types of RAID arrays.
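A sketch of why misalignment doubles the work, assuming a 64 KiB stripe (stripe sizes vary by controller):

```python
def stripes_touched(offset, length, stripe=64 * 1024):
    """How many RAID stripes a write at byte `offset` of `length` bytes spans."""
    first = offset // stripe
    last = (offset + length - 1) // stripe
    return last - first + 1

print(stripes_touched(0, 4096))        # 1 - aligned: one parity update
print(stripes_touched(65024, 4096))    # 2 - straddles a boundary: double work
```

Exactly the same arithmetic applies to a 4K write straddling two 4096-byte physical sectors on an Advanced Format drive.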
Can someone clarify something? (Score:2)
From TFA: "Western Digital believes the technology will prove useful in the future and it's true that after thirty years, the 512 byte sector standard was creaking with age."
What does "creaking with age" really mean? I mean, the current format performs the same. The basic design is still the same, just with different magic numbers. I usually read "creaking with age" to mean that there's some kind of capacity or speed limit that we hit, but that's not the case. Is this more of a case of "why not" change it i
more size, more speed. (Score:2)
You could have 11% more capacity, but for some unknown reason WD did not exploit that.
If the drive did not lie about the sector size, you would see a small speed gain as well, but for some unknown reason WD went for compatibility instead.
So yes: there is some potential in the larger sector size, but it was not exploited.
Re:1 byte = 10 bits? (Score:4, Informative)
No, that's totally wrong. The drive may well use 10 magnetic "cells" to store a byte (e.g. with 8b10b modulation or something similar), but that's an implementation detail. As far as everything else is concerned, bytes are 8 bits.
Re: (Score:2)
Re: (Score:3, Informative)
A.D. vs. 8-bit bytes (Score:2)
Re: (Score:2, Informative)
Depends on the drive. In recent electrical signalling (Gb Ethernet, SATA/SAS, etc.) the 8b/10b encoding scheme has been very popular, and it is 10 bits per byte on the wire. The extra bits are for recovering the clock signal. The HDD has to do the same, but the manufacturers don't have to adhere to any standards inside their case.
Now, if you're asking the question "how many bytes in a MB?" there is great debate. (The answer is, and has been from the first RAMAC*, 1,000,000. However, the binary bus people like to argu
Re: (Score:2, Interesting)
Re: (Score:2)
Re: (Score:2)
Yes, of course file sizes under 4096 bytes (not 4096K) still exist. As stated in the article, most filesystems on hard disks already use 4096-byte block sizes. Thus it won't make much difference for defrag, unless you misalign the data, in which case the defrag suddenly could take a *very* long time.
Not that I care, my system drive is already SSD anyways and I've not filled a drive to the brim for a long time. If I would still download movies they would certainly go to a WD green drive or anything like that withou
Re: (Score:2)
Do file sizes under 4096K still exist?
win+r
cmd
echo Yes > file_size_under_4096k.txt
Tada!
Re: (Score:2)
Re:Dear Slashdot Sales Department (Score:5, Funny)
1 No one except LOSERS uses Windows XP.
Beck: I'm a loser, baby, 'cuz I'm usin' XP ...
2. What is Slashdot's commission on these shameful book plugs?
One free page from the book, randomly selected, until they've referred enough people to the publisher's site to receive the entire book. Unfortunately, it arrives as lose pages in no particular order. Cmdr Taco is never pleased with this.
Have a weekend, loozars.
Yours In Tashkent, K. Trout
Thanks, you too.
Re:Dear Slashdot Sales Department (Score:5, Funny)
Unfortunately, it arrives as lose pages in no particular order. Cmdr Taco is never pleased with this.
Have a weekend, loozars.
For all intensive purposes, you're post should of exploded the heads of any grammar nazis as they read they're screen. Which begs the question of what more damage could possibly be done to effect there sensibilities? Honestly, I could care less.
Re: (Score:2, Funny)
Re: (Score:2)
Ha ha ha. I literally laughed my head off.
Re:Dear Slashdot Sales Department (Score:5, Funny)
It's "for all intents and purposes" not "for all intensive purposes." When you say it you can get away with it wrong, but when you write it you just look dumb.
Indeed. Its a common mistake, but you're vigilance is dually noted. I'm just glad I didn't loose all credibility by making alot more mistakes.
Re: (Score:2)
I do hope that while you are still learning the language's grammar, you also are developing a better, keener sense of humor, as the GP made it clear by the abundance of grammatical errors put together into a somewhat coherent message, that he was, without any doubt, making a mockery, a grotesque sneer if you will of the GGP post. You missed it on many more levels than one, only picking up on a single grammatical error and completely letting go of the actual context of the message. I sincerely wish you to
Re: (Score:2)
How this all got past all the editors is beyond me.
You must be new here.
Re: (Score:2)
I know they pull this off about every odd article, but it perpetually amazes me anyway :)
Re:512x4=4MB?? (Score:5, Insightful)
No. Just no.
Never use the term 'KiB' for kiloBYTES ever again. Just don't do it. I don't CARE if it's "the new standard". Screw that, it's KB KiloBytes.
This "new" standard mandated by the IEC can eat me.
1024 bytes IS, and forever will be, 1 KiloByte (KB)
1000 bits IS, and forever will be, 1 KiloBit (Kb)
1999 and the IEC can DROP DEAD. I will never. EVER. Use the new """""""""""""standard"""""""""""".
That said, excellent job highlighting the dreadful editing, inaccuracies like that are so confusing to try and keep straight between what is written and what was MEANT. Thumps up for you!
Re:512x4=4MB?? (Score:5, Informative)
It was never KB and never will be. kB perhaps but not KB.
Free disk space: 1.21 Giblets (Score:5, Insightful)
"it's 4 KiB or just 4096 bytes."
No. Just no.
Never use the term 'KiB' for kiloBYTES ever again.
"kiB" is for kibibytes, not kilobytes...
The introduction of those new units always kind of grated on me, as it went against the 20-odd years of experience I'd had with computers up to that point. But, I have to say, "kilobytes" and "megabytes" and "gigabytes" had always been ambiguously defined: usually RAM would use the power-of-two definitions and disks would use the power-of-ten definitions... As someone who appreciates precise language, I think this effort to disambiguate the terminology is a good thing, even if it goes against what I learned about computers as a kid. I don't think making the opposite change (i.e. keeping "kilobyte" = 1024 bytes and making a new term for 1000 bytes) would have made any sense at all; the "kilo" in "kilobyte" goes against the normal definition of "kilo". I think it was always kind of sleazy that hard drive manufacturers could tell you they were giving you a megabyte of storage and it would be less than what the computer considers a "megabyte", but the prefix has a definition that predates its use in computing, and from that perspective I think that usage, while problematic and misleading, was legitimate.
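The ambiguity being argued about comes down to two conversion factors; a sketch of the gap for a marketed "1 TB" drive:

```python
# SI (decimal) vs IEC (binary) prefixes for the same byte count
GB = 10**9          # gigabyte, as drive marketing counts it
GiB = 2**30         # gibibyte, as most OS size displays count it

marketed = 1_000_000_000_000          # a "1 TB" drive
print(marketed / GB)                  # 1000.0 decimal gigabytes
print(round(marketed / GiB, 1))       # 931.3 binary gibibytes
```

The ~7% gap between the two readings is the whole "missing space" complaint.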
Re: (Score:2)
Those new prefixes are just lame. It's like they are trying to punish computer users or something for offending their sense of bureaucratic order for so long. A much more logical approach would be to specify whether or not the prefix is meant to be base-10 or base-2.
Re: (Score:2)
Those new prefixes are just lame. It's like they are trying to punish computer users or something for offending their sense of bureaucratic order for so long. A much more logical approach would be to specify whether or not the prefix is meant to be base-10 or base-2.
Ummm. That's exactly what the new prefixes do. The "i" means binary.
Re: (Score:2)
Usually RAM would use the power-of-two definitions and disks would use the power-of-ten definitions.
No; disks used base-two definitions, too. A 360K floppy is 368,640 bytes formatted, and a Seagate ST-225 20 megabyte hard drive had a little over 21,000,000 bytes formatted. It wasn't until some hard drive manufacturer couldn't quite hit a gigabyte that they redefined "gigabyte" so that they could call their 976MB drive "1 gigabyte."
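The old capacities really were binary; the standard DOS 360K geometry (quoted here from memory of the PC format, not from the parent) checks out exactly:

```python
# DOS 360K floppy: 2 sides x 40 tracks x 9 sectors/track x 512 bytes/sector
capacity = 2 * 40 * 9 * 512
print(capacity)            # 368640 bytes
print(capacity // 1024)    # 360 - the "K" here was 1024 bytes, not 1000
```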
Marketing lies (Score:2)
Usually RAM would use the power-of-two definitions and disks would use the power-of-ten definitions...
That would be disks larger than approx. 1GB. Before disks hit the GB mark, they too were measured in powers of 2, which is the right way to do it; sector size is a power of 2, after all.
It is marketing which uses powers of ten, and as an engineer this pisses me off big time. That kibi/gibi stuff only means that we have given up fighting and submitted to those marketing lies.
Martin
Re: (Score:3, Insightful)
And that kibi/gibi stuff only means that we have given up fighting and submitted to those marketing lies
Or maybe it just means that programmers have finally figured out the metric system?
In EVERY other discipline out there, kilo means 1000. The reason it was defined as 1024 in IT wasn't out of some kind of brilliance, but rather sheer laziness. Yes, I understand binary, and yes, I understand why binary units are useful. So does the SI, which is why they invented the kibi prefix.
I could care less about mark
Still: Marketing lies (Score:2)
I do care that a storage medium that stores 1 MB per cm^2 does not store 10GB per m^2 if you use IT lingo.
That won't be true anyway, unless you increased the sector size by 10,000 as well. In the early days we had single density and double density disks. Single density used 128 bytes per sector and double density 256 bytes per sector. That would be equivalent to 90 KB and 128 KB (for 40 tracks). If you formatted a double density disk with 128-byte sectors (as Atari did for compatibility) you only got 127 KB.
That's because of all the overhead as described in the original article. Or read it up here: http:// [wikipedia.org]
Re: (Score:2)
That won't be true anyway - unless you would increase the sector size by 10'000 as well.
If 1cm^2 of material stores a given number of bits, 1m^2 of material can store 10,000 times as many. I don't care if those bits are track boundaries, sector headers, ECC code, or whatever. I'm talking about raw bits stored on raw media. And you completely missed my point.
In any other area of study SI prefixes are all the same: km/s are the same as m/ms, g/mL are the same as kg/L, and so on. Throw a count of bits into t
Re: (Score:2)
Use whatever units you like, but don't go redefining SI prefixes. Whoever thought that was a good idea was just dumb, or at least was having a glaring moment of dumbness...
Human language is context-based, so having kilo mean 1024 when discussing computing is no worse than calling a computer a "Firewall" despite it not having any slowing effect on fires. Inventing a new word is the "dumb" move, as at the end of the day it would still be confusing to people outside the industry and give very little if any benefit inside the industry, as there's rarely any need to mix binary and decimal notation.
I at least can safely say I've never confused the two KBs or heard about anyone confusing
Re: (Score:2)
<fallacy>And gosh darnit, the best way to remedy the ambiguity would be to refuse to confront it.</fallacy>
Re: (Score:2)
I'll leave the discussion about KiB, it was a side issue, although the article does IMHO make it painfully clear why it's needed.
"That said, excellent job highlighting the dreadful editing, inaccuracies like that are so confusing to try and keep straight between what is written and what was MEANT. Thumps up for you!"
Thanks, have been moderated into oblivion anyway :P
Re: (Score:2)
Re: (Score:2)
Actually it’s kB and kb. With a small k. Since K already stands for kelvin. And what is a kelvin byte? ;)
Disk Alignment... Learn this! (Score:3, Informative)
This is especially important for all you who manage a SAN. Learn it, love it, live it.
To learn why disk partition alignment can be important, please reference the following blog post: http://clariionblogs.blogspot.com/2008/02/disk-alignment.html [blogspot.com]
Instructions for Stripe Alignment/Partition Alignment within a Windows Operating Systems
Reference the following link for info on DiskPart, http://support.microsoft.com/kb/300415 [microsoft.com]
1 - At a command prompt on a Windows host, type diskpart
2 - Type select disk X (X being t
The real meaning of this (Score:3, Interesting)
What this really means is that magnetomechanical media is dead.
When you're doing tricks like this to get a few extra bytes per block it means you have run out of physical media density technologies. It's kind of like when they moved the Earth, Moon and stars to get dial-up modems from 33.6Kbps to 56Kbps - redefining bps along the way. It's the End. It's an admission that we're out of magnetic media density improvements. There might be one more after this, but it's over and even now the density isn't
Re: (Score:2)
When you're doing tricks like this to get a few extra bytes per block it means you have run out of physical media density technologies.
No it doesn't. Hard drive manufacturing companies are not a single person. They can have a group of people working on one problem, and another group working on another problem. There is nothing wrong with having a team trying to improve the efficiency of data formatting while a different team works on improving the hardware capacities. In fact, something like improving the efficiency of data formatting is more of a software problem with the results being reusable across different underlying hardware.
Re: (Score:2)
Re: (Score:2)
All that is left for the magnetics is capacity and price, and while we are just tripping over 2TB on the platters, solid state isn't far behind. There are 512GB SATA SSDs on the market right now, and at least one 1TB PCIe solution.
It was only a few years ago that the spinners tripped over 1TB while SSD's were only 64GB on the high end. Now the
FC SAN, Tape guys (Score:2)
Look, I know the parent post is going to garner a boatload of hate from the FC SAN people who will protest for various reasons that their unicorns and rainbows magnify the effectiveness of the underlying storage until it's cheap and performant. I'm sorry, but you're all full of it (to be kind). You need to find a new job.
When you figure the cost of FC storage, the network, the backup, the service contracts and whatnot, it's $30K-120K/TB. You guys got some cool stuff - I'll give you that. But it ain't
Re: (Score:2)
Sir, I had quite some trouble wading through your marketing speak -- "revolutionary [surely not, SSDs don't rotate?] rather than evolutionary", "a big change is coming", "an extinction level event", "the situation got dynamic" -- but I think what you were trying to get out over those seven paragraphs is "in the limit as technology becomes more perfect, stuff is slower to read if you have to physically move to it".
Well, maybe, but perfection is never attained. And, though Google is hypocritical enough to imp