Recovering Secret HD Space 849
An anonymous reader writes "Just browsing hardocp.com and noticed a link to this article.
'The Inquirer has posted a method of getting massive amounts of hard drive space from your current drive. Supposedly by following the steps outlined, they have gotten 150GB from an 80GB EIDE drive, 510GB from a 200GB SATA drive and so on.' Could this be true? I'm not about to try with my hard drive." Needless to say, this might be a time to avoid the bleeding edge. (See Jeff Garzik's warning in the letters page linked from the Register article.)
Uh, no (Score:5, Informative)
Did aureal density technology increase to 200GB/platter overnight? No.
Please refer to this thread [storagereview.net] on StorageReview.com for more information.
Simple corruption (Score:5, Informative)
This is just a method of corrupting your partition table so the same disk sectors appear more than once. If you try this, don't ask Symantec for help afterwards.
yeah right. (Score:4, Informative)
Some drives are known to short stroke their platters. This raises the more serious problem with this idiocy... The problem is that modern drives store important information on those hidden inner areas of their platters (firmware, disk information, reallocated bad sectors), and who knows what you could be overwriting whenever you use that space. Put something down in the wrong place and the drive will never start again, or will corrupt data at certain sectors. It's a lottery ticket every time you write data to that partition. That's not what I call usable capacity.
Don't believe me? Go ahead and try it. You'll lose all those Buffy episodes you've downloaded on KaZaA, and instead you'll have to spank it to the Portman pictures your mom doesn't know you have stashed under your bed.
Re:Floppy / Drill fun (Score:4, Informative)
Manufacturer's view.. (Score:5, Informative)
Summary... (Score:5, Informative)
About the "recover unused space on your drive" article:
Working for a data-recovery company, I know a thing or two about hard disks....
One is that if the vendors were able to double the capacity for just about nothing, they would.
All this probably does is create an invalid partition table which ends up having:
|*** new partition ***|
|*** old partition ***|
overlapping partitions. So writing to either partition will corrupt the other. In whatever situations people tried it, it just so happened that the (quick) format of the "new" partition didn't corrupt the other partition enough to make it unbootable.
And the 200GB -> 510GB "upgrade" probably ended up with three overlapping partitions....
Roger
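The overlap failure mode described above is easy to check for; here is a minimal sketch, where the (start LBA, sector count) pairs are made-up example values, not figures from the article:

```python
# Sketch: detecting overlapping partition-table entries.
# Each entry is (start_lba, sector_count); the numbers are hypothetical.

def overlaps(p, q):
    """True if two partitions share any sector."""
    p_start, p_len = p
    q_start, q_len = q
    return p_start < q_start + q_len and q_start < p_start + p_len

# An 80GB drive "upgraded" to 150GB: the new ~70GB partition starts
# inside the old ~80GB one, so writes to one can trash the other.
old = (63, 156_250_000)           # ~80GB at 512 bytes/sector
new = (100_000_000, 136_718_750)  # ~70GB, starting inside 'old'

print(overlaps(old, new))  # True: the partitions share sectors
```

Any partition-table reader that yields start/length pairs could feed this check; a sane table has no overlapping pair.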
inq (Score:3, Informative)
Floppys used to be better.. (Score:5, Informative)
These floppies were used almost daily for 3 years (no hard disks were available at that time). They were reformatted countless times.
Not a single one of them ever failed. About a year ago, when I failed to reformat and make a boot disk from several freshly bought floppies, I dug up one of them, reformatted it again, and succeeded in making a reliable boot disk.
The quality of today's media just makes me cry.
Re:Uh, no (Score:4, Informative)
That's probably because I can't type. You may want to read this reference for " areal [storagereview.com]" density, though.
Re:Uh, no (Score:2, Informative)
What is truly amazing is that some fool "discovered" this and actually believed he got Ghost to double his HD size.
This does not in any way increase the physical disk size... this either overlaps partitions (a bad thing) or creates a virtual partition inside the main one (a stupid thing).
DONT DO THIS!!!!! (emphatic, not yelling)
Plagiarism... (Score:1, Informative)
Find it here [storagereview.net]
Mod down!!
Re:Uh, no (Score:5, Informative)
Actually, this is exactly what they do. The difference, however, is that the lower-end (smaller) drives are identical except that they come with fewer platters. For example, a 160GB hard drive today likely has two 80GB platters, whereas an 80GB drive probably has one (though different combinations of different sizes are of course used, depending on when the hard drive was manufactured and other factors).
In some cases, a hard drive will be sold with a greater potential capacity than its available capacity. For example, a drive with two 60GB platters may be sold as a 100GB drive, the platters having been "short stroked". This has nothing to do with the absurd technique described in the Inquirer article, and I doubt that it is possible to recover the lost space.
Hard drives are the highest-precision mechanical devices that most people have in their homes--more so than processors, high-end printer heads, or toasters. They are not something you want to physically modify.
See the following highly informative and interesting (if you are a geek) posts by a Maxtor engineer:
Here [storagereview.net]
here [storagereview.net]
and here [storagereview.net]
Re:Uh, no (Score:4, Informative)
Re:Uh, no (Score:5, Informative)
damn i hope you are kidding (Score:5, Informative)
HDs are sold in GB with GB "defined" as 1,000,000,000 bytes, which is ~6.9% less than a real GB (2^30 bytes). After formatting (depending on your FS), an extra few percent goes away for your file table, sector markers, directory structure, etc., so in real GB (in units of 2^30 bytes) it'll be a lot less than 160, or whatever your "bought" size.
Don't expect to recover those.
RAM, on the other hand, is sold with truthful advertising: 128MB = 128*2^20 bytes, which is 134,217,728 bytes - despite the 134, it's still 128MB.
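The decimal-vs-binary gap is quick to verify; a small sketch, using 160GB purely as an example size:

```python
# Sketch of the marketing-GB vs binary-GB arithmetic above.

GB = 10**9   # what drive makers mean by "GB"
GiB = 2**30  # what the OS typically reports as "GB"

advertised = 160 * GB
reported = advertised / GiB
print(round(reported, 1))         # 149.0 "GB" shown by the OS

shortfall = 1 - GB / GiB
print(round(shortfall * 100, 1))  # 6.9% smaller, before filesystem overhead
```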
Re:Simple corruption (Score:1, Informative)
Re:Simple corruption (Score:4, Informative)
One flaw I found in the article is that they say you need two drives, both containing an OS. Later they ask you to swap one of them out for another drive with an OS. That whole section sounds like smoke and mirrors.
If this extra space really exists, why do you have to "trick" the OS into believing it is there? I was expecting some mention of a low-level format at least, but there's no way this will work. I'll bet they didn't do any data integrity tests, which would no doubt show the flaw in their system right away. Oh well, who needs proof if you're just storing appz and mp3s.
Re:Uh, no (Score:3, Informative)
CD-ROMs use a constant data rate by varying the RPM of the drive depending on where the head is located.
Hard drives have lower data rates at the inner diameter, since they spin at the same RPM all the time; you simply get less linear distance to store data during each revolution.
All of the sizes shipped to customers already account for this.
It would be possible to put more bits on the media by changing the speed that the disk rotates, but those loosened mechanical tolerances would give you a 4.7GB drive instead of a 300GB drive.
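The inner-vs-outer data-rate point can be sketched numerically. The RPM, radii, and linear bit density below are illustrative assumptions, not figures for any real drive:

```python
# Sketch: at constant RPM, outer tracks pass under the head faster,
# so at a fixed linear bit density they deliver more data per second.
# All numbers here are hypothetical.
import math

RPM = 7200
BITS_PER_MM = 25_000  # assumed linear recording density

def data_rate_mbps(radius_mm):
    """Sustained rate in megabits/s for a track at this radius."""
    mm_per_second = 2 * math.pi * radius_mm * (RPM / 60)
    return mm_per_second * BITS_PER_MM / 1e6

inner, outer = 20, 45  # mm, rough 3.5" platter extremes
print(f"{data_rate_mbps(inner):.0f} vs {data_rate_mbps(outer):.0f} Mb/s")
# The ratio is just the ratio of radii: 45/20 = 2.25x faster outside.
```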
Re:damn i hope you are kidding (Score:3, Informative)
If anything, Windows and whatever other reporting software used is incorrect, because "Giga" is an SI standard prefix used in science and mathematics meaning "One billion", just like "mega" is "one million" and micro is "one millionth."
In the old days, "kilobyte" was used when referring to 2^10 (1024) bytes because it was conveniently close to 1000, which is the meaning of the "kilo" prefix. The gap between the base-2 and base-10 values becomes ever wider as the values multiply. Go ahead and look at the next two sequences in which binary and decimal powers are "close".
That said, ultimately common use is what defines the meaning of words, but the common use of a word by no means invalidates the original terminology from which it was derived!
Re:damn i hope you are kidding (Score:2, Informative)
1 Mebibyte = 2^20 = 1048576 bytes.
1 Megabyte = 10^6 = 1000000 bytes.
The "megabyte" as 2^20 was deprecated:
Mega = 1000^2
Mebi = 1024^2
Re:Floppy / Drill fun (Score:5, Informative)
Back in the days of MFM and RLL controllers, the hard drive controller did much of what the drive electronics and firmware do in modern hard drives; that's why you could choose between MFM and RLL controllers. Hard drives still use RLL encoding today.
Re:Uh, no (Score:4, Informative)
Now if your 1st partition is a full disk minus reserved space, and your second partition is full-sized including the reserved space, and the reserved sectors aren't all at the end of the disk, you're going to end up with partitions in the ratios they talk about.
However, what happens when you start putting Windows on this thing? Well, the block sizes of big drives aren't your friend, and most small files will end up in reserved clusters. Since directories are small files too, and if they don't conflict, you should be able to load a few gigs of data onto one of these disks before you start to find out that it's overwriting bits of the other partition. I expect one of these 180 gig drives could be loaded up with at least 90 gigs of data before the directories started acting funny. One cool bit is that block-related files (like mp3s) will show up in the dir just fine, but when you play one, it might switch songs in the middle. I don't think the RIAA could ask for a better gift.
Re:Uh, no (Score:5, Informative)
Secondly, ALL IDE-type drives (and some SCSI) have some reserved space (possibly 5%) to which bad sectors are intelligently remapped whenever one is found. (Remember, you are NOT supposed to low-level format an IDE drive.) During manufacturing, it is inevitable that bad sectors WILL be found, but these are remapped to the hidden reserved section, which is why most hard disks you buy now do not APPEAR to have bad sectors: they are already mapped into the reserved area. So the rule is, when you DO start seeing bad sectors on your IDE drive, you can be sure that the reserved space is now full and it's time to start looking for a new hard drive.
"Recovering" the space allocated to the reserved section is NOT good at all, since you then bypass the IDE bad sector mapping mechanism, and if the drive is not suitably surface-checked, you can bet your bottom dollar that you will see some bad sectors.
Beware.
Re:Damn. (Score:2, Informative)
utter bull (Score:3, Informative)
Re:Uh, no (Score:5, Informative)
It used to work on the old Seagate drives: you just set the BIOS to the parameters of the 100GB drive instead of letting the BIOS autodetect the 60GB drive, and you had a 100GB drive.
No cigar, but... (Score:5, Informative)
Disks of today have no direct mapping from cylinder, head and sector number to physical location on the platter. Rather, there is an internal table of the mapping, with room for remapping potentially weak sectors to unused space. When the head signal is getting close to being inconclusive, the just-read sector is written to a spare sector, the mapping table is updated, and the old one is marked as bad.
If this article had shown how to manipulate the disk so that a number of the spare sectors could be used for enlarging the disk, it would have been interesting...
This isn't like overclocking your hard drive... (Score:2, Informative)
Remember the "good old days" when hard drive sizes were sub-540MB? We addressed hard drives using C/H/S geometry (Cylinders/Heads/Sectors), and it was common to run scandisk and start seeing bad blocks (sectors) on your hard drive...
When we broke the 540MB 'barrier', we quit using C/H/S mappings and started using LBA mode, Logical Block Addressing. What this effectively did was take control of physical drive access, data storage and retrieval away from the operating system. This was because the OS/BIOS would only recognize a maximum of 1024 cylinders.
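The C/H/S-to-LBA translation is simple arithmetic; here is a sketch using the standard 16-head, 63-sectors-per-track translated geometry, with the 1024-cylinder BIOS limit showing where the ~528MB (504 MiB) barrier comes from:

```python
# Sketch of the classic CHS -> LBA translation. 16 heads x 63 sectors
# is the standard translated geometry, not any specific drive's.

HEADS, SECTORS = 16, 63  # heads per cylinder, sectors per track

def chs_to_lba(c, h, s):
    """Sectors are 1-based in CHS, 0-based in LBA."""
    return (c * HEADS + h) * SECTORS + (s - 1)

# The 1024-cylinder BIOS limit x 16 heads x 63 sectors x 512 bytes
# is the famous capacity barrier:
max_sectors = 1024 * 16 * 63
print(max_sectors * 512 // 2**20, "MiB")  # 504 MiB (~528 million bytes)
```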
Quick facts about hard drives:
1) There are *ALWAYS* defects on the hard drive surface. There is no such thing as a flawless platter.
2) As hard drive sizes have increased, all the innovations have taken place in your head.
Yes, there have been minor changes in platter structure. As rotational speeds increased, sector sizes decreased, and operating temperatures increased, manufacturers had to move away from aluminum platters, as they would shrink/grow too much as the drive reached operating temp. So they moved to glass. -- The surface of the drive has always been coated using the same ionization process.
However, the read/write head is where all the innovations have taken place. Because the bits are getting smaller and smaller, a surface defect that previously would only wipe out a single bit would now wipe out an entire sector. For this reason, drive manufacturers allocate plenty of extra space on the drive to move data away from failing areas (which is happening all the time). This drive maintenance happens independent of the operating system on the PC. It is an operation of the hard drive firmware. IT IS AUTOMATIC.
After drive manufacture, there is an initial low-level format of the drive (platter) where the drive establishes its sector boundaries. This is when it maps out the defective areas of the drive and stores it in the eeprom. As the drive operates and sectors fail, the drive automatically moves the data to a different area of the drive. These areas where the data is moved to are typically adjacent to the defective area. Space allocated to compensate for defects can be as much as 100% of the original drive space.
If the drive didn't maintain itself, then you'd see TONS of surface defects whenever you run scandisk, even on a brand new drive.
Think about it: when is the last time you ran scandisk and had it come back with surface errors? It doesn't happen anymore.
Anyhow... What these guys did was use a utility that creates a quick-and-dirty MBR (Master Boot Record), which likely archives the legitimate MBR within the 8MB partition while it does its business. These bozos have essentially wiped out the MBR (READ: defect map) and formatted the full capacity of the entire disk.
Sure, you can install an OS, even run it, but as the hard drive tries to manage itself... well... I've explained enough here; suffice it to say that you're fsck3d.
This isn't like Intel creating a single chip and labeling it 3 different speeds (the Pentium 75/90/100 comes to mind) so that you can overclock it...
It's a trap! (Score:5, Informative)
Do not try to delete both partitions on the drive so you can create one large partition. This will not work. (This is because they are overlapping, and you won't see 'extra' space if you delete the overlap.)
You have to leave the two partitions separate in order to use them. Windows disk management will have erroneous data (again alluding to the error in reporting space),
in that it will say drive size = manufacturer's stated drive size, and then available size will equal ALL the available space with recovered partitions included.
How smart u are.. (Score:3, Informative)
This is possible, and is regularly used by HDD manufacturers (if you bothered to read the article).
Intel does the same thing with processors: a 3.0GHz processor may be sold as 2.4GHz, simply because it didn't pass qualification at 3.0GHz but did at a lower clock speed.
All hard drives reserve a certain amount of free space to use for reallocation of bad sectors. These "spare sectors" are free space on your drive... completely unused until your hard drive starts finding problems on the physical media.
Re:Uh, no (Score:5, Informative)
Depending on the form factor and the manufacturer, they can stuff 1, 2, 3, 4, 6, or even 15 platters in an enclosure. (That last is for a full-height 5.25" drive; 6 only fit into the 1.7"-high drives.)
Suppose Quantum can fit 10GB on one side of a platter. They will then make a family of drives: 10GB (one platter, one head), 20GB (one platter, two heads), 30GB (two platters, three heads), 40GB (two platters, four heads), and 60GB (three platters, six heads -- Quantum only fits three platters in a 1"-high 3.5" drive). This sequence holds for the Quantum Fireball AS series, by the way.
As you can see, there is half a platter (one side) unused in the 10 and 30GB models. Quantum usually leaves that nice and shiny; IBM usually takes a sharp object and makes a big scratch on the surface....
In either case, it's quite possible that QA on that part of the disk failed and that it would be unwise to use it -- even if you managed to get a head able to read/write it....
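The family arithmetic above can be written down directly; 10GB per surface is the example figure from the parent, not a spec:

```python
# Sketch of the drive-family arithmetic: capacity scales with the
# number of head/surface pairs. 10GB/surface is the example figure.

PER_SURFACE_GB = 10  # assumed capacity of one platter side

def model_capacity(heads):
    """Capacity in GB for a model with this many active heads."""
    return heads * PER_SURFACE_GB

family = {heads: model_capacity(heads) for heads in (1, 2, 3, 4, 6)}
print(family)  # {1: 10, 2: 20, 3: 30, 4: 40, 6: 60}
```

The odd-head models (1 and 3) are the ones with a shiny, unused platter side.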
IBM Thinkpad (r-series) has hidden space (Score:5, Informative)
It sounds fairly small, but on a 20GB drive that's 20%.
Usually there is some kind of backup image there, but it isn't really necessary (especially for us Linux people).
I was thinking first it was just bad DELL again (Score:4, Informative)
But yeah, more than doubling the HD capacity sounds fishy, and there are plenty of letters attached to the Inquirer article explaining how and why it ain't true.
Re:Uh, no (Score:3, Informative)
Re:Uh, you're wrong sorta (Score:5, Informative)
CLV is constant linear velocity, and is what the first-generation CD players used. It meant the data passed under the head at a constant speed, 150 kbytes/second. The further out on the disc, the slower the disc turned, as each turn held more data than close in.
Once the speeds went up, the manufacturers moved to CAV, or constant angular velocity, where the disc spins at a predetermined speed and the data comes in at different rates depending on the head position over the disc. What really happens is there's a table of different CAVs stored in the drive's firmware, keyed by the absolute position on the disc. Close in to the hub the disc spins faster; further out it spins slower. If there are a lot of errors, it will slow down to try to read the data better. On a 48x drive there might be as many as 12 different CAV speeds available to the firmware.
Re:How smart u are.. (Score:5, Informative)
Intel tests a sample from each batch of processors to determine which "bin" it goes into. That sample tested reliably at 2.4GHz? Okay, into the 2.4GHz pile. That sample tested at 2.8? Okay, into the 2.8 pile. The trick about processors running faster than labeled isn't that they're mislabeled; it's that only one processor out of the entire batch gets tested. Many processors within either batch could be capable of 3GHz, simply due to vagaries of production - you can give it a shot and find out, but don't be surprised when it develops unacceptable amounts of heat, like the processor they tested.
HD manufacturers are quite different. When they release a new line of HDs, the models are all based on common technology but span a wide range of sizes -- because the NUMBER OF PLATTERS inside each model is different. Got a platter that can hold 100GB? Stick 1 inside, you've got a 100GB drive; 2 inside, 200GB; 3 inside, 300GB. There's three models (though drives typically contain substantially more platters). Then you stick in 2 heads for each platter (unless it's one of those old wacky Barracuda drives, which had 4 heads per platter), and firmware that is designed to control the hardware inside the sealed case -- but usually even the controller is identical within a line.
One other important thing to remember is that they test the platters BEFORE the HD is fully assembled. This is very different from a processor, where you can't test individual components until the entire thing is built. That said, they certainly design in a certain amount of fudge room so they can remap bad sectors into it; no platter is perfect. I would be very, very surprised if there's more than 10GB of spare space on a 250GB drive...
Re:How? Reliability? (Score:4, Informative)
Company B gets the business of people who are willing to shell out $200 for a 200GB HDD, and the business of people who have a smaller budget.
Company A buys Company B. The new Company AB sells both 150GB and 200GB drives, so they get money from everybody.
Except, of course, that Company AB is in competition with Company C, which makes a real 150GB drive which costs less to produce than company AB's "150GB" drive because it's not really a 200GB drive with modified firmware. Company C sells their 150GB drive for less, and starts driving company AB's margins down; Company C can keep doing this because their costs are lower.
Using unpatched ghost (Score:5, Informative)
Ghost 2003 Build 2003.775 (Be sure not to allow patching of this software)
That's because the patched version fixes A BUG that allowed the "ever expanding miracle".
Re:Uh, no (Score:1, Informative)
Yeah, there is no fdisk on NetBSD/alpha, much to my chagrin.
Re:damn i hope you are kidding (Score:3, Informative)
Re:Uh, no (Score:4, Informative)
You wish. Floppy format programs that could magically get 1 meg from a 720k floppy were all the rage for the Atari ST. You could explain to people that there weren't really 99 tracks on the drive, that the displayed space remaining was bogus, that it just didn't work and might damage the drive by banging the head into the end stop for tracks 83 onwards. It never worked. They would swear that it worked, and swear at you for telling them it didn't. Even asking them to do a test of putting 1 meg on the floppy and checking if it was all really there didn't work.
Re:This isn't like overclocking your hard drive... (Score:5, Informative)
The MBR does not store bad block information. The MBR hasn't stored bad block information since IDE became popular and people stopped being able to low-level format their hard drives. (No, a zero wipe is not a low-level format; it simply gives the firmware a good opportunity to reallocate developed bad sectors.)
The bad block information is stored in areas of the drive that are completely inaccessible to the outside world, most probably near the servo information on the same track as the actual bad sector. It is only accessed by the LBA mapper in the drive firmware.
The drive actually keeps count of how many sectors it has had to reallocate in its life, and how many sectors it is waiting for a good moment to reallocate. You can get this info from most drives by inspecting the SMART values. Bad sectors do not usually develop very often after the drive is shipped; you should not see this value be more than 1 or 2 on a young, properly working hard drive.
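For the curious, the reallocated-sector count is SMART attribute 5. A sketch of pulling it out of `smartctl -A`-style output; the sample text below is fabricated for illustration, and real formatting varies by tool version and drive:

```python
# Sketch: extracting the reallocated-sector count (SMART attribute 5)
# from smartctl-style attribute output. The sample is made up.

sample = """\
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      RAW_VALUE
  5 Reallocated_Sector_Ct   0x0033   100   100   005    Pre-fail  2
  9 Power_On_Hours          0x0032   099   099   000    Old_age   1337
"""

def reallocated_sectors(text):
    """Return the raw value of attribute 5, or None if absent."""
    for line in text.splitlines():
        fields = line.split()
        if fields and fields[0] == "5":
            return int(fields[-1])  # RAW_VALUE column
    return None

print(reallocated_sectors(sample))  # 2 -- low, as expected on a young drive
```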
When the drive detects that a sector is going bad, it does not automatically reallocate it unless it can be correctly read (or ECC-corrected by the drive). This gives recovery software a slim chance of getting lucky and recovering the data from the bad block. The drive simply notes that the sector is going bad. If it is read correctly at some later time, the hard drive will automatically reallocate it somewhere else. Alternatively, if a write is issued to a sector awaiting reallocation, the drive will perform the reallocation then rather than wait for a good read.
Also, manufacturers still use aluminium platters in most drives. The embedded servo information is used to keep the drive tracking correctly regardless of the temperature of the drive (within specified limits).
Since you didn't read the article, nor any of the comments previously written, you are completely wrong about this magical utility. It is simply an exploitation of a bug in Norton Ghost that makes your hard drive look larger than it is by overlapping partitions. Attempt to write data to one partition and you will trash the data on the other.
Nope (Score:5, Informative)
If it works at all, all it really accomplishes is tricking Windows into thinking the partition is bigger than it really is. There's NO WAY it could get any bigger in reality, since drive capacity is based on the number of sectors the drive reports to the computer, and that is a fixed, hard-coded number that can't be changed by Norton Ghost or any other utility. If you try to address sector maxcapacity+1, you'll just get an error message back from the drive; it won't actually do anything.
This is just a case of someone making sh** up in order to appear on the front page of hardware websites... a bit like participating in a 'reality show' on TV.
Not possible at all (Score:5, Informative)
On the subject of the Inquirer article.
The 200JB, or BB, or whatever, is clearly impossible. There is no hidden space on them to recover at all, let alone 310GB! I can't imagine what kind of idiocy provoked someone to believe that was even possible. Western Digital doesn't make drives with more than 3 platters! The 200GB Western Digitals are only available with 80GB platters. They only have 5 heads. It's therefore impossible to recover any capacity from them at all (5*40GB=200GB).
Some of the other drives are known to short stroke their platters. This raises the more serious problem with this idiocy... The problem is that modern drives store important information on those hidden inner areas of their platters (firmware, disk information, reallocated bad sectors), and who knows what you could be overwriting whenever you use that space. Put something down in the wrong place and the drive will never start again, or will corrupt data at certain sectors. It's a lottery ticket every time you write data to that partition. That's not what I call usable capacity.
Also, if this was working properly, the 80GB deskstar would yield:
either 90GB (+10GB) if it was a 180GXP (three heads on 60GB platters)
or 80GB (+0GB) if it was a 7K250 (2 heads on 80GB platters)
Anyone with the most basic knowledge of hard drives should know that most of the numbers up there are simply impossible, not to mention ridiculous.
It's not that there aren't hard drives which are short stroked and sold at a capacity below what is theoretically accessible; it's that something is clearly wrong with this method, in that it is simply inventing space that physically can't be there. Perhaps hard drive manufacturers are short stroking disks to the point that they are formatted with the capacity of drives with fewer platters or heads, but this could never justify the result of this method on the 200GB Western Digital drive. This drive is a known quantity. No matter what, even if they got a disk that was a short-stroked 6-head drive (which would make no sense), the maximum capacity is 240GB (6*40GB), not 510GB. You would need 7 platters to get that capacity with today's technology!
Re:How smart u are.. (Score:5, Informative)
I daresay they've got a statistical model that has them doing enough sampling to maximize profit, and that means minimizing the number of irritated customers calling in about problems.
This is not like highway engineering, where they have to figure in weather, vehicles, and Aunt Tillie before posting a speed sign for a curve, so they lowball it heavily.
Re:Getting 5% more disk space (Score:3, Informative)
Hey, I've done that before. (Score:4, Informative)
The vfat partition stayed the same and the ext2 partition was a non-zero size... woah....
It's just the pesky random file corruption on both partitions you have to worry about...
In all seriousness:
*THIS IS VERY VERY VERY DANGEROUS* DO NOT DO THIS *PERIOD*. It may give neat appearances at first, and both filesystems may appear fundamentally functional, but it will *CORRUPT DATA* once the first partition is populated enough to creep into the partition overlap.
Re:Uh, no (Score:2, Informative)
Some ports (namely i386) ISTR have both disklabel and fdisk, your disklabel goes into one of the fdisk'd partitions set to the correct BSD partition ID. Whereas sparc, alpha, etc. just write their disklabels directly.
port-cobalt also has both under NetBSD.
Re:Uh, no (Score:4, Informative)
DMF. [winimage.com]
ATTEMPT TO CLEAR UP MISCONCEPTIONS RE ATAPI/PARTG. (Score:5, Informative)
The Host Protected Area is space on your hard drive that your BIOS, your operating system, or even your applications can set aside for certain management information. I take it that some backup programs (ab)use it to "hide" compressed boot images on hard drives. I wouldn't be very surprised if companies like Dell or IBM stole some of your hard disk so you can restore a Windows installation. The "Host Protected Area" has nothing at all to do with the drive-internal handling of bad sectors or other drive-internal information. Drive-internal information, as well as sectors used for replacing sectors gone bad, is not accessible through the ATAPI command set for accessing the HPA.
The ANSI T13 standard document for ATAPI-6 (current) is overpriced at $18.00, but you can download a draft of the upcoming ATAPI-7 from the T13 working group's site at http://www.t13.org. There you will find in Section 4.9 of the document: "A reserved area for data storage outside the normal operating system file system is required for several specialized applications. Systems may wish to store configuration data or save memory to the device in a location that the operating system cannot change. The optional Host Protected Area feature set allows a portion of the device to be reserved for such an area when the device is initially configured. A device that implements the Host Protected Area feature set shall implement the following minimum set of commands:"
READ NATIVE MAX ADDRESS
SET MAX ADDRESS ... ...
I take it that READ NATIVE MAX ADDRESS tells you how many sectors of user-addressable space the drive natively has, and SET MAX ADDRESS lets you adjust how many are actually exposed.
The way I see it, there may be a lot of preinstalled hard drives out there with compressed Windows installation images "hidden" in the HPA. Maybe a new version of hdparm will allow Linux users to reclaim that dead space.
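On that reading, the HPA size is just the difference between the two commands' answers; a sketch with hypothetical sector counts, not values from any real drive:

```python
# Sketch: the Host Protected Area is the gap between what
# READ NATIVE MAX ADDRESS and the current max address report.
# Sector counts below are made up for illustration.

SECTOR = 512  # bytes per sector

def hpa_bytes(native_max_sectors, current_max_sectors):
    """Size of the area hidden by SET MAX ADDRESS, in bytes."""
    return (native_max_sectors - current_max_sectors) * SECTOR

# e.g. a vendor hiding a restore image on a ~40GB drive:
native, current = 78_165_360, 72_303_840
print(round(hpa_bytes(native, current) / 2**30, 1))  # 2.8 (GiB hidden)
```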
Re:How smart u are.. (Score:2, Informative)
Okay, I've heard this a lot about processors, and something has always nagged at me about it. How is it that in something I think of as being as precise as chip making, it is not certain how fast a chip will perform? If you make something the same way each time, how do you get variance from one to the next? Sorry for the dumb question, but I would just like to understand this.
Re:Uh, no (Score:3, Informative)
Careful with the word impossible! Years ago, when I was learning electronics, the widely accepted maximum PSTN modem download speed was 9,600bps at 2,400 baud.
56k modems still operate at 2,400 baud to this day, yet achieve so many times more than 9,600bps through tricky new techniques and the removal of old hurdles.
I know what you mean though. Just pushing a head further than it is supposed to go is not always going to work. On something like an Atari, which has pretty consistent hardware, it might never work.
(By the 99 track method, at least.)
And there is your disclaimer! ; )
BTW, I believe I have seen a Panasonic floppy drive advertised which claims to get 32MB from standard 1.44MB floppy disks using an encoding technology different from that typically used with floppies.
Re:How smart u are.. (Score:5, Informative)
1) The processor passes testing under extreme conditions at this speed. This guarantees that the part has a high probability of never being returned as a defect (as silicon is used, it ages due to electromigration, which effectively makes it work slower/stop working eventually). The testing guarantees that the user won't ever see this impact. In this case, a 2.4GHz-binned part may work fine for you at 2.8GHz, but perhaps it will die in 3 years. Or perhaps a single SSE instruction will return the wrong value 1 time in 100,000. Who knows.
2) Parts are binned to meet supply. The company says it will supply 10,000 2.8GHz parts and 100,000 2.4GHz parts. However, of the 110,000 parts, 40,000 ran at 2.8GHz and the rest at 2.4GHz. To keep the price scale (and meet the contract), 30,000 parts which are perfectly good at 2.8GHz will get sold as 2.4.
The downside: There is no way to tell (1) from (2) as a consumer, so overclocking is all a game of craps.
Also remember that the tests are done under 'extreme' conditions, which means that all parts will likely work slightly faster than the bin they were assigned to.
Caveat: When a new frequency/design is released, it may be very difficult to get to the desired frequency, and the testing is relaxed somewhat to meet the quota (in which case very few parts will be overclockable)
Lastly, no testing is done above the top bin, so if 3.2 GHz is the current fastest sold, some percentage of those may run at 3.4 or 3.6, and they won't have been tested that far.
Re:Uh, no (Score:0, Informative)
I would have to agree, this is utter horse shit. No drive manufacturer would release a 500 GB drive as a 250. Hell, if they had the process to make 500 GB hard drives they would be flying this fucker from the highest flagpole to see who saluted.
Re:How smart u are.. (Score:5, Informative)
Manufacturers test every single chip pretty much identically. Different companies differ in how they determine speed of parts (run some patterns at full speed, measure the delay of some known circuits, etc.) but each part is tested. There is too much variation across the wafer to do much else.
It's always possible to run a chip faster than a manufacturer's testing, especially if it is kept cooler than the max spec, the voltage is within tighter tolerance than spec, or the user doesn't care about correct answers. I find the last point is what usually allows the greatest overclocking.
Also, some large manufacturers (Intel, AMD) have marketing needs to sell certain speed grades. So if all parts run at 3.0GHz, but users are demanding the cheaper 2.8GHz parts, then they'll sell some faster parts marked at 2.8GHz. In general, this is a temporary situation since re-pricing to reflect the increased yield will probably move the 3.0GHz price down shortly to increase pressure on the competition.
Sorry... (Score:2, Informative)
I too could use Norton Disk Edit to make the FAT table say lots of other interesting things...
Like I had a 300 gig drive on a 20 gig.
It's called a corrupt FAT table.
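A toy illustration of the trick (all sector numbers below are made up for the example): if two partition table entries cover overlapping LBA ranges, summing their sizes counts the same sectors twice, and the "total" comes out bigger than the disk.

```shell
# Two fake partition entries; the second starts INSIDE the first,
# so the same physical sectors are claimed by both.
P1_START=63;       P1_SECTORS=20000000
P2_START=10000000; P2_SECTORS=20000000
P1_END=$((P1_START + P1_SECTORS - 1))
if [ "$P2_START" -le "$P1_END" ]; then
  echo "overlap: both partitions map onto the same sectors"
fi
# Naively summing the entries (512-byte sectors) gives a bogus total:
echo "claimed total: $(( (P1_SECTORS + P2_SECTORS) * 512 )) bytes"
```

Writing to one partition then silently clobbers the other, which is exactly the corruption people are reporting.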
Re:Uh, no (Score:2, Informative)
There are a few misconceptions about floppy disks, it seems. Let me try to clear some of them up:
Now, the effective capacity depends on the formatting method. For standard PC formatting, you get 720 KB and 1.44 MB, respectively. However, some alternative formats offer more efficient formatting options. For example, my Commodore can get 800 KB and 1.6 MB from the same disks.
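The standard PC figures fall straight out of the low-level geometry (assuming the usual 512-byte sectors):

```shell
# 3.5" DD: 80 tracks/side * 2 sides *  9 sectors/track * 512 bytes/sector
echo $((80 * 2 *  9 * 512))   # 737280 bytes = 720 KB
# 3.5" HD: 80 tracks/side * 2 sides * 18 sectors/track * 512 bytes/sector
echo $((80 * 2 * 18 * 512))   # 1474560 bytes = 1440 KB, sold as "1.44 MB"
```

Alternative formats get more out of the same media by using the raw flux capacity more efficiently, not by magic.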
Re:that looks like a *bad* thing (Score:2, Informative)
It sounds to me like this is simply a case of Ghost screwing up the geometry settings in the partition table, and then of course there is yet another Windoze bug to exploit it - sorry, I mean get hosed by it...
This sounds sort of like something I used to do for automatic installation way back when: use 'dd' to dump the entire contents of "hdN" to some file
# dd if=/dev/hdN of=/tmp/dump
then dump the contents of that file to another HD that is the same size or bigger.
# dd if=/tmp/dump of=/dev/hdN+1
The result is that everything will work just fine, and running fdisk (on Linux) will show an uncorrupted partition table, BUT the geometry (obtained via the BIOS) shows a much bigger drive. DO NOT save the resulting table (w), though: fdisk will rewrite it and then hose everything up! Pretty much just the opposite of this method....
Re:How smart u are.. (Score:5, Informative)
After a wafer is made a robotic tester probes each circuit before the wafer is cut up. If a circuit fails the basic tests, the probe squirts a little dot of red paint on that circuit. The "known defects" get a red dot without even being tested. After this initial probe test, the circuits are cut apart, the ones with red dots are discarded and the rest are mounted on carriers.
It is possible that a slight mask defect or wafer imperfection might cause a performance problem rather than a total functional failure. This could also be caused by slightly out-of-spec doping or wafer heating. These are sorted out by further testing as mentioned by other posters. If all of the circuits on a wafer get the same doping and same heating, then you can sample one or two and assume that the rest of the circuits from that wafer will have similar performance. If you have a mask problem that causes degraded performance, you can automatically flag that die location as a "known slow" or a "known bad" depending on your criteria.
Re:I was thinking first it was just bad DELL again (Score:1, Informative)
The reason they don't always use the whole disk is because they use drives from different manufacturers which may come in slightly different sizes, but they want to have a common image that they can copy to all of them, so they just make it the size of the smallest one.
Simple repartitioning will fix that though.
Tried it, broke it. (Score:5, Informative)
I followed the directions to the letter. I ended up with a 1GB drive! (On a supposedly 540MB drive. In the end, FDISK claimed 965 MB.) I filled up the first partition (with mp3s, naturally.) I then started filling up the second partition...
Surprise, surprise. It crashed halfway through copying the mp3s. Reboot? BZZZT! Windows 98 crashed a quarter of the way through loading. Starting up from a DOS disk, and my directory structure is all frooed up on the C partition. Filenames with random ASCII characters in them, inaccessible directories, all sorts of data corruption goodness. The D partition had correct names, though. (So my second batch of mp3s was probably fine.)
(Or, more specifically, do not try this on a hard drive you want to keep, or with data you want to keep.)
Sometimes they do (Score:5, Informative)
Re:How smart u are.. (Score:5, Informative)
Maybe in the old full-height drives, but most consumer 3.5" drives nowadays only have 1-3 platters (as have most drives I have disassembled....my platter collection is at about 50), 4 in the ultra-top-of-the-line high-capacity drives. Each platter is about 1mm thick, but has space between the rest of the chassis and other platters for the head assemblies (that's 2 assemblies between platters, one for each side). These take up more room, as the arm itself is usually thicker than the platter, and it has to be raised off the platter so that it will not damage it as it swings back and forth rapidly. You also have to add in the case itself and the motor used to spin the platters. There's not much room to cram too many platters inside the case. Remember, the dimensions of a half-height 3.5" drive give only about 1.6" of vertical space total.
You are correct, though, in that lower capacity drives just remove platters and head assemblies from a higher capacity model. Specifically, I took apart two older Seagate drives; one had 1 platter, the other had 2 and was rated at almost double the capacity, but they were otherwise identical. In place of the platters, they just put in spacers on the drive axle.
Tm
ps: on a side note, it's interesting to see how the design of drives has changed over the years, from heads actuated by stepping motors to voice-coil actuators, and from the full-height monsters with 7 platters to single-platter drives with 10x the capacity, yet the platters have stayed the exact same radial size on every 3.5" drive I have taken apart. The only notable physical difference other than color is the thickness. Newer platters are lighter in color and are a LOT thinner.
Re:FDformat did this (Score:4, Informative)
I used to use the program the parent speaks of, and it really did work. The format tool let you adjust the number of tracks and sectors on a floppy, with the 1.72 meg combination working well but anything beyond that not working right. The space gains were quite real; back when my hard drive was a mere 40 megs I used this to offload things and make room. It used a small TSR program (i.e., a memory-resident driver) which had to be loaded, or you would get errors trying to read the disks.
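The 1.72 meg figure is just the result of pushing the geometry harder; the usual FDFORMAT-style recipe is 82 tracks and 21 sectors per track instead of 80 and 18 (standard 512-byte sectors assumed):

```shell
# Overformatted 3.5" HD: 82 tracks/side * 2 sides * 21 sectors/track
#                        * 512 bytes/sector
echo $((82 * 2 * 21 * 512))   # 1763328 bytes = 1722 KB, i.e. "1.72 MB"
```

The extra sectors per track leave almost no inter-sector gap, which is why the TSR driver was needed and why going beyond this got unreliable.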
Re:Uh, no (Score:4, Informative)
How old am I? (Score:3, Informative)
Now that's kickin' it old school.
60MB out of an ST-251, baybee!
Chris Owens
San Carlos, CA
Re:The 'trick' is to create a corrupt partition ta (Score:2, Informative)
The bits actually reside in a contiguous sector file in the root of the primary partition. This file may be 8-100MB. If your disk is too fragmented, Ghost cannot create it.
The real reason for this stunt file is to eliminate the need for a boot floppy to launch Ghost (a PC-DOS 7 program compiled with DJGPP).
Re:Sometimes they do... (Score:3, Informative)
In that case they probably weren't afraid of Cyrix/AMD so much as maybe the Mac - I don't think that Intel had a whole lot of competition on the 486.
Re:Uh, no (Score:2, Informative)
It worked very well. Most warez was distributed on such disks. Even some commercial games (try "Maupiti Island", for example: you can download disk images from http://www.lankhor.net/ if you want to check for yourself).
Processor Speed Binning - location on wafer (Score:3, Informative)
The _design_ is often very precise... (Score:3, Informative)
HOWEVER! The manufacturing process is much more of a crap shoot. You have to grow this perfect layer of silicon in the shape of a disc (usually it's cut from a cylinder), and grind it to be incredibly smooth. It has to be perfect. Then you expose it to one chemical, then light which reacts with it, then you expose it to another chemical to leave behind something where the light hit. And you do this over and over again to deposit layers of different dopants on the chip to build its structure.
Except if the tiniest bit of dust or other particle gets in the way, that whole chip is ruined. And you can't make it in a vacuum, so you have to have filtered air. But even then, you can't filter perfectly, so you have some loss.
And even then, the wafer is not guaranteed to be 100% flat all over to within a nanometer (whereas the chip features themselves are only 130-90nm these days), so there are going to be some chips whose parts are better lined up or formed more evenly than others, overall.
So you make about 200 or so on a wafer, then cut them apart and test them, to see which ones work, and how well they do.
It's the manufacturing that makes the cost-competitive tradeoffs...
Reminds me of an old Compact Flash reader I had. (Score:2, Informative)
Re:I was thinking first it was just bad DELL again (Score:3, Informative)
I might believe this for our single case. But having seen a post of it happening elsewhere, I would tend to believe that their profit margins are good enough for them to occasionally just take a little less profit to keep the customer happy. Especially on something like a rack-mount server (1650), which might indicate to Dell that this customer could potentially buy more server gear and maybe even pallet loads of desktop gear during the next desktop upgrade session.
What else do you think these companies use the company info forms that you fill out for? If you filled out that you are an "ISP" then they might be less inclined to "make you happy" as they would had you filled out "legal firm" with "500-1000 staff". Cha ching! To them, keeping the IT department and purchasing happy is merely an investment for their (Dell) future.
Here is that mailing list post that I promised...
"Thanks, turns out to be a useless question now though, Dell is throwing in second CPUs for free"
PS, I don't know why I thought this happened in another country; I don't seem to be able to see anything in that post to make me believe that. I'm sure I read more than this but can't find it right now. Perhaps it went off-list, I can't recall.
Re:More Amiga quirks (Score:3, Informative)
MFM [wikipedia.org] and GCR [wikipedia.org], respectively.
--