Data Storage Upgrades Hardware

Recovering Secret HD Space 849

An anonymous reader writes "Just browsing hardocp.com and noticed a link to this article. 'The Inquirer has posted a method of getting massive amounts of hard drive space from your current drive. Supposedly by following the steps outlined, they have gotten 150GB from an 80GB EIDE drive, 510GB from a 200GB SATA drive and so on.' Could this be true? I'm not about to try with my hard drive." Needless to say, this might be a time to avoid the bleeding edge. (See Jeff Garzik's warning in the letters page linked from the Register article.)
This discussion has been archived. No new comments can be posted.

  • Uh, no (Score:5, Informative)

    by Sivar ( 316343 ) <charlesnburns[ AT ]gmail DOT com> on Wednesday March 10, 2004 @04:23AM (#8518960)
    Sorry, but this is complete bullshit.
    Did aureal density technology increase to 200GB/platter overnight? No.

    Please refer to this thread [storagereview.net] on StorageReview.com for more information.
  • Simple corruption (Score:5, Informative)

    by gadfium ( 318941 ) on Wednesday March 10, 2004 @04:25AM (#8518967)
    I'm a Ghost developer.

    This is just a method of corrupting your partition table so the same disk sectors appear more than once. If you try this, don't ask Symantec for help afterwards.
  • yeah right. (Score:4, Informative)

    by User 956 ( 568564 ) on Wednesday March 10, 2004 @04:28AM (#8518979) Homepage
    So either the whole thing is a hoax, or, more likely, the OS is looking at a damaged drive (damaged partition table, at least) and seeing the same partition in multiple ways. Try to write on that shiny new partition and you'll be overwriting data on the old one. Guaranteed.

    Some drives are known to short stroke their platters. This raises the more serious problem with this idiocy: modern drives store important information in those hidden inner areas of their platters (firmware, disk information, reallocated bad sectors), and who knows what you could be overwriting whenever you use that space. Put something down in the wrong place and the drive will never start again, or will corrupt data at certain sectors. It's a lottery ticket every time you write data to that partition. That's not what I call usable capacity.

    Don't believe me? Go ahead and try it. You'll lose all those Buffy episodes you've downloaded on KaZaA, and instead you'll have to spank it to the Portman pictures your mom doesn't know you have stashed under your bed.
  • by Canadian1729 ( 760713 ) on Wednesday March 10, 2004 @04:29AM (#8518987)
    Then what kind of disks did you use? I did that to literally hundreds of disks more than 10 years ago, and they still work perfectly today; I've used some in the past week.
  • by Channard ( 693317 ) on Wednesday March 10, 2004 @04:31AM (#8519003) Journal
    'A representative for large hard drive distributor Bell Micro said: "This is NOT undocumented and we have done this in the past to load an image of the original installation of the software. When the client corrupted the O/S we had a boot floppy that opened the unseen partition and copied it to the active or seen partition. It is not a new feature or discovery. We use it ourselves without any qualms."' Which, having worked for a PC sales company, I can confirm is true. And certainly, while earlier models had partitions you could wipe with partition software, later PC builds had this hidden space. But the space was 1GB at most - there's no way there was the kind of 40GB-plus hidden space the article claims.
  • Summary... (Score:5, Informative)

    by nacturation ( 646836 ) <nacturation AT gmail DOT com> on Wednesday March 10, 2004 @04:33AM (#8519010) Journal
    I think this posting in the linked "letters" article sums it up pretty well:

    About the "recover unused space on your drive" article:

    Working for a data-recovery company, I know a thing or two about hard disks....

    One is that if the vendors would be able to double the capacity for just about nothing, they would.

    All this probably does is create an invalid partition table, which ends up having:

    |*** new partition ***|
    |*** old partition ***|

    overlapping partitions. So writing to either partition will corrupt the other. In whatever situations people tried it, it just so happened that the (quick) format of the "new" partition didn't corrupt the other partition enough to make it unbootable.

    And the 200GB -> 510GB "upgrade" has probably ended up with three overlapping partitions....

    Roger
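
    (For illustration, a minimal Python sketch of the overlap check Roger describes; the partition entries below are hypothetical, not taken from the article.)

    # Hypothetical MBR entries as (start_lba, size_in_sectors); real values come from the partition table.
    partitions = {
        "old": (63, 156_250_000),   # roughly an 80GB partition
        "new": (63, 293_000_000),   # the bogus "recovered" partition
    }

    def overlaps(a, b):
        """Two LBA ranges [start, start+size) overlap if neither ends before the other begins."""
        (a_start, a_size), (b_start, b_size) = a, b
        return a_start < b_start + b_size and b_start < a_start + a_size

    if overlaps(partitions["old"], partitions["new"]):
        print("The partitions share sectors: writing to one corrupts the other.")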

  • inq (Score:3, Informative)

    by mr_tommy ( 619972 ) <tgraham@@@gmail...com> on Wednesday March 10, 2004 @04:34AM (#8519016) Journal
    I might note that it is the Inquirer, not the Register. Some editors might take offense ;)
  • by Zurgutt ( 131637 ) <kaarel@hiiuBALDWINmaa.ee minus author> on Wednesday March 10, 2004 @04:41AM (#8519051) Homepage
    In 1994 I bought a box of 720K double-density floppies by TDK. After discovering that making this extra hole could double the disk capacity, I crudely bashed the holes in them with the end of a pair of scissors.

    These floppies were used almost daily for 3 years (no hard disks were available to me at that time). They were reformatted countless times.

    Not a single one of them ever failed. About a year ago, when I failed to reformat and make a boot disk from several freshly bought floppies, I dug up one of them, reformatted it again, and succeeded in making a reliable boot disk.

    The quality of today's media just makes me cry.
  • Re:Uh, no (Score:4, Informative)

    by Sivar ( 316343 ) <charlesnburns[ AT ]gmail DOT com> on Wednesday March 10, 2004 @04:42AM (#8519059)
    Couldn't tell you what "aureal density" means.
    That's probably because I can't type. You may want to read this reference for " areal [storagereview.com]" density, though.
  • Re:Uh, no (Score:2, Informative)

    by No One's Zero ( 714010 ) on Wednesday March 10, 2004 @04:44AM (#8519065)
    I was really hoping for a cool firmware/BIOS hack to turn on unused platters or something... this is utterly disappointing.

    What is truly amazing is that some fool "discovered" this and actually believed he got Ghost to double his HD size.

    This does not in any way increase the physical disk size... this either overlaps partitions (bad thing) or creates a virtual partition inside the main one (stupid thing).

    DON'T DO THIS!!!!! (emphatic, not yelling)
  • Plagiarism... (Score:1, Informative)

    by Anonymous Coward on Wednesday March 10, 2004 @04:50AM (#8519097)
    Taken from the afore-mentioned StorageReview thread:

    Find it here [storagereview.net]

    Mod down!!
  • Re:Uh, no (Score:5, Informative)

    by Sivar ( 316343 ) <charlesnburns[ AT ]gmail DOT com> on Wednesday March 10, 2004 @04:51AM (#8519103)
    If this is real, which is doubtful, it is probably a marketing trick. The drive manufacturers probably make one drive and sell it as 3 different drives in different capacities.

    Actually, this is exactly what they do. The difference, however, is that the lower-end (smaller) drives are identical except that they come with fewer platters. For example, a 160GB hard drive today likely has two 80GB platters, whereas an 80GB drive probably has one (though different combinations of different sizes are of course used, depending on when the hard drive was manufactured and other factors)

    In some cases, a hard drive will be sold with a greater potential capacity than its available capacity. For example, a drive with two 60GB platters may be sold as a 100GB drive, the platters having been "short stroked". This has nothing to do with the absurd technique described in the Inquirer article, and I doubt that it is possible to recover the lost space.
    Hard drives are the highest-precision mechanical devices that most people have in their home--more so than processors, high-end printer heads, or toasters. They are not something that you want to physically modify.

    See the following highly informative and interesting (if you are a geek) posts by a Maxtor engineer:
    Here [storagereview.net]
    here [storagereview.net]
    and here [storagereview.net]
  • Re:Uh, no (Score:4, Informative)

    by Sivar ( 316343 ) <charlesnburns[ AT ]gmail DOT com> on Wednesday March 10, 2004 @04:55AM (#8519118)
    This is true, but there certainly aren't several GB of sectors reserved for errors. :)
  • Re:Uh, no (Score:5, Informative)

    by tap ( 18562 ) on Wednesday March 10, 2004 @04:57AM (#8519133) Homepage
    Couldn't tell you what "aureal density" means.
    "Aureal density" is a misspelling of "areal density". Areal means relating to area. In other words, the bits per square inch of the hard drive platters.
  • by lingqi ( 577227 ) on Wednesday March 10, 2004 @04:59AM (#8519143) Journal
    but in case you are not:

    HDs are sold in GB with a GB "defined" as 1,000,000,000 bytes, which is about 7% less than a real GB (2^30 bytes). After formatting (depending on your FS), an extra few percent goes away for your file table, sector markers, directory structure, etc., so in real GB (in units of 2^30 bytes) it'll be a lot less than 160, or whatever your "bought" size is.

    Don't expect to recover those.

    RAM is sold with truthful advertising: 128MB = 128*2^20 bytes, which is 134,217,728 bytes - despite the 134, it's still 128MB.
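
    (A quick worked example of that arithmetic, in Python; the 160GB figure is just an illustration.)

    advertised = 160 * 10**9       # "160GB" as sold: 160,000,000,000 bytes
    binary_gb = 2**30              # a "real GB" (GiB): 1,073,741,824 bytes
    print(advertised / binary_gb)  # ~149.0, so a "160GB" drive reports about 149 binary GB
    print(1 - 10**9 / 2**30)       # ~0.069, i.e. each decimal GB is ~7% short of a binary GB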
  • Re:Simple corruption (Score:1, Informative)

    by tangent3 ( 449222 ) on Wednesday March 10, 2004 @05:00AM (#8519147)
    This must be one of the rare times where a comment needs to be modded +6.
  • Re:Simple corruption (Score:4, Informative)

    by whereiswaldo ( 459052 ) on Wednesday March 10, 2004 @05:10AM (#8519187) Journal

    One flaw I found in the article is that they say you need two drives, both containing an OS. Later they ask you to swap out one of them for another drive with an OS. That whole section sounds like smoke and mirrors.
    If this extra space really exists, why do you have to "trick" the OS into believing it is there? I was expecting some mention of a low-level format at least, but there's no way this will work. I'll bet they didn't do any data integrity tests, which would no doubt show the flaw in their system right away. Oh well, who needs proof if you're just storing appz and mp3s.
  • Re:Uh, no (Score:3, Informative)

    by edmudama ( 155475 ) on Wednesday March 10, 2004 @05:12AM (#8519193)
    That is incorrect.

    CD-ROMs use a constant data rate by varying the RPM of the drive depending on where the head is located.

    Hard drives have lower data rates at the inner diameter since they spin at the same RPM all the time, so you simply get less linear distance to store data during each revolution.

    All of the sizes shipped to customers already account for this.

    It would be possible to put more bits on the media by changing the speed at which the disk rotates, but those loosened mechanical tolerances would give you a 4.7GB drive instead of a 300GB drive.
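
    (A rough Python sketch of that reasoning; the radii, RPM, and linear bit density are made-up illustrative numbers, and real drives use zoned recording rather than a single density.)

    import math

    RPM = 7200
    BITS_PER_MM = 10_000  # assumed constant linear bit density along the track

    def data_rate_mbit_s(radius_mm):
        # Linear speed under the head = circumference * revolutions per second;
        # at fixed RPM and fixed linear density, the raw data rate scales with radius.
        revs_per_sec = RPM / 60
        return 2 * math.pi * radius_mm * revs_per_sec * BITS_PER_MM / 1e6

    print(data_rate_mbit_s(20))   # inner track: lower rate
    print(data_rate_mbit_s(45))   # outer track: ~2.25x the rate at the same RPM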
  • by Sivar ( 316343 ) <charlesnburns[ AT ]gmail DOT com> on Wednesday March 10, 2004 @05:28AM (#8519252)
    Defining a gigabyte as 1,073,741,824 bytes is no more or less "real" than defining it as 1,000,000,000 bytes. Nothing about the way hard drives work makes it more logical to measure using the binary common use of the prefix over the traditional SI one.

    If anything, Windows and whatever other reporting software used is incorrect, because "Giga" is an SI standard prefix used in science and mathematics meaning "One billion", just like "mega" is "one million" and micro is "one millionth."

    In the old days, "kilobyte" was used when referring to 2^10 (1024) bytes because it was conveniently close to 1000, which is the meaning of the "kilo" prefix. The gap between the binary and decimal values grows ever wider as the prefixes get larger. Go ahead and look at the next two prefixes for which the binary and decimal powers are supposedly "close".

    That said, ultimately common use is what defines the meaning of words, but the common use of a word by no means invalidates the original terminology from which it was derived!
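
    (The widening gap is easy to see numerically; a small Python loop, nothing assumed beyond the definitions above.)

    for prefix, k in [("kilo", 1), ("mega", 2), ("giga", 3), ("tera", 4)]:
        binary = 2 ** (10 * k)    # 1024^k
        decimal = 10 ** (3 * k)   # 1000^k
        print(f"{prefix}: binary is {binary / decimal:.3f}x the decimal value")
    # kilo 1.024x, mega 1.049x, giga 1.074x, tera 1.100x -- the gap keeps growing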
  • by dirgotronix ( 576521 ) on Wednesday March 10, 2004 @05:29AM (#8519256) Homepage
    Actually, RAM is falsely advertised. RAM is measured in mebibytes, while hard drives are measured in megabytes. RAM manufacturers just haven't caught on to the proper terminology.

    1 Mebibyte = 2^20 = 1048576 bytes.
    1 Megabyte = 10^6 = 1000000 bytes.

    The "megabyte" as 2^20 was depreciated /many/ years ago. See http://mathworld.wolfram.com/ and search for megabyte.

    Mega = 1000^2
    Mebi = 1024^2
  • by tap ( 18562 ) on Wednesday March 10, 2004 @05:33AM (#8519270) Homepage
    RLE is a kind of compression. RLL hard drive controllers didn't do any kind of hardware compression. RLL is just a more efficient and more complex way of turning bits into flux reversals on the hard drive platters. See here [storagereview.com] for a good description.

    Back in the day of MFM and RLL controllers, the hard drive controller did much of what the drive electronics and firmware do in modern hard drives, that's why you could have MFM or RLL controllers. Hard drives still use RLL encoding today.

  • Re:Uh, no (Score:4, Informative)

    by thogard ( 43403 ) on Wednesday March 10, 2004 @05:33AM (#8519271) Homepage
    A standard "make one partition full sized" uses only the parts of the drive that aren't reserverd. If there was a way to use the disk size including the bits reserverd to fixup bad sectors, then you could get more space.
    Now if your 1st partition is a full disk - reserved and your second partition is full sized including reserved and the reserved aren't all at the end of the disk, your going to end up with partitions of the ratios they talk about.

    However what happens you start putting windows on this thing? Well block sizes of big drives aren't your friend and most small files will end up in reserve clustors. Since directories are small files too and if they don't conflict, you should be able to load up a few gig of data on one of these disks before you start to find out that its overwriting other bits of the other partiion. I expect one of these 180 gig drives could be loaded up with at least 90gig of data before the directorys started acting funny. One cool bit about this is block related files (like mp3) will show up on the dir just fine but when you play it, it might switch songs in the middle. I don't think the RIAA could ask for a better gift.
  • Re:Uh, no (Score:5, Informative)

    by SenseiLeNoir ( 699164 ) on Wednesday March 10, 2004 @05:45AM (#8519322)
    Firstly, this is just re-sectoring, and it is HIGHLY dangerous, as all it is doing is making some sectors appear twice. Physically it's just one sector.

    Secondly, ALL IDE-type drives (and some SCSI) have some reserved space (possibly 5%) to which bad sectors are intelligently remapped whenever they are found (remember, you are NOT supposed to low-level format an IDE drive). During manufacturing, it is inevitable that bad sectors WILL be found, but these are remapped to the hidden reserved section, which is why most hard disks you buy now do not APPEAR to have bad sectors. The reason is they are already mapped into the reserved area. So the rule is: when you DO start seeing bad sectors on your IDE drive, you can be sure that the reserved space is now full and it's time to start looking for a new hard drive.

    "Recovering" the space allocated to the reserved section is NOT good at all, since you then bypass the IDE bad-sector mapping mechanism, and if the drive is not suitably surface checked, you can bet your bottom dollar that you will see some bad sectors.

    Beware.
  • Re:Damn. (Score:2, Informative)

    by CrystalChronicles ( 706620 ) on Wednesday March 10, 2004 @05:58AM (#8519355)
    Before Ghost was bought out by Symantec it was an NZ company, so there are probably still NZers working for them.
  • utter bull (Score:3, Informative)

    by rev_karol ( 735616 ) on Wednesday March 10, 2004 @06:05AM (#8519375)
    CHECK IT OUT [theinquirer.net] before you rape your hd
  • Re:Uh, no (Score:5, Informative)

    by kuiken ( 115647 ) on Wednesday March 10, 2004 @06:10AM (#8519393) Homepage
    For example, a drive with two 60GB platters may be sold as a 100GB drive, the platters having been "short stroked". This has nothing to do with the absurd technique described in the Inquirer article, and I doubt that it is possible to recover the lost space.

    It used to work on the old Seagate drives: you just set the BIOS to the parameters of the 100GB drive instead of letting the BIOS autodetect the 60GB drive, and you had a 100GB drive.

  • No cigar, but... (Score:5, Informative)

    by thomasj ( 36355 ) on Wednesday March 10, 2004 @06:20AM (#8519426) Homepage
    The way hard disks are made these days, it would be possible to claim an increase in usable space if you could find some way to hack into the firmware.

    Disks today have no direct mapping from head, cylinder, and track number to a physical location on the platter. Rather, there is an internal table of the mapping, with room for remapping potentially weak sectors to unused space. When the head signal is getting close to being inconclusive, the just-read sector is written to a spare sector, the mapping table is updated, and the old sector is marked as bad.

    If this article had shown how to manipulate the disk so that a number of the spare sectors could be used for enlarging the disk, it would have been interesting...

  • by JRHelgeson ( 576325 ) on Wednesday March 10, 2004 @06:21AM (#8519434) Homepage Journal
    I used to do a lot of data recovery... lemme tell you what's happening here.

    Remember the "good old days" when hard drive sizes were sub-540MB? We addressed hard drives using C/H/S (Cylinder/Head/Sector) addressing, and it was common to run scandisk and start seeing bad blocks (sectors) on your hard drive...

    When we broke the 540MB 'barrier' we quit using C/H/S mappings and started using LBA mode, Logical Block Addressing. What this effectively did was take control of physical drive access, data storage and retrieval, away from the operating system. This was because the OS/BIOS would only recognize a maximum of 1024 cylinders.
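
    (For context, the classic CHS-to-LBA mapping that LBA mode replaces is plain arithmetic; a minimal Python sketch using an illustrative 16-head, 63-sector geometry.)

    def chs_to_lba(cylinder, head, sector, heads=16, sectors_per_track=63):
        # Sectors are numbered from 1; cylinders and heads from 0.
        return (cylinder * heads + head) * sectors_per_track + (sector - 1)

    print(chs_to_lba(0, 0, 1))       # first sector of the disk -> LBA 0
    print(chs_to_lba(1023, 15, 63))  # last CHS-addressable sector for this geometry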

    Quick facts about hard drives:
    1) There are *ALWAYS* defects on the hard drive surface. There is no such thing as a flawless platter.
    2) As hard drive sizes have increased, all the innovations have taken place in your head. :)

    Yes, there have been minor changes in platter structure. As rotational speeds increased, sector sizes decreased, and operating temperatures increased, manufacturers had to move away from aluminum platters, which would shrink and grow too much as the drive reached operating temperature, so they moved to glass. The surface of the drive has always been coated using the same exact ionization process.

    However, the read/write head is where all the innovation has taken place. Because the bits are getting smaller and smaller, a surface defect that previously would only wipe out a single bit would now wipe out an entire sector. For this reason, drive manufacturers allocate plenty of extra space on the drive to move data away from failing areas (which is happening all the time). This drive maintenance happens independent of the operating system on the PC. It is an operation of the hard drive firmware. IT IS AUTOMATIC.

    After drive manufacture, there is an initial low-level format of the drive (platter) where the drive establishes its sector boundaries. This is when it maps out the defective areas of the drive and stores the map in the EEPROM. As the drive operates and sectors fail, the drive automatically moves the data to a different area of the drive, typically adjacent to the defective one. Space allocated to compensate for defects can be as much as 100% of the original drive space.

    If the drive didn't maintain itself, you'd see TONS of surface defects whenever you ran scandisk, even on a brand new drive.

    Think about it: when was the last time you ran scandisk and had it come back with surface errors? It doesn't happen anymore.

    Anyhow... What these guys did was use a utility that creates a quick-and-dirty MBR (Master Boot Record), which likely archives the legitimate MBR within the 8MB partition while it does its business. These bozos have essentially wiped out the MBR (READ: defect map) and formatted the full capacity of the entire disk.

    Sure, you can install an OS, even run it, but as the hard drive tries to manage itself... well... I've explained enough here; suffice it to say that you're fsck3d.

    This isn't like Intel creating a single chip and labeling it at 3 different speeds (the Pentium 75/90/100 comes to mind) so that you can overclock it...
  • It's a trap! (Score:5, Informative)

    by glassesmonkey ( 684291 ) on Wednesday March 10, 2004 @06:35AM (#8519465) Homepage Journal
    The only saving grace of this article is that even the most intelligent person would have trouble following the Computurs-Fer-Nascar-Dads style instructions. From the article:

    Do not try to delete both partitions on the drive so you can create one large partition. This will not work. (this is because they are overlapping and you won't see 'extra' space if you delete the overlap)

    You have to leave the two partitions separate in order to use them. Windows disk management will have erroneous data (again alluding to the error in reporting space)

    in that it will say drive size = manus stated drive size and then available size will equal ALL the available space with recovered partitions included. ... It has worked completely fine with no loss before and it has also lost the data on the drive before. (so it obviously WILL 'lost' your data)
  • How smart u are.. (Score:3, Informative)

    by essreenim ( 647659 ) on Wednesday March 10, 2004 @06:36AM (#8519468)
    I can tell your intelligence by your signature.
    This is possible and is regularly used by HDD manufacturers (if you bothered to read the article)
    ..The 120GB hard drive you purchased may have been physically identical to a 250GB hard drive; it simply only passed qualification at 120GB.
    Intel does the same thing with processors. A 3.0Ghz processor may be sold as 2.4Ghz, simply because it didn't pass qualification at 3.0Ghz but did at a lower clock speed.
    All hard drives reserve a certain amount of free space to use for reallocation of bad sectors. These "spare sectors" are free space on your drive... completely unused until your hard drive starts finding problems on the physical media.
  • Re:Uh, no (Score:5, Informative)

    by rew ( 6140 ) <r.e.wolff@BitWizard.nl> on Wednesday March 10, 2004 @06:42AM (#8519485) Homepage
    Working for a data-recovery company I've opened up quite a bunch of drives. So I know what's going on inside.

    Depending on the form factor and the manufacturer, they can stuff 1, 2, 3, 4, 6, or even 15 platters into an enclosure. (That last is for a full-height 5.25" drive; 6 only fit into the 1.7"-high drives.)

    Suppose Quantum can fit 10GB on one side of a platter. They will then make a family of drives: 10GB (one platter, one head), 20GB (one platter, two heads), 30GB (two platters, three heads), 40GB (two platters, four heads), and 60GB (Quantum only fits three platters in a 1"-high 3.5" drive). This sequence holds for the Quantum Fireball AS series, by the way.

    As you can see, there is half a platter (one side) unused in the 10 and 30GB models. Quantum usually leaves that nice and shiny. IBM usually takes a sharp object and makes a big scratch on the surface....

    In either case, it's quite possible that QA on that part of the disk failed, and that it would be unwise to use that part of the disk. Even if you managed to get a head able to read/write it....
  • by Eudial ( 590661 ) on Wednesday March 10, 2004 @06:55AM (#8519510)
    The IBM ThinkPad (R-series at least) has 4GB of hidden disk space that you can enable for ordinary use in the BIOS.
    It sounds like fairly little, but on a 20GB drive that's 20%.

    Usually there is some kind of backup image there, but it isn't really necessary (especially for us Linux people).
  • by SmallFurryCreature ( 593017 ) on Wednesday March 10, 2004 @06:55AM (#8519511) Journal
    You see, I have encountered numerous Dells which have only a portion of their HD partitioned. Not hidden or recovery partitions: just 6GB of an 8GB disk used. Maybe the machine was sold as 6 but 8 was cheaper, or they ran out of parts, but it still meant an easy upgrade. (This was the time of Napster, so everyone needed more HD space.)

    But yeah, more than doubling the HD capacity sounds fishy, and there are plenty of letters to the Inquirer article explaining how and why it ain't true.

  • Re:Uh, no (Score:3, Informative)

    by tap ( 18562 ) on Wednesday March 10, 2004 @07:16AM (#8519580) Homepage
    Seems these misspellings are not that uncommon. Google hits:
    aureal density: 688
    aural density: 16,300
    areal density: 70,700
    The first hit for areal density is about hard drives; for aural, the first hit is about music but the second is about hard drives; and for aureal, the first is about hard drives and the second is about Aureal's sound cards.
  • by nojayuk ( 567177 ) on Wednesday March 10, 2004 @07:17AM (#8519582)

    CLV is constant linear velocity and is what the first-generation CD players used. It meant the data passed under the head at a constant speed, 150 kbytes/second. The further out on the disc, the slower the disc turned, as each turn held more data than one close in.

    Once the speeds went up the manufacturers moved to CAV or constant angular velocity where the disc spins at a predetermined speed and the data comes in at different rates depending on the head position over the disc. What really happens is there's a table of different CAVs stored in the drive's firmware depending on the absolute position on the disc. Close into the hub the disc spins faster, further out it spins slower. If there are a lot of errors it will slow down to try and read the data better. On a 48x drive there might be as many as 12 different CAV speeds available to the firmware.

  • Re:How smart u are.. (Score:5, Informative)

    by Anonymous Coward on Wednesday March 10, 2004 @07:21AM (#8519597)
    Er. Except that's not how it works.

    Intel tests a sample from each batch of processors to determine which "bin" it goes into. That sample tested reliably at 2.4GHz? Okay, into the 2.4GHz pile. That sample tested at 2.8? Okay, into the 2.8 pile. The trick about processors running faster than labeled isn't because they're mislabeling processors, it's that they only test one processor out of the entire batch. Many processors within either batch could be capable of 3GHz, simply due to vagaries of production - you can give it a shot and find out, but don't be surprised when it develops unacceptable amounts of heat like the processor they tested.

    HD manufacturers are quite different. When they release a new line of HDs, they are all based on common technologies, but over a wide range of hard drive sizes - because the NUMBER OF PLATTERS inside each model is different. Got a platter that can hold 100GB? Stick 1 inside, you've got a 100GB drive. 2 inside, 200GB. 3 inside, 300GB. There's three models (though drives typically contain substantially more platters). Now you stick in 2 heads for each platter (unless it's one of those old wacky Barracuda drives, which had 4 heads per platter), and firmware that is designed to control the hardware inside the sealed case - but usually even the controller is identical within a line.

    One other important thing to remember is that they test the platters BEFORE the HD is fully assembled. This is very different from a processor, where you can't test individual components until the entire thing is built. That said, they certainly design in a certain amount of fudge room so they can remap bad sectors into it. No platter is perfect, so they need additional space to remap bad sectors. I would be very, very surprised if there's more than 10GB of available space on a 250GB drive...
  • Re:How? Reliability? (Score:4, Informative)

    by Phroggy ( 441 ) * <slashdot3@ p h roggy.com> on Wednesday March 10, 2004 @08:05AM (#8519768) Homepage
    Company A gets the business of people who are willing to shell out $200 for a 200 GB HDD. Company A does not get the business of people who have a budget of less than $200 for their HDD purchase.

    Company B gets the business of people who are willing to shell out $200 for a 200 GB HDD and the business of people who have a smaller budget.


    Company A buys company B. The new Company AB sells both 150GB and 200GB drives, so they get money from everybody.

    Except, of course, that Company AB is in competition with Company C, which makes a real 150GB drive which costs less to produce than company AB's "150GB" drive because it's not really a 200GB drive with modified firmware. Company C sells their 150GB drive for less, and starts driving company AB's margins down; Company C can keep doing this because their costs are lower.
  • by rackoon ( 632226 ) on Wednesday March 10, 2004 @08:10AM (#8519790)
    Notice how they say an unpatched version of ghost is required:

    Ghost 2003 Build 2003.775 (Be sure not to allow patching of this software)

    That's because the patched version fixes A BUG that allowed the "ever expanding miracle".
  • Re:Uh, no (Score:1, Informative)

    by Anonymous Coward on Wednesday March 10, 2004 @08:13AM (#8519808)
    disklabel

    Yeah, there is no fdisk on NetBSD/alpha, much to my chagrin.
  • by Phroggy ( 441 ) * <slashdot3@ p h roggy.com> on Wednesday March 10, 2004 @08:16AM (#8519818) Homepage
    Somebody has tried to end this confusion by renaming what you call a "real GB" to "GiB", keeping what HD mfgrs call a "GB" to mean 1,000,000,000 bytes. Obviously things aren't any less confusing yet, since most people don't use the new units yet. ;-)
  • Re:Uh, no (Score:4, Informative)

    by AndroidCat ( 229562 ) on Wednesday March 10, 2004 @08:27AM (#8519861) Homepage
    nobody who knows nothing about computers would possibly attempt to do this.

    You wish. Floppy format programs that could magically get 1 meg from a 720k floppy were all the rage for the Atari ST. You could explain to people that there weren't really 99 tracks on the drive, that the displayed space remaining was bogus, that it just didn't work and might damage the drive by banging the head into the end stop for tracks 83 onwards. It never worked. They would swear that it worked, and swear at you for telling them it didn't. Even asking them to do a test of putting 1 meg on the floppy and checking if it was all really there didn't work.

  • by rugger ( 61955 ) on Wednesday March 10, 2004 @08:35AM (#8519894)
    Your information is off. Either you haven't used hard drives for about 15 years, or you are making the whole thing up.

    The MBR does not store the bad-block information. The MBR hasn't stored bad-block information since IDE became popular and people stopped being able to low-level format their hard drives (no, a zero wipe is not a low-level format; it simply gives the firmware a good opportunity to reallocate developed bad sectors).

    The bad-block information is stored in areas of the drive that are completely inaccessible to the outside world, most probably near the servo information on the same track as the actual bad sector. It is only accessed by the LBA mapper in the drive firmware.

    The drive actually keeps count of how many sectors it has had to reallocate in its life, and how many sectors it is waiting for a good moment to reallocate. You can get this info from most drives by inspecting the SMART values. Bad sectors do not usually develop very often after the drive is shipped. You should not see this value go above 1 or 2 in a young, properly working hard drive.
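
    (As an illustration, one common way to read those counters on Linux is smartmontools; a minimal Python sketch assuming smartctl is installed, the drive is /dev/sda, and you have root. Attribute names vary a little between vendors.)

    import subprocess

    # Dump all SMART attributes and keep the reallocation-related ones.
    out = subprocess.run(["smartctl", "-A", "/dev/sda"],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        if "Reallocated_Sector_Ct" in line or "Current_Pending_Sector" in line:
            print(line)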

    When the drive detects that a sector is going bad, it does not automatically reallocate it unless it can be correctly read (or ECC-corrected by the drive). This gives recovery software a slim chance of getting lucky and recovering the data from the bad block. The drive simply notes that the sector is going bad. If it is read correctly at some later point, the hard drive will automatically reallocate it somewhere else. Alternatively, if a write is issued to a sector awaiting reallocation, the drive will perform the reallocation then rather than wait for a good read.

    Also, manufacturers still use aluminium platters in most drives. The embedded servo information is used to keep the drive tracking correctly regardless of the temperature of the drive (within specified limits).

    Since you didn't read the article, nor any of the comments previously written, you are completely wrong about this magical utility. It is simply an exploitation of a bug in Norton Ghost that makes your hard drive look larger than it is by overlapping partitions. Attempt to write data to one partition and you will trash the data on the other.
  • Nope (Score:5, Informative)

    by pcmanjon ( 735165 ) on Wednesday March 10, 2004 @09:17AM (#8520110)
    I can't possibly see how this would work. They're reporting a (more than?) 2x size increase on the largest hard drive they allegedly did this trick on.

    If it works at all, all it really accomplishes is tricking Windows into thinking the partition really is bigger than it is. There's NO WAY it could get any bigger in reality, since drive capacity is based on the number of sectors the drive reports to the computer, and that is a fixed, hard-coded number that can't be changed by Norton Ghost or any other utility. If you try to address sector maxcapacity+1, you'll just get an error message back from the drive; it won't actually do anything.

    This is just a case of someone making sh** up in order to appear on the front page of hardware websites... A bit like participating in a 'reality show' on TV.
  • Comment removed (Score:5, Informative)

    by account_deleted ( 4530225 ) on Wednesday March 10, 2004 @09:20AM (#8520136)
    Comment removed based on user account deletion
  • Not possible at all (Score:5, Informative)

    by pcmanjon ( 735165 ) on Wednesday March 10, 2004 @09:21AM (#8520141)
    You're joking right?

    On the subject of the Inquirer article.

    The 200JB, or BB, or whatever, is clearly impossible. There is no hidden space on them to recover at all, let alone 310GB! I can't imagine what kind of idiocy provoked someone to believe that was even possible. Western Digital doesn't make drives with more than 3 platters! The 200GB Western Digitals are only available with 80GB platters. They only have 5 heads. It's therefore impossible to recover any capacity from them at all (5*40GB=200GB).

    Some of the other drives are known to short stroke their platters. This raises the more serious problem with this idiocy: modern drives store important information in those hidden inner areas of their platters (firmware, disk information, reallocated bad sectors), and who knows what you could be overwriting whenever you use that space. Put something down in the wrong place and the drive will never start again, or will corrupt data at certain sectors. It's a lottery ticket every time you write data to that partition. That's not what I call usable capacity.

    Also, if this was working properly, the 80GB deskstar would yield:

    either 90GB (+10GB) if it was a 180GXP (three heads on 60GB platters)
    or 80GB (+0GB) if it was a 7K250 (2 heads on 80GB platters)

    Anyone with the most basic knowledge of hard drives should know that most of the numbers up there are simply impossible, not to mention simply ridiculous.

    It's not that there aren't hard drives which are short stroked and sold at a capacity below what is theoretically accessible, but something is clearly wrong with this method in that it is simply inventing space that physically can't be there. Perhaps hard drive manufacturers are short stroking disks to the point that they are formatted with the capacity of drives with fewer platters or heads, but this could never justify the result claimed for the 200GB Western Digital drive. This drive is a known quantity. No matter what, even if they got a disk that was a short-stroked 6-head drive (which would make no sense), the maximum capacity is 250GB, not 510GB. You would need 7 platters to get that capacity with today's technology!
  • Re:How smart u are.. (Score:5, Informative)

    by smittyoneeach ( 243267 ) on Wednesday March 10, 2004 @09:26AM (#8520195) Homepage Journal
    I would carry your analysis a step further and note that chip and drive manufacturers don't make money by downgrading their product.
    I daresay they've a statistical model that has them doing enough sampling to maximize profit, and that means minimizing the number of irritated customers calling in about problems.
    This is not like highway engineering, where they have to figure in weather, vehicles, and Aunt Tillie before posting a speed sign for a curve, so they lowball it heavily.
  • by Junta ( 36770 ) on Wednesday March 10, 2004 @09:40AM (#8520299)
    Not quite true. This only affects how much the superuser has reserved. You'll see that df reports the same number in the 'size' column. The Avail column goes up simply because it reports what a normal user can write. System files owned by root could still be created when no space was available for user-owned files. -m is not for file system repair; it is so that no user can make the system unusable for root. Don't set it to zero. Even on your private workstation, if something goes awry and consumes your disk space as a user, your system can still log, still write system tmp files, and do that sort of thing, allowing the user to fix the situation, or a superuser to log in, work with the system, and rectify the situation while still having persistent storage to work with.
  • by Junta ( 36770 ) on Wednesday March 10, 2004 @09:57AM (#8520426)
    Yeah, it's amazing... I changed the partition table without updating the VFAT table and put an ext2 filesystem in the second partition.

    The VFAT partition stayed the same and the ext2 partition was non-zero in size... whoa....

    It's just the pesky random file corruption on both partitions you have to worry about...

    In all seriousness:
    *THIS IS VERY VERY VERY DANGEROUS* DO NOT DO THIS *PERIOD*. It may give neat appearances at first, and both filesystems may appear fundamentally functional, but it will *CORRUPT DATA* once the first partition is populated enough to creep into the partition overlap.
  • Re:Uh, no (Score:2, Informative)

    by bodgit ( 658527 ) on Wednesday March 10, 2004 @10:03AM (#8520474)

    Some ports (namely i386) ISTR have both disklabel and fdisk, your disklabel goes into one of the fdisk'd partitions set to the correct BSD partition ID. Whereas sparc, alpha, etc. just write their disklabels directly.

    port-cobalt also has both under NetBSD.

  • Re:Uh, no (Score:4, Informative)

    by Shanep ( 68243 ) on Wednesday March 10, 2004 @10:14AM (#8520539) Homepage
    Microsoft developed tools to get 1.6 and 1.8MB out of a 1.44MB floppy.

    DMF. [winimage.com]
  • by gd23ka ( 324741 ) on Wednesday March 10, 2004 @10:18AM (#8520571) Homepage
    (Most) ATAPI-4 and later hard drives have a way of dividing drive space into user-addressable space and host-protected space (the Host Protected Area). The "user" in this context is the BIOS of your computer or your operating system, of course.

    The Host Protected Area is space on your hard drive that your BIOS, your operating system, or even your applications can set aside for certain management information. I take it that some backup programs (ab)use it to "hide" compressed boot images on hard drives. I wouldn't be very surprised if companies like Dell or IBM stole some of your hard disk so you can restore a Windows installation. The Host Protected Area has nothing at all to do with the drive-internal handling of bad sectors or other drive-internal data. Drive-internal information, as well as sectors used for replacing sectors gone bad, is not accessible through the ATAPI command set for accessing the HPA.

    The ANSI T13 standard document for ATAPI-6 (current) is overpriced at $18.00, but you can download a draft of the upcoming ATAPI-7 from the T13 working group's site at http://www.t13.org. There you will find, in Section 4.9 of the document: "A reserved area for data storage outside the normal operating system file system is required for several specialized applications. Systems may wish to store configuration data or save memory to the device in a location that the operating system cannot change. The optional Host Protected Area feature set allows a portion of the device to be reserved for such an area when the device is initially configured. A device that implements the Host Protected Area feature set shall implement the following minimum set of commands:"

    READ NATIVE MAX ADDRESS

    SET MAX ADDRESS ... ... I take it that READ NATIVE MAX ADDRESS tells you how many sectors of user addressable space have been configured on the drive and SET MAX ADDRESS lets you adjust that.

    The way I see it, there may be a lot of preinstalled hard drives out there with compressed Windows installation images "hidden" in the HPA. Maybe a new version of hdparm will allow Linux users to reclaim that dead space.
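
    (hdparm did in fact grow an option for this: -N reads, and can set, the visible sector count versus the native maximum. A minimal read-only Python sketch, assuming a Linux box with hdparm and /dev/sda as the drive; changing the value is the risky part and is not shown.)

    import subprocess

    # "hdparm -N" reports "max sectors = visible/native" and whether an HPA is set.
    out = subprocess.run(["hdparm", "-N", "/dev/sda"],
                         capture_output=True, text=True).stdout
    print(out)
    # A visible count smaller than the native count means some space is hidden in the HPA.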

  • Re:How smart u are.. (Score:2, Informative)

    by shawn(at)fsu ( 447153 ) on Wednesday March 10, 2004 @10:29AM (#8520652) Homepage
    Educate me please...
    Okay, I've heard this a lot about processors and something has always nagged at me about it. How is it that in something I think of as being as precise as making chips, it is not certain how fast a chip will perform? If you make something the same way, how is it that you have variance from one to the next? Sorry for the dumb question, but I would just like to understand this.
  • Re:Uh, no (Score:3, Informative)

    by Shanep ( 68243 ) on Wednesday March 10, 2004 @10:29AM (#8520654) Homepage
    Yes, and those tools made it harder to explain why 1 meg was physically impossible.

    Careful with the word impossible! Years ago, when I was learning electronics, the widely accepted maximum PSTN MODEM download speed was 9,600bps at 2,400 baud.

    56k modems still operate at 2,400 baud to this day, yet achieve many times more than 9,600bps through tricky new techniques and the removal of old hurdles.

    I know what you mean though. Just pushing a head further than it is supposed to go is not always going to work. On something like an Atari, which has pretty consistent hardware, it might never work.

    (By the 99 track method, at least.)

    And there is your disclaimer! ; )

    BTW, I believe I have seen a Panasonic floppy drive advertised which claims to get 32MB from standard 1.44MB floppy disks using an encoding technology different from that typically used with floppies.
  • Re:How smart u are.. (Score:5, Informative)

    by Anonymous Coward on Wednesday March 10, 2004 @10:44AM (#8520777)
    I happen to work in the processor industry, and your statement is untrue. Every processor gets tested. The 'bins' are chosen due to 2 factors:
    1) The processor passes testing under extreme conditions at this speed. This guarantees that the part has a high probability of never being returned as a defect (as silicon is used it ages due to electromigration, which effectively makes it work slower or stop working eventually). The testing guarantees that the user won't ever see this impact. In this case, a 2.4GHz-binned part may work fine for you at 2.8GHz, but perhaps it will die in 3 years. Or perhaps a single instruction in SSE will return the wrong value 1 time in 100,000. Who knows.

    2) Parts are binned to meet supply. The company says it will supply 10,000 2.8GHz parts and 100,000 2.4GHz parts. However, of the 110,000 parts, 40,000 ran at 2.8GHz and the rest at 2.4GHz. To keep the price scale (and meet the contract), 30,000 parts which are perfectly good at 2.8GHz will get sold as 2.4.

    The downside: There is no way to tell (1) from (2) as a consumer, so overclocking is all a game of craps.

    Also remember that the tests are done under 'extreme' conditions, which means that all parts will likely work slightly faster than the bin they were assigned to.

    Caveat: When a new frequency/design is released, it may be very difficult to get to the desired frequency, and the testing is relaxed somewhat to meet the quota (in which case very few parts will be overclockable)

    Lastly, no testing is done above the top bin, so if 3.2 GHz is the current fastest sold, some percentage of those may run at 3.4 or 3.6, and they won't have been tested that far.
  • Re:Uh, no (Score:0, Informative)

    by jwhyche ( 6192 ) on Wednesday March 10, 2004 @10:52AM (#8520863) Homepage

    I would have to agree, this is utter horse shit. No drive manufacturer would release a 500GB drive as a 250. Hell, if they had the process to make 500GB hard drives they would be flying this fucker from the highest flag pole to see who saluted.

  • Re:How smart u are.. (Score:5, Informative)

    by kent.dickey ( 685796 ) on Wednesday March 10, 2004 @10:59AM (#8520920)
    The parent post is incorrect in regards to chip testing.

    Manufacturers test every single chip pretty much identically. Different companies differ in how they determine speed of parts (run some patterns at full speed, measure the delay of some known circuits, etc.) but each part is tested. There is too much variation across the wafer to do much else.

    It's always possible to run a chip faster than the manufacturer's rating, especially if it is kept cooler than the max spec, the voltage is held to tighter tolerances than spec, or the user doesn't care about correct answers. I find the last point is what usually allows the greatest overclocking.

    Also, some large manufacturers (Intel, AMD) have marketing needs to sell certain speed grades. So if all parts run at 3.0GHz, but users are demanding the cheaper 2.8GHz parts, then they'll sell some faster parts marked at 2.8GHz. In general, this is a temporary situation since re-pricing to reflect the increased yield will probably move the 3.0GHz price down shortly to increase pressure on the competition.
  • Sorry... (Score:2, Informative)

    by neogeek ( 455804 ) * on Wednesday March 10, 2004 @11:12AM (#8521031)
    but just because the FAT table says that the partition is (x) size does not mean that once you get past the true physical limits of the hard drive, the whole house of cards is not going to come down.
    I too could use Norton Disk Edit to make the FAT table say lots of other interesting things...
    Like that I had a 300 gig drive on a 20 gig.
    It's called a corrupt FAT table.
  • Re:Uh, no (Score:2, Informative)

    by FuzzyBad-Mofo ( 184327 ) <fuzzybad@nOSPAm.gmail.com> on Wednesday March 10, 2004 @11:31AM (#8521202)

    There are a few misconceptions about floppy disks, it seems. Let me try to clear some of them up:

    • 3.5" DD - 1 MB unformatted
    • 3.5" HD - 2 MB unformatted
    • 3.5" ED - 4 MB unformatted

    Now, the effective capacity depends on the formatting method. With standard PC formatting, you get 720 KB and 1.44 MB from DD and HD disks, respectively. However, some alternative formats offer more efficient formatting options. For example, my Commodore can get 800 KB and 1.6 MB from the same disks.
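
    (The formatted figures are just geometry; a quick Python check for the standard PC HD layout of 80 tracks, 2 sides, and 18 sectors of 512 bytes.)

    tracks, sides, sectors, sector_bytes = 80, 2, 18, 512
    formatted = tracks * sides * sectors * sector_bytes
    print(formatted)          # 1,474,560 bytes
    print(formatted // 1024)  # 1440 KB, the familiar "1.44 MB"
    # The "2 MB unformatted" figure counts raw capacity before sector headers,
    # gaps, and other formatting overhead are subtracted.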

  • by penguinbrat ( 711309 ) on Wednesday March 10, 2004 @11:34AM (#8521247)
    "Select the file system type you prefer and format with quick format" This should be your first clue, this only rewrites the fs table (TOC).

    It sounds to me like this is simply a case of Ghost screwing up the geometry settings in the partition table, and then of course there is yet another Windoze bug to exploit it - sorry, I mean get hosed by it...

    This sounds sort of like something I used to do for automatic installation way back when: use 'dd' to dump the entire contents of "hdX" to some file

    # dd if=/dev/hdN of=/tmp/dump

    then dump the contents of that file to another HD that is the same size or bigger.

    # dd if=/tmp/dump of=/dev/hdN+1

    The result is that everything will work just fine, and running fdisk (on Linux) will show an uncorrupted partition table, BUT the geometry (obtained via the BIOS) shows a much bigger drive. DO NOT save the resulting table (w) - fdisk will rewrite it and then hose everything up! Pretty much just the opposite of this method....

  • Re:How smart u are.. (Score:5, Informative)

    by Rick.C ( 626083 ) on Wednesday March 10, 2004 @12:00PM (#8521483)
    I once toured a wafer plant and this is how it was done there. When a die mask is made, there may be small imperfections in it. Say the mask contains a 10x10 grid of supposedly identical circuits, but there were a couple of flaws when the mask was made that messed up the copies at grid locations A1 and A2. Every wafer that gets made with that mask will automatically have the A1 and A2 circuits marked for rejection before testing because they are "known defects".

    After a wafer is made a robotic tester probes each circuit before the wafer is cut up. If a circuit fails the basic tests, the probe squirts a little dot of red paint on that circuit. The "known defects" get a red dot without even being tested. After this initial probe test, the circuits are cut apart, the ones with red dots are discarded and the rest are mounted on carriers.

    It is possible that a slight mask defect or wafer imperfection might cause a performance problem rather than a total functional failure. This could also be caused by a slightly out-of-spec doping or wafer heating. These are sorted out by further testing as mentioned by other posters.

    If all of the circuits on a wafer get the same doping and same heating, then you can sample one or two and assume that the rest of the circuits from that wafer will have similar performance. If you have a mask problem that causes degraded performance, you can automatically flag that die location as a "known slow" or a "known bad" depending on your criteria.

  • by Anonymous Coward on Wednesday March 10, 2004 @12:05PM (#8521534)
    Apple has a page describing this (I don't feel like finding it now though).

    The reason they don't always use the whole disk is because they use drives from different manufacturers which may come in slightly different sizes, but they want to have a common image that they can copy to all of them, so they just make it the size of the smallest one.

    Simple repartitioning will fix that though.
  • Tried it, broke it. (Score:5, Informative)

    by Anonymous Freak ( 16973 ) <anonymousfreak@nOspam.icloud.com> on Wednesday March 10, 2004 @12:16PM (#8521637) Journal
    Okay, I have an old 540MB hard drive lying around, so I decided to try it, just for kicks. (And to silence those who are saying either that those who don't try it are cowards, or that it actually works.)

    I followed the directions to the letter. I ended up with a 1GB drive! (On a supposedly 540MB drive. In the end, FDISK claimed 965 MB.) I filled up the first partition (with mp3s, naturally.) I then started filling up the second partition...

    Surprise, surprise. It crashed halfway through copying the mp3s. Reboot? BZZZT! Windows 98 crashed a quarter of the way through loading. Starting up from a DOS disk, my directory structure was all frooed up on the C partition: filenames with random ASCII characters in them, inaccessible directories, all sorts of data corruption goodness. The D partition had correct names, though. (So my second batch of mp3s was probably fine.)

    ** DO ** *** NOT *** ** TRY ** ** THIS ** !!!!!!!


    (Or, more specifically, do not try this on a hard drive you want to keep, or with data you want to keep.)
  • Sometimes they do (Score:5, Informative)

    by macdaddy ( 38372 ) on Wednesday March 10, 2004 @12:26PM (#8521758) Homepage Journal
    ATI is a perfect example, I think. Y'all remember the various mods to convert their otherwise identical top-of-the-line video card into their top-of-the-line 3D-rendering graphics pro card? Sometimes the designs are basically identical for good reason; cost savings comes to mind. They simply use software and/or a few well-placed jumpers to differentiate between the two.
  • Re:How smart u are.. (Score:5, Informative)

    by Tmack ( 593755 ) on Wednesday March 10, 2004 @12:32PM (#8521809) Homepage Journal
    though drives typically contain substantially more platters

    Maybe in the old full-height drives, but most consumer 3.5" drives nowadays only have 1-3 platters (as have most drives I have disassembled... my platter collection is at about 50), or 4 in the ultra-top-of-the-line high-capacity drives. Each platter is about 1mm thick, but has space between the rest of the chassis and the other platters for the head assemblies (that's 2 assemblies between platters, one for each side). These take up more room, as the arm's design itself is usually thicker than the platter, and it has to be raised off the platter so that it will not damage it as it swings back and forth rapidly. You also have to add in the case itself and the motor used to spin the platters. There's not much room to cram many more platters inside the case. Remember, the dimensions of a half-height 3.5" drive give only about 1.6" of vertical space total.

    You are correct, though, in that lower-capacity drives just remove platters and head assemblies from a higher-capacity model. Specifically, I took apart two older Seagate drives; one had 1 platter, the other had 2 and was rated at almost double the capacity, but they were otherwise identical. In place of the platters, they just put spacers on the drive axle.

    Tm

    ps: on a side note, it's interesting to see how the design of drives has changed over the years, from heads actuated by stepping motors to voice-coil actuators, and from the full-height monsters with 7 platters to single-platter drives with 10x the capacity, yet the platters have stayed the exact same radial size on every 3.5" drive I have taken apart. The only notable physical difference other than color is the thickness: newer platters are lighter in color and a lot thinner.

  • Re:FDformat did this (Score:4, Informative)

    by Experiment 626 ( 698257 ) on Wednesday March 10, 2004 @12:39PM (#8521884)

    I used to use the program the parent speaks of, and it really did work. The format tool let you adjust the number of tracks and sectors on a floppy, with the 1.72 meg combination working well but anything beyond that not working right. The space gains were quite real; back when my hard drive was a mere 40 megs, I used this to offload things and make room. It used a small TSR program (i.e., a memory-resident driver) which had to be loaded, or you would get errors trying to read the disks.

  • Re:Uh, no (Score:4, Informative)

    by rew ( 6140 ) <r.e.wolff@BitWizard.nl> on Wednesday March 10, 2004 @01:12PM (#8522177) Homepage
    No, the head would be missing.
  • How old am I? (Score:3, Informative)

    by scalveg ( 35414 ) on Wednesday March 10, 2004 @01:22PM (#8522300) Homepage
    I wonder how many slashdotters (including me) hooked their MFM hard drive up to an RLL controller to get that extra 50% out of it?

    Now that's kickin' it old school.

    60MB out of an ST-251, baybee!

    Chris Owens
    San Carlos, CA
  • by gurudyne ( 126096 ) on Wednesday March 10, 2004 @01:35PM (#8522516)
    As someone who QA'ed Ghost 2003 for Symantec, I agree with you. The VPSGHBOOT stands for Virtual Partition Symantec Ghost Boot. Notice the word Virtual.

    The bits actually reside in a contiguous sector file in the root of the primary partition. This file may be 8-100MB. If your disk is too fragmented, Ghost cannot create it.

    The real reason for this stunt file is to eliminate the need for a boot floppy to launch Ghost (a PC-DOS 7 program compiled with DJGPP)
  • by Rich0 ( 548339 ) on Wednesday March 10, 2004 @01:56PM (#8522767) Homepage
    I believe they did something like this with the 486s as well. Their manufacturing process started getting so good that they weren't turning out enough 20 and 25MHz chips - just lots of 33s and up. However, people weren't willing to pay for the higher-end processors. So Intel segmented the market by selling the same product at two different prices - albeit rebranded in the one case.

    In that case they probably weren't afraid of Cyrix/AMD so much as maybe the Mac - I don't think that Intel had a whole lot of competition on the 486.
  • Re:Uh, no (Score:2, Informative)

    by elendril ( 15418 ) on Wednesday March 10, 2004 @02:44PM (#8523366) Homepage
    Most software used on the Atari ST to increase the disk space simply increased the number of sectors per track from 9 (720KB) to 10 (800KB) or even 11 (and sometimes marginally increased the number of tracks).
    It worked very well. Most warez was distributed on such disks. Even some commercial games (try "Maupiti Island", for example: you can download disk images from http://www.lankhor.net/ if you want to check for yourself).
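
    (The capacity gain follows directly from the geometry; a quick Python check assuming 80 tracks, 2 sides, and 512-byte sectors.)

    def st_capacity_kb(sectors_per_track, tracks=80, sides=2, sector_bytes=512):
        return tracks * sides * sectors_per_track * sector_bytes // 1024

    for spt in (9, 10, 11):
        print(spt, "sectors/track ->", st_capacity_kb(spt), "KB")
    # 9 -> 720 KB, 10 -> 800 KB, 11 -> 880 KB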
  • by adisakp ( 705706 ) on Wednesday March 10, 2004 @03:48PM (#8524059) Journal
    Once a fab process has had the kinks worked out, the chips undergo much less thorough speed binning. Intel often uses dies near the center of the wafer (where focus is more exact) for higher speeds and dies nearer the edge of the wafer for lower speeds. It's a lot simpler than testing every processor at every speed.
  • by Ayanami Rei ( 621112 ) * <rayanami&gmail,com> on Wednesday March 10, 2004 @05:15PM (#8525101) Journal
    the entire chip is "scale-free" which means it is designed to work at a variety of speeds and tolerances.

    HOWEVER! The manufacturing process is much more of a crapshoot. You have to grow this perfect layer of silicon in the shape of a disc (usually it's cut from a cylinder) and grind it to be incredibly smooth. It has to be perfect. Then you expose it to one chemical, then to light which reacts with it, then to another chemical to leave behind something where the light hit. And you do this over and over again to deposit layers of different dopants on the chip to build its structure.
    Except if the tiniest bit of dust or the smallest particle gets in the way, that whole chip is ruined. And you can't make it in a vacuum, so you have to have filtered air. But even then, you can't filter perfectly, so you have some loss.
    And even then, the wafer is not guaranteed to be 100% flat all over to within a nanometer (whereas the chip components themselves are only 130-90nm these days), so there are going to be some chips whose parts are better lined up or formed more evenly than others, overall.
    So you make about 200 or so on a wafer, then cut them apart and test them, to see which ones work, and how well they do.

    It's the manufacturing that makes the cost-competitive tradeoffs...
  • by KingRobot ( 703860 ) on Wednesday March 10, 2004 @09:18PM (#8527573) Homepage
    I once had a compact flash card reader that, for whatever reason, couldn't properly access the partition table of the CF cards. It was the greatest thing, though!! Those crazy CF card companies were hiding gigabytes of space from me. Here were my results: 32 MB -> 60 GB, 64 MB -> 40 GB, 128 MB -> 90 GB. And best of all, I have one very special CF card: 256 MB -> 1.2 TB. Yes, I actually had this happen; right there in the Logical Disk Manager under Windows XP, the disk showed up as 1.2 terabytes. It was great hearing SimpleTech's support guy: "You what!!?? A 1.2 terabyte CF card?" He said I should hang on to it... I did. Later on, I got a Zaurus, and just for kicks popped in the CF card. A few commands later, I had rebuilt the partition table and was back in business. Bottom line: busted partition tables != extra space.
  • by Shanep ( 68243 ) on Thursday March 11, 2004 @12:47AM (#8529035) Homepage
    I would feel pretty confident that it's an error in some part of their distribution/quality control... maybe something as simple as a barcode incorrectly put into the database or something of that sort.

    I might believe this for our single case. But having seen a post about it happening elsewhere, I would tend to believe that their profit margins are good enough for them to occasionally just take a little less profit to keep the customer happy. Especially on something like a rack-mount server (1650), which might indicate to Dell that this customer could potentially buy more server gear and maybe even pallet loads of desktop gear during the next desktop upgrade session.

    What else do you think these companies use the company info forms that you fill out for? If you filled out that you are an "ISP", then they might be less inclined to "make you happy" than if you had filled out "legal firm" with "500-1000 staff". Cha-ching! To them, keeping the IT department and purchasing happy is merely an investment in their (Dell's) future.

    Here is that mailing list post that I promised...

    "Thanks, turns out to be a usless question now though, Dell is throwing in second CPU's for free :) No doubt cleaning out stock for the new 3Ghz chips." [theaimsgroup.com]

    PS, I don't know why I thought this happened in another country; I don't seem to be able to see anything in that post to make me believe that. I'm sure I read more than this but can't find it right now. Perhaps it went off-list, I can't recall.
  • Re:More Amiga quirks (Score:3, Informative)

    by cowbutt ( 21077 ) on Thursday March 11, 2004 @04:50AM (#8530037) Journal
    In other words, while the standard - and supported - mode used 200% the space to store data, the 5-bit mode used 125% space.

    MFM [wikipedia.org] and GCR [wikipedia.org], respectively.
