
Recovering Secret HD Space (849 comments)

An anonymous reader writes "Just browsing and noticed a link to this article. 'The Inquirer has posted a method of getting massive amounts of hard drive space from your current drive. Supposedly by following the steps outlined, they have gotten 150GB from an 80GB EIDE drive, 510GB from a 200GB SATA drive and so on.' Could this be true? I'm not about to try with my hard drive." Needless to say, this might be a time to avoid the bleeding edge. (See Jeff Garzik's warning in the letters page linked from the Register article.)
This discussion has been archived. No new comments can be posted.

  • I call (Score:5, Insightful)

    by ANY5546 ( 454547 ) on Wednesday March 10, 2004 @04:25AM (#8518966) Homepage

    No way in heck can you increase the amount of storage an HDD has so drastically. I mean, the physical disks can only hold so much, and no matter what you do, they aren't going to magically double or triple.

    These are physical disks, they have a set number of sectors. One size and one size only.

    Unless you get into the whole megabyte vs. mebibyte thing, but that's a whole other can of worms!
  • by atlasheavy ( 169115 ) on Wednesday March 10, 2004 @04:26AM (#8518969) Homepage
    I have to agree with all of the naysayers on this. As much as I'd love to double my hard disk space for free, there's no such thing as a free lunch. This looks like a really terrific way to hose all of the data on your hard drive. You're really better off just shopping around for a reasonably priced 100GB hard drive or something instead.
  • by altamira ( 639298 ) on Wednesday March 10, 2004 @04:28AM (#8518981) Journal
    In other news, witnesses reported UFO sightings all over the country...
  • Disk is cheap. (Score:5, Insightful)

    by djh101010 ( 656795 ) on Wednesday March 10, 2004 @04:29AM (#8518986) Homepage Journal
    My data is way more important than squeezing a bit extra out of an 80 dollar drive. Interesting idea and all that, but this isn't like the old days of punching a new hole to make your 5-1/4 inch floppy double-sided, where if you screw up, you lose only a floppy's worth of data - with this, if you screw up, you lose an entire _hard drive's_ worth of data.

    If I need more space, I'll buy a bigger drive, they keep getting cheaper and faster and bigger all the time anyway.
  • Reminds me of the old trick in which you could turn a single-sided diskette into a double-sided one by punching a hole through one corner.

    Slight problem: the diskette usually failed a few weeks later.

    The trick with this hard disk "expansion" is to reclaim space that has been reserved for error correction, or which failed quality control.

    It's a lot like over-clocking a CPU, with a big difference: when it fails, you can't just reboot, you lose all your data. Personally, with HD prices so cheap, it hardly seems worthwhile.
  • by Anonymous Coward on Wednesday March 10, 2004 @04:46AM (#8519072)
    The problem is that drive manufacturers insist on claiming HDD sizes with a gigabyte meaning a thousand million bytes. It isn't meant to mean that, it's meant to mean 1024*1024*1024.

    So you lose a few percent of capacity there.

    There's also always some overhead of index tables and so forth for your filesystem, but you can't really complain about that - you kinda need it.
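The decimal-vs-binary shortfall the parent describes is pure arithmetic; here is a quick sketch in Python (the 200 GB drive is just an example figure):

```python
# A drive advertised as "200 GB", counted two ways.
ADVERTISED_GB = 200
decimal_bytes = ADVERTISED_GB * 1000**3      # what the label means
binary_gib = decimal_bytes / 1024**3         # what most OSes report

print(f"{ADVERTISED_GB} GB on the label = {binary_gib:.2f} GiB in the OS")

# The "missing" space is just unit conversion, a bit under 7%:
shortfall_pct = (1 - 1000**3 / 1024**3) * 100
print(f"apparent shortfall: {shortfall_pct:.1f}%")
```

So a few gigabytes of any gap is bookkeeping, nowhere near the doubling the article claims.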
  • by Ulven ( 679148 ) on Wednesday March 10, 2004 @05:01AM (#8519149)
    You can overclock your processor, RAM and video card. Assuming you do it properly and they still work otherwise, isn't that close enough to a free lunch?

    Almost makes sense to be able to do the same thing to your hard drive.

    Especially if they are all manufactured to the same specs, and then get rated during testing.

    There aren't that many separate capacity levels: 80 GB, 120 GB, 160 GB, etc. Your 80 GB drive might well have managed 110 GB, not passed at 120 GB, and so been rated at the lower capacity.

    Or something like that.

    I'm not saying that the method in the article is the way to go about it, merely that the general idea may have merit.
  • by sjwt ( 161428 ) on Wednesday March 10, 2004 @05:09AM (#8519184)
    Yeah, I mean, this would be as stupid as selling a 486 DX as an SX...

  • by 0x0d0a ( 568518 ) on Wednesday March 10, 2004 @05:19AM (#8519216) Journal
    I'm still confused.

    Jeff Garzik, the Linux SATA guy (I thought Garzik was the Linux Ethernet guy after the Garzik/Becker fallout, but whatever), wrote in to say that this was host-protected space. He implied that this might be used when bad blocks crop up.

    I'm very dubious about this. It doesn't make much sense technically.

    Someone said that some OEMs reserve space for storing an OS image. Yeeesss...that could be right. However, we are talking upwards of ten gigs. I don't buy that they're setting aside a third of the hard drive for the OEM.

    I would *damn* well not be monkeying around with my drive until some other people test this out and (potentially) destroy their drives. I'm not currently sure how nasty this is, but if Garzik is right on almost any of his guesses, you have the potential to physically destroy your drive.

    Here's one more possibility (a positive one). Garzik pointed out that factory cert time is when drive sizes are calculated. It's possible that, since drives are sold at particular sizes (120 GB, etc.), if a hard drive can store 170 GB - not enough to reach the next capacity point (180 GB) - the manufacturer just doesn't use the space past a certain point, to obtain a uniform line of drives. In that case, "unlocking" this space is equivalent to overclocking processors. A reason to support this guess is that the reported gains are uniformly large, but not large enough to push drives into the next storage bin.

    A couple of points I'd worry about: clearly, the manufacturer did not intend you to be using this space. As such, they may allow space to pass cert and sit in a protected partition...but presumably they're going to put the least-reliable area of the disk (innermost or outermost tracks) in this partition.

    This may become a valid technique (if unreliable), but I'm not sure if I'd do it. I'm pretty uncomfortable with the reliability of certified, used-as-manufacturer-considered-safe consumer IDE hard drives already (1 yr warranty, numerous nasty batches in the last few years, etc). If you OC your processor...big deal, you're out a processor and maybe a motherboard. If you lose your hard drive, you lose a lot of data...and hard drives are awfully cheap these days.

    There are no guarantees that the drive firmware is going to not have subtle bugs relating to mucking around in a partition that's supposed to be hidden.

    It may be that error-correction space is not allocated for this partition.

    It may be that other metadata the drive normally keeps about in-use space (I dunno, SMART-related data or something) simply doesn't exist for the hidden area.

    Finally, there's no guarantee that if this works properly for one drive, that it will work properly for other drives. Heck, what if there's a mechanical or firmware revision within a single model (as Creative Labs likes to do with their soundcard products), and things work properly with one drive and not with another?

    Doesn't mean that this might not be useful for someone...just that if I have to cut corners to save money somewhere, I think I'd rather do it on a lot of things other than hard drive reliability. Keep in mind also that if I'm right about the bin size, you're saving less than one bin size -- probably less than $20.

    Finally, cheap drives fail a lot these days. If your drive starts the click of death within a year or three years or whatever your manufacturer warranty is, they may refuse to send you a new drive if you've been mucking around with low-level stuff on the drive.
  • by 0x0d0a ( 568518 ) on Wednesday March 10, 2004 @05:31AM (#8519262) Journal
    Just to be a bastard, I gotta point out that this could probably be considered a Ghost bug. While there might not be anything Symantec could *do* to help someone that's mucked up their drive, I could reasonably see them complaining to Symantec about it.
  • by karstux ( 681641 ) on Wednesday March 10, 2004 @06:00AM (#8519363) Homepage
    Symantec seems to have the same opinion - note how they say in the article to use a very specific version of Ghost? Obviously, the bug has been patched.
  • Re:I call (Score:3, Insightful)

    by Magic5Ball ( 188725 ) on Wednesday March 10, 2004 @06:05AM (#8519373)
    These are physical disks, they have a set number of sectors. One size and one size only.

    Indeed. However, it is quite easy to write incorrect information to file allocation tables and such (for example, to over-report the number of free sectors, or the cluster size, etc) which software trusts as being correct. This happens with some frequency with corrupt floppy disks, which can report hundreds of megabytes of data or free space (or both!) if the FAT is corrupted in the right way.

    Editing a FAT12/FAT16 table as above using the DOS debug tool, Norton Disk Editor, or another utility is left as an exercise for the reader.
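The point about software trusting the table can be sketched concretely: a FAT volume's reported capacity comes straight from two fields in the boot sector, so rewriting a couple of bytes "grows" the disk without touching the medium. This toy uses the classic DOS BPB offsets and models a 1.44MB floppy:

```python
import struct

# Build a minimal fake FAT boot sector (classic DOS BPB offsets):
#   offset 11, 2 bytes: bytes per sector
#   offset 19, 2 bytes: total sector count (16-bit field)
bpb = bytearray(512)
struct.pack_into("<H", bpb, 11, 512)     # 512 bytes per sector
struct.pack_into("<H", bpb, 19, 2880)    # 2880 sectors = 1.44MB floppy

def reported_bytes(sector):
    bps, = struct.unpack_from("<H", sector, 11)
    total, = struct.unpack_from("<H", sector, 19)
    return bps * total

print(reported_bytes(bpb))               # 1474560

# "Growing" the disk is one write away - the medium doesn't change:
struct.pack_into("<H", bpb, 19, 65535)
print(reported_bytes(bpb))               # now claims ~32MB that isn't there
```

Software that believes the table will happily report space that never existed.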
  • Re:Summary... (Score:5, Insightful)

    by MyFourthAccount ( 719363 ) on Wednesday March 10, 2004 @06:05AM (#8519374)
    Yes, that post sums it up pretty much, other than that 'probably' should be replaced with 'absolutely'.

    Basically this idiot has found an incredibly cumbersome way to screw up his partition table. (see below for more details)

    Then of course this gets posted and linked to all over the planet for everyone to try for themselves. Who are these fucking idiots that post this kind of stuff? They should get 'gullible' tattooed on their foreheads.

    Hint: nowhere in the article is it said that they actually tried to use all the space and verify all data remained intact. Wouldn't that be the first thing you'd do before posting something like this online?

    Anyway, I've written several IDE drivers (and worked on the IDE core for BIOSes), and I can tell you that there is NO way you can increase the size of a 200GB drive to 510GB, especially not with the tools described (Ghost).

    Look at the 80GB example: they got 150GB? That's interesting, because that would mean that the drive all of a sudden became a 48-bit LBA drive. Older drives are limited to 137.4GB in size and to get 150GB capacity you need 48-bit LBA. I don't think Ghost is going to reflash the firmware of the drive to add support for that (yes, that's meant to sound sarcastic).

    Ghost works at the partition level. A drive reports its size in sectors, which is basically a lower (closer to the hardware) level.

    All they do is move partitions around. But the drive will keep reporting the same number of sectors. Where do the extra sectors come from?

    Why don't these people run an IDE identify program on those hard drives? They'd see that the drive still reports the original number of sectors - exactly the same number of sectors you can get to through /dev/hda.

    It's true that some OSes don't create the most ideal partitions, so you lose _some_ sectors, but nothing of the order of magnitude described.

    Initially I thought maybe they were using the extra error-detection/recovery bytes that each sector has (which would be a very stupid idea), but that would never give you that much of an increase.

    Or that they were removing some factory/OEM predefined partition, which is basically the only relatively safe thing you can do to reclaim some disk space. Again, not the same order of magnitude, plus you'd never go over the size that the disk is sold as.
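The identify check suggested above is easy to sketch. Assuming 512-byte sectors and the standard ATA IDENTIFY word layout (words 60-61 hold the 28-bit LBA sector count, words 100-103 the 48-bit one), the 137.4GB ceiling falls straight out of the arithmetic; the sector count below is a typical figure for an 80 GB drive, used here as a stand-in:

```python
SECTOR = 512

# The 28-bit LBA ceiling: 2^28 sectors of 512 bytes.
lba28_limit = (2**28) * SECTOR
print(f"LBA28 ceiling: {lba28_limit / 1000**3:.1f} GB")       # 137.4 GB

def sectors_from_identify(words):
    """words: the 256 little-endian 16-bit words of an IDENTIFY block."""
    lba28 = words[60] | (words[61] << 16)
    lba48 = (words[100] | (words[101] << 16) |
             (words[102] << 32) | (words[103] << 48))
    return lba48 if lba48 else lba28

# Fake IDENTIFY data for an "80 GB" drive (156301488 sectors):
words = [0] * 256
words[60] = 156301488 & 0xFFFF
words[61] = 156301488 >> 16
print(sectors_from_identify(words) * SECTOR / 1000**3)        # ~80.0 GB
```

No amount of partition shuffling changes what the drive itself reports here.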
  • Re:Uh, no (Score:3, Insightful)

    by WorkEmail ( 707052 ) on Wednesday March 10, 2004 @06:06AM (#8519379)
    You know what they say: if it sounds too good to be true, it most often is. :)
  • by Lord Kano ( 13027 ) on Wednesday March 10, 2004 @07:06AM (#8519538) Homepage Journal
    No sane company is going to sell a 150 GB drive as an 80 GB because they pay as much to manufacture platters and heads no matter how they're used. The cost of the unused parts would come right out of their profits.

    You'd be correct if there was just one HDD maker in the marketplace, but that isn't so.

    First off, let me say that I think this whole issue is bunk. But let's pretend for a moment.

    Company A and Company B are both in the business of making and selling HDDs. Company A makes only 200 GB HDDs, which cost about $100 each to manufacture and sell for $200. Company B makes the same 200 GB HDD at the same cost and price, but also takes some of those drives and modifies the firmware so that only 150 GB are usable. They sell these "150 GB" HDDs for $150.

    Company A gets the business of people who are willing to shell out $200 for a 200 GB HDD. Company A does not get the business of people who have a budget of less than $200 for their HDD purchase.

    Company B gets the business of people who are willing to shell out $200 for a 200 GB HDD and the business of people who have a smaller budget.

    By crippling the drive, they protect the value of their "high end" product while at the same time making some money on the "mid range" as well.

    Company A's profits can be calculated as profit = (X1 x P1), where X = the number of units sold, P = the profit margin on the unit, and the subscript = the model of the HDD.

    Company B's profits can be calculated as profit = (X1 x P1) + (X2 x P2).
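The parent's formulas in toy Python form (unit counts and margins entirely hypothetical):

```python
# Hypothetical costs/prices from the parent post: $100 to build, sold
# at $200 (full 200 GB) or $150 (firmware-capped "150 GB").
cost, price_full, price_capped = 100, 200, 150

def profit(lines):
    """lines: list of (units_sold, margin_per_unit) per product line."""
    return sum(units * margin for units, margin in lines)

company_a = profit([(1000, price_full - cost)])
company_b = profit([(1000, price_full - cost), (800, price_capped - cost)])

print(company_a)   # 100000
print(company_b)   # 140000 - the capped line is pure extra margin
```

Same silicon (well, same platters), two price points, more total profit: the standard market-segmentation argument.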

    This same business principle is part of the reason why some 2.4 GHz processors will run at 3 GHz when overclocked.

    I have no doubt that there could be a fair bit of space on a drive that is unavailable to the user, but double or triple capacity? Of course not!

  • by Anonymous Coward on Wednesday March 10, 2004 @07:07AM (#8519546)
    I am reading through all you /.ers' posts here, but has not one of you actually tried it???? How can you draw conclusions before experiments...
    Of course, I've not tried it yet. I am waiting for your results..
  • by Erik Hensema ( 12898 ) on Wednesday March 10, 2004 @08:47AM (#8519938) Homepage
    You should see an 8 meg partition labeled VPSGHBOOT or similar on the slave HDD (hard drive T) along with a large section of unallocated space that did not show before. DO NOT DELETE VPSGHBOOT yet.

    What probably happens here is: Ghost creates a special file, or at least writes to an empty part of your filesystem. Then it writes a complete mini-OS to this 8 MB region.

    It backs up the original MBR (which is the boot sector; it also holds the partition table) and writes its own MBR. This MBR has a partition table which includes an 8 MB partition. The boundaries of that partition are the boundaries of the special file.

    Since this MBR isn't meant to be used in any normal operating environment, it's not quite legal. Some (not all - the MBR can only hold 4) of the original partitions still show up in the new MBR. Therefore, the 8 MB partition lies inside a much larger partition.

    This probably confuses fdisk, which lets you create a partition directly after the 8 MB partition, but inside your original partition.

    When you subsequently delete the 8 MB partition, fdisk is probably confused again. The end of the original partition is probably obscured by the new, overlapping partition. So it lets you create yet another partition, from the beginning of the disk to the start of the overlapping partition.

    The end result is one large partition holding two smaller partitions inside it. This will just about double your apparent disk space. Just don't try to use it :-)
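The double-counting described above can be modeled directly: treat partitions as (start, length) ranges in sectors and compare the sum of the claimed lengths against the sectors actually covered. The 1000-sector disk and partition layout below are made up for illustration:

```python
def total_claimed(parts):
    """Naive sum of partition sizes - what a partition tool adds up."""
    return sum(length for _, length in parts)

def truly_covered(parts):
    """Distinct sectors actually spanned (fine for a toy-sized disk)."""
    covered = set()
    for start, length in parts:
        covered.update(range(start, start + length))
    return len(covered)

# One 1000-sector partition with two new ones carved *inside* it:
parts = [(0, 1000), (100, 16), (116, 884)]
print(total_claimed(parts))   # 1900 sectors "available" - nearly double
print(truly_covered(parts))   # 1000 sectors that physically exist
```

Writing to the overlapping partitions would, of course, clobber the same physical sectors.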

  • Re:Uh, no (Score:1, Insightful)

    by Anonymous Coward on Wednesday March 10, 2004 @09:20AM (#8520140)
    See now, if your parents hadn't been so cheap and had bought you an Amiga like you wanted (Oh come on, admit it!) you really could have gotten 960k onto those "880k" floppies without any "tricks" or danger of damage to your hardware. The Amiga had a super-wicked-cool floppy controller, after all.
  • by kju ( 327 ) on Wednesday March 10, 2004 @11:02AM (#8520955)
    Testing only one processor for a whole batch wouldn't make sense and would be dangerous. What if you happened to test the one processor in a dozen that can run at 3.0 GHz, while the others can only do 2.4 GHz? You would sell a bunch of overrated processors. Therefore EACH processor is tested.
  • by drinkypoo ( 153816 ) on Wednesday March 10, 2004 @11:29AM (#8521194) Homepage Journal
    It's not possible. As close as you get to this is that the drive is hooked up to a test rig which tests all the sectors on the drive, locks out the bad ones, and remaps them to unused sectors at the end of the drive. (It would be nice, and it may even be true, that during the original manufacturer lockout process, they don't remap them to the end of the drive, they just skip a sector and move on. Anyone know?)

    All modern drives reserve spare sectors at the end for remapping. (Older drives only allowed you to lock out sectors.) However, this is a small percentage of the total size of the hard disk. If it cost some hard drive manufacturer the same amount to make a 250GB disk as it did to make a 125GB disk, they'd just make the 250GB and they'd sell it for only half again what the 125GB costs, and put everyone else out of business.
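The spare-pool behavior described above can be sketched as a toy model (the sizes are made up, and real firmware is far more involved):

```python
class ToyDrive:
    """Remaps bad user sectors to a small spare pool at the end."""

    def __init__(self, user_sectors, spare_sectors):
        self.spares = list(range(user_sectors, user_sectors + spare_sectors))
        self.remap = {}                  # bad LBA -> spare LBA

    def mark_bad(self, lba):
        if not self.spares:
            raise RuntimeError("spare pool exhausted - drive is dying")
        self.remap[lba] = self.spares.pop(0)

    def physical(self, lba):
        return self.remap.get(lba, lba)

d = ToyDrive(user_sectors=1000, spare_sectors=8)   # tiny hypothetical drive
d.mark_bad(42)
print(d.physical(42), d.physical(43))              # 1000 43
```

The point is the ratio: a handful of spares against the whole user area, not a hidden second drive's worth of space.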

  • by smittyoneeach ( 243267 ) on Wednesday March 10, 2004 @12:10PM (#8521586) Homepage Journal
    Couldn't disagree more. Their strategic goal is always profit, but that may be obscured by tactical goals, e.g. market share.
    I submit that a commonplace bad assumption in statements like
    Not always is their goal to make a profit, but rather market share...
    is that there is only one goal driver at a time. Such thinking rarely models the real situation.
    Hope this doesn't sound like a flame. ;)
  • by barc0001 ( 173002 ) on Wednesday March 10, 2004 @12:40PM (#8521903)
    Sorry, I gotta start a flame :)

    Maybe I should have been a bit clearer by stating
    "Not always is their goal to make a profit *THIS MINUTE*, but rather longer term make more by locking up market share and inflating prices once you've got the market share"

    The world is full of examples of companies eschewing short-term profits in favor of long-view profits from market share:

    - Gillette made it famous: "Give away the razor, make it up on the blades."

    - Microsoft and a ton of other companies sell their "academic" versions of software to college kids for pennies on the dollar compared to the stuff in the computer shop down the road. If they didn't, the little bastards would probably use something like that pinko OpenOffice and Linux. ;) Instead they "hook" them with the stuff now so it's harder to change later.

    -Let people pirate your graphics software easily so they get used to screwing around with it *Cough*Photoshop*Cough*. When it comes time to get a job doing graphics, and the company asks what software to buy you for your workstation, well, it's a one-horse race, isn't it?

    - Microsoft execs, including Steve Ballmer himself, have said repeatedly that if people in Asia are going to pirate software, Microsoft would prefer that it be their software that is pirated.

    Short term loss, long term gain because of... market share.
  • by osu-neko ( 2604 ) on Wednesday March 10, 2004 @02:13PM (#8522965)
    Nothing about the way hard drives work makes it more logical to measure using the binary common use of the prefix over the traditional SI one.

    False. Memory and hard drives always format to units that divide out evenly in base 2 but rarely in base 10. For example, your floppy disk holds EXACTLY 1440 KB using the base-2 KB definition. Using the base-10 definition, it holds 1474.56 kB. And the larger the drive, the more digits you need to keep adding after the decimal point to stay accurate, until eventually you just start approximating. It's much easier to be both concise and accurate using the base-2 versions of these terms...
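The floppy figure above is easy to verify from the geometry (80 tracks, 2 sides, 18 sectors per track, 512 bytes per sector):

```python
# Standard 3.5" high-density floppy geometry.
total_bytes = 80 * 2 * 18 * 512
print(total_bytes)            # 1474560
print(total_bytes / 1024)     # 1440.0 - exact in binary KB
print(total_bytes / 1000)     # 1474.56 - already fractional in decimal kB
```

Hence the "1.44 MB" label, which mixes both conventions (1440 x 1000 bytes) and pleases nobody.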

    If anything, Windows and whatever other reporting software used is incorrect, because "Giga" is an SI standard prefix...

    Neither "bits" nor "bytes" are an SI unit, so this argument is screwed from the get-go...

    ...used in science and mathematics...

    And here's the real key -- terminology in any field is defined by the practitioners of that field. If computer scientists define the terms differently, then using them the way mathematicians use them in a computer science context is wrong. Quantum physicists use the terms "strangeness", "charm", and "color" in ways that vary from the way these terms are used in other fields; that doesn't make them wrong, it makes those who use the other definitions while talking about quantum physics wrong. Saying a megabyte is one million bytes is every bit as wrong as saying the charm of a subatomic particle is a measure of its charisma...

  • by Bill, Shooter of Bul ( 629286 ) on Wednesday March 10, 2004 @02:37PM (#8523294) Journal
    My coworker had some downtime a few years ago, so he thought it would be cool to mess with the FAT of a floppy. He changed it so there was one directory. Inside that directory there was a 30k file and another directory. He changed the FAT so that the inner folder pointed back to the outer folder. So essentially it was a recursive directory with a 30k file in it. He had some fun asking various OSes how much used space there was. Windows 98 eventually gave an error that the pathname was too long; NT just kept on going. It was really cool. Never tried it in Linux, though. That would be cool.
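For what it's worth, a traversal only survives that kind of loop if it remembers which directories it has already entered. Here is a sketch against a toy dict-based "filesystem" (not real FAT code; the names are made up):

```python
# A directory is a dict: name -> size in bytes (file) or dict (subdir).
fs = {"outer": {"file.bin": 30 * 1024, "inner": None}}
fs["outer"]["inner"] = fs["outer"]        # the loop: inner points at outer

def used_space(directory, seen=None):
    """Sum file sizes, refusing to re-enter a directory already seen."""
    seen = set() if seen is None else seen
    if id(directory) in seen:             # cycle detected - stop here
        return 0
    seen.add(id(directory))
    total = 0
    for entry in directory.values():
        total += used_space(entry, seen) if isinstance(entry, dict) else entry
    return total

print(used_space(fs["outer"]))            # 30720, not an infinite loop
```

An OS that walks the tree naively, like the NT behavior described, just keeps descending forever.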
