Recovering Secret HD Space 849
An anonymous reader writes "Just browsing hardocp.com and noticed a link to this article.
'The Inquirer has posted a method of getting massive amounts of hard drive space from your current drive. Supposedly by following the steps outlined, they have gotten 150GB from an 80GB EIDE drive, 510GB from a 200GB SATA drive and so on.' Could this be true? I'm not about to try with my hard drive." Needless to say, this might be a time to avoid the bleeding edge. (See Jeff Garzik's warning in the letters page linked from the Register article.)
Uh, no (Score:5, Informative)
Did aureal density technology increase to 200GB/platter overnight? No.
Please refer to this thread [storagereview.net] on StorageReview.com for more information.
Re:Uh, no (Score:5, Funny)
Re:Uh, no (Score:5, Funny)
Re:How smart u are.. (Score:5, Informative)
Intel tests a sample from each batch of processors to determine which "bin" it goes into. That sample tested reliably at 2.4GHz? Okay, into the 2.4GHz pile. That sample tested at 2.8? Okay, into the 2.8 pile. The trick about processors running faster than labeled isn't because they're mislabeling processors, it's that they only test one processor out of the entire batch. Many processors within either batch could be capable of 3GHz, simply due to vagaries of production - you can give it a shot and find out, but don't be surprised when it develops unacceptable amounts of heat like the processor they tested.
HD manufacturers are quite different. When they release a new line of HDs, they are all based on common technologies, but span a wide range of capacities - because the NUMBER OF PLATTERS inside each model is different. Got a platter that can hold 100GB? Stick 1 inside and you've got a 100GB drive. 2 inside, 200GB. 3 inside, 300GB. There's three models (though drives typically contain substantially more platters). Now you stick in 2 heads for each platter (unless it's one of those old wacky Barracuda drives, which had 4 heads per platter), plus firmware that is designed to control the hardware inside the sealed case - but usually even the controller is identical within a line.
One other important thing to remember is that they test the platters BEFORE the HD is fully assembled. This is very different from a processor, where you can't test individual components until the entire thing is built. That said, they certainly design in a certain amount of fudge room, so they can remap bad sectors into it. No platter is perfect, so they need additional space to remap bad sectors. I would be very, very surprised if there's more than 10GB of available space on a 250GB drive...
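The platter-count scaling described above can be sketched as a toy model. The 100GB-per-platter figure and the 2GB spare area are made-up assumptions for illustration, not any manufacturer's real numbers:

```python
PLATTER_GB = 100        # assumed per-platter capacity (illustrative only)
SPARE_GB = 2            # assumed "fudge room" reserved for remapping bad sectors

def family_capacity(platters, platter_gb=PLATTER_GB, spare_gb=SPARE_GB):
    """User-visible capacity: platter count times per-platter size, minus spares."""
    return platters * platter_gb - spare_gb

# Same platters, same controller, different counts -> three models in the line.
for n in (1, 2, 3):
    print(f"{n} platter(s): sold as a ~{family_capacity(n)}GB model")
```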
Re:How smart u are.. (Score:5, Informative)
I daresay they've got a statistical model that has them doing enough sampling to maximize profit, and that means minimizing the number of irritated customers calling in about problems.
This is not like highway engineering, where they have to figure in weather, vehicles, and Aunt Tillie before posting a speed sign for a curve, so they lowball it heavily.
Sometimes they do... (Score:5, Interesting)
The best example of this is the Celeron 300A debacle for Intel. Switch back to those days of yore for a moment...
Intel introduced the Celeron line to help blunt AMD's advance into the low-end post-Pentium I market. One problem: the Celeron 233 and 266, with NO L2 cache, sucked so much that nobody wanted them, but Intel couldn't just change over the production line to a new Celeron design at the drop of a hat. What to do, Andy? Easy. That production line in Malaysia that's pumping out Deschutes 450 PIIs to the rescue! So Intel took a whack of those chips, gave them a smaller L2 cache, dropped their "rated" bus speed to 66MHz, and branded them Celeron 300As. Which is why pretty much every Malaysian Celeron 300A runs just fine at 450MHz with the stock Intel cooler, no adjustment required.
Intel actually lost money doing it, but they didn't lose the low-end market. The damage the current batch of crap they call a Celeron is doing to their reputation down there, though, seems to indicate they will lose it soon...
Re:Sometimes they do... (Score:5, Insightful)
Maybe I should have been a bit clearer by stating
"Not always is their goal to make a profit *THIS MINUTE*, but rather longer term make more by locking up market share and inflating prices once you've got the market share"
The world is full of examples of companies eschewing short-term profits in favor of long-view profits from market share:
- Gillette made it famous: "Give away the razor, make it up on the blades"
- Microsoft and a ton of other companies sell the "academic" versions of their software to college kids for pennies on the dollar compared to the stuff in the computer shop down the road. If they didn't, the little bastards would probably use something like that pinko OpenOffice and Linux.
- Let people pirate your graphics software easily so they get used to screwing around with it *cough*Photoshop*cough*. When it comes time to get a job doing graphics, and the company asks what software to buy for your workstation, well, it's a one-horse race, isn't it?
- Microsoft execs, including Steve Ballmer himself, have said repeatedly that if people in Asia are going to pirate software, Microsoft would prefer that it was Microsoft's software being pirated.
Short term loss, long term gain because of.. market share.
Sometimes they do (Score:5, Informative)
Re:How smart u are.. (Score:5, Informative)
1) The processor passes testing under extreme conditions at this speed. This guarantees that the part has a high probability of never being returned as a defect (as silicon is used it ages due to electromigration, which effectively makes it work slower or eventually stop working). The testing guarantees that the user won't ever see this impact. In this case, a 2.4GHz-binned part may work fine for you at 2.8GHz, but perhaps it will die in 3 years. Or perhaps a single SSE instruction will return the wrong value 1 time in 100,000. Who knows.
2) Parts are binned to meet supply. The company says it will supply 10,000 2.8GHz parts and 100,000 2.4GHz parts. However, of the 110,000 parts, 40,000 ran at 2.8GHz and the rest at 2.4GHz. To keep the price scale (and meet the contract), 30,000 parts which are perfectly good at 2.8GHz will get sold as 2.4.
The downside: There is no way to tell (1) from (2) as a consumer, so overclocking is all a game of craps.
Also remember that the tests are done under 'extreme' conditions, which means that all parts will likely work slightly faster than the bin they were assigned to.
Caveat: When a new frequency/design is released, it may be very difficult to get to the desired frequency, and the testing is relaxed somewhat to meet the quota (in which case very few parts will be overclockable)
Lastly, no testing is done above the top bin, so if 3.2 GHz is the current fastest sold, some percentage of those may run at 3.4 or 3.6, and they won't have been tested that far.
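The supply-driven binning in point (2) can be sketched as a toy sort. The part counts and demand figures come from the example above; the binning policy itself is an assumption for illustration:

```python
def bin_parts(tested_speeds, demand):
    """Assign parts to speed bins, down-binning fast parts to fill demand for
    slower grades (mechanism 2 above). A toy policy, not Intel's real flow."""
    bins = {speed: [] for speed in sorted(demand, reverse=True)}
    for part_speed in sorted(tested_speeds, reverse=True):
        # Put each part in the fastest bin it qualifies for that still needs stock.
        for bin_speed, contents in bins.items():
            if part_speed >= bin_speed and len(contents) < demand[bin_speed]:
                contents.append(part_speed)
                break
    return bins

# The example above: 110,000 parts, 40,000 test at 2.8GHz, the rest at 2.4GHz,
# but the contract only calls for 10,000 parts at 2.8GHz.
tested = [2.8] * 40_000 + [2.4] * 70_000
bins = bin_parts(tested, demand={2.8: 10_000, 2.4: 100_000})
down_binned = sum(1 for s in bins[2.4] if s >= 2.8)
print(down_binned)  # 30000 parts good at 2.8GHz get sold as 2.4GHz
```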
Re:How smart u are.. (Score:5, Informative)
Manufacturers test every single chip pretty much identically. Different companies differ in how they determine speed of parts (run some patterns at full speed, measure the delay of some known circuits, etc.) but each part is tested. There is too much variation across the wafer to do much else.
It's always possible to run a chip faster than a manufacturer's testing especially if it is kept cooler than the max spec, voltage is within tighter tolerance than spec, or if the user doesn't care about correct answers. I find the last point is what usually allows the greatest overclocking.
Also, some large manufacturers (Intel, AMD) have marketing needs to sell certain speed grades. So if all parts run at 3.0GHz, but users are demanding the cheaper 2.8GHz parts, then they'll sell some faster parts marked at 2.8GHz. In general, this is a temporary situation since re-pricing to reflect the increased yield will probably move the 3.0GHz price down shortly to increase pressure on the competition.
This does not make sense (Score:5, Insightful)
Re:How smart u are.. (Score:5, Informative)
Maybe in the old full-height drives, but most consumer 3.5" drives nowadays only have 1-3 platters (as have most drives I have disassembled... my platter collection is at about 50), 4 in the ultra-top-of-the-line high-capacity drives. Each platter is about 1mm thick, but needs space between the rest of the chassis and the other platters for the head assemblies (two assemblies between platters, one for each surface). These take up more room, as the arm itself is usually thicker than the platter, and it has to be raised off the platter so that it will not damage it as it swings back and forth rapidly. You also have to add in the case itself and the motor used to spin the platters. There's not much room to cram in too many platters inside the case. Remember, the dimensions of a half-height 3.5" drive give only about 1.6" of vertical space total.
You are correct though, in that lower-capacity drives just remove platters and head assemblies from a higher-capacity model. Specifically, I took apart two older Seagate drives; one had 1 platter, the other had 2 and was rated at almost double the capacity, but they were otherwise identical. In place of the platters, they just put spacers on the drive axle.
Tm
ps: on a side note it's interesting to see how the design of drives has changed over the years, from heads actuated by stepping motors to voice-coil actuators, and from the full-height monsters with 7 platters to single-platter drives with 10x the capacity, yet the platters have stayed the exact same radial size on every 3.5" drive I have taken apart. The only notable physical difference other than color is the thickness. Newer platters are lighter in color and a LOT thinner.
Re:How smart u are.. (Score:5, Informative)
After a wafer is made a robotic tester probes each circuit before the wafer is cut up. If a circuit fails the basic tests, the probe squirts a little dot of red paint on that circuit. The "known defects" get a red dot without even being tested. After this initial probe test, the circuits are cut apart, the ones with red dots are discarded and the rest are mounted on carriers.
It is possible that a slight mask defect or wafer imperfection might cause a performance problem rather than a total functional failure. This could also be caused by slightly out-of-spec doping or wafer heating. These are sorted out by further testing, as mentioned by other posters. If all of the circuits on a wafer get the same doping and the same heating, then you can sample one or two and assume that the rest of the circuits from that wafer will have similar performance. If you have a mask problem that causes degraded performance, you can automatically flag that die location as a "known slow" or a "known bad" depending on your criteria.
I was thinking first it was just bad DELL again (Score:4, Informative)
But yeah, more than doubling the HD capacity sounds fishy, and there are plenty of letters to the Inquirer article explaining how and why it ain't true.
Comment removed (Score:5, Informative)
Re:Uh, no (Score:5, Interesting)
That's probably because the Amiga floppy controller wrote track-at-once rather than sector-at-once, but without either the controller or the trackdisk.device verifying that the entire track had been written correctly. Hence, if you updated a single sector on a track, the entire track would be re-written, and the "unmodified" sectors on that track could get corrupted in the process.
There was a nice hack called TrackSalve [funet.fi] which hacked the trackdisk.device so that it performed an automatic verify of tracks after writing. ISTR equivalent functionality may have been incorporated into trackdisk.device in 2.04/3.0+ Kickstarts, but before I started using TrackSalve, I used to frequently end up with corrupted diskette bitmaps (probably the most-rewritten track on an Amiga floppy).
Another, probably less significant factor is that the Amiga disk hardware wrote tracks with no gaps between sectors in order to get that extra 160KBytes. If a PC disk controller encountered an error in the inter-sector gaps, I doubt it would cause it many problems, but for Amigas, it increases the probability that an error will occur in an occupied cell of the disk.
--
Re:Uh, no (Score:5, Interesting)
There are lots of internal sectors that are reserved for errors. There are builtin algorithms on the disk to diagnose and correct physical errors. You just don't notice them because the disk remaps those sectors transparently.
Hooray! I learned something in class for once!
Re:Uh, no (Score:4, Informative)
Re:Uh, no (Score:5, Funny)
Virus ? (Score:5, Funny)
Is this the first tech-info virus? Follow the instructions to destroy your own HD. Seems like just putting a hammer through it would be easier, but this would probably work on the clueless. Hmmm, yeah, not a bad idea I guess, in a very twisted way.
Re:Virus ? (Score:5, Funny)
Re:Uh, no (Score:4, Informative)
Now if your 1st partition is the full disk minus the reserved space, your second partition is full-sized including the reserved space, and the reserved sectors aren't all at the end of the disk, you're going to end up with partitions in the ratios they talk about.
However, what happens when you start putting Windows on this thing? Well, block sizes on big drives aren't your friend, and most small files will end up in reserved clusters. Since directories are small files too, and if they don't conflict, you should be able to load a few gigs of data onto one of these disks before you start to find out that it's overwriting bits of the other partition. I expect one of these 180 gig drives could be loaded up with at least 90 gigs of data before the directories started acting funny. One cool bit about this is that block-heavy files (like mp3s) will show up in the dir just fine, but when you play one, it might switch songs in the middle. I don't think the RIAA could ask for a better gift.
Re:Uh, no (Score:5, Informative)
Secondly, ALL IDE-type drives (and some SCSI) have some reserved space (possibly 5%) which is intelligently remapped to whenever a bad sector is found. (Remember, you are NOT supposed to low-level format an IDE drive.) During manufacturing, it is inevitable that bad sectors WILL be found, but these are remapped to the hidden reserved section, which is why most hard disks you buy now do not APPEAR to have bad sectors. The reason is they are already mapped into the reserved area. So the rule is: when you DO start seeing bad sectors on your IDE drive, you can be sure that the reserved space is now full and it's time to start looking for a new hard drive.
"Recovering" the space allocated to the reserved section is NOT good at all, since you then bypass the IDE bad-sector mapping mechanism, and if the drive is not suitably surface-checked, you can bet your bottom dollar that you will see some bad sectors.
Beware.
Re:Uh, no (Score:5, Interesting)
Yes. I call it corrupting your partition table. ; )
Years ago, when an 800MB drive was "big", a friend of mine tried to convince me and a group of IT staff friends that he could get around the BIOS limits of a particular DEC workstation through some tricky geometry settings in the BIOS. LBA was not big in those days, and MS OSes were still using the BIOS for disk access beyond the boot process.
Anyway, my friend managed to "trick" the BIOS into seeing 800MB (previously 504MB).
So, in an attempt to prove him wrong, I then proceeded to format the drive. MS-DOS format claimed it was formatting the drive as 800MB, but this did not deter me. I knew that MS-DOS was simply fooled into thinking that 800MB was actually addressable on that particular (504MB through BIOS limited) machine.
The format completed fine! But I was still not deterred. I said, "OK, now we start to fill this drive up...".
I started copying a large directory over and over to fill the drive. When we approached about 500MB... "Seek error: sector not found.". The drive no longer booted either.
What had happened was that we had managed to force the BIOS to accept geometry values which it could not fully address. The most significant bits which MS-DOS would send would never get seen by the drive, since the BIOS could not go beyond a certain address width. So while formatting, MS-DOS would be sending write commands which would be honored by the drive, but the BIOS would be silently stripping some of the highest MSBs out of sheer lack of support for them.
The end effect was that at the 504MB point, the drive head would be about 504MB into the 800MB; then at 505MB, the address would wrap back to zero and the head would come back to the start! That first sector would be formatted again, the drive would report success, and MS-DOS format would think nothing of it. When it got to "800MB", it would all have appeared to format OK to MS-DOS.
The end result was an 800MB drive with a partition table which that BIOS was never going to be able to fully service, even though MS-DOS format "saw the proof" that all was fine. ; ) When someone tried to copy data to the next "safe" sector beyond what the BIOS could address, what they were actually doing was writing back over the beginning of the disk - corrupting the partition table.
; )
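The wraparound in the story above can be modeled as modular arithmetic over the classic 1024 x 16 x 63 CHS limit. This is a toy sketch of the effect; the real failure went through BIOS geometry translation, which a simple modulo only approximates:

```python
BIOS_MAX_SECTORS = 1024 * 16 * 63   # classic CHS limit: cylinders x heads x sectors
SECTOR_BYTES = 512

def effective_sector(requested):
    """Where a write actually lands once the BIOS drops the high address bits
    (approximated here as a wrap at the CHS limit)."""
    return requested % BIOS_MAX_SECTORS

print(BIOS_MAX_SECTORS * SECTOR_BYTES)      # 528482304 bytes: the ~504MB limit
# The first write past the limit lands back on sector 0 - the partition table.
print(effective_sector(BIOS_MAX_SECTORS))   # 0
```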
I was delighted, because everyone else was on my friend's side, even though one of my buddies also had a background in electronics and should have known what I was talking about. Anyway, modern drives DO have secret areas set aside for remapping of bad sectors (to give you, the consumer, the perception of zero bad sectors and all the space you legally purchased), but this space is way smaller than what these jokers are claiming, and it is normally not user-accessible.
So, save yourself the hassle of wondering in a few months time, why your drive has "crashed". You might not remember the "magic" that you did to your drive.
Re:Uh, no (Score:4, Informative)
That's probably because I can't type. You may want to read this reference for " areal [storagereview.com]" density, though.
Re:Uh, no (Score:5, Funny)
er, fdisk
Re:Uh, no (Score:5, Informative)
Re:Uh, no (Score:5, Funny)
Re:Uh, no (Score:5, Funny)
(Aw crud, maybe four per person. Dictionary.com wants to call part of the Iris an areole...)
Re:Uh, no (Score:5, Funny)
a high aureal density is those dark nipples and
a low aureal density is those bright pink nipples.
Right?
Re:Uh, no (Score:5, Informative)
Actually, this is exactly what they do. The difference, however, is that the lower-end (smaller) drives are identical except that they come with fewer platters. For example, a 160GB hard drive today likely has two 80GB platters, whereas an 80GB drive probably has one (though different combinations of different sizes are of course used, depending on when the hard drive was manufactured and other factors).
In some cases, a hard drive will be sold with a greater potential capacity than its available capacity. For example, a drive with two 60GB platters may be sold as a 100GB drive, the platters having been "short stroked". This has nothing to do with the absurd technique described in the Inquirer article, and I doubt that it is possible to recover the lost space.
Hard drives are the highest precision mechanical devices that most people have in their home--moreso than processors, high-end printer heads, or toasters. They are not something that you want to physically modify.
See the following highly informative and interesting (if you are a geek) posts by a Maxtor engineer:
Here [storagereview.net]
here [storagereview.net]
and here [storagereview.net]
Re:Uh, no (Score:5, Informative)
It used to work on the old Seagate drives: you just set the BIOS to the parameters of the 100GB drive instead of letting the BIOS autodetect the 60GB drive, and you had a 100GB drive.
Re:Uh, no (Score:5, Informative)
Depending on the form factor and the manufacturer, they can stuff 1, 2, 3, 4, 6, or even 15 platters in an enclosure. (That last is for a full-height 5.25" drive; 6 only fit into the 1.7"-high drives.)
Suppose Quantum can fit 10GB on one side of a platter. They will then make a family of drives: 10GB (one platter, one head), 20GB (one platter, two heads), 30GB (two platters, three heads), 40GB (two platters, four heads), and 60GB (Quantum only fits three platters in a 1"-high 3.5" drive). This sequence holds for the Quantum Fireball AS series, by the way.
As you can see, there is half a platter (one side) unused in the 10GB and 30GB models. Quantum usually leaves that side nice and shiny. IBM usually takes a sharp object and makes a big scratch on the surface....
In either case, it's quite possible that QA on that part of the disk failed, and that it would be unwise to use that part of the disk. Even if you managed to get a head able to read/write it....
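The head-per-surface arithmetic above amounts to the following; the 10GB-per-side figure and the platter/head pairings are taken from the example, not from any datasheet:

```python
PER_SIDE_GB = 10   # assumed capacity of one platter surface (the example above)

def capacity_gb(heads, per_side_gb=PER_SIDE_GB):
    """One head per platter surface, so capacity is heads x one side's capacity."""
    return heads * per_side_gb

# (platters, heads) for the Fireball-style family described above;
# the 60GB model uses all six surfaces of the three platters.
family = [(1, 1), (1, 2), (2, 3), (2, 4), (3, 6)]
print([capacity_gb(h) for _, h in family])   # [10, 20, 30, 40, 60]
```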
Re:Uh, no (Score:5, Funny)
CDROMs use constant data rate by varying the RPM of the drive depending on where you're located
I can vouch for the fact that the RPM is greater in the heady latitudes of the UK. People living nearer to the equator will experience slightly longer seek times, and I wonder if those in places like Barrow AK & North Norway actually appreciate the extra performance.
Maybe someone from New Zealand or nearby could chime in and verify that their data is read from the drive in the opposite direction.
Re:Uh, no (Score:5, Funny)
0xffff, 0xffff, and 0xffff. But, we get no errors.
(Here are some replies for your consideration:
- Isn't Australia part of New Zealand?
- Isn't New Zealand part of Australia?
- That is the lamest piece of shit I have ever read.
Re:Uh, no (Score:5, Funny)
-- I thought hard drives in Australia had to be installed upside down.
-- I read your post backwards, you insensitive clod.
-- You must be new around here, in Australia your hard drive reads you.
-- Imagine a Beowulf cluster of Australia bits!
Re:Uh, you're wrong sorta (Score:5, Informative)
CLV is constant linear velocity and is what the first generation CD players used. That meant the data passed under the head at a constant speed, 150kbytes/second. The further out on the disc the slower the disc turned as each turn had more data than close-in.
Once the speeds went up the manufacturers moved to CAV or constant angular velocity where the disc spins at a predetermined speed and the data comes in at different rates depending on the head position over the disc. What really happens is there's a table of different CAVs stored in the drive's firmware depending on the absolute position on the disc. Close into the hub the disc spins faster, further out it spins slower. If there are a lot of errors it will slow down to try and read the data better. On a 48x drive there might be as many as 12 different CAV speeds available to the firmware.
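For the CLV case, the spindle speed at a given head position follows directly from the fixed linear velocity. The 1.3 m/s figure and the radii below are rough assumed values for a 1x audio CD, not exact specs:

```python
import math

LINEAR_V = 1.3   # m/s, approximately the 1x CD linear velocity (assumed value)

def clv_rpm(radius_m, v=LINEAR_V):
    """Constant linear velocity: RPM falls as the head moves toward the rim."""
    return v / (2 * math.pi * radius_m) * 60

inner = clv_rpm(0.025)   # ~25mm, innermost data radius
outer = clv_rpm(0.058)   # ~58mm, near the outer edge
print(round(inner), round(outer))   # roughly 497 and 214 RPM
```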
I call (Score:5, Insightful)
No way in heck can you increase the amount of storage a HDD has so drastically. I mean, the physical disks can only hold so much, and no matter what you do, they aren't going to magically double or triple.
These are physical disks, they have a set number of sectors. One size and one size only.
Unless you get into the whole megabyte vs. mebibyte thing, but that's a whole 'nother can of worms!
Re:I call (Score:5, Funny)
unless the disks were secretly, specifically designed this way.
for example, for the benefit of spooks who want the device to maintain a rolling log of disk data for some period of time after the unsuspecting user thinks it's been deleted/reformatted/security-wiped.
Re:I call (Score:5, Funny)
Simple corruption (Score:5, Informative)
This is just a method of corrupting your partition table so the same disk sectors appear more than once. If you try this, don't ask Symantec for help afterwards.
Damn. (Score:5, Funny)
Ahhhhhhhhhhhhhhhhhhhhhh!!!!!
Re:Damn. (Score:5, Funny)
Re:Simple corruption (Score:5, Funny)
Almost every slashdotter wants to find new and interesting ways to hose their data.
It's only natural.
Re:Simple corruption (Score:5, Funny)
It's pretty easy to set your hard drive to whatever "size" you want it to be... just don't expect it to work properly.
Having said that, there were a few proggies floating around back then that could make your floppies slightly larger by formatting them with a weird, non-standard configuration.
You could do wonderful things with them, from 1.7-1.8 meg floppies, that were a bit slower and less reliable, to some magic 1.22 meg format that mysteriously made my floppies faster.
Ahh, those were the days
I have very *ahem* fond memories of spending the whole day formatting and copying Civ2 to 96 floppies... ouch!
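For reference, those "1.7 meg" formats mostly worked by squeezing extra sectors onto each track. Assuming a DMF-style layout (21 sectors per track instead of the standard 18), the arithmetic works out to:

```python
SECTOR_BYTES = 512

def floppy_bytes(tracks, sides, sectors_per_track):
    """Raw capacity of a floppy format."""
    return tracks * sides * sectors_per_track * SECTOR_BYTES

standard = floppy_bytes(80, 2, 18)   # the plain 1.44MB format
dmf = floppy_bytes(80, 2, 21)        # DMF-style 21-sector format
print(standard, dmf)   # 1474560 and 1720320 bytes
```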
Re:Simple corruption (Score:5, Interesting)
Anyone wanting to try such amazing technology today can use a Catweasel [jschoenfeld.de], although I'm not sure if it supports anything more exotic than standard Mac/Amiga floppies.
Re:Simple corruption (Score:5, Funny)
1.7-1.8 meg floppies, that were a bit slower and less reliable,
You made floppies even slower and less reliable? I wouldn't have thought that was even possible. Obviously some kind of WORN file system (Write Once, Read Never!).
Re:Simple corruption (Score:4, Informative)
One flaw I found in the article is that they say you need two drives, both containing an OS. Later they ask you to swap one of them out for another drive with an OS. That whole section sounds like smoke and mirrors.
If this extra space really exists, why do you have to "trick" the OS into believing it is there? I was expecting some mention of a low-level format at least, but there's no way this will work. I'll bet they didn't do any data integrity tests, which would no doubt show the flaw in their system right away. Oh well, who needs proof if you're just storing appz and mp3s.
Just to be a bastard (Score:5, Insightful)
ATTEMPT TO CLEAR UP MISCONCEPTIONS RE ATAPI/PARTG. (Score:5, Informative)
The Host Protected Area is space on your hard drive that can be set aside by your BIOS, your operating system, or even your applications for certain management information. I take it that some backup programs (ab)use it to "hide" compressed boot images on hard drives. I wouldn't be very surprised if companies like Dell or IBM stole some of your hard disk this way so you can restore a Windows installation. The "Host Protected Area" has nothing at all to do with the drive-internal handling of bad sectors or other drive-internal data. Drive-internal information, as well as sectors used for replacing sectors gone bad, is not accessible through the ATAPI command set for accessing the HPA.
The ANSI T13 standard documents for ATAPI-6 (current) are overpriced at $18.00, but you can download a draft of the upcoming ATAPI-7 from the T13 working group's site at http://www.t13.org. There, in Section 4.9 of the document, you will find: "A reserved area for data storage outside the normal operating system file system is required for several specialized applications. Systems may wish to store configuration data or save memory to the device in a location that the operating system cannot change. The optional Host Protected Area feature set allows a portion of the device to be reserved for such an area when the device is initially configured. A device that implements the Host Protected Area feature set shall implement the following minimum set of commands:"
READ NATIVE MAX ADDRESS
SET MAX ADDRESS ... ...
I take it that READ NATIVE MAX ADDRESS tells you how many sectors of user addressable space have been configured on the drive and SET MAX ADDRESS lets you adjust that.
The way I see it, there may be a lot of preinstalled hard drives out there with compressed Windows installation images "hidden" in the HPA. Maybe a new version of hdparm will allow Linux users to reclaim that dead space.
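The relationship between the two commands can be sketched as a toy model. The sector counts below are invented for illustration; on Linux, `hdparm -N` is the tool that actually issues these commands to a drive:

```python
class ToyDrive:
    """Sketch of the HPA state described above - not real ATA traffic."""

    def __init__(self, native_max_sectors):
        self._native_max = native_max_sectors
        self._current_max = native_max_sectors   # no HPA configured initially

    def read_native_max_address(self):
        """READ NATIVE MAX ADDRESS: how many sectors physically exist."""
        return self._native_max

    def set_max_address(self, sectors):
        """SET MAX ADDRESS: shrink the user-visible capacity; the remainder
        becomes the Host Protected Area."""
        if sectors > self._native_max:
            raise ValueError("cannot exceed native capacity")
        self._current_max = sectors

    @property
    def visible_sectors(self):
        return self._current_max

drive = ToyDrive(native_max_sectors=390_721_968)   # ~200GB at 512B/sector
drive.set_max_address(370_000_000)                 # hide ~10GB as an HPA
print(drive.read_native_max_address() - drive.visible_sectors)  # sectors hidden
```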
Re:Simple corruption (Score:5, Funny)
Floppy / Drill fun (Score:4, Interesting)
Re:Floppy / Drill fun (Score:4, Informative)
Re:Floppy / Drill fun (Score:5, Interesting)
That worked because RLL encoded the data using a different method than MFM.
This, though, is smoke and mirrors.
Anyone remember NaBob? (Score:5, Interesting)
Back in the days of the "archive format wars" somebody made a program called NaBob that was pretty funny. It made archives that were so perfectly compressed that they approached singularity. That is, every archive turned out to be one byte long.
The various compression methods, it was said, were named after different types of quarks. So, as the files were compressed, it would report, "upping," "downing", "charming," "stranging," etc.
The file extension was
When you ran the uncompress process, all your files would be mysteriously "extracted" from the archive again. Amazing! It really stored all that data in a single byte!
Of course, all it was really doing was setting the hidden file bit on all your files and creating a one-byte file with the
That program always cracked me up, so I just thought I'd share.
There was even worse stuff.... (Score:5, Interesting)
The FAQ of comp.compression has a lot of really weird stuff...
Re:Anyone remember NaBob? (Score:5, Funny)
It has a non-GPL compliant license [sourceforge.net] though. Pity.
Re:Floppy / Drill fun (Score:5, Informative)
Back in the day of MFM and RLL controllers, the hard drive controller did much of what the drive electronics and firmware do in modern hard drives, that's why you could have MFM or RLL controllers. Hard drives still use RLL encoding today.
Floppys used to be better.. (Score:5, Informative)
These floppies were used almost daily for 3 years (no hard disks were available at that time). They were reformatted countless times.
Not a single one of them ever failed. About a year ago, after failing to reformat and make a boot disk from several freshly-bought floppies, I dug up one of them, reformatted it again, and succeeded in making a reliable boot disk.
The quality of today's media just makes me cry.
Re:Floppys used to be better.. (Score:5, Interesting)
Of course, it doesn't help that now it's not just computer geeks using these things; a bunch of stupid college kids are storing all of their term papers on these crappy things. Then they run around with them jammed in their back pocket or backpack until they're crushed, bent, or otherwise destroyed.
My job involves helping people use the computer, but I'm about to put up a sign saying that help with college work will cost extra.
that looks like a *bad* thing (Score:5, Insightful)
yeah right. (Score:4, Informative)
Some drives are known to short-stroke their platters. This raises the more serious problem with this idiocy... Modern drives store important information in those hidden inner areas of their platters (firmware, disk information, reallocated bad sectors); who knows what you could be overwriting whenever you use that space. Put something down in the wrong place and the drive will never start again, or will corrupt data at certain sectors. It's a lottery ticket every time you write data in that partition. That's not what I call usable capacity.
Don't believe me? Go ahead and try it. You'll lose all those Buffy episodes you've downloaded on KaZaA, and instead you'll have to spank it to the Portman pictures your mom doesn't know you have stashed under your bed.
Re:yeah right. (Score:5, Funny)
Is that what kids are calling it nowadays?
Re:yeah right. (Score:5, Funny)
What *idiot* dared to post this on /.? (Score:4, Insightful)
I HAVE seen UFOs (Score:5, Funny)
LK
Disk is cheap. (Score:5, Insightful)
If I need more space, I'll buy a bigger drive, they keep getting cheaper and faster and bigger all the time anyway.
Manufacturer's view.. (Score:5, Informative)
Enlarge your HardDrive (Score:5, Funny)
Sorry.
Summary... (Score:5, Informative)
About the "recover unused space on your drive" article:
Working for a data-recovery company, I know a thing or two about hard disks...
One is that if the vendors were able to double the capacity for just about nothing, they would.
All this probably does is create an invalid partition table which ends up having:
|*** new partition ***|
|*** old partition ***|
overlapping partitions. So writing to either partition will corrupt the other. In whatever situations people tried it, it probably just so happened that the (quick) format of the "new" partition didn't corrupt the other partition enough to make it unbootable.
And the 200GB -> 510GB "upgrade" probably ended up with three overlapping partitions...
Roger
Re:Summary... (Score:5, Insightful)
Basically this idiot has found an incredibly cumbersome way to screw up his partition table. (see below for more details)
Then of course this gets posted and linked to all over the planet for everyone to try for themselves. Who are these fucking idiots that post this kind of stuff? They should get 'gullible' tattooed on their foreheads.
Hint: nowhere in the article does it say that they actually tried to use all the space and verified that the data remained intact. Wouldn't that be the first thing you'd do before posting something like this online?
Anyway, I've written several IDE drivers (and worked on the IDE core for BIOSes), and I can tell you that there is NO way you can increase the size of a 200GB drive to 510GB, especially not with the tools that are described (Ghost).
Look at the 80GB example: they got 150GB? That's interesting, because that would mean that the drive all of a sudden became a 48-bit LBA drive. Older drives are limited to 137.4GB in size and to get 150GB capacity you need 48-bit LBA. I don't think Ghost is going to reflash the firmware of the drive to add support for that (yes, that's meant to sound sarcastic).
Ghost works at the partition level. A drive reports its size in sectors, which is basically a lower (or closer to the hardware) level.
All they do is move partitions around. But the drive will keep reporting the same number of sectors. Where do the extra sectors come from?
Why don't these people run an IDE identify program on those hard drives? They'll see that the drive still reports the original number of sectors, exactly the same number of sectors you can get to through
It's true that some OSes don't create the most ideal partitions, so you lose _some_ sectors, but nothing of the order of magnitude described.
Initially I thought maybe they were using the extra error-detection/recovery bytes that each sector has (which would be a very stupid idea), but that would never give you that much of an increase.
Or that they were removing some factory/OEM predefined partition, which is basically the only relatively safe thing you can do to reclaim some disk space. Again, not the same order of magnitude, plus you'd never go over the size that the disk is sold as.
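The "run an IDE identify program" suggestion above can be approximated on Linux without special tools: sysfs exposes the sector count a drive reports, and no partitioning trick changes it. A minimal sketch (the device name `sda` and the sample sector count are illustrative assumptions, not from the article):

```python
# Sketch: compute advertised capacity from the sector count a drive
# reports. On Linux, /sys/block/<dev>/size gives that count in
# 512-byte units, regardless of any partition-table games.

def capacity_gb(sectors: int, sector_bytes: int = 512) -> float:
    """Capacity in decimal gigabytes, the unit drive vendors quote."""
    return sectors * sector_bytes / 1_000_000_000

def read_sector_count(device: str = "sda") -> int:
    """Read the reported sector count from sysfs (Linux only)."""
    with open(f"/sys/block/{device}/size") as f:
        return int(f.read())

# A nominal "200GB" drive reports roughly 390 million sectors; the
# arithmetic shows why 510GB can't be conjured out of the same drive.
print(f"{capacity_gb(390_721_968):.1f} GB")
```

Comparing this number before and after the Ghost trick would immediately show that nothing about the drive itself has changed.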
Andre Hedrick (Score:5, Interesting)
This was on the linux-kernel list a while back; too lazy to find it. (And it's possible I misunderstood -- Hedrick is a crackpot who is barely able to articulate what he is thinking.)
Everybody that tries this (Score:5, Funny)
I thought this was going to be helpful (Score:5, Funny)
I thought this would actually let you use up that lost space somehow. You did buy the drive, so it should contain the space, but it doesn't. RAM is just the opposite: you buy 512, it has 560 or so (well, any RAM I bought did). Anyway, is there a way to recover this lost space? Is there something I'm doing wrong? It seems to be worse in Linux (but I heard that's because it reserves space for root to access).
damn i hope you are kidding (Score:5, Informative)
HDs are sold in GB with GB "defined" as 1,000,000,000 bytes, which is about 7% less than a real GB (2^30 bytes). After formatting, an extra few percent (depending on your FS) goes away for your file table, sector markers, directory structure, etc., so in real GB (units of 2^30 bytes) it'll be a lot less than 160, or whatever your "bought" size.
Don't expect to recover those.
RAM is sold with truthful advertising: 128MB = 128 x 2^20 bytes, which is 134,217,728 bytes. Despite the 134, it's still 128MB.
Re:damn i hope you are kidding (Score:4, Funny)
Personally I thought that the people who suggested "mebi" were taken out back and given a good kicking, and the rest of us sane people, who understood the word in context, continued using "mega" knowing that we meant 1024 when referring to computers.
We also realised that the hard disk manufacturers would continue to use out-of-context numbers, but felt that they may one day have to change, due to the ever increasing discrepancy making them look stupid.
But maybe that's just me?
It might SHOW that it's more (Score:4, Interesting)
I'm surprised (Score:5, Funny)
Gigabytes Song (Score:5, Funny)
Ten little gigabytes, waiting on line
one caught a virus, then there were nine.
Nine little gigabytes, holding just the date,
someone jammed a write protect, then there were eight.
Eight little gigabytes, should have been eleven,
then they cut the budget, now there are seven.
Seven little gigabytes, involved in mathematics
stored an even larger prime, now there are six.
Six little gigabytes, working like a hive,
one died of overwork, now there are five.
Five little gigabytes, trying to add more
plugged in the wrong lead, now there are four.
Four little gigabytes, failing frequently,
one used for spare parts, now there are three.
Three little gigabytes, have too much to do
service man on holiday, now there are two.
Two little gigabytes, badly overrun,
took the work elsewhere, now just need one.
One little gigabyte, systems far too small
shut the whole thing down, now there's none at all.
It works!!!! (Score:5, Funny)
It works, but be careful (Score:4, Interesting)
On a side note, a friend of mine tried this with his 20GB drive at around the same time and cranked it up to 32GB... Funny thing is, it still fully works. Amazing, isn't it? Just don't try it at home.
This is just the kind of article... (Score:5, Interesting)
I mean, tricking an OS into seeing the partition table twice hardly counts as doubling the actual drive capacity. Geeez.
Mmmm.. already dreaming of (Score: +4, top news) and (Score: -1, dupe)
Great..... (Score:4, Funny)
Increase your harddrive size by 150mb! Women don't like men with small harddrives. Trustmeeee and click this blind link and giveme your CCnfo and I promise thisvkpj&$(*)#Hf89h0eq2987y
How to do this in Linux (Score:5, Funny)
mkdir /mnt/disk1
mount /dev/hda1 /mnt/disk1
mkdir /mnt/disk2
mount /dev/hda1 /mnt/disk2
Tada! now when you `df` you'll have twice as much total space!
Fun with Norton (Score:4, Funny)
In other news (Score:5, Funny)
Users report that 486to586.exe actually works.
"It works, it really works" and "My machine feels much faster" were some of the comments from the happy users.
Karma whoring: But after some investigation, it was identified as a renamed copy of loadlin.exe
No cigar, but... (Score:5, Informative)
Disks today have no direct mapping from head, cylinder, and sector number to physical location on the platter. Rather, there is an internal table of the mapping, with room for remapping potentially weak sectors to unused space. When the head signal is getting close to being inconclusive, the just-read sector is written to a spare sector, the mapping table is updated, and the old sector is marked as bad.
If this article had shown how to manipulate the disk so that a number of the spare sectors could be used to enlarge the disk, it would have been interesting...
It's a trap! (Score:5, Informative)
Do not try to delete both partitions on the drive so you can create one large partition. This will not work. (This is because they are overlapping, and you won't see 'extra' space if you delete the overlap.)
You have to leave the two partitions separate in order to use them. Windows disk management will show erroneous data (again alluding to the error in reporting space),
in that it will say drive size = manufacturer's stated drive size, and then available size will equal ALL the available space with the recovered partitions included.
IBM Thinkpad (r-series) has hidden space (Score:5, Informative)
It sounds like fairly little, but on a 20GB drive that's 20%.
Usually there is some kind of backup image there, but it isn't really necessary (especially for us Linux people).
Using unpatched ghost (Score:5, Informative)
Ghost 2003 Build 2003.775 (Be sure not to allow patching of this software)
That's because the patched version fixes A BUG that allowed the "ever expanding miracle".
and didja know?! (Score:5, Funny)
And didja know you can re-zip all your zip files to make them ONE QUARTER their original size?!?!
The 'trick' is to create a corrupt partition table (Score:5, Insightful)
What probably happens here is: Ghost creates a special file, or at least writes to an empty part of your filesystem. Then, it writes a complete mini-OS to this 8 MB region.
It backs up the original MBR (which is the boot sector; it also holds the partition table) and writes its own MBR. This MBR has a partition table which includes an 8 MB partition. The boundaries of the partition are the boundaries of the special file.
Since this MBR isn't meant to be used in any normal operating environment, it's not quite legal. Some (not all; the MBR can only hold 4) of the original partitions still show up in the new MBR. Therefore, the 8 MB partition lies inside a much larger partition.
This probably confuses fdisk, which lets you create a partition directly after the 8 MB partition, but inside your original partition.
When you subsequently delete the 8 MB partition, fdisk is probably confused again. The end of the original partition is probably obscured by the new, overlapping partition. So it lets you create yet another partition, from the beginning of the disk to the start of the overlapping partition.
The end result is one large partition holding two small partitions inside it. This will exactly double your disk space. Just don't try to use it :-)
Nope (Score:5, Informative)
If it works at all, all it really accomplishes is to trick Windows into thinking the partition is bigger than it really is. There's NO WAY the drive could get any bigger in reality, since drive capacity is based on the number of sectors the drive reports to the computer, and that is a fixed, hard-coded number that can't be changed by Norton Ghost or any other utility. If you try to address sector maxcapacity+1, you'll just get an error message back from the drive; it won't actually do anything.
This is just a case of someone making sh** up in order to appear on the front page of hardware websites... A bit like participating in a 'reality show' on TV.
Not possible at all (Score:5, Informative)
On the subject of the Inquirer article.
The 200JB, or BB, or whatever, is clearly impossible. There is no hidden space on them to recover at all, let alone 310GB! I can't imagine what kind of idiocy provoked someone to believe that was even possible. Western Digital doesn't make drives with more than 3 platters! The 200GB Western Digitals are only available with 80GB platters. They only have 5 heads. It's therefore impossible to recover any capacity from them at all (5 x 40GB = 200GB).
Some of the other drives are known to short stroke their platters. That raises the more serious problem with this idiocy: modern drives store important information in those hidden inner areas of their platters (firmware, disk information, reallocated bad sectors), and who knows what you could be overwriting whenever you use that space. Put something down in the wrong place and the drive will never start again, or will corrupt data at certain sectors. It's a lottery ticket every time you write data to that partition. That's not what I call usable capacity.
Also, if this was working properly, the 80GB deskstar would yield:
either 90GB (+10GB) if it was a 180GXP (three heads on 60GB platters)
or 80GB (+0GB) if it was a 7K250 (2 heads on 80GB platters)
Anyone with most basic knowledge of hard drives should know that most of the numbers up there are simply impossible, not to mention simply ridiculous.
It's not that there aren't hard drives which are short stroked and sold at a capacity below what is theoretically accessible, but something is clearly wrong with this method in that it is simply inventing space that physically can't be there. Perhaps hard drive manufacturers are short stroking disks to the point that they are formatted with the capacity of drives with fewer platters or heads, but this could never explain the result of this method on the 200GB Western Digital drive. That drive is a known quantity. No matter what, even if they got a disk that was a short-stroked 6-head drive (which would make no sense), the maximum capacity is 250GB, not 510GB. You would need 7 platters to get that capacity with today's technology!
Tried it, broke it. (Score:5, Informative)
I followed the directions to the letter. I ended up with a 1GB drive! (On a supposedly 540MB drive. In the end, FDISK claimed 965 MB.) I filled up the first partition (with mp3s, naturally.) I then started filling up the second partition...
Surprise, surprise. It crashed halfway through copying the mp3s. Reboot? BZZZT! Windows 98 crashed a quarter of the way through loading. Starting up from a DOS disk, and my directory structure is all frooed up on the C partition. Filenames with random ASCII characters in them, inaccessible directories, all sorts of data corruption goodness. The D partition had correct names, though. (So my second batch of mp3s was probably fine.)
(Or, more specifically, do not try this on a hard drive you want to keep, or with data you want to keep.)
Re:How? Reliability? (Score:5, Interesting)
Partition a from 0 to 200 GB
Partition b from 1 to 200 GB etc.
You could probably get it to say almost any amount, but it wouldn't be usable space.
Some drives may have a little extra space, but not 70GB on an 80GB drive. No sane company is going to sell a 150GB drive as an 80GB drive, because they pay just as much to manufacture platters and heads no matter how they're used; the cost of the unused parts would come right out of their profits. Also, sometimes there is "unused space" used for the hard drive's firmware, or for relocating data from bad sectors.
Re:How? Reliability? (Score:5, Insightful)
You'd be correct if there was just one HDD maker in the marketplace, but that isn't so.
First off, let me say that I think this whole issue is bunk. But let's pretend for a moment.
Company A and Company B are both in the business of making and selling HDDs. Company A makes only 200GB HDDs, which cost them about $100 each to manufacture, and sells them for $200. Company B makes a 200GB HDD which costs them $100 to make and which they then sell for $200. But Company B also does this: they modify the firmware of some drives so that only 150GB are usable, and sell these "150GB" HDDs for $150.
Company A gets the business of people who are willing to shell out $200 for a 200GB HDD. Company A does not get the business of people who have a budget of less than $200 for their HDD purchase.
Company B gets the business of people who are willing to shell out $200 for a 200GB HDD, and the business of people who have a smaller budget.
By crippling the drive, they protect the value of their "high end" product while at the same time making some money on the "mid range" as well.
Company A's profits can be calculated as profit = X1 x P1, where X is the number of units sold, P is the profit margin on the unit, and the subscript is the model of the HDD.
Company B's profits can be calculated as profit = (X1 x P1) + (X2 x P2).
This same business principle is part of the reason why some 2.4GHz processors will run at 3GHz when overclocked.
I have no doubt that there could be a fair bit of space on a drive that is unavailable to the user, but double or triple capacity? Of course not!
LK
Re:Modder (Score:5, Interesting)
CPU overclocker - okay
Grapic card overclocker - okay
HD modder - ???
Actually there are guys that mod their harddrives [bp6.com].
Notice the less-than-clean working area, with metal particles from the dremeling everywhere. This is less than wise, as the probability that foreign material will get into the drive and act like sandpaper is high. I certainly wouldn't put a modded drive like this in a production machine.
I think modding is great, but this is where I draw the line.
Re:This isn't like overclocking your hard drive... (Score:5, Informative)
The MBR does not store the bad block information. The MBR hasn't stored bad block information since IDE became popular and people stopped being able to low-level format their hard drives. (No, a zero wipe is not a low-level format; it simply gives the firmware a good opportunity to reallocate developed bad sectors.)
The bad block information is stored in areas of the drive that are completely inaccessible to the outside world, most probably near the servo information on the same track as the actual bad sector. It is only accessed by the LBA mapper in the drive firmware.
The drive actually keeps count of how many sectors it has had to reallocate in its life, and how many sectors are waiting for a good moment to be reallocated. You can get this info from most drives by inspecting the SMART values. Bad sectors do not usually develop very often after the drive is shipped; you should not see this value above 1 or 2 in a young, properly working hard drive.
When the drive detects that a sector is going bad, it does not automatically reallocate it unless it can be correctly read (or ECC-corrected by the drive). This gives recovery software a slim chance of getting lucky and recovering the data from the bad block. The drive simply notes that the sector is going bad. If it is read correctly at some later point, the hard drive will automatically reallocate it somewhere else. Alternatively, if a write is issued to a sector awaiting reallocation, the drive will perform the reallocation then rather than wait for a good read.
Also, manufacturers still use aluminium platters in most drives. The embedded servo information is used to keep the drive tracking correctly regardless of the temperature of the drive (within specified limits).
Since you didn't read the article, nor any of the comments previously written, you are completely wrong about this magical utility. It is simply an exploitation of a bug in Norton Ghost that makes your hard drive look larger than it is by overlapping partitions. Attempt to write data to one partition and you will trash the data on the other.
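The SMART reallocation counter mentioned above (attribute 5, Reallocated_Sector_Ct) can be read with smartmontools' `smartctl -A`. A sketch of pulling it out of that output; the sample text below is made up for illustration, and column layout can vary between smartctl versions:

```python
# Sketch: extract the reallocated-sector count (SMART attribute 5)
# from smartctl -A style output. SAMPLE is an illustrative fake.

SAMPLE = """\
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      RAW_VALUE
  5 Reallocated_Sector_Ct   0x0033   100   100   005    Pre-fail  2
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   0
"""

def reallocated_count(smart_output: str) -> int:
    """Return the raw value of attribute 5, or raise if absent."""
    for line in smart_output.splitlines():
        fields = line.split()
        if fields and fields[0] == "5":
            return int(fields[-1])
    raise ValueError("attribute 5 not found")

print(reallocated_count(SAMPLE))
```

As the comment says, anything beyond a count of 1 or 2 on a young drive is a warning sign worth watching.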
Re:If it's real.... (Score:5, Funny)