
Looking Back At Microsoft's Rocky History In Storage Tech

nk497 writes "Following the demise of Windows Home Server's Drive Extender, Jon Honeyball looks back on Microsoft's long, long list of storage disasters, from the dodgy DriveSpace to the Cairo Object File System, and on to the debacle that was WinFS."
  • Missing ADS (Score:5, Interesting)

    by EdIII ( 1114411 ) on Saturday February 12, 2011 @03:14AM (#35184154)

    I would have to include NTFS alternate data streams as well. It sounded like a good idea, but in practice it just left huge security holes.

    • Re:Missing ADS (Score:4, Insightful)

      by TheRaven64 ( 641858 ) on Saturday February 12, 2011 @06:45AM (#35184868) Journal
      ADS was introduced for one reason: to allow NT servers to support Apple clients, without the server needing to do some crazy transforms (like MacOS does when writing to a FAT drive, which make it trivial to break the files if you touch them with a non-Mac system). The problem was that most of the rest of the system was not updated - it was an operating system feature written for a single application, which is a pretty good way of introducing security holes.
      • Re:Missing ADS (Score:4, Interesting)

        by bit01 ( 644603 ) on Saturday February 12, 2011 @07:27AM (#35185054)

        ADS was introduced for one reason: to allow NT servers to support Apple clients, without the server needing to do some crazy transforms

        Umm, ADS is doing crazy transforms. Some would say giving it a different name and using different OS calls to access different data is worse than using different names and the same OS call to access the different data.

        Some people, programmers or otherwise, can't tell the difference between giving something a different name/label and actually doing something different.

        This problem is endless in the computer software industry, mainly because of the amorphous nature of software: e.g. redoing OS apps inside a web browser, or reinventing file systems inside databases, or reinventing hierarchical file systems inside XML, and calling it all "new" and "innovative". While there is some invention going on in web browsers, databases and XML, most is just reinventing the wheel. Such software is often necessary for compatibility reasons, but recognizing that it is a compatibility layer, and putting that compatibility layer at the appropriate interface, is the important skill.

        Or, in other words: meta-data is data. Sorry, but until you understand that in your bones, you are not a decent programmer.

        ---

        Has the Least Patentable Unit reached zero yet?

        • Umm, ADS is doing crazy transforms. Some would say giving it a different name and using different OS calls to access different data is worse than using different names and the same OS call to access the different data.

          Correct me if I'm wrong, but can't applications access alternate streams by doing something as simple as accessing a different filename?
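
          They can: on NTFS a named stream is addressed simply by suffixing the filename, so the ordinary file APIs work. A minimal sketch (Windows-only; the file and stream names are hypothetical):

              # NTFS addresses a named stream as "filename:streamname".
              # Python's open() hands the path straight to CreateFile,
              # so no special API is needed.
              with open("notes.txt", "w") as f:
                  f.write("visible contents")

              with open("notes.txt:hidden", "w") as f:  # alternate data stream
                  f.write("hidden contents")

              # Explorer and plain dir report only the main stream's size.
              with open("notes.txt") as f:
                  print(f.read())          # -> visible contents
              with open("notes.txt:hidden") as f:
                  print(f.read())          # -> hidden contents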

        • I'm actually unsure. Isn't it like saying format info is like the text it formats, or maybe even that a program is like the data it accesses?

        • by nmb3000 ( 741169 )

          Or in other words meta-data is data. Sorry, but until you understand that in your bones you are not a decent programmer.

          Metadata is data, yes, but it is not the data. Users largely don't care if metadata is lost because a file is copied to an incompatible filesystem on a flash drive, synced to Dropbox, emailed to a friend, or maybe even printed out.

          Things like ADS provide a way to store data about a file which doesn't have an integrated mechanism to store metadata (e.g., EXIF). Keeping it in a separate name and using an alternative API call makes sense from a compatibility and a simplicity point-of-view. Should old softwar

          • by bit01 ( 644603 )

            Things like ADS provide a way to store data about a file which doesn't have an integrated mechanism to store metadata (e.g., EXIF). Keeping it in a separate name and using an alternative API call makes sense from a compatibility and a simplicity point-of-view.

            No it doesn't. It increases the complexity of every program that deals with the file, makes both the files and the accessing programs less portable, hides things from the user they may need to know (mystery program behavior anyone?) and generally jus

    • by nmb3000 ( 741169 )

      I would have to include NTFS alternate data streams as well. It sounded like a good idea, but in practice it just left huge security holes.

      Ignoring the fact that alternate data streams are incredibly useful, how exactly are they "huge security holes"? That doesn't even make a little sense.

      The only argument might be that it makes it easier to hide things from a user, but if that's the case you could just bury a file somewhere in the filesystem, protect the parent directory with ACLs (so it can't be found by searching the filesystem), and be done with it.

      ADS is not a security problem. If you have rights to the ADS, you have rights to the main file.

  • I fail to see why the fact that NTFS is still around essentially unchanged is a problem. It serves its purpose well. While MS's internal factionalism has hurt their position in the massive storage arena, the continued stamina of NTFS is a good thing.

    • Isn't it that it's not open source? Someone more knowledgeable than me on the subject can expand on this, I'm sure. I just know that I recently started trying to learn Linux, and it not being able to read my Windows partitions is f'n annoying.
      • by SuricouRaven ( 1897204 ) on Saturday February 12, 2011 @03:52AM (#35184268)
        It's not just closed source, but a closed standard. Microsoft keeps the specification officially secret (though I believe you can see it if you agree to an agreement saying you won't disclose or actually implement it). That Linux can use NTFS is a tribute to many hours of dedicated reverse-engineering and various tidbits of information that escaped until a full picture could be assembled.
      • by rts008 ( 812749 )

        Try a more recent version of Ubuntu [from your comment further down].

        I run Kubuntu, which is Debian-based Ubuntu with a KDE user interface, instead of Ubuntu's default Gnome desktop environment/user interface.
        Full read and write ability for NTFS has been present in the default install since 8.04, IIRC.
        I remember downloading NTFS-3G from the repository in 6.06[?] Dapper Drake for read-only, but don't remember having to do so with 8.04.

        Currently I am running 10.10, and the default install has read-write NTFS support.

      • Many distros only enable full read/write support to Windows partitions for the root user.

        After all, letting just anyone delete all those files Windows hides from everyone shouldn't be made too easy. But when someone complains that Windows is unable to delete a file or directory tree, it's handy to boot a Linux live CD, log in as root, and delete the files.
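
        For illustration, that live-CD rescue boils down to mounting the NTFS partition read-write with the ntfs-3g driver. A minimal sketch, assuming a hypothetical device /dev/sda2, mount point /mnt/windows, and target path, run as root:

            import subprocess

            DEVICE = "/dev/sda2"         # hypothetical NTFS partition
            MOUNTPOINT = "/mnt/windows"  # hypothetical mount point

            # Mount read-write via the ntfs-3g FUSE driver.
            subprocess.run(["mount", "-t", "ntfs-3g", DEVICE, MOUNTPOINT], check=True)
            # Remove the file or tree Windows refused to delete (hypothetical path).
            subprocess.run(["rm", "-rf", f"{MOUNTPOINT}/stubborn_dir"], check=True)
            # Unmount cleanly so Windows sees a consistent filesystem afterwards.
            subprocess.run(["umount", MOUNTPOINT], check=True)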

    • MS actually has a very good history with in-house designed file systems. NTFS is pretty damn good; it even accommodates classic Mac resource forks (from the days NT had AppleTalk support). HPFS was very good as well... one of Microsoft's greatest contributions to OS/2 before the IBM split-up. Heck, look at Apple's HFS+: it's been around since 1997 and is really just an extension of the circa-1985 HFS file system.
    • by bertok ( 226922 ) on Saturday February 12, 2011 @04:10AM (#35184342)

      NTFS still doesn't have shared cluster filesystem capability. This has a bunch of flow-on effects, which basically means that Windows Server clusters are actually "Failover Clusters". The key part of that being the "Fail".

      Really basic services like file shares are impossible to make truly highly available using Windows, because neither NTFS nor SMB supports transparent fail-over of open files. There isn't even a way of doing a clean administrative cluster fail-over, such as a drain-stop. The only option is forcibly closing all open files, potentially corrupting user data and forcing users to click through dirty error messages that their PCs may or may not recover from.

      I've tried things like Polyserve, which is a third-party filesystem that has proper cluster support, but it's still hamstrung by SMB. What's doubly ridiculous is that Microsoft basically re-engineered SMB for Vista, and called it "SMB2", but it still can't do clean fail-over!

      Similarly, SQL Server can't do proper failover of cluster nodes, nor can it do proper active-active database clusters that share a single database file, because of the limitations of the underlying filesystem. It can do active-active clustering for read-only files, but that's only rarely useful.

      Even within Microsoft, workarounds had to be found to make some of their key products somewhat resilient. Both SQL Server and Exchange now use software mirroring for cleaner failover. Ignoring the cost of having to purchase twice as much disk, mirroring has other issues too, like becoming bottlenecked by the network speed, or limiting the features that can be used. For example, if your application performs queries across two databases in a single query, then you can't use Mirroring, because there's no way to specify that the two databases should fail over as a group.

      VMware has become a multi-billion dollar company in a few short years because a single non-clustered Windows Server on a VMware cluster is more robust than a cluster of Windows Servers!

        "Enterprise Edition" my ass.

      • by DarkOx ( 621550 )

        You are correct about many things you point out. I don't see mirroring as a problem if you need an HA environment. Frankly, if you are using a shared-storage cluster, be it active-active or failover, you still have a single point of failure: the storage. That is kind of a deal-breaker if you are looking for five nines.

        VMware clusters do a good job but are only really HA if you have the right kind of storage to back them up or are remotely replicating them (which is not going to give you clean failover either

  • Drive Letters (Score:5, Insightful)

    by pushing-robot ( 1037830 ) on Saturday February 12, 2011 @03:36AM (#35184224)

    IMHO, Microsoft's worst offense in storage is drive letters, which provide no information about either the type and structure of the underlying disks or the data they contain, and which have caused untold headaches from applications (and the OS itself) being reliant on paths that are arbitrarily assigned, subject to change, and often out of the user's control.

    Admittedly, Microsoft didn't invent the system, but the fact that drive letters still exist in 2011 is entirely their fault.

    • That would be very hard to change, as so many applications would need to be altered.
      • That would be very hard to change, as so many applications would need to be altered.

        ...but surely it is not beyond the wit of man to emulate drive letters (ln -s /D: /home, anybody?) for legacy apps while dragging the rest of the file system into the 1980s?

        Instead, the more modern "logical" file system in Windows (as used by the desktop) still feels like an emulation sitting on top of drive letters, and last time I looked required you to use a proprietary GUI.

        Still - it could be worse - with all the ex-DEC people involved in Windows NT, they could have gone for the VAX filing system. Ac

      • I'm no application developer, but don't most apps rely on environment variables like %programfiles% and %userprofile% for their paths rather than drive letters?
        And reading and writing to files on the network seems to work fine without a drive letter, I can access a shared folder on this computer as \\ComputerName\FolderName from any app, though I prefer to map it to Z: since it's shorter.
        So what exactly would be the problem?

        I remember some games in the 90s that used "C:\Program Files\..." as their default install path
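
        For illustration, resolving well-known locations from the environment instead of hard-coding a drive letter might look like this (a sketch; the app and file names are hypothetical):

            import os

            # Windows sets %PROGRAMFILES% and %USERPROFILE% itself, so an app
            # never needs to assume "C:\".
            app_dir = os.path.expandvars(r"%PROGRAMFILES%\MyApp")          # hypothetical app
            config = os.path.join(os.environ["USERPROFILE"], "myapp.ini")  # hypothetical file

            # UNC paths need no drive letter at all:
            share = r"\\ComputerName\FolderName\report.txt"
            print(app_dir, config, share, sep="\n")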

        • I would love to introduce you to the internal developers at our place.

          "What do you mean I can't write a temporary PDF to c:\ unless I'm administrator? Where *am* I supposed to put it!!!!"

    • by itsdapead ( 734413 ) on Saturday February 12, 2011 @05:26AM (#35184608)
      How about getting the directory separator wrong? This has indirectly led to a generation of TV and radio presenters having to say "forward slash" when reading out URLs...
      • Is it wrong or simply different? AFAIK, the first DOS did not have directories, so they were free to choose / as the option prefix. Of course, they later added fancy things like directories and multiuser capabilities, but Windows users still suffer from having to be backwards compatible with a directoryless OS.
        • by Sancho ( 17056 ) *

          Yeah. I thought the real problem was choosing slash as the character signifying an option in so many of their utilities.
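
          Incidentally, the Windows file APIs generally accept both separators; it was the DOS-style "/" option switch that pushed "\" into command syntax. A small pathlib sketch (hypothetical path):

              from pathlib import PureWindowsPath

              # Both spellings name the same path; pathlib normalizes
              # the display form to backslashes.
              a = PureWindowsPath("C:/Users/example/file.txt")
              b = PureWindowsPath(r"C:\Users\example\file.txt")
              print(a == b)  # True
              print(a)       # C:\Users\example\file.txt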

      • by fnj ( 64210 )

        It's a bit tedious to keep pointing this out, but actually nothing FORCES stupid people talking to other stupid people to say "forward slash." "Slash" IS "forward slash". "Backslash" is always "backslash." There is no reason for confusion, mental defects such as dyslexia aside.

  • by devent ( 1627873 ) on Saturday February 12, 2011 @06:10AM (#35184728) Homepage

    I have used LVM2 for two years now with my various notebooks and netbooks. They have had various crashes and power-downs, but I never lost one bit of data. My small home server uses LVM2 as well with my 3 USB hard disks, and serves videos and music to my home.

    With my notebooks and netbooks I can grow or shrink my root or home partition, and with my server [linux-onlineshop.de] I can just plug in another USB hard disk and grow my partition. No fuss, not complicated at all, and it works all the time.

    All that for free: just download Fedora, Debian or Ubuntu and install it in 10 minutes. If you want, set up an FTP server, an Apache server or whatever you like. Or you get what you pay for with Windows, for $100 or more.
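
    For reference, the plug-in-a-disk-and-grow workflow described above maps to a handful of LVM2 commands. A minimal sketch via subprocess, assuming a hypothetical volume group vg0, logical volume home, and new disk /dev/sdc1 (run as root):

        import subprocess

        def run(*cmd):
            """Run an LVM command, raising on failure."""
            subprocess.run(cmd, check=True)

        # Hypothetical names: volume group "vg0", LV "home", new disk /dev/sdc1.
        run("pvcreate", "/dev/sdc1")         # initialize the disk for LVM
        run("vgextend", "vg0", "/dev/sdc1")  # add it to the volume group
        run("lvextend", "-r", "-l", "+100%FREE", "/dev/vg0/home")
        # -r also grows the filesystem; +100%FREE uses all the new space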

    • Re: (Score:2, Interesting)

      by batkiwi ( 137781 )

      - What happens if you lose a disk?
      So you look to install RAID.
      - What if all your disks aren't the same size?
      - What if you want to upgrade just one disk? Or add a new disk? (I know both are possible with the RAID-5 tools; see the sketch below. But adding new disks takes HOURS, if not DAYS, depending on the size of your array... not something I'd call usable for a home user.)

      MS Drive Extender and unRAID both have a home-user solution that open source does not match right now. I hope this changes soon!
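
      For what it's worth, the grow-a-RAID-5 path mentioned above looks roughly like this on Linux md; the reshape step is what takes hours or days. A sketch with hypothetical device names, run as root:

          import subprocess

          # Hypothetical names: array /dev/md0 growing from 3 to 4 disks.
          subprocess.run(["mdadm", "--add", "/dev/md0", "/dev/sdd1"], check=True)
          # The reshape redistributes data and parity across all members;
          # this is the slow part that can run for hours or days.
          subprocess.run(["mdadm", "--grow", "/dev/md0", "--raid-devices=4"],
                         check=True)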

      • by devent ( 1627873 )

        MS Drive Extender and

        Well, not anymore, do they?

        What happens if you lose a disk?

        Why, what happens? We are talking about MS Drive Extender, and LVM2 has offered such a feature in every major Linux distribution for 13 years, without the risk of losing any data.

        If MS had just implemented LVM2 for Windows, you would now have a nice space-expansion feature which is proven to work.

        RAID is not a backup solution [2brightsparks.com] anyway, so if you care about your data you need to have a backup strategy.

        • Drive Extender gracefully degrades: any single drive contains a subset of the complete data and can be read individually.

          Drives used with LVM mirroring cannot, and the array will catastrophically degrade if any two drives carrying a pair of mirrored extents get corrupted.

  • by v1 ( 525388 ) on Saturday February 12, 2011 @10:43AM (#35186064) Homepage Journal

    The article does a fairly thorough job of roasting MS over their lack of internal coordination, outlining how one wing starts to work on a new technology while other departments that need to get on board "wanted nothing to do with it". In any well-managed company, a department that refuses to get on board with a new technology gets hell rained down on them from above until they fall into line.

    Take Apple's "Spotlight" meta-search feature, for example. Imagine if the team working on the AddressBook app "wanted nothing to do with it". There'd be hell to pay, and either team managers would change their tune or get replaced. In a large project like an operating system, lack of cooperation simply cannot be tolerated. But it seems that MS is just so large at this point that it doesn't have the power to guarantee its different projects cooperate fully with each other.

    I have read from time to time that there was this sort of internal battle going on at MS, where different projects worked in isolation and there was infighting, but I'd never really seen the effects of these issues before. It's interesting to see the result. This appears to be an upper-management or communications problem. Whoever is above the Outlook team needs to be asking that team's manager, "So how's integration with Drive Extender going?" If they get foot-dragging, complaining, and brush-offs, that manager needs to be dragged into the director's office for some "re-education" on cohesive development. If the director isn't asking these questions, THEY need to be replaced. Something of this sort isn't working properly at MS.

    It's like a construction project. You've got all these separate units coming in, doing electrical, plumbing, structural, heating, floors. The general contractor has to make sure these people work together. Refusing to cooperate with one of the other groups simply cannot be tolerated, and it's the GC's responsibility to make sure everything works smoothly. Problems between groups need to be brought to the GC, and the GC needs to settle them immediately. Otherwise the finished building has serious problems. You can't just turn over the house to the owner and say, "Oh, by the way, we removed the heating from the bathroom. The plumbers wouldn't route the pipes around where the heating ducts needed to go. You don't REALLY need heat in such a small room anyway." But that's the sort of thing that MS is pulling from time to time.

    I think MS is just taking the cowardly way out. "We can't control our own internal development processes well enough to get this feature integrated properly with the rest of our technology, so we're just canceling it." The article states simply that companies like Dropbox and Data Robotics (makers of Drobo) that have only one core technology are forced to "get it right", because dropping it simply isn't an option. MS seems to think it has the option to just drop any feature at any time on a whim if it's not going well, instead of going to the additional effort of kicking some butts and making it work. It's not like it's an impossible task. This is doable. They just lack the necessary internal management to pull it off consistently.

    Bottom line: At MS, with any new project, unless all the key players decide to get on board, the project is doomed.

    In other words, the Outlook team manager should not be capable of tanking Drive Extender. But they are, and they did. And THAT is a serious internal management problem that MS has demonstrated over and over. Something's gotta change.
