Looking Back At Microsoft's Rocky History In Storage Tech

nk497 writes "Following the demise of Windows Home Server's Drive Extender, Jon Honeyball looks back on Microsoft's long, long list of storage disasters, from the dodgy DriveSpace to the Cairo Object File System, and on to the debacle that was WinFS."
  • Missing ADS (Score:5, Interesting)

    by EdIII ( 1114411 ) on Saturday February 12, 2011 @03:14AM (#35184154)

    I would have to include NTFS alternate data streams as well. It sounded like a good idea, but in practice it just left huge security holes.
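
    To make the hiding concrete, here's a minimal sketch of writing and reading an alternate data stream; it needs Windows and an NTFS volume, and the file and stream names are made up. Tools that only look at the default stream (including os.path.getsize below) won't see the extra data, which is exactly the security problem:

    ```python
    # Write and read an NTFS alternate data stream (Windows + NTFS only).
    import os

    base = r"C:\temp\report.txt"  # hypothetical path

    with open(base, "w") as f:
        f.write("visible contents")

    # "name:stream" opens a second stream attached to the same file.
    with open(base + ":hidden", "w") as f:
        f.write("payload that dir and Explorer won't show")

    print(os.path.getsize(base))   # reports the default stream's size only
    with open(base + ":hidden") as f:
        print(f.read())            # the hidden data is still there
    ```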

  • by Zombie Ryushu ( 803103 ) on Saturday February 12, 2011 @03:44AM (#35184244)

    Because Windows Server has Active Directory and Group Policies, and Linux doesn't. That's what sells Windows Server 2000/2003/2008. When there was a proposal to incorporate OpenLDAP auto-configuration policy into KDE, it was rejected. That is why Linux lost the war for the enterprise desktop.

  • by bertok ( 226922 ) on Saturday February 12, 2011 @04:10AM (#35184342)

    NTFS still doesn't have shared cluster filesystem capability. This has a bunch of flow-on effects, which basically means that Windows Server clusters are actually "Failover Clusters". The key part of that being the "Fail".

    Really basic services like file shares are impossible to make truly highly available on Windows, because neither NTFS nor SMB supports transparent fail-over of open files. There isn't even a way to do a clean administrative cluster fail-over, such as a drain-stop. The only option is forcibly closing all open files, potentially corrupting user data and forcing users to click through ugly error messages that their PCs may or may not recover from.

    I've tried things like Polyserve, which is a third-party filesystem that has proper cluster support, but it's still hamstrung by SMB. What's doubly ridiculous is that Microsoft basically re-engineered SMB for Vista, and called it "SMB2", but it still can't do clean fail-over!

    Similarly, SQL Server can't do proper failover of cluster nodes, nor can it do proper active-active database clusters that share a single database file, because of the limitations of the underlying filesystem. It can do active-active clustering for read-only databases, but that's only rarely useful.

    Even within Microsoft, workarounds had to be found to make some of their key products somewhat resilient. Both SQL Server and Exchange now use software mirroring for cleaner failover. Ignoring the cost of having to purchase twice as much disk, mirroring has other issues too, like becoming bottlenecked by the network speed, or limiting the features that can be used. For example, if your application performs queries across two databases in a single query, then you can't use mirroring, because there's no way to specify that the two databases should fail over as a group.
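
    To make that last limitation concrete, here's a hedged sketch of such a cross-database query; the server, database, and table names are hypothetical, and it assumes pyodbc plus a SQL Server ODBC driver. With mirroring, Sales and Billing each fail over independently, so after a failover one side of this join can point at a stale or unavailable copy:

    ```python
    # Hypothetical cross-database join; mirroring can't fail these over together.
    import pyodbc  # assumes a SQL Server ODBC driver is installed

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};SERVER=sql01;Trusted_Connection=yes"
    )
    cur = conn.cursor()
    cur.execute("""
        SELECT o.order_id, i.amount
        FROM   Sales.dbo.orders     AS o
        JOIN   Billing.dbo.invoices AS i ON i.order_id = o.order_id
    """)
    for row in cur.fetchall():
        print(row.order_id, row.amount)
    ```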

    VMware has become a multi-billion dollar company in a few short years because a single non-clustered Windows Server on a VMware cluster is more robust than a cluster of Windows Servers!

      "Enterprise Edition" my ass.

  • by FuckingNickName ( 1362625 ) on Saturday February 12, 2011 @05:03AM (#35184540) Journal

    Would you mind explaining carefully and precisely why you think that OS X's filesystem (among others) isn't prone to fragmentation? It's true that many filesystems incorporate techniques to reduce the likelihood and effect of fragmentation, but it still happens, and it's still possible to optimise the position of data on rotating media - as any good defragmenter will do.

    Filesystems which claim not to suffer from fragmentation concern me more, because people end up not noticing the decrease in performance over time. For a machine not in 24/7 operation, a scheduled defrag run is always a good idea; otherwise, slowly doing the same during less busy moments should be mandatory.
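
    If you want to check the parent's claim yourself on Linux, extent counts are a decent proxy for fragmentation. This sketch shells out to filefrag from e2fsprogs (an assumption: it's installed, and the path exists); a file in one extent is contiguous, while hundreds of extents on rotating media mean extra seeks:

    ```python
    # Count how many extents a file occupies, using filefrag (e2fsprogs).
    import re
    import subprocess

    def extent_count(path: str) -> int:
        out = subprocess.run(
            ["filefrag", path], capture_output=True, text=True, check=True
        ).stdout
        # filefrag prints e.g. "/var/log/syslog: 37 extents found"
        match = re.search(r"(\d+) extents? found", out)
        return int(match.group(1)) if match else 0

    print(extent_count("/var/log/syslog"))  # example path, adjust as needed
    ```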

  • by itsdapead ( 734413 ) on Saturday February 12, 2011 @05:26AM (#35184608)
    How about getting the directory separator wrong? This has indirectly led to a generation of TV and radio presenters having to say "forward slash" when reading out URLs...
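
    For what it's worth, the backslash is more DOS-era convention than hard requirement - most Win32 path APIs accept "/" too, and Python's pathlib normalizes either form. A tiny pure-path illustration (runs on any OS):

    ```python
    # PureWindowsPath accepts forward slashes and renders backslashes.
    from pathlib import PureWindowsPath

    p = PureWindowsPath("C:/Users/demo/report.txt")
    print(p)        # C:\Users\demo\report.txt
    print(p.parts)  # ('C:\\', 'Users', 'demo', 'report.txt')
    ```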
  • ZFS (Score:0, Interesting)

    by Anonymous Coward on Saturday February 12, 2011 @06:50AM (#35184888)

    Or better yet, ZFS. The only free fs (until btrfs is ready for prime time) that prevents data corruption and bit rot, supports single/double/triple parity raid, caching, etc.
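
    For anyone wondering what "single parity" buys you: the parity block is just the XOR of the data blocks, so any one lost disk can be rebuilt from the survivors; raidz2/raidz3 add further independent equations. A toy sketch of the recovery math (not ZFS's actual on-disk layout):

    ```python
    # Single-parity recovery: parity = XOR of data blocks, so any one
    # missing block equals the XOR of all surviving blocks plus parity.
    from functools import reduce

    disks = [b"\x10\x20\x30", b"\x01\x02\x03", b"\xff\x00\xaa"]  # data blocks
    parity = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*disks))

    lost = 1  # pretend disk 1 died
    survivors = [d for i, d in enumerate(disks) if i != lost] + [parity]
    rebuilt = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*survivors))

    assert rebuilt == disks[lost]  # the missing block is recovered exactly
    ```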

  • Re:LVM2 or raid? (Score:2, Interesting)

    by batkiwi ( 137781 ) on Saturday February 12, 2011 @06:54AM (#35184908)

    - What happens if you lose a disk? So you look to install RAID.
    - What if all your disks aren't the same size?
    - What if you want to upgrade just one disk? Or add a new disk? (I know both are possible with the RAID-5 tools, but adding new disks takes HOURS, if not DAYS, depending on the size of your array... not something I'd call usable for a home user.)

    MS drive extender and Unraid both have a home-user solution that open source does not match right now. I hope this changes soon!

  • Re:Missing ADS (Score:4, Interesting)

    by bit01 ( 644603 ) on Saturday February 12, 2011 @07:27AM (#35185054)

    ADS was introduced for one reason: to allow NT servers to support Apple clients, without the server needing to do some crazy transforms

    Umm, ADS is doing crazy transforms. Some would say giving it a different name and using different OS calls to access the different data is worse than using different names and the same OS call.

    Some people, programmers or otherwise, can't tell the difference between giving something a different name/label and actually doing something different.

    This problem is endless in the computer software industry, mainly because of the amorphous nature of software: redoing OS apps inside a web browser, reinventing file systems inside databases, or reinventing hierarchical file systems inside XML, and calling it all "new" and "innovative". While there is some invention going on in web browsers, databases and XML, most of it is just reinventing the wheel. Such software is often necessary for compatibility reasons, but recognizing that it is a compatibility layer, and putting that layer at the appropriate interface, is the important skill.

    Or, in other words: meta-data is data. Sorry, but until you understand that in your bones, you are not a decent programmer.

    ---

    Has the Least Patentable Unit reached zero yet?

  • by kantos ( 1314519 ) on Saturday February 12, 2011 @08:48AM (#35185400) Journal

    Honestly... this argument is stupid. Group Policy arose because on Windows everything is a COM object with an ACL, and it was nigh impossible to provide even a modicum of security without some sort of system policy at a high level. Linux of course doesn't need this, because it operates in a fundamentally different manner where everything is a file, and the file system permissions (group based) determine whether a file is executable or not. Thus the Linux kernel doesn't need to know what specific COM+ handler needs to be loaded, but rather whether a file is a supported executable format, and what to do from there. Both systems have fundamental advantages: Linux is deceptively simple, leading to command-line power that is daunting for many users, whereas Windows can easily be extended using COM and the registry. (The registry was never designed to hold most of the crap that people shove in there... it was designed to be a central repository of information for COM objects.)

    If anything this model shows MS's lack of foresight into the importance of networking and their focus on the single standalone box.
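
    A minimal sketch of the registry-as-COM-repository point above: in-process COM servers register the DLL that implements them under HKEY_CLASSES_ROOT\CLSID\{guid}\InprocServer32. The GUID below is a placeholder, not a real registration; Windows only:

    ```python
    # Look up which DLL implements a COM class, per its registry registration.
    import winreg

    clsid = "{00000000-0000-0000-0000-000000000000}"  # hypothetical placeholder
    key_path = rf"CLSID\{clsid}\InprocServer32"

    try:
        with winreg.OpenKey(winreg.HKEY_CLASSES_ROOT, key_path) as key:
            dll, _ = winreg.QueryValueEx(key, "")  # default value is the DLL path
            print("COM server implemented in:", dll)
    except FileNotFoundError:
        print("No such CLSID registered (expected for this placeholder GUID)")
    ```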
