
PVFS2 - a High-Performance Parallel File System

neillm78 writes "As part of the development team, we're announcing PVFS2 version 1.0 here in Pittsburgh at the SC2004 conference! PVFS2 is a GPL/LGPL-licensed parallel file system for cluster-based applications. It logically groups any number of storage servers into a coherent file system for use by client nodes, and is specifically tailored for efficient access to large shared files. PVFS2 supports access via an MPI-IO interface for high-performance parallel applications, but you can still mount it like a regular GNU/Linux file system for traditional serial applications and management. The PVFS2 project is conducted jointly by The Parallel Architecture Research Laboratory at Clemson University and The Mathematics and Computer Science Division at Argonne National Laboratory. Please feel free to give it a try!"
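
For a sense of what the MPI-IO path mentioned above looks like, here is a minimal sketch of a parallel write to a single shared file, with each rank writing its own contiguous block collectively. The file path is a hypothetical PVFS2-backed location, and error checking is omitted.

#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Each rank fills and writes its own block of the shared file. */
    enum { COUNT = 1024 };              /* ints per rank */
    int buf[COUNT];
    for (int i = 0; i < COUNT; i++)
        buf[i] = rank;

    MPI_File fh;
    /* "/mnt/pvfs2/shared.dat" is a hypothetical PVFS2-backed path. */
    MPI_File_open(MPI_COMM_WORLD, "/mnt/pvfs2/shared.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY,
                  MPI_INFO_NULL, &fh);

    MPI_Offset offset = (MPI_Offset)rank * COUNT * sizeof(int);
    MPI_File_write_at_all(fh, offset, buf, COUNT, MPI_INT,
                          MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}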
  • by brsmith4 ( 567390 ) <brsmith4@gmail. c o m> on Tuesday November 09, 2004 @10:08PM (#10772815)
    PVFS (in its first incarnation), despite some instability (due more to the fact that our first cluster was cheap COTS hardware), really helped drive down the load on our clusters by removing the need to perform NFS writes to a single head node for scratch space. The setup is extremely simple and the code base is really small.

    I plan on evaluating PVFS2 for our new clusters along with Lustre and GFS although I have heard nothing about the latter two operating over the MPI-ROMIO subsystem (which would definitely offer a performance increase).
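
    For reference, striping behavior can be suggested through standard MPI-IO hints when opening a file via ROMIO; a rough sketch follows. The hint values are purely illustrative, and whether a given hint is honored depends on the underlying ADIO driver (the "pvfs2:" filename prefix is ROMIO's convention for forcing a particular driver).

    #include <mpi.h>

    /* Open a scratch file with suggested striping parameters.
     * "striping_factor" and "striping_unit" are reserved MPI-IO hints;
     * the values here are made up for illustration. */
    static MPI_File open_scratch(MPI_Comm comm, const char *path)
    {
        MPI_Info info;
        MPI_Info_create(&info);
        MPI_Info_set(info, "striping_factor", "8");     /* I/O servers to stripe over */
        MPI_Info_set(info, "striping_unit", "1048576"); /* 1 MiB stripe size */

        MPI_File fh;
        MPI_File_open(comm, path, MPI_MODE_CREATE | MPI_MODE_RDWR, info, &fh);
        MPI_Info_free(&info);
        return fh;
    }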
    • This sounds like a very advanced version of XFS?! Are they saying MANY people can rm and cp and write to the exact same point in the filesystem simultaneously? Looking at the specs, I am struggling to see what's special.

      • by brsmith4 ( 567390 ) <brsmith4@gmail. c o m> on Tuesday November 09, 2004 @11:56PM (#10773545)
        It's a parallel file system, not a drop-in replacement for local filesystems like XFS or ext3. It runs across multiple hosts, striping the data over each host. Having multiple I/O hosts in the array also helps distribute reads and writes across multiple nodes, reducing the overhead of those operations.

        This is like "Distributed NFS"; although that description does it a huge injustice, it should help get the point across.
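
        As a toy illustration of the striping idea, here is which I/O server would hold the byte at a given file offset under a simple round-robin distribution. The stripe size and server count are made-up parameters; PVFS2's actual data distribution modules are pluggable.

        #include <stdio.h>

        /* Round-robin striping: stripe N of the file lives on server N mod num_servers. */
        static int server_for_offset(long long offset, long long stripe_size, int num_servers)
        {
            long long stripe_index = offset / stripe_size;
            return (int)(stripe_index % num_servers);
        }

        int main(void)
        {
            const long long stripe = 64 * 1024;   /* 64 KiB stripes (illustrative) */
            const int servers = 4;                /* illustrative server count */
            for (long long off = 0; off < 5 * stripe; off += stripe)
                printf("offset %lld -> server %d\n", off,
                       server_for_offset(off, stripe, servers));
            return 0;
        }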
        • So it works over the network without needing a network block device layer?

          That would mean it should compete on the level of OpenAFS, Intermezzo and CODA for fault tolerant network filesystems -- except it would have internode locking which the others don't at the moment.

          That would also mean it doesn't directly compete at the same level as GFS (which is targeted at configurations of servers connected by a SAN or similar).

          Is this project set on integrating with the mainline kernel? What has/will happen on
          • That would mean it should compete on the level of OpenAFS, Intermezzo and CODA for fault tolerant network filesystems -- except it would have internode locking which the others don't at the moment.

            That's an interesting thought, but at no time have we ever thought of ourselves as a replacement for those file systems. The ones you mention are general purpose file systems whereas PVFS2 is meant to be a fast file system for parallel applications.

            except it would have internode locking which the others don
            • "That's an interesting thought, but at no time have we ever thought of ourselves as a replacement for those file systems. The ones you mention are general purpose file systems whereas PVFS2 is meant to be a fast file system for parallel applications."

              Yes, I realize that now. Everything except the last paragraph of my post was speculation; the last paragraph, written after reviewing the web site a bit, was there to correct those speculations.

              "I'm not sure what you mean here. We have no loc
  • It's Linux! (Score:3, Insightful)

    by egarland ( 120202 ) on Tuesday November 09, 2004 @10:21PM (#10772916)
    The kernel is called Linux. Yeah, you may compile it with GCC, but come on, people! It's a Linux-specific kernel module. Leave the GNU/ out of it.

    That said, nice job! I love to see the capabilities of Linux expanded in new directions like this. Cool work. I wish I had time to work on cool projects [sourceforge.net] like that.
    • hello friend. i'm a developer on the project and i firmly stand by calling a distribution "GNU/Linux" rather than Linux. the kernel is another story, but then again, we run on clusters based on distros -- not on kernels. cheers!

      -Neill;
      • Right, but I think the parent's point is that the filesystem isn't GNU/Linux-specific; it's a Linux kernel module. Calling distros GNU/Linux is fine; while I don't share your insistence on calling them that (mainly out of laziness), it's accurate in the sense that that's what they are. But calling a Linux filesystem driver "GNU/Linux" is incorrect.

        Sorry. I'm just a nitpicky, pedantic bastard.

        Having said that, I skimmed through the info on the project website, and it looks like some interesting stuff. At the
  • by Ayanami Rei ( 621112 ) * <rayanami AT gmail DOT com> on Tuesday November 09, 2004 @10:29PM (#10772973) Journal
    I found that gigabit NFS was usually much faster with files smaller than 1 MB, I guess because either way you still had to go through one server to set up each FS operation. NFS has also been around longer; the Sun implementation was hard to beat.

    Has the meta-data server been sped up at all, or made distributed with some kind of coherency-synchronization backend?
    • Has the meta-data server been sped up at all, or made distributed with some kind of coherency-synchronization backend?

      From the PVFS2 Guide [pvfs.org]:

      The new design has a number of important features, including:

      * modular networking and storage subsystems,
      * powerful request format for structured non-contiguous accesses,
      * flexible and extensible data distribution modules,
      * distributed metadata,
      * stateless servers and clients (no locking subsystem),
      * explicit concurrency support,
      * tunable
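
      As an illustration of the "structured non-contiguous accesses" item above, here is a sketch of a strided read that uses an MPI derived datatype as the file view, so each rank pulls every nprocs-th block of a shared file in one collective call. Block counts, sizes, and the file path are arbitrary.

      #include <mpi.h>

      /* Read this rank's 16 interleaved blocks of 256 ints each.
       * buf must hold at least NBLOCKS * BLOCKLEN ints. */
      static void read_strided(const char *path, int rank, int nprocs, int *buf)
      {
          enum { NBLOCKS = 16, BLOCKLEN = 256 };

          /* File layout: rank r owns blocks r, r+nprocs, r+2*nprocs, ... */
          MPI_Datatype filetype;
          MPI_Type_vector(NBLOCKS, BLOCKLEN, BLOCKLEN * nprocs, MPI_INT, &filetype);
          MPI_Type_commit(&filetype);

          MPI_File fh;
          MPI_File_open(MPI_COMM_WORLD, path, MPI_MODE_RDONLY, MPI_INFO_NULL, &fh);

          /* Displacement shifts the view to this rank's interleaved slot. */
          MPI_Offset disp = (MPI_Offset)rank * BLOCKLEN * sizeof(int);
          MPI_File_set_view(fh, disp, MPI_INT, filetype, "native", MPI_INFO_NULL);

          MPI_File_read_all(fh, buf, NBLOCKS * BLOCKLEN, MPI_INT, MPI_STATUS_IGNORE);

          MPI_File_close(&fh);
          MPI_Type_free(&filetype);
      }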

  • <SELF-PLUG TYPE="shameless">
    This is exciting and all, but the really important thing about PARL is that they were the only ones at Clemson willing to host our site [clemson.edu].
    </SELF-PLUG>
  • ..but..uhh..not to appear too lame, because I probably don't understand it.. but...could this be used in conjunction with something like BitTorrent so that big files like ISOs or whatnot could be shared more easily cross-platform? Do you understand what I am asking? An Esperanto for computers, with large numbers of people working all over?
  • I've been skimming the documentation for this.
    Does anyone use this for big, transparent file storage networks?
    I've been looking for something better than "a bunch of NFS servers with some code to redirect each client to his storage". This is a pain to manage, as well as having lots-'n-lots of points of failure...

    I've noticed that the metadata is not on a single node anymore, but it's not replicated yet either. I could live with this reliability problem if it could give me the transparency to just add a server when nee
    • We don't encourage anyone to rely on PVFS2 to host the sole copy of their data. So it might not be the best idea to use PVFS2 as a "transparent storage network".

      PVFS2's real sweet spot is for scratch space for scientific applications -- writing out checkpoints, reading in datasets.

      I don't know if I'd call what PVFS2 has a "reliability problem". If you've got money, hardware-based failover solutions exist today and work well with PVFS2 (think heartbeat). In the not-so-distant future we've got people wor
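
      A sketch of that scratch-space pattern through the kernel-mounted interface: each process dumps its own checkpoint file under a PVFS2 mount point using ordinary POSIX calls. The "/mnt/pvfs2" path and naming scheme are hypothetical; any directory backed by the mounted file system would do.

      #include <stdio.h>

      /* Write one rank's in-memory state to its own checkpoint file.
       * Returns 0 on success, -1 on failure. */
      static int write_checkpoint(int rank, const double *state, size_t n)
      {
          char path[256];
          snprintf(path, sizeof(path), "/mnt/pvfs2/scratch/ckpt.%05d", rank);

          FILE *fp = fopen(path, "wb");
          if (fp == NULL)
              return -1;

          size_t written = fwrite(state, sizeof(double), n, fp);
          fclose(fp);
          return (written == n) ? 0 : -1;
      }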
    • Re:Redundancy ? (Score:2, Interesting)

      by REggert ( 823158 )
      I use the Andrew File System (specifically, http://www.openafs.org/ [openafs.org]) for my files, since I was used to using it at school, and I'm fond of its access control system. It allows you to designate redundant sites for your volumes for backup or load balancing purposes. However, its major downside is that it's optimized for reads but not for writes (PVFS would probably work better if you need optimal write performance), and it can be a real bitch to set up for the first time. I've also yet to figure out how to get
    • You might want to check out ZFS when Solaris 10 comes out. http://www.sun.com/2004-0914/feature/
