Red Hat and HP Establish Linux Storage Lab

Rob writes "Linux distributor Red Hat has teamed up with Hewlett-Packard to create a new performance test lab to help customers deploy enterprise storage across Linux environments. The lab will focus on performance and integration testing in order to produce best practices and solutions guides, the companies said, and will also enable customers to preview new technological developments."
  • Consolidation (Score:1, Interesting)

    by mysqlrocks ( 783488 )
    It's interesting to watch the Linux market mature. With IBM putting so many resources behind Linux, of course HP is going to want to continue to work with Red Hat.
  • i wish ... (Score:4, Insightful)

    by dominic.laporte ( 306430 ) on Thursday September 22, 2005 @10:14AM (#13621725)
    they do performance tests on

    1) postgres with large data sets over SATA and IDE hard drives.
    2) mysql with large data sets over SATA and IDE hard drives.
    3) both of the above over www.coraid.com.

    P.S.
    Coraid's drivers are GPL and already part of the kernel.
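
    (A minimal sketch of the kind of runs being asked for, assuming stock pgbench and sysbench; the database names, sizes, and device choices below are illustrative, not anything HP or Red Hat have published.)

      # PostgreSQL: build a large dataset, then measure throughput
      pgbench -i -s 1000 testdb          # roughly 16 GB of test tables
      pgbench -c 16 -t 10000 testdb      # 16 concurrent clients

      # MySQL: the equivalent run with sysbench's OLTP workload
      sysbench --test=oltp --mysql-db=testdb --oltp-table-size=10000000 prepare
      sysbench --test=oltp --mysql-db=testdb --num-threads=16 \
               --max-requests=0 --max-time=600 run

      # Repeat with the data directory on SATA, IDE, and a Coraid AoE mount
      # (the aoe driver exposes shelves as e.g. /dev/etherd/e0.0) and compare.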
    • by Anonymous Coward
      then:
      mysql on 420MB second-hand IDE drives,
      then:
      postgres on RAID-1 configured with two 5.25" floppy drives

      (I don't think a storage performance lab is about stuffing IDE disks into a low-end server and measuring performance.)
    • Not quite (Score:5, Insightful)

      by earnest murderer ( 888716 ) on Thursday September 22, 2005 @10:38AM (#13621904)
      This is mostly a webvertisement/reference piece for deploying GFS on HP ProLiant server hardware.
    • SATA disks possibly (Score:3, Interesting)

      by Anonymous Coward
      SATA disks are a real possibility. To those of you who say "What??! Ordinary SATA disks on mission critical servers??!" - even high-end enterprise storage systems (like EMC Symmetrix) use ordinary disks.
      • >To those of you who say "What??! Ordinary SATA disks on mission critical servers??!" - even high end enterprise storage systems (like EMC Symmetrix) use ordinary disks.

        WTF are you blathering about?
        The fact that those disk arrays can use SATA disks doesn't mean that they recommend running mission critical databases on SATA disks.

        They do that for simple reasons such as:
        a) if you need cheap storage, you don't have to buy two disk arrays (e.g. Symmetrix for FC SCSI and Coraid for SATA)
        b) you can put shit d
        • no no no you got that all wrong.

          a) The high end disks are for caching the most used portions of the DB,
          b) which is on SATA.
          c) The backups are on refurb UDMA100 disks.
          a sub 1) The rest of the high-speed, high-availability disk space is devoted to a hidden share of MP3s, XviDs, and porn for the BOFH admin staff. :P

          -nB
        • by Anonymous Coward
          The fact that those disk arrays can use SATA disks doesn't mean that they recommend running mission critical databases on SATA disks.

          They don't recommend anything; they provide a storage system, and it really doesn't matter what kind of disks are under the hood. Yes, even the fibre channel EMC Symmetrix (the most high-end enterprise storage system) has regular, ordinary disks under the hood. You don't get to choose which disks you put there - it's a complete solution they provide, and it's anything but cheap! N
    • >they do performance tests

      What kind of business benefit could HP and RH possibly derive from burning hundreds of man-hours on perf tests that can be replicated using any other hardware with any other Linux OS?

      They could sell tuning "services"?
      Yes, to the first customer; then they would do a diff against a clean system install, collect their optimization settings, and post them on their Web site for everyone to share.

      Apparently you wish they'd run the tests so that you don't have to spend your own time and money doing it. We
  • by TripMaster Monkey ( 862126 ) * on Thursday September 22, 2005 @10:17AM (#13621742)

    Some information on the Global File System can be found here [redhat.com] and here [redhat.com].
    • by AKAImBatman ( 238306 ) * <akaimbatman@ g m a i l . com> on Thursday September 22, 2005 @10:23AM (#13621787) Homepage Journal
      Not to be confused with the Google File System [rochester.edu]. A lot of people confuse them (same TLA), so it's important that sysadmins are clear that they are very different. If you install G[lobal]FS, you're getting something that has different goals in distributing the data than those of the Google servers. Google's FS has only a modicum of documentation, and no public implementation available. If you want to replicate GoogleFS, you'll have to guess as to the parts that their documentation doesn't cover.

      Now back to your regularly scheduled program. :-)
    • by drdink ( 77 ) * <smkelly+slashdot@zombie.org> on Thursday September 22, 2005 @10:28AM (#13621827) Homepage
      I don't see how GFS can scale as well as something like OpenAFS [openafs.org]. With AFS, you get an entire infrastructure. I wish more people would invest time and effort into improving filesystems like AFS, where all systems can share a common namespace without requiring the availability of a SAN. The two have slightly different uses, but it'd still be nice to see more force behind AFS now that it is open source.
    • Comment removed based on user account deletion
        • No, all the computers share a "drive", and GFS makes sure they don't step on each other's toes. Usually that "drive" is a largish storage device with a lot of hard disks, though.
          • What he said. GFS is similar (in concept, anyway; I haven't looked at GFS's innards) to OCFS (Oracle Cluster Filesystem). It lets multiple servers share the same "drive". (Typically a high-end SAN, but OCFS -- and I'm presuming GFS -- will work with multiple computers plugged into the same single FireWire drive.)
          • Correct.
            (Note that GFS predates OCFS; GFS grew out of the University of Minnesota and has a long history there.)

            In order to use GFS your nodes need some form of "shared blockspace (disk)". Traditionally this has been Fibre Channel storage, but there is nothing in GFS that prevents using shared FireWire, iSCSI, or any other shared blockspace. The problem often seen here is that even if a "disk" can be shared, it does not always behave "nicely" in such a setup. Lower-end devices are often not designed to perfo
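
            (For concreteness, a minimal sketch of bringing up GFS 6.1 on a block device already shared among three nodes; the cluster name, filesystem name, and device are invented for illustration.)

              # one journal per node that will mount the filesystem
              gfs_mkfs -p lock_dlm -t mycluster:storage01 -j 3 /dev/sdb1

              # then, on each node (with cluster services already running):
              mount -t gfs /dev/sdb1 /mnt/storage01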
    • Barely supported.. (Score:3, Interesting)

      by cybrthng ( 22291 )
      You have to run 9.2 and use a specific version. GFS 6.1 looks like a lifesaver, but it could be years before it is certified against Oracle.

      In fact, the entire Red Hat/Oracle certification process is a nightmare.
  • by TarrySingh ( 916400 ) on Thursday September 22, 2005 @10:27AM (#13621819) Homepage
    People are starting to shed their fears of the penguin. It's an alternative, and it's very user friendly. I predict that the user community will soon shift to it, even at home, although I see Novell (SUSE) making more progress there. Red Hat recently announced that GFS is now supported by Oracle for use with Real Application Clusters database configurations, and has been certified for use with EMC's Clariion networked storage systems and Celerra iSCSI network attached storage systems, as well as Network Appliance's SAN interfaces. BTW, HP has been offering RAC on RHEL for a long time now, although GFS will certainly avoid the need to run HP's clusterware tool (I hope).
    • by fourbeer ( 144112 ) on Thursday September 22, 2005 @10:35AM (#13621881)
      I believe M$ does not allow the sale of blank systems. They really control what goes on a system. I think Wal-Mart tried this and was strong-armed by M$.
      • I think Wal-mart tried this and was strong-armed by M$soft.

        Evil though the Walton empire may be, they are still selling systems with no OS. [walmart.com]
      • It's not that they don't allow it; it's that they offer much lower prices if a company will sign a contract that guarantees they won't sell a computer without an operating system. As far as I know, they don't specify which OS you have to load, though, which is why you can buy PCs with FreeDOS preloaded. It fulfills the terms of the contract, even if everyone knows the first thing the buyer is going to do is blow it away and install the OS of their choice. I don't work for a major computer manufacturer t
      • Wal-Mart is just about the only retailer big enough NOT to be pushed around by MS. I think of that old koan that asks what happens when an irresistible force encounters an immovable object. Wal-Mart thinks they are the only ones who have the right to be pushy and obnoxious. If push came to shove, they'd probably give MS a taste of no access to their stores for a few months just to make the point. Not out of any love for the penguin, mind you, but just to school them.

        That could be awfully fun to watch.
    • by Anonymous Coward
      I predict that soon the users community will shift/accept it even at home.

      Oh, absolutely! It's precisely concerns about support for Clariion and RAC that have been keeping home users off Linux!

  • by Torinir ( 870836 ) <<moc.liamg> <ta> <rinirot>> on Thursday September 22, 2005 @10:38AM (#13621916) Homepage Journal
    to announce a Linux partnership?

    It was almost a given that HP would team up with some major Linux distro, given that they have a fair-sized share of the corporate market. I'd open my eyes a little wider if Dell or another primarily HSB (Home and Small Business) OEM were to start offering Linux systems.

    Of course, it'd also be nice if some of those manufacturers would add Linux support for their peripheral products. There are so few good drivers for printers/scanners/all-in-ones, especially from HP (which I do tech support for), and tbh I don't have the coding skills to build my own. It's probably a big reason that Linux use is still relatively light on the HSB side.
    • by Anonymous Coward
      Dell offers plenty of Linux support.
      You've been able to get Red Hat on the servers for years, and you can also get high-end workstations preinstalled with Linux. All of their drivers and utilities have great Linux support as well.

      Is there anyone left that doesn't offer Linux?
  • by Anonymous Coward
    Since there will be some storage research going on...

    Imagine you have several remote sites accessing files on a centralised storage server. Just as an example, say it is a Samba server which remote computers access over SSH (like this [webservertalk.com]).

    If you have a slow upload link (who doesn't), working with such a remote storage solution quickly becomes a problem.

    Is there anything in the way of:

    • All the offices have a server and the same data is mirrored on all the servers
    • When you access a file on the server
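
    (The closest off-the-shelf answer is probably scheduled replication rather than a shared filesystem; a rough sketch using rsync and unison, where the hostname and paths are invented.)

      # one-way mirror: each office pulls from the central share overnight
      rsync -az --delete central.example.com:/srv/share/ /srv/share/

      # two-way reconciliation, if offices also edit files locally
      unison /srv/share ssh://central.example.com//srv/share -batch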
  • by dido ( 9125 ) <dido&imperium,ph> on Thursday September 22, 2005 @10:53AM (#13622036)

    Being seasoned in Linux enterprise deployments, I've had more than my share of frustration with some of HP's own storage appliances. Their entry-level storage appliances, the MSA series (which, IIRC, they inherited from Compaq), seem to be pretty OK, but they're no good when you grow to the point where more than a few machines need to attach to the SAN. The VA series of high-end storage appliances are, in contrast, the very devil to deal with. I remember the problems a client of ours was having with these monsters when they were using them for Oracle 9i RAC. Their RAID management started having problems once the disks filled to more than 75% capacity, and HP was never able to give us a satisfactory solution except to replace the damn storage array with something bigger and much more expensive. And so overtures from the likes of EMC began to reach much more receptive ears...

    I certainly hope this helps with the engineering of HP's storage appliance line, and that they can fix some of the brain damage that some of them have.

    • The only good thing to come out of HP's storage line is their ultra-high-end arrays, which are practically HDS's Lightning SAN arrays rebranded as the HP XP series. The MSA and VA are not remotely comparable to what the rest of the storage industry has to offer.


    • That's not been my experience with EVAs. I've worked on dozens of installations with EVAs on the back end, mostly Tru64/Alpha and some HP-UX, and problems have been very rare. I really like them. The ability to create Vdisks of almost any size without having to keep track of which disks are or aren't free is very powerful. And I like being able to assign any UDID value I like to a Vdisk, and to assign aliases to groups of HBA WWIDs for easy host/cluster management.

      The XP range are clunky old pigs by compar
  • Use? (Score:2, Funny)

    by mayhemt ( 915489 )
    Can we use this to deploy MS patches? That would be its regression testing... (just a thought!!)
  • I'm surprised that they based it in NC. HP already has a world-class storage division based in Colorado Springs (it was the old Compaq storage division).

    When I worked at a Fibre Channel startup, we did a lot of work with those guys.
  • by Anonymous Coward
    See Oracle and Linux set world record [noticias.info]:

    "Today Oracle announced a new world record TPC-H 300 gigabyte (GB) data warehousing benchmark for Oracle(r) Database 10g Release 2 and Oracle Real Application Clusters on Red Hat Enterprise Linux, overtaking IBM DB2's best benchmark performance in the same category.

    Running atop an eight-node HP BladeSystem cluster of ProLiant BL25p server blades, each with one AMD Opteron 2.6 GHz processor and Red Hat Enterprise Linux v.4, Oracle Database 10g Release 2 and Oracle Rea

  • I've been a Linux geek for about 10 years now, and recently got my first enterprise gig. Part of this meant working with both Linux and Solaris to deploy our new SAN (HDS, if it matters). One of the first things that blew my mind was how much better Solaris is when it comes to storage. Just make sure your config file lists all the LUNs the SAN will allocate to you, both now and in the future, and that's it.

    When new storage is allocated to the Sun, just run devfsadm and you'll be able to see
    • We do not load our HBA driver (QLogic qla2300) at boot through the initrd. We modprobe it after the system is up. This way, you just modprobe -r to unload it, and modprobe it again to see the changes.

      No reboots here on our 200+ TB 100% Linux SAN.

      But yes, Solaris is nice in some ways too. :)
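
      (Concretely, the modprobe approach amounts to something like this; the module name matches the qla2300 driver mentioned above, and the host number is illustrative.)

        # unload and reload the HBA driver to pick up newly presented LUNs
        modprobe -r qla2300
        modprobe qla2300

        # on 2.6 kernels you can also rescan a SCSI host without unloading:
        echo "- - -" > /sys/class/scsi_host/host0/scan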
