
Red Hat and HP Establish Linux Storage Lab 82

Rob writes "Linux distributor Red Hat has teamed up with Hewlett-Packard to create a new performance test lab to help customers deploy enterprise storage across Linux environments. The lab will focus on performance and integration testing in order to produce best practices and solutions guides, the companies said, and will also enable customers to preview new technological developments."
  • Consolidation (Score:1, Interesting)

    by mysqlrocks ( 783488 ) on Thursday September 22, 2005 @11:09AM (#13621687) Homepage Journal
    It's interesting to watch the Linux market mature. With IBM putting so many resources behind Linux, of course HP is going to want to continue working with Red Hat.
  • i wish ... (Score:4, Insightful)

    by dominic.laporte ( 306430 ) on Thursday September 22, 2005 @11:14AM (#13621725)
    they'd do performance tests on

    1) postgres with large data sets over SATA and IDE hard drives.
    2) mysql with large data sets over SATA and IDE hard drives.
    3) both of the above over www.coraid.com.

    P.S.
    Coraid's drivers are GPL and already part of the kernel.
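For the curious, a crude first-order version of such a test needs nothing but dd. This is only a sketch (the path and size are arbitrary); a real lab would use something like bonnie++ or iozone:

```shell
# Write 64 MiB and let dd report throughput; conv=fsync makes dd flush
# to disk before printing the timing, so the number isn't just page cache.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=64 conv=fsync 2>&1 | tail -n 1
rm -f /tmp/ddtest
```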
    • by Anonymous Coward on Thursday September 22, 2005 @11:34AM (#13621873)
      then:
      mysql on 420MB second-hand IDE drives,
      then:
      postgres on RAID-1 configured with two 5.25" floppy drives

      (I don't think a storage performance lab is about stuffing IDE disks into a low-end server and measuring performance.)
    • Not quite (Score:5, Insightful)

      by earnest murderer ( 888716 ) on Thursday September 22, 2005 @11:38AM (#13621904)
      This is mostly a webvertisement/reference for deploying GFS on HP ProLiant server hardware.
    • SATA disks possibly (Score:3, Interesting)

      by Anonymous Coward on Thursday September 22, 2005 @11:40AM (#13621935)
      SATA disks are quite possible. To those of you who say "What??! Ordinary SATA disks on mission critical servers??!" - even high end enterprise storage systems (like EMC Symmetrix) use ordinary disks.
      • by Donny Smith ( 567043 ) on Thursday September 22, 2005 @12:55PM (#13622599)
        >To those of you who say "What??! Ordinary SATA disks on mission critical servers??!" - even high end enterprise storage systems (like EMC Symmetrix) use ordinary disks.

        WTF are you blathering about?
        The fact that those disk arrays can use SATA disks doesn't mean that they recommend running mission critical databases on SATA disks.

        They do that for simple reasons such as:
        a) if you need cheap storage, you don't have to buy two disk arrays (e.g. Symmetrix for FC SCSI and CORAID for SATA)
        b) you can put shit data on SATA and important data on SCSI (e.g. database files on SCSI, database backup files on SATA)
        • by networkBoy ( 774728 ) on Thursday September 22, 2005 @01:12PM (#13622749) Journal
          no no no you got that all wrong.

          a) The high end disks are for caching the most used portions of the DB,
          b) which is on SATA.
          c) The backups are on refurb UDMA100 disks.
          a sub 1) The rest of the high speed and availability disk space is devoted to a hidden share of MP3s XVIDs and porn for the BOFH admin staff. :P

          -nB
        • by Anonymous Coward on Thursday September 22, 2005 @01:46PM (#13623030)
          The fact that those disk arrays can use SATA disks doesn't mean that they recommend running mission critical databases on SATA disks.

          They don't recommend anything; they provide a storage system, and it really doesn't matter what kind of disks are under the hood. Yes, even the fibre channel EMC Symmetrix (the most high-end enterprise storage system) has regular, ordinary disks under the hood. You don't get to choose which disks you put there - it's a complete solution they provide, and it's anything but cheap! Now, how can a system like that cost that much if it uses regular disks? You pay for the hardware & software solution that makes a solid, fast storage system out of those regular disks.
    • by Donny Smith ( 567043 ) on Thursday September 22, 2005 @12:49PM (#13622540)
      >they do performance tests

      What kind of business benefit could HP and RH possibly derive from burning hundreds of man-hours on perf tests that can be replicated using any other hardware with any other Linux OS?

      They could sell tuning "services"?
      Yes, to the first customer, then they would do a diff on clean system install, collect their optimization settings and post them on their Web site for everyone to share.

      Apparently you wish they'd run the tests so that you don't have to spend your own time and money on them. We all want that.
      They, on the other hand, have to think how to make money, so they'll instead test RH with HP storage - if you want to benefit from that work, you'll have to shell out some bucks for HP storage (and HP/RH services tied to it).
  • by TripMaster Monkey ( 862126 ) * on Thursday September 22, 2005 @11:17AM (#13621742)

    Some information on the Global File System can be found here [redhat.com] and here [redhat.com].
    • by AKAImBatman ( 238306 ) * <akaimbatman@g m a i l . c om> on Thursday September 22, 2005 @11:23AM (#13621787) Homepage Journal
      Not to be confused with the Google File System [rochester.edu]. A lot of people confuse them (same TLA), so it's important that sysadmins are clear that they are very different. If you install G[lobal]FS, you're getting something that has different goals in distributing the data than those of the Google servers. Google's FS has only a modicum of documentation, and no public implementation available. If you want to replicate GoogleFS, you'll have to guess as to the parts that their documentation doesn't cover.

      Now back to your regularly scheduled program. :-)
    • by drdink ( 77 ) * <smkelly+slashdot@zombie.org> on Thursday September 22, 2005 @11:28AM (#13621827) Homepage
      I don't see how GFS can scale as well as something like OpenAFS [openafs.org]. With AFS, you get an entire infrastructure. I wish more people would invest time and effort into improving filesystems like AFS, where all systems can share a common namespace without requiring the availability of a SAN. The two have slightly different uses, but it'd still be nice to see more force behind AFS now that it is open-sourced.
    • by account_deleted ( 4530225 ) on Thursday September 22, 2005 @11:30AM (#13621842)
      Comment removed based on user account deletion
      • by Fruit ( 31966 ) on Thursday September 22, 2005 @01:29PM (#13622872)
        No, all computers share a "drive" and GFS makes sure they don't step on each other's toes. Usually that "drive" is a largish storage device with a lot of hard disks, though.
        • by AJWM ( 19027 ) on Thursday September 22, 2005 @02:23PM (#13623336) Homepage
          What he said. GFS is similar (in concept, anyway; I haven't looked at GFS's innards) to OCFS (Oracle Cluster Filesystem). It lets multiple servers share the same "drive". (Typically a high-end SAN, but OCFS -- and I'm presuming GFS -- will work with multiple computers plugged into the same single FireWire drive.)
          • by ratatosk_the_squirre ( 825765 ) on Friday September 23, 2005 @06:15AM (#13627936)
            Correct.
            (Note that GFS predates OCFS; GFS grew out of the University of Minnesota and has a long history there.)

            In order to use GFS your nodes need some form of "shared blockspace (disk)". Traditionally this has been Fibre Channel Storage, but there is nothing in GFS that prevents using a shared FireWire, iSCSI or any other shared blockspace. The problem often seen here is that even if a "disk" can be shared it does not always behave "nicely" in such a setup. Lower-end devices are often not designed to perform well when directly accessed by multiple nodes in such a setup :)

            GFS will be the native on-disk filesystem on this "shared disk" and ensure filesystem correctness for all the nodes mounting the filesystem.
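For reference, getting to that point on RHEL-era GFS looks roughly like the sketch below. The cluster name, filesystem name, device, and journal count are all illustrative, and this assumes the cluster infrastructure (CCS, fencing, lock_dlm) is already up:

```shell
# Make a GFS filesystem on the shared LUN (one journal per node that
# will mount it), then mount it on every node.
gfs_mkfs -p lock_dlm -t mycluster:gfs1 -j 2 /dev/sdb1
mount -t gfs /dev/sdb1 /mnt/gfs1
```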
  • gaaaaah! (Score:0, Troll)

    by aussersterne ( 212916 ) on Thursday September 22, 2005 @11:26AM (#13621804) Homepage
    Integration testing? Best practices? Performance guides?

    Ugh, I have to be at work in ten minutes, please don't pollute my pre-work morning with corporatespeak. :-P
    • by PornMaster ( 749461 ) on Thursday September 22, 2005 @11:51AM (#13622021) Homepage
      "best practices", while corporatespeak, are the kinds of things that keep you from getting fired when it all breaks. If you want to continue to have pre-work mornings, as opposed to pre-jobhunting mornings, I recommend that you consider following best practices.
  • by TarrySingh ( 916400 ) on Thursday September 22, 2005 @11:27AM (#13621819) Homepage
    People are starting to shed their fear of the penguin. It's an alternative, and it's very user friendly. I predict that soon the users community will shift/accept it even at home, although I see Novell (SUSE) making more progress there. Red Hat recently announced that GFS is now supported by Oracle for use with Real Application Clusters database configurations, and has been certified for use with EMC's Clariion networked storage systems and Celerra iSCSI network-attached storage systems, as well as Network Appliance's SAN interfaces. BTW, HP has been offering RAC on RHEL for a long time now, although GFS will certainly avoid the need to run the HP clusterware tool (I hope).
    • by fourbeer ( 144112 ) on Thursday September 22, 2005 @11:35AM (#13621881)
      I believe M$soft does not allow the sale of blank systems. They really control what goes on a system. I think Wal-mart tried this and was strong-armed by M$soft.
      • by Anonymous Coward on Thursday September 22, 2005 @12:43PM (#13622467)
        Wal-Mart, the company that generates $200 billion in revenue with 5,000 stores and over 1 million employees, and that doesn't really care about the PC market anyway, strong-armed by Microsoft?

        You're smoking something good.. but what?
      • by FuckTheModerators ( 883349 ) on Thursday September 22, 2005 @12:44PM (#13622483) Journal
        I think Wal-mart tried this and was strong-armed by M$soft.

        Evil though the Walton empire may be, they are still selling systems with no os. [walmart.com]
      • by Anonymous Coward on Thursday September 22, 2005 @01:00PM (#13622639)

        I believe the euphemism for "linux-ready" is "comes with FreeDOS pre-installed". "and Ubuntu CDs are available from your reseller, to unlock the other features". My HP kit is hunky-dory anyway, but if I was buying my laptop now, I'd be after HP, just for that Officially Sanctioned (tm) smell.

        My word was "arenas". And surely everyone was closing their paragraph tags anyway?

      • by whytakemine ( 901083 ) on Thursday September 22, 2005 @02:59PM (#13623679)
        It's not that they don't allow it; it's that they offer much lower prices if a company will sign a contract guaranteeing they won't sell a computer without an operating system. As far as I know, they don't specify which OS you have to load, though, which is why you can buy PCs with FreeDOS preloaded. It fulfills the terms of the contract, even if everyone knows the first thing the buyer is going to do is blow it away and install the OS of their choice. I don't work for a major computer manufacturer though, so all of this is just hearsay.
      • by dmaxwell ( 43234 ) on Thursday September 22, 2005 @07:37PM (#13625930)
        Wal-Mart is just about the only retailer big enough NOT to be pushed around by MS. I think of that old koan that asks what happens when an irresistible force encounters an immovable object. Wal-Mart thinks they are the only ones who have the right to be pushy and obnoxious. If push came to shove, they'd probably give MS a taste of no access to their stores for a few months just to make the point. Not out of any love for the penguin, mind you, but just to school them.

        That could be awfully fun to watch.
        • by Anonymous Coward on Friday October 07, 2005 @12:25AM (#13736985)
          Oh, no, the monopsonistic supermarket/big-box retailers (of whom Walmart is the obvious leader in the USA, but there are others in several countries) can be much more devious and effective than denying access. After all, why should they leave money on the table, or risk a reputation for not being so well-stocked with *everything* at their larger stores that shopping anywhere else is not worthwhile?

          The big weapon Walmart has against Microsoft is shelf position. If they put boxes of Microsoft Windows and the like nearby, but nowhere near as visible or reachable as prominently displayed boxes of (for example) Red Hat, that would start to make even Microsoft nervous. Walmart could easily go a step further and offer PCs with nothing at all installed on them. If they were to position Red Hat as a comparable and cheaper thing to install on those PCs instead of Windows, and Walmart (by virtue of its huge buying power) deliberately set out to be the price-setter in the box-plus-RedHat market, and competitive with the cheapest box-plus-Microsoft market...

          But why would Walmart do this? The only reason I can think of is that they would want to wring wholesale price concessions out of Microsoft so that they can offer Windows at a lower price than anyone else, while still maintaining decent margins.

          Walmart is, after all, a retailer, not a techno-religious-movement.
    • by Anonymous Coward on Thursday September 22, 2005 @12:05PM (#13622132)
      I predict that soon the users community will shift/accept it even at home.

      Oh, absolutely! It's precisely concerns about support for Clariion and RAC that have been keeping home users off Linux!

  • Satan. (Score:0, Flamebait)

    by superub3r ( 915084 ) on Thursday September 22, 2005 @11:31AM (#13621847) Homepage
    So, Red Hat teamed up with the devil? Hm. That's odd. I'd always seen them as an advocate of everything good. I've never been pleased with ANY HP product (minus their printers). I hope they can be influenced in a good way.
    • by Anonymous Coward on Thursday September 22, 2005 @11:36AM (#13621894)
      Ever use their servers that run HP-UX, Tru64, or OpenVMS?
    • Re:Satan. (Score:2, Interesting)

      by Anonymous Coward on Thursday September 22, 2005 @11:40AM (#13621938)
      I work with HP OpenVMS and HP NonStop platforms, and they are some of the best systems out there. HP does make very good things, although they don't advertise it much. HP has been a strong supporter of Linux for many years, supporting and selling lots of ProLiant systems with Linux.
    • by Anonymous Coward on Thursday September 22, 2005 @12:52PM (#13622570)
      That's idiot speak. What H/W platform do you suggest running Linux on?
      HP ProLiant servers are rock solid. I've never ever had a problem with them, and I've built out a very large number of them.
  • by Torinir ( 870836 ) <torinir.gmail@com> on Thursday September 22, 2005 @11:38AM (#13621916) Homepage Journal
    to announce a Linux partnership?

    It was almost a given that HP would team up with some major Linux distro, given that they have a fair-sized share of the corporate market. I'd open my eyes a little wider if Dell or another primarily HSB (Home and Small Business) OEM were to start offering Linux systems.

    Of course, it'd also be nice if some of those manufacturers would add Linux support for their peripheral products. There are so few good drivers for printers/scanners/all-in-ones, especially from HP (which I do tech support for), and tbh I don't have the coding skills to build my own. It's probably a big reason that Linux use is still relatively light on the HSB side.
    • by dwntwnboi ( 820586 ) on Thursday September 22, 2005 @12:32PM (#13622364) Homepage
      for linux to be viable in any market other than the niche home and business markets, it has to get rid of the one thing that has always kept it from competing directly with other OSs: the command console. the people who use linux and can do anything through the console are not the people who need to start using linux.

      if linux is ever going to be adopted by anyone other than linux uber-geeks or completely masochistic home computer users, people (beginners & experts alike) must be able to do anything inside linux without ever having to use the command console. sure, keep it around for legacy, so all those people who actually learned all those commands still have their novelty. but the rest of the world is more interested in an easy-to-use OS and less in geek notoriety. that's who linux should be sold to: all the people who don't have it already.

      if linux becomes as easy to use as OSX, with a comparable package management system and easy installation of new applications, then M$ and apple will both be screwed. M$ will have 2 competitors, OSX and linux. OSX appeals because it's just better than the other 2 (it does everything linux does, but without the hassle of an obsolete interface and an, at best, cryptic command language). however, to run OSX you need an Apple. but since most will need a new computer to run vista, people will be looking for a new computer anyhow. by the time vista is out, so will be the cheaper and faster macs with the next version of OSX (vista is based on the features of the current OSX, so it will be a year behind from the start). people could then be offered a third choice: a distro of linux that's as easy to use as the other 2 main OSs.

      if some linux distro realizes that this is the golden opportunity to debase M$ and steal Apple's momentum to switch people to linux, they'll do what they have to do: make it super-easy, super-friendly, super-simple, yet still super-powerful.

      DEATH TO THE COMMAND CONSOLE!
      • by WillerZ ( 814133 ) on Thursday September 22, 2005 @02:26PM (#13623361) Homepage
        Fuck off.

        Once you know what you're doing in console-land you can do everything you need to do quicker than using a GUI. And, having done it once, you can copy your shell history into a script and do the same thing to the other 800 linux machines you're responsible for adminning.

        Remote admin is a billion times easier if you can get all the crappy GUI shit out of the equation. Of course, Linux started with no crappy GUI shit to remove so the hard work doesn't need doing.

        If you can't use the console you shouldn't have root, and will therefore have no need to use the console.
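The "script it once, run it everywhere" point can be sketched in a few lines of shell. hosts.txt and tune.sh are invented names here, and the ssh invocation is shown as a dry-run echo:

```shell
# Build a host list, then show what would be run on each machine.
# Replace the echo with the quoted ssh command to actually do it.
printf '%s\n' web1 web2 db1 > /tmp/hosts.txt
while read -r host; do
    echo "would run: ssh $host 'sh -s' < tune.sh"
done < /tmp/hosts.txt
```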
        • by dwntwnboi ( 820586 ) on Thursday September 22, 2005 @02:45PM (#13623529) Homepage
          who gives a shit what you missed losing your virginity to learn, just so that you could dis me here. p'shaw!!

          what i was saying is that until self-righteous uber-geeks get over yourselves, the spread of linux will be slow and ineffectual. by casting off people like you whose ideas hold back the commercial progress of linux, the linux community could thrust itself into the mainstream and compete directly with other dominant OSs.

          you and your ilk represent the biggest flaw in the linux community: the unwillingness to advance the functionality of your OS beyond your own limited uses of it. the rest of the world, the ones who would make linux a household name and replace microsoft, don't do what you do with the console. never will. that's why it gets left behind, so that people like you who refuse to advance with the times won't be shut out completely by your own boorish inability to adapt.
          • by WillerZ ( 814133 ) on Thursday September 22, 2005 @03:00PM (#13623698) Homepage
            W <- this is an example of a capital letter. It has 25 friends you can use too.

            Back on topic, you said:

            by casting off people like you whose ideas hold back the commercial progress of linux, the linux community could thrust itself into the mainstream and compete directly with other dominant OSs

            And why would the "linux community" want to do that? The community gains nothing by being a mainstream OS, but it does lose the benefit of being a low-profile target for malware authors. Linux-based businesses would benefit, but most of the "community" doesn't have any commercial connection to Linux-on-the-desktop.

            Removing the console would not "advance the functionality" of Linux, it would retard it.
            • by dwntwnboi ( 820586 ) on Thursday September 22, 2005 @03:33PM (#13623958) Homepage
              i apologize for the earlier remark about your virginity, but i won't use capital letters.

              first of all, my post may have misdirected. i was addressing the "community" since they/you are the primary developers of newer technologies and the largest proponents of the absolute dependence on the console window. however, my primary interest is in commercial linux-on-the-desktop, especially considering the implications of a desktop linux distro for home and office that would enable a user to accomplish any task available through the console with greater ease, and without knowledge of the command language, through the gui.

              and while you say (not debating) you can do things faster through the console, then why hasn't the gui been streamlined to permit that range of functions at greater or better speed than manual command entry? not a big deal to the linux community, i know.

              i just wanna know why these desktop-linux pushers, who are now in a prime position to knock apple off track by stealing the attention they've drawn from Vista, aren't offering desktop linux NOT devoid of a console, but devoid of the requirement to use it, in favor of GUI replacements, to make it more appealing and less frightening to people who want to ditch XP but don't want to pay for a new computer.

              regarding malware: it's an inevitability in any os, but i can appreciate not wanting to speed up the process. anyhow, i think OSX has drawn much more attention, and it's still malware-free (knocks on wood).

              regarding the console: i mistakenly said "remove" when i meant "create an equally functional GUI equivalent", just so such operations could be done through the gui without having to deal with code. and in a desktop environment (and i'm not talking about remote server admin, etc., where you need the console regardless of the local OS), how many of those functions that you must do via console really *have* to be done through the console? certainly a good deal of them can be accomplished through the gui by various methods. and surely linux developers are smart enough that they could design a gui/console hybrid that allows for not only one and the other, but better visual linking of object-oriented command structures, especially display boxes for linked files, their attributes, commands and their parameters. it just seems to me that the simple type-it-in interface could be updated with a gui console shell which parses the information into a hybrid console/GUI dialog box. perhaps something like a project window, offering much more functionality than the console's text-in text-out, while providing users and developers incredibly useful tools to streamline workflow?
              • by WillerZ ( 814133 ) on Thursday September 22, 2005 @03:53PM (#13624156) Homepage
                Have you tried a recent release of SuSE? I've not used it extensively but it looked as though the preferred way to perform simple administrative tasks was with the graphical tool YaST2.

                why hasn't the gui been streamlined to permit that range of functions at greater or better speed than manual command entry

                Mainly because there isn't a GUI-equivalent of the tcsh history features, or even tab-completion. I have yet to see a workable graphical scheme which comes anywhere close.

                The main things which have to be done with the console are writing and/or fixing the X server configuration files, or fixing a corrupted filesystem so that you can get at your data again. These things are rare, sure, but it's easier to only remember one way of doing things.

                It's interesting that you mention "a gui console shell which parses the information into a hybrid console/GUI dialog box", because I've been planning to implement something along those lines for the Y windows file manager. My intent was to have it work both ways - so you would see the shell commands which are equivalent to each GUI action. My intent was to get users off the inefficient GUI way of doing things by showing them the speedier commandline way, without them having to go through the near-vertical learning curve.
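A toy version of that "show the equivalent command" idea fits in a few lines of shell; the action names and the gui_action function are invented for illustration:

```shell
# Map a (hypothetical) GUI action to the shell command it corresponds to,
# so the user sees the command-line equivalent of each click.
gui_action() {
    case "$1" in
        copy)   echo "cp -- \"$2\" \"$3\"" ;;
        rename) echo "mv -- \"$2\" \"$3\"" ;;
        *)      echo "unknown action: $1" >&2; return 1 ;;
    esac
}
gui_action copy report.txt backup/
# prints: cp -- "report.txt" "backup/"
```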
        • by Anonymous Coward on Friday October 07, 2005 @12:11AM (#13736909)
          How on earth did a post starting with "Fuck off" get moderated up so high?

          Oh yeah, /., where it doesn't matter how antisocial you are, as long as you express the right opinion.
      • by dwater ( 72834 ) on Thursday September 22, 2005 @07:01PM (#13625728)
        You can always use webmin [webmin.com].
    • by j-cloth ( 862412 ) on Thursday September 22, 2005 @03:39PM (#13624011)
      Dell offers plenty of Linux support.
      You've been able to get Redhat on the servers for years and you can also get high end workstations preinstalled with Linux. All of their drivers and utilities have great Linux support as well.

      Is there anyone left that doesn't offer Linux?
  • by Anonymous Coward on Thursday September 22, 2005 @11:51AM (#13622026)
    Since there will be some storage research going on...

    Imagine you have several remote sites accessing files on a centralised storage server. Just as an example, say it is a Samba server with remote computers accessing it over SSH (like this [webservertalk.com]).

    If you have a slow upload link (who doesn't), working with such a remote storage solution quickly becomes a problem.

    Is there anything in the way of:

    • All the offices have a server and the same data is mirrored on all the servers
    • When you access a file on the server (locally), the files becomes locked on ALL the servers
    • When you are done with the file, data is updated on all the servers using something like bittorrent (only secure+encrypted)

    If I'm thinking this one through right, that would give you instantaneous read/write access to unlocked files on the server (since access is local), the only slowdown being how long it takes to get a file updated/unlocked on all the servers.
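The per-site half of that locking scheme already exists as flock(1); here is a minimal demo (the lock path is arbitrary, and the cross-site part, propagating the lock to the other offices, is the hard bit the scheme would still need):

```shell
# One process holds the lock for 2 seconds; a second, non-blocking
# attempt during that window fails - the per-site building block.
lock=/tmp/office-share.lock
( flock 9; sleep 2 ) 9>"$lock" &
sleep 1
if flock -n "$lock" -c true; then
    echo "got lock"
else
    echo "lock busy"
fi
wait
# prints: lock busy
```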

  • by dido ( 9125 ) <dido AT imperium DOT ph> on Thursday September 22, 2005 @11:53AM (#13622036)

    Being seasoned in Linux enterprise deployments, I've had more than my share of frustration with some of HP's own storage appliances. Their entry-level storage appliances, the MSA series (which, IIRC, they inherited from Compaq), seem to be pretty OK, but they're no good once you grow to the point where more than a few machines need to attach to the SAN. The VA series of high-end storage appliances are, in contrast, the very devil to deal with. I remember the problems a client of ours was having with these monsters when they were using one for Oracle 9i RAC. Their RAID management started having problems once the disks filled to more than 75% capacity, and HP was never able to give us a satisfactory solution, except to replace the damn storage array with something bigger and much more expensive. And so overtures from the likes of EMC began to reach much more receptive ears...

    I certainly hope this helps with the engineering of HP's storage appliance line, and they can fix some of the brain damage that some of them have.

    • by superpulpsicle ( 533373 ) on Thursday September 22, 2005 @12:14PM (#13622211)
      The only good thing out of HP's storage line is their ultra-high-end arrays, which are practically HDS's Lightning SAN arrays rebranded as the HP XP series. The MSA and VA are not remotely comparable to what the rest of the storage industry has to offer.

      • by Anonymous Coward on Thursday September 22, 2005 @12:39PM (#13622431)
        Not "practically"... The HP XP256 is a rebranded Hitachi Lightning 9800-series system. Back when I was working for an FC startup, I used to do interoperability testing with all these systems, and I can definitely say: stick with a robust, well-supported product like the EMC Clariion and Symmetrix arrays.
    • by Anonymous Coward on Thursday September 22, 2005 @07:46PM (#13625983)
      Do you still need a Windows storage management appliance to manage the EVAs?

      Having worked at a site that had severe problems with HP SAN gear, resulting in data corruption and taking HP over 3 months to fix the problem (firmware bug), I would recommend people avoid HP like the plague. Stick to real storage vendors, such as NetApp or EMC.
    • by Macka ( 9388 ) on Friday September 23, 2005 @06:21AM (#13627953)

      That's not been my experience with EVAs. I've worked on dozens of installations with EVAs on the back end, mostly Tru64/Alpha and some HP-UX, and problems have been very rare. I really like them. The ability to create Vdisks of almost any size, without having to keep track of which disks are or aren't free, is very powerful. And I like being able to assign any UDID value I like to a Vdisk, and to assign aliases to groups of HBA WWIDs for easy host/cluster management.

      The XP range are clunky old pigs by comparison. They don't support virtualization, so it's not as easy to make the most of the storage you have. You can't pick your own UDIDs; they're calculated for you, so you can't use them as a tool to keep track of which storage cab your unix disks are located in (e.g. 1000+ for cab1, 2000+ for cab2, etc). And have you ever tried to unpresent a disk from an XP that has a Persistent Reservation on it... good luck trying. On a TruCluster you have to shut down one member, then use scu(8) to attach to the disk and zap the keys manually; stay in scu (so you don't accidentally do any I/O to the disk and re-establish the keys) and then run to your XP and whip the disk away quick. Apparently it's just as big a headache in Windows too. No such problems with the EVA; it knows you're the boss and lets you do what you need to do.

       
  • Use? (Score:2, Funny)

    by mayhemt ( 915489 ) on Thursday September 22, 2005 @12:11PM (#13622174)
    Can we use this to deploy MS' patches? That would be its regression testing... (Just a thought!!)
  • by sconeu ( 64226 ) on Thursday September 22, 2005 @12:31PM (#13622355) Homepage Journal
    I'm surprised that they based it in NC. HP already has a world-class storage division based in Colorado Springs (it was the old Compaq storage division).

    When I worked at a FibreChannel startup, we did a lot of work with those guys.
  • by Anonymous Coward on Thursday September 22, 2005 @01:19PM (#13622801)
    See Oracle and Linux set world record [noticias.info]:

    "Today Oracle announced a new world record TPC-H 300 gigabyte (GB) data warehousing benchmark for Oracle(r) Database 10g Release 2 and Oracle Real Application Clusters on Red Hat Enterprise Linux, overtaking IBM DB2's best benchmark performance in the same category.

    Running atop an eight-node HP BladeSystem cluster of ProLiant BL25p server blades, each with one AMD Opteron 2.6 GHz processor and Red Hat Enterprise Linux v.4, Oracle Database 10g Release 2 and Oracle Real Application Clusters achieved record-breaking performance of 13,284.2 QphH@300GB with a price-performance ratio of $34.20/QphH@300GB. This new industry-leading result surpasses IBM DB2's best TPC-H 300 GB benchmark running on IBM hardware using half the number of processors."
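As a sanity check on those numbers, the quoted price/performance ratio implies a total priced configuration of roughly $454k (13,284.2 QphH times $34.20/QphH):

```shell
# Total system price implied by the quoted TPC-H result.
awk 'BEGIN { printf "$%.2f\n", 13284.2 * 34.20 }'
# prints: $454319.64
```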

  • by Anonymous Coward on Thursday September 22, 2005 @01:21PM (#13622824)
    Sounds like this can only be shared between Linux boxen. Is there any cross-platform support with GFS, even third-party, like SANergy?
  • by Anonymous Coward on Thursday September 22, 2005 @03:20PM (#13623859)
    The short list:

    http://malfeasance.50megs.com/ [50megs.com]
  • by Builder ( 103701 ) on Friday September 23, 2005 @02:58AM (#13627461)
    I've been a Linux geek for about 10 years now, and recently got my first enterprise gig. Part of this meant working with both Linux and Solaris to deploy our new SAN (HDS if it matters). One of the first things that blew my mind was how much better Solaris is when it comes to storage. Just make sure you've got all the possible LUNs you'll be allocated by the SAN both now and in the future in your config file, and that's it.

    When new storage is allocated to the Sun box, just run devfsadm and you'll be able to see it. With Linux, reboot. WTF? I've still not found a way around this.

    Because we've gone for an Enterprise solution with Red Hat, I raised a support call. Their final response was that they do not support adding new LUNs to a machine without a reboot, and that was that.

    Earlier on I'd had a run-in with RH support because they wouldn't support hotswapping disks in an HP DL380. These machines are built to do this, but I was having issues detecting the replaced disk and rebuilding my software RAID array. Again Red Hat said that they did not support hot-adding disks to the machine and that I should reboot. I finally found a solution to this one on my own, making the grand I'd paid for RH support on that machine a bit of a joke :(

    So yeah, Sun kicks ass on this front, and anything RH can do to catch up would be useful!
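For what it's worth, on 2.6 kernels there is usually a way around the reboot, whether or not Red Hat will support it. This is a hedged sketch: host0 is an example, you need root, and behavior varies by HBA driver:

```shell
# Ask the SCSI midlayer to rescan all channels/targets/LUNs on one HBA.
echo "- - -" > /sys/class/scsi_host/host0/scan
# On fibre channel HBAs, forcing a LIP first can help discovery:
echo 1 > /sys/class/fc_host/host0/issue_lip
```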
  • by Alf Gored ( 906180 ) on Friday September 23, 2005 @03:53AM (#13627617)
    Recently, Tipper and I discovered a great little Linux/Solaris search application that uses Ajax to boot. I can search on all my log files -- weblogic, apache, router logs, mysql, oracle, email, et cetera. Cool stuff. Splunkboy [splunk.com]

    //booyakasha
    //Alf Gored
