
InfiniBand Drivers Released for Xserve G5 Clusters 134

A user writes, "A company called Small Tree just announced the release of InfiniBand drivers for the Mac, for more supercomputing speed. People have already been making supercomputer clusters for the Mac, including Virginia Tech's third-fastest supercomputer in the world, but InfiniBand is supposed to make the latency drop. A lot. Voltaire also makes some sort of Apple InfiniBand products, though it's not clear whether they make the drivers or hardware."
  • Proprietary Crap (Score:3, Informative)

    by ceswiedler ( 165311 ) * <chris@swiedler.org> on Friday October 15, 2004 @06:35PM (#10541210)
    The article is still subscriber-only, but Linux Weekly News has a good summary of some discussion on the LKML about InfiniBand. Greg K-H's original posting can be found here [lwn.net]. Basically, he feels that it's impossible to implement the specification for InfiniBand in a free/open source product without violating the licensing agreement of the spec, because of patent infringement.
    • > he feels that it's impossible to implement the specification for
      > InfiniBand in a free/open source product without violating the
      > licensing agreement of the spec, because of patent
      > infringement.

      Not even in the nvidia-drivers kind of way, with proprietary kernel modules? Not an optimal solution (probably nearing highly pessimal), but probably possible.

    • Re:Proprietary Crap (Score:5, Informative)

      by tempest69 ( 572798 ) on Friday October 15, 2004 @06:52PM (#10541353) Journal
      Infiniband is designed to be low latency in the extreme, so their driver software is going to be really sensitive to latency. If they can make their NIC driver 0.5 usec faster than their competition, it's a huge change in total latency. That's only about 2000 clock ticks, possibly 30-50 memory pulls, but for scientific computing such as Computational Fluid Dynamics it makes a huge difference. The more CPUs you scale to, the more important the latency. So their driver software is something they are going to protect; it would be negligent to give it to the competition. Storm
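      As a back-of-envelope sketch of that conversion (the clock rate below is an assumed round number, not a figure from this post; the ~2000-tick figure above corresponds to assuming a faster clock):

        # Rough conversion of a driver-latency saving into clock ticks.
        # The clock rate is an assumption for illustration only.
        clock_hz = 2.5e9            # assumed Xserve G5 core clock (2.5 GHz)
        saving_s = 0.5e-6           # the 0.5 microsecond driver improvement discussed above
        print(saving_s * clock_hz)  # ~1250 ticks; the count scales with whatever clock you assume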
      • by Anonymous Coward
        But they should give it away for FREE!! I want FREE stuff! Gimme FREE stuff, this is slashdot! Information wants to be FREE (as in, fucking GIVEN to me without any effort on my part whatsoever)! Stick it to the man!!

        (Dress this up in a bunch of stupid rhetoric, and you have the typical response around here.)

      • Re:Proprietary Crap (Score:3, Informative)

        by Barto ( 467793 )
        You're missing the point: if the spec were made open (NOT the driver software), open source drivers could be developed, which would increase demand for Infiniband products, reduce costs for users and for Infiniband vendors, and improve compatibility.
      • Don't waste your breath. The parent poster you're responding to doesn't strike me as a mechanical engineer, nor as someone who deals with PDEs via FEM, let alone with the vast task of accurately computing fluid dynamics.

    • Re:Proprietary Crap (Score:3, Interesting)

      by Johannes ( 33283 )
      It's as much crap as other technologies like IEEE 1394 (Firewire). Greg is concerned with the patent licensing requirements for Infiniband, which is a valid concern, but is no different than the requirements for other technologies that have support under Linux.

      In particular, Infiniband requires licensing under RAND terms, similar to that of IEEE 1394.
    • by gl4ss ( 559668 )
      proprietary drivers for a proprietary os that runs on proprietary hardware (an os that's only legal to run on that hw maker's hardware, too).

      so if you're there, you're already pretty deep in "proprietary crap".
    • by Kalak ( 260968 ) on Friday October 15, 2004 @08:14PM (#10541880) Homepage Journal
      OK, the "proprietary crap" discussed here is for:
      #1 Xserves running (wait for it....) Mac OS X.
      #2 Supercomputers

      This is not your Linux box that you're using for a NAT server, or a Beowulf running SETI. If you're building a supercomputer, or just like drooling over them and thinking of using an expensive interconnect like InfiniBand, you're not looking to compare it to a Beowulf over gigabit, and you're probably not going to care whether the drivers are binary-only or not.

      This article is in no way related to any LKML posting other than concerning the same technology. This is about OS X Infiniband drivers. RTFA sometime, and you might realize such things.

      Welcome to the Apple section. If you're not interested in discussion of things related to Apple, please uncheck the appropriate box in your preferences, and we will all be happier. If you like to run Linux on Apple Hardware, please examine the OS discussed before trolling.

      If you want to troll about Infiniband's policies affecting Linux, then wait until the LWN article is public ("Alternatively, this item will become freely available on October 21, 2004"), submit it to /.'s general section (where I would be more than happy to consider it not trolling), and enjoy a livelier discussion there, with a wider, and more appropriate, audience.
    • If so, then why bother? Alternative network stacks over gig-ethernet would be much cheaper and can be reasonably competitive in terms of latency with well-written code.



      There was a dead project that I read about a few months ago that had 20-microsecond latency over 100Mbit Ethernet. If anybody knows what I'm talking about, I would appreciate a refresher.

      • Who are you kidding - GbE vs. Infiniband?

        Performance differs by an **order of magnitude**

        10GbE vs. Infiniband - maybe, but even so - Infiniband is cheaper and has lower latency.
  • Imagine (Score:4, Funny)

    by commodoresloat ( 172735 ) on Friday October 15, 2004 @06:35PM (#10541213)
    installing Infiniband on a single unit G5....
  • Shocking (Score:4, Insightful)

    by CMiYC ( 6473 ) on Friday October 15, 2004 @06:37PM (#10541229) Homepage
    With so few companies left doing anything Infiniband-related, it makes you wonder what the thinking is here.
    • they want to repeat the raging success known as IBM MCA.
    • The thinking is this: The Xserve/G5 is a great platform for scientific computation; The market is only beginning; Myrinet has a very small share of the Apple HPC market; Infiniband is theoretically faster.

      This is not about cobbling 2-4 PCs together. It's for people who want the ultimate Xserve solution. I have a 16-processor Xserve G5 cluster with Gig-E and Myrinet. My next solution will be some 96 processors, all on InfiniBand.

  • Infiniband intro (Score:5, Informative)

    by hardlined ( 785357 ) on Friday October 15, 2004 @06:39PM (#10541248) Homepage
    http://www.oreillynet.com/pub/a/network/2002/02/04/windows.html [oreillynet.com]

    This is a short intro to Infiniband.

    "InfiniBand breaks through the bandwidth and fanout limitations of the PCI bus by migrating from the traditional shared bus architecture into a switched fabric architecture."

    "Each connection between nodes, switches, and routers is a point-to-point, serial connection. This basic difference brings about a number of benefits:

    Because it is a serial connection, it only requires four [wires] as opposed to the wide parallel connection of the PCI bus.

    The point-to-point nature of the connection provides the full capacity of the connection to the two endpoints because the link is dedicated to the two endpoints. This eliminates the contention for the bus as well as the resulting delays that emerge under heavy loading conditions in the shared bus architecture.

    The InfiniBand channel is designed for connections between hosts and I/O devices within a Data Center. Due to the well defined, relatively short length of the connections, much higher bandwidth can be achieved than in cases where much longer lengths may be needed."

    "The InfiniBand specification defines the raw bandwidth of the base 1x connection at 2.5Gb per second. It then specifies two additional bandwidths, referred to as 4x and 12x, as multipliers of the base link rate. At the time that I am writing this, there are already 1x and 4x adapters available in the market. So, the InfiniBand will be able to achieve must higher data transfer rates than is physically possible with the shared bus architecture without the fan-out limitations of the later."
  • speeeeed... (Score:2, Informative)

    by jwind ( 819809 )
    This is cool. The Xserve is a great server. We got one at work and used it as a mirror for a while before the switchover. This thing never crashes. According to one of the articles, these drivers will optimize the power of these beasts...
  • by Sosarian ( 39969 ) on Friday October 15, 2004 @06:45PM (#10541297) Homepage
    I've always understood that Myrinet is one of the better latency products available.

    And it has MacOSX Drivers:
    http://www.myri.com/scs/macosx-gm2.html [myri.com]

    Myrinet is used by 39% of the Top500 list published in November 2003
    http://www.force10networks.com/applications/roe.asp?content=9 [force10networks.com]
    • by Anonymous Coward on Friday October 15, 2004 @07:11PM (#10541506)
      Here's how bandwidth and latency break down for interconnect technologies:

      1. Quadrics (EXPENSIVE! and closed standard) sub 4 microsec
      2. InfiniBand (Relatively inexpensive, open standard) 4.5 microsec
      3. Myrinet (Roughly the same price as IB, but closed standard) sub 10 microsec
      4. GigE (cheap) 20+ microsec

      All latency numbers are hardware, not software, latencies. Depending on how good your MPI stack is, you can often triple those numbers.

      There are so few companies making IB because there is only one chipset manufacturer right now: Mellanox. All the companies making IB products are startups, and it will be a while before things get better.
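      As an illustrative sketch of that software-overhead point (the hardware figures are the ones quoted above, taken at their upper bounds, and the 3x factor is the rule of thumb rather than a measurement):

        # Quoted hardware latencies (microseconds) plus a rough 3x factor for a
        # mediocre MPI stack, per the rule of thumb above. Illustration only.
        hw_latency_us = {"Quadrics": 4.0, "InfiniBand": 4.5, "Myrinet": 10.0, "GigE": 20.0}
        MPI_OVERHEAD_FACTOR = 3   # "you can often triple those numbers"
        for name, hw in hw_latency_us.items():
            print("%-10s %4.1f us hardware, up to ~%4.1f us at the MPI level"
                  % (name, hw, hw * MPI_OVERHEAD_FACTOR))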
      • 3. Myrinet (Roughly the same price as IB, but closed standard) sub 10 microsec

        Myrinet is not a closed standard. It's an ANSI-VITA standard (26-1998). The specs are available for free (http://www.myri.com/open-specs/ [myri.com]) and anybody can build and sell Myrinet switches, if they have the technology.

        Furthermore, the latency is sub 4 microsec. Come to SuperComputing next month and you will see.
      • by stef716 ( 412836 ) on Friday October 15, 2004 @09:25PM (#10542215) Homepage
        Hi,

        where did you get these numbers?
        If you really want to compare the latency of current interconnects, you should use the official performance results achieved in real environments using the driver API:
        (values from homepages)

        1. SCI (dolphinIcs) : 1.4 us
        2. Quadrics: 1.7 us
        3. Infiniband: 4.5 us
        4. Myrinet: 6.3 us

        MPI latency and bandwidth depend highly on the MPI library. I suggest comparing the MPICH results.
        I rated these interconnects, but I'm sorry, I only have a German version.

        http://stef.tvk.rwth-aachen.de/research/interconnects_docu.pdf [rwth-aachen.de]
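        If you want to measure MPI-level latency yourself, the usual approach is a small-message ping-pong. Here is a minimal sketch using mpi4py (my choice purely for brevity; the results above come from the vendors' and MPICH's own benchmarks):

          # Minimal MPI ping-pong latency probe; run with two ranks (e.g. mpirun -np 2).
          from mpi4py import MPI

          comm = MPI.COMM_WORLD
          rank = comm.Get_rank()
          buf = bytearray(1)        # 1-byte message: exposes latency, not bandwidth
          reps = 10000

          comm.Barrier()
          start = MPI.Wtime()
          for _ in range(reps):
              if rank == 0:
                  comm.Send(buf, dest=1)
                  comm.Recv(buf, source=1)
              else:
                  comm.Recv(buf, source=0)
                  comm.Send(buf, dest=0)
          elapsed = MPI.Wtime() - start

          if rank == 0:
              # Half the average round-trip time is the usual one-way latency figure.
              print("one-way latency: %.2f us" % (elapsed / reps / 2 * 1e6))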
    • IB has slightly lower latency than Myrinet (about 1 to 1.5 microseconds less IIRC), but 3-4 times better bandwidth. The IB network management tools are IMHO better than the equivalents for Myrinet too.
      • by Junta ( 36770 ) on Saturday October 16, 2004 @08:11AM (#10543825)
        To say the IB network management tools are better is a great understatement. Part of the Myrinet design is that the network topology is forced to be simple and the switches are kept as dumb as possible (the tasks of routing and mapping the network are distributed to the nodes). IB switches offer a tad more functionality and offload mapping work to the switch, but it stays a source-routed network (which is the chief way these technologies achieve low latency, while Ethernet is switch-routed and therefore scales poorly as the switches have more and more work to do).

        Of course, until IB over fiber media comes around, Myrinet cabling is a hell of a lot easier to deal with: longer lengths, more flexible, and a tighter bend radius.
    • Myrinet is also interesting in that you can program the NIC yourself, at least you could the last time I fooled with it. It has a custom processor (LANai) with memory and a bunch of, basically, DMA channels on it. It doesn't do much of anything out of the box until you put an MCP (Myrinet Control Program) on it. My group made a few custom MCPs for Myrinet back in the day. Interesting programming since it is an embedded system. However, the complexity is/was pretty high and your MCP had to be debugged a
  • by Anonymous Coward on Friday October 15, 2004 @06:45PM (#10541302)
    The Virginia Tech cluster isn't on the top 500 list anymore:

    from http://www.top500.org/lists/2004/06/trends.php

    * The 'SuperMac' at Virginia Tech, which made a very impressive debut 6 month ago is off the list. At least temporarily. VT is replacing hardware and the new hardware was not in place for this TOP500 list.

  • by mfago ( 514801 ) on Friday October 15, 2004 @06:46PM (#10541312)
    People have already been making supercomputer clusters for the Mac, including Virginia Tech's third-fastest supercomputer in the world, but InfiniBand is supposed to make the latency drop.

    Note that V.T.'s cluster already uses InfiniBand, courtesy of Mellanox [mellanox.com].

    It's mentioned at V.T.'s pages [vt.edu].
  • by Killer Eye ( 3711 ) on Friday October 15, 2004 @06:55PM (#10541367)
    ...Halo and UT2004 were starting to slow down on my 1200 CPU cluster!
  • I sure don't see it [top500.org], given that the 'official [macobserver.com]' word back in 2003 was that it was 3rd fastest. On the Top 500 list (June 2004) I can't even find it. And lastly, even if it did reach 10.6 TFlops, it'd be #5, behind the 11.6 TFlop BlueGene/L.
    • by ztirffritz ( 754606 ) on Friday October 15, 2004 @07:07PM (#10541468)
      The BigMac at VA Tech missed the list this year because they were busy switching over to DP G5 Xserves. Last I heard, they had completed the project and were busy re-benchmarking the beast. I also heard that it was poised to move to possibly number 2 on the list after it was retested officially. The Army's version of the BigMac will probably take that title away, though. Then 2 of the top 3 machines would be G5-based. Too cool!
    • by Anonymous Coward
      You needn't be so snippy about it. If you had done any research at all into it you'd know that it was, indeed, in the #3 position but wasn't ranked at all last time around because it was down for an upgrade. They're moving from dual processor G5 desktop machines in the cluster to all G5 Xserves and since all the nodes weren't up during the official ranking period it doesn't appear on the list. Look for it to make a strong appearance again in the near future.

      You seem like the type that needs proof, so here [top500.org]
  • by Twid ( 67847 ) on Friday October 15, 2004 @09:20PM (#10542191) Homepage
    Small Tree also makes cool multiport gigabit ethernet cards that support 802.3ad bonding. Really, the gigE cards are the more interesting thing for most of us who don't have a supercomputing cluster to run. The two-port version is less than $300. They work on Linux as well.

    http://small-tree.com/mp_cards.htm

    Gigabit has a latency of about 100 microseconds and realistic throughput of about 50MB/s. Infiniband has a latency of about 15 microseconds and a throughput of about 500MB/s.

    I mostly sell small Apple workgroup clusters of 16 nodes, and these are almost always just a gigE backbone. There are certain classes of problems that can benefit from Infiniband at low node counts, but for the most common apps, like gene searching using BLAST, gigE is just fine.
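    To put those two sets of numbers side by side, here is a toy transfer-time model (time = latency + size/bandwidth) using the rough figures quoted above, so treat the output as back-of-envelope only:

      # time = latency + size / bandwidth, using the rough figures quoted above.
      links = {
          "GigE":       (100e-6, 50e6),    # ~100 us latency, ~50 MB/s realistic throughput
          "InfiniBand": (15e-6, 500e6),    # ~15 us latency, ~500 MB/s throughput
      }
      for size_bytes in (1e3, 1e5, 1e7):   # 1 KB, 100 KB, 10 MB messages
          for name, (lat, bw) in links.items():
              t = lat + size_bytes / bw
              print("%-10s %10.0f bytes in %8.3f ms" % (name, size_bytes, t * 1e3))
      # Small messages are dominated by latency, large ones by bandwidth, which is why
      # coarse-grained jobs like BLAST do fine over gigE at low node counts.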

    • that was "clusters of less than 16 nodes" -- slashdot ate my <

      Oh, and regarding Voltaire in the original poster's message, Voltaire does make Infiniband hardware, and they do support Mac OS X.

      • Take 'make Infiniband hardware' in the lightest sense; it's more that they work on firmware and resell Mellanox cards/designs. I'm actually not sure who does the final manufacturing, but every Infiniband HCA I have ever seen is absolutely impossible to distinguish physically from the others until it's in a system and running firmware/drivers. I don't think it is feasible to deviate from Mellanox because of patents...

    • You could interconnect with optical fiber too, right? Although I don't really know if that'd be faster.

      I'm expecting delivery of my 8-node Apple Cluster this week, which will almost exclusively do BLASTing, so I'm interested in picking your brain. And learning who your clients are: maybe they're in the market for on-site support? Reply to email above.

      To keep this on topic, I'll plug the Apple Listserves that deal with this subject: Xgrid [apple.com], SciTech [apple.com], Cluster and HPC [apple.com].
  • Not 3rd fastest (Score:2, Informative)

    October 14, 2004 Pg. 54

    http://www.netlib.org/benchmark/performance.pdf

    http://appleturns.com/scene/?id=4980

    "Calm down, Beavis; take a closer look at the third and fourth entries and you'll realize that they're the same exact cluster, before and after its owners added another 64 processors to it. In much the same way, System X is also listed in the seventh, ninth, and eleventh slots, with scores taken at various points along its journey to life as a complete 1,100-Xserve system. Factor out the doubles
  • Anyone know if we'll ever see a Quad G5 in a Mac? Probably an Xserve, but even an IBM workstation using a quad G5 would be nice. Comments?
    • by Anonymous Coward
      I think the thinking from Apple on the current configurations is that a dual 2.5GHz is going to be better than the fastest available Intel-based system with a single 3.xGHz P4. There's no need to make a 4-way box because the 2-way box already beats the best P4, because 2.5+2.5=5. Or something like that. For clusters, who cares how many cores are in a single box? Just link a bunch of 2-way systems together.

      But, once the G5 goes dual-core, I would expect to see a dual dual-core G5 machine out there somewhere. Doe
  • How does InfiniBand compare to Xsan [apple.com]? Are they different systems altogether, do they work in conjunction with one another, or are they competing standards?
    • Infiniband is a general high-speed (10 gigabit/sec) low latency interconnect. Think of it as a really souped up ethernet, which is an oversimplification, but gives the very basic idea.
      • I screwed up a little: I described 4x Infiniband, which is the most commonly used host interface. 1x Infiniband is 2.5 Gbps, and there is also (relatively rare) 12x Infiniband.
    • It doesn't. Xsan is Apple's large-scale storage solution, and is not suited to inter-host communication (unless you bounce it off a disk).

      Phil
  • 1) Macs only have 3% of the market...so who cares?
    2) Macs are only for designers...so who cares?
    3) Macs cost more than PCs...so who cares?

    I'm surprised we haven't seen the usual, eight year old "facts" as to why this is a fruitless effort. Slowly but surely, Apple is making its way back into the limelight. After being the whipping boy for so long for a variety of reasons (no market share, higher outright cost, stability issues, etc), Apple is proving itself to be cheaper, more stable, and damn powerf
  • Voltaire's okay, but you'll notice that Small Tree isn't reselling their gear, they're reselling Infinicon's gear [infinicon.com]. ICS sells the switches and shared IO gear you need to put it all together.

    As I understand it, the advantages of IB over gig-E are lower latency and scalability.
  • First, it's NOT on the Top 500 list, goddammit!
    Will you ever stop repeating that lie?

    Second, it is under testing (not even in production).

    (Third - not as relevant but still - why is a driver release still news? Topspin et al have been offering Infiniband drivers for Linux for a while; who wants
    • Check back to the last list, genius: it was in third place when it was originally built using desktop Power Mac G5s. They're now rebuilding it with Xserve G5 rack-mounted servers, and as the rebuild wasn't finished when they did the latest list, it didn't qualify...
      • > Check back to the last list genius

        I will, can you please provide the URL?

        > it was in third place

        That's what I'm talking about - it *was* for the day or week they tested it, but it was probably crashing or something - in any case, they couldn't/wouldn't use it as it was, so they embarked on an upgrade (or "tuning" i.e. debugging) program.

        A year (!) later the hardware has depreciated some 33% (3-year period, US$5.2m), they've _wasted_ US$1.716m and they're still not using it.

        That's laughable. What
  • Unidirectional bandwidth of 931 million bytes per second is equal to about 887 megabytes per second. That's more than an entire CD-ROM per second.
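    For anyone checking the conversion (the figure treats a megabyte as 2^20 bytes):

      # 931 million bytes/s expressed in binary megabytes per second.
      print(931e6 / 2**20)   # ~887.9 MB/s, comfortably more than a ~700 MB CD-ROM each second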
