
Ask Slashdot: Enterprise-Grade Linux Networking Hardware? 140

An anonymous reader writes "In spite of Linux's great networking capabilities, there seems to be a shortage of suitable hardware for building an enterprise-grade networking platform. I've had success on smaller projects with the Soekris offerings, but they are suboptimal for large-scale deployment due to their single-board non-redundant design (e.g., single power supply, lack of a backup 'controller'). What is the closest thing to a modular Linux-capable platform with some level of hardware redundancy and substantial bus/backplane throughput?"
This discussion has been archived. No new comments can be posted.
  • Server (Score:4, Informative)

    by psergiu ( 67614 ) on Thursday June 07, 2012 @09:53AM (#40243689)

    Try a Dell server.
    Official Linux support - check
    Redundant power supplies - check
    Remote LAN console - check
    Server-class motherboard with loads of bandwidth - check
    Rack-mountable - check

    • by Ogi_UnixNut ( 916982 ) on Thursday June 07, 2012 @09:59AM (#40243743) Homepage

      Yeah, a remote LAN console that is atrocious if you want (god forbid) to use something other than their toy web GUI to admin it, buggy as hell (prone to lockups); plus it shares the main ethernet port, making out-of-band management impossible (a right PITA if you lose the network at the link level).

      I've worked on a mix of DELL, HP, IBM, and Sun hardware, and DELL's were by far the most problematic and difficult to admin, but they were a lot cheaper than the others. I guess you get what you pay for...

      Oh, and I think the original article question was referring to networking hardware, not servers: things like layer 3 switches, bridges, routers... places where an enterprise server would be a waste of power and money. Good question though; I don't know of any Linux networking hardware that is open :-/

      • Re:Server (Score:4, Insightful)

        by djsmiley ( 752149 ) <djsmiley2k@gmail.com> on Thursday June 07, 2012 @10:11AM (#40243917) Homepage Journal

        If they want networking hardware, linux *ISN'T* the way to go.

        Juniper, Cisco, others.... (I dunno anymore but there is I'm sure).

        As you said yourself, you get what you pay for. If you buy crap, you'll get crap throughput.

        • Re:Server (Score:5, Interesting)

          by h4rr4r ( 612664 ) on Thursday June 07, 2012 @10:17AM (#40244019)

          Cisco is crazy overpriced for the throughput you get. A cheap Linux server acting as a router can easily beat many Cisco devices.

          Trying to compete with switches on the other hand is crazy talk.
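          For anyone curious, the basic Linux-as-router setup the parent describes is only a few commands. A hedged sketch; the interface names and addresses are placeholders, not from this thread:

```
# Turn a stock Linux box into a router: enable forwarding,
# add a route, and NAT out the upstream interface.
sysctl -w net.ipv4.ip_forward=1                    # persist in /etc/sysctl.conf
ip route add 10.10.0.0/16 via 10.0.0.2 dev eth1    # downstream subnet
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
```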

          • by unixisc ( 2429386 ) on Thursday June 07, 2012 @10:53AM (#40244497)
            Are you talking Layer 2 or Layer 3 switches?
            • Re:Server (Score:5, Informative)

              by h4rr4r ( 612664 ) on Thursday June 07, 2012 @12:41PM (#40246049)

              Layer 2 is switching. Layer 3 is routing.

              No matter what the marketing morons say.

              • by pacman on prozac ( 448607 ) on Friday June 08, 2012 @02:28PM (#40260613)

                That's the classical definition, but the meaning is evolving; these days I would say it's more accurate to consider hardware-based forwarding to be switching and software/CPU-based forwarding to be routing.

                As for the original question, lots of networking kit uses Linux behind the scenes. Checkpoint splat platform is Linux (IPSO is FreeBSD), I think Mcafee Sidewinder is too, Cisco ASA was a Linux kernel with an IOS-like shell stuck on it (not sure about the new ones). Bluecoat SGOS is very Linux-like but not sure how close it is in reality.

                The difficulty is the lack of hardware forwarding, Enterprise networking kit doesn't generally use fast busses or big backplanes to shift packets, it uses proprietary ASICs to handle the packet processing and forwarding at line rate. You can't just buy a top end server, stick TCP-offloading 10Gbps NICs in it and expect it to firewall at 10Gbps. Although that said a lot of "enterprise" firewalls that are sold as 1Gbps struggle to hit 200Mbps and they still sell plenty of boxes.

              • by billstewart ( 78916 ) on Tuesday June 12, 2012 @05:17PM (#40301017) Journal

                Layer 2 is bridging. Layer 3 is routing. Switching used to be doing bridging fast and cheaply using specialized hardware, but if they want to throw in routing features in the same box, that's still fine. And usually the routing in a Layer 3 switch is dumber than the routing in a router, though that's usually deliberate marketing (leaving out BGP so you still get to buy a Real Router.)
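                In Linux terms the distinction above is concrete: bridging is port membership and MAC learning, routing is a per-packet table lookup. A sketch with era-appropriate tools (interface names and addresses are made up):

```
# Layer 2: make the box a bridge; frames are forwarded by MAC learning
brctl addbr br0
brctl addif br0 eth1
brctl addif br0 eth2

# Layer 3: a routing decision is a destination-prefix lookup
ip route add 192.168.2.0/24 via 192.168.1.1
```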

          • by Anonymous Coward on Thursday June 07, 2012 @11:08AM (#40244733)

            2nded, 3rded and 4thed.

            The Cisco 800 and 1800 series can easily be replaced with a linux box. The 800 is just a SOHO router and the 1800 has 2 EHWIC slots for various WAN cards.

            The 2800, however, has 4 EHWIC slots, and going up from that you have the 3900, 4500 series and 7200, which do different things. The 3900 has lots of EHWIC slots, the 4500 is a 10-slot backplane router designed for telco systems (e.g. you want 2 backplane computers, then you want 6 backplane cards with 10 T1 ports apiece and 2 backplane cards with 10 T3 ports apiece) and the 7200 has, I believe, 8 modules on a backplane with a built-in computer module that can handle something on the order of 24 EHWIC slots or more powerful modules (e.g. modules that can do 3DES/AES VPN without adding significantly to latency).

            So I'll agree with you that the 800 and 1800 series can be done cheaper on Linux: get a cheap Dell rackmount, install Linux, turn on the routing functions, buy the appropriate HBA for your WAN link, and be done with it. If you need to break two T3s into Ethernet, it's far cheaper.

            Load-balancing is a function of installing multiple Linux boxes and using routing protocols. You install 2 ports per switch and configure STP in a staggered fashion (switches 1, 3, 5 use Linux box A and have ports to Linux box B disabled; the other set of switches is set up in the opposite fashion). If one box implodes, STP re-enables the backup port, sets it as the gateway for quad-zero traffic, the SNMP server sends you two messages at 2AM (Linux box A is down, Link A is up) and you reset the alarm clock for 5AM instead of 7. And that's just L2 switches; you can use two L3 switches as your high-availability backend and then set up load balancing on that sucker using routing protocols (e.g. OSPF).
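            A hedged aside: the staggered-STP failover described above is often done instead with VRRP (e.g. keepalived), so the gateway IP itself floats between the two Linux boxes. All names and addresses below are illustrative, not from this post:

```
# /etc/keepalived/keepalived.conf on box A (box B is identical
# except priority 100, so it only takes over when A stops advertising)
vrrp_instance GATEWAY {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 150
    advert_int 1
    virtual_ipaddress {
        192.168.1.1/24
    }
}
```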

            However, if you're going to be running a Telco, or if you are going to break OCx Lines to T3's or T3's to T1's in an enterprise, the Cisco kit has a very good value at your Core layer.

          • Re:Server (Score:2, Insightful)

            by Anonymous Coward on Thursday June 07, 2012 @11:36AM (#40245177)

            On the low-end, you are right. But anywhere that you actually use the features that set a Cisco router apart (enterprise-scale redundancy, failover, etc) you will be glad you bought Cisco. Plus, with dedicated hardware, I can take a failed device, pull the config from backups, drop it on the new device and be back up and running in minutes.

            In the sub-$1000 market, there are plenty of better options than Cisco. I'm a big fan of Fortinet; their cloud management features are pretty slick, and their devices offer so much functionality that it would be difficult to duplicate with just a server. There are so many inexpensive options here that building your own simply makes no sense at all when for the same price, you could just buy a FortiGate and be done with it.

            In short, roll-your-own routers are fun projects, but at the end of the day it'll just be cheaper to buy a commercial router. With a router, you're not buying hardware; you're buying the software. And most of that software is sufficiently complex as to not make you feel ripped off.

          • by mjwalshe ( 1680392 ) on Thursday June 07, 2012 @04:51PM (#40249417)
            But not on cost per watt, uptime, or space taken up. Enterprise networking is serious business; if you design it right, you should be able to power it up and the only time you would power it down would be to replace it at EOL (assuming no act-of-god power outages).
        • Re:Server (Score:5, Interesting)

          by DaMattster ( 977781 ) on Thursday June 07, 2012 @10:27AM (#40244147)

          If they want networking hardware, linux *ISN'T* the way to go.

          Juniper, Cisco, others.... (I dunno anymore but there is I'm sure).

          As you said yourself, you get what you pay for. If you buy crap, you'll get crap throughput.

          Actually, that isn't true at all. Linux can compete toe to toe with Cisco, Juniper, Big Iron, and others. This is specifically why Vyatta [vyatta.com] has so much invested in it. Vyatta has come up with a Linux distro that is designed to replace this proprietary hardware. To boot, Vyatta has scored several major Fortune 500 players. Additionally, OpenBSD [openbsd.org] has routing facilities that are a force to be reckoned with. Several of my clients use Lenovo M71e's with OpenBSD as routers that I built. I replaced the traditional HD with an SSD and bought high-end Intel networking boards. Contrary to "conventional" wisdom, these have been nearly perfectly reliable. They use BGP and IPsec to interface with my Amazon VPC.
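          For flavor, the OpenBSD side of such a setup is a small OpenBGPD config. A sketch only, with illustrative AS numbers, prefixes, and tunnel addresses (7224 is the ASN Amazon documents for VPC VPN peers):

```
# /etc/bgpd.conf - advertise our LAN over the VPC tunnel
AS 65000
router-id 192.0.2.1
network 10.0.0.0/16

neighbor 169.254.255.1 {
        remote-as 7224
        descr "amazon-vpc-tunnel-1"
}
```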

          • by djsmiley ( 752149 ) <djsmiley2k@gmail.com> on Thursday June 07, 2012 @10:43AM (#40244337) Homepage Journal

            Want to use any more buzzwords in what you just said?

            I do need to look into Vyatta... but my point is the questioner doesn't know wtf they want. They don't specify. If they want switches? HAHAHAHA. We know that's laughable.

            I did forget the BSDs, but that's because I rarely use them. I use Linux a lot at home and at work, and yes, my home router runs Linux and so will my new one (which happens to be an ALIX board similar to those linked in the summary).

          • by 0racle ( 667029 ) on Thursday June 07, 2012 @10:47AM (#40244413)
            It's not the reliability that is the issue; you can get very reliable server machines. It is the benefits that the ASICs bring to the various platforms from Cisco, Juniper, HP and whatnot. You can get away without them because for a great number of usage scenarios you don't need them, but when you do, the dedicated hardware will reliably outperform a general-purpose OS on a general-purpose machine. There is also the benefit that a Juniper router or a Cisco switch uses a whole lot less power than that tower.

            Linux and OpenBSD do have a place, probably more places than they are deployed (but a lot of that will be support reasons), but you cannot ignore the fact that the more traditional networking devices from traditional networking vendors also have their place. Picking a tower running Linux when you really did need what that Cisco/Juniper device can do will hurt you more than putting that Cisco/Juniper where you could have used Linux.
          • by SuperBanana ( 662181 ) on Tuesday June 12, 2012 @02:08PM (#40298599)
            You're seriously using a consumer-level desktop chassis for enterprise routing? You're not doing enterprise *anything*. See the title of this post. If you showed up with anything except a 1U rackmount machine, I'd show you the door.
        • by unixisc ( 2429386 ) on Thursday June 07, 2012 @10:52AM (#40244471)
          Cisco's IOS is based on Linux, while Juniper's OS is based on BSD. So if the OP buys CISCO, he gets what he was asking.
        • by Kludge ( 13653 ) on Thursday June 07, 2012 @12:39PM (#40246021)

          If they want networking hardware, linux *ISN'T* the way to go.

          That depends on what you want. The most useful part of having Linux on my router is that I can make it do what I want it to do. QoS, firewalling: those just scratch the surface. Someone who knows Linux networking well, or is willing to put a little work into it, can make a router that does virtually anything.
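          As a concrete taste of that flexibility, here is a hedged QoS sketch with tc and iptables; the interface name, rates, and classes are invented for illustration:

```
# HTB tree: interactive traffic gets a guaranteed slice, bulk gets the rest
tc qdisc add dev eth0 root handle 1: htb default 20
tc class add dev eth0 parent 1: classid 1:1 htb rate 100mbit
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 20mbit ceil 100mbit prio 0
tc class add dev eth0 parent 1:1 classid 1:20 htb rate 80mbit ceil 100mbit prio 1

# Steer SSH into the fast class
iptables -t mangle -A POSTROUTING -o eth0 -p tcp --dport 22 \
         -j CLASSIFY --set-class 1:10
```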

        • by trevelyon ( 892253 ) on Thursday June 07, 2012 @05:06PM (#40249633)
          Well that all depends on where you want it and what functionality you need. I know I've deployed fleets of WRAP PCs running LEAF that have simply blown away the Cisco hardware at a fraction of the cost. Below is a summary of how I saw them stacking up.

          The LEAF on WRAP PC advantages were:
          More secure: SSH access and serial console, latest strong encryption included
          More reliable: especially if the Cisco devices were running any network server functions like DHCP; fanless, all solid state
          More complete: VPN, DHCP, DNS (tinydns and dnsmasq; I never run BIND on a firewall even though you can)
          Lower power usage: 5W, and can be powered by PoE
          More upgradeable: new major version released every couple of years; free upgrades, patches, new features, etc.
          Lower cost: about 10-20% of the cost of a PIX or comparable VPN-enabled router (at least as of a few years ago). So much so that having a cold standby (just swap flash cards) was easily justified. Never had a unit in the field go bad yet, but at $250 ea it was easy to be safe.

          Cisco advantages are:
          A more standardized CLI
          A greater pool of available talent to work on it
          Custom asics for more routing performance in very demanding applications (ISP cores, etc)

          These areas are about the same:
          Config backup
          Staging and deployment

          These WRAP PCs were all edge devices or installed in the SMB environment, in firewall/routing/VPN/security roles. I am not aware of any switch hardware that runs Linux, but the tools are there on the Linux side for bridge management. I only needed to scale up a few times for VPN concentrators, and for those, server hardware was the answer. Big network core routers will need some custom ASICs, though, and I'm not aware of any offerings like that which run Linux. On the edge, IMO, Linux destroys the competition IF you have a couple of qualified Linux resources. I used to be a Cisco instructor (basic network switching courses, network management, ATM/LAN switching) several years back, so I have a good understanding of the device capabilities but am a bit rusty in some Cisco areas.
          I'd be curious to hear why you think Linux isn't the way to go across the board? It hasn't been my experience at all.
        • by rjr3 ( 658693 ) on Thursday June 07, 2012 @05:18PM (#40249861)

          You must not interact with Cisco gear.
          The Nexus 7000s run Linux as their Supervisor OSes
          The Nexus 5K .... ditto,
          Storage platforms .....

      • by Anonymous Coward on Thursday June 07, 2012 @10:12AM (#40243939)

        I run a 50/50 mix of Dell and HP ProLiant servers, about 30 of each brand. All of these are fairly new, within a few years of age.
        By far, the Dells break down more often. The HPs seem to only lose hot-pluggable hard drives every now and then, but the Dells lose drives, PSUs, cooling fans, RAID controllers, and even had a motherboard fail. However, the latest batch of ProLiants I bought doesn't seem to be built as well as in the past either; we'll see how they hold up. It's all Foxconn junk nowadays. The new servers do perform very fast, however; you do have to give them credit there.

      • by InterBigs ( 780612 ) on Thursday June 07, 2012 @10:20AM (#40244047)
        Actually, all the DRAC Enterprise cards that I've worked with (say, the last two or three generations) have a dedicated ethernet port. The whole management card functions separately from the server, as it should. Sure, the remote console works through a Java Web Start application, which seems kludgy, but it has never failed me (much like pretty much all the Dell server hardware we operate over here).

        However I agree with you that a complete server would be a waste of resources for this scenario so it's kind of a moot point.

      • by afidel ( 530433 ) on Thursday June 07, 2012 @04:10PM (#40248827)
        plus it shares the main ethernet port
        Huh, that is an option but on almost all models you can set it up on a separate physical port, for some models you do have to buy an additional widget to get that functionality but it's generally not expensive.
    • by ChrisBachmann ( 1675584 ) on Thursday June 07, 2012 @10:24AM (#40244107)
      I'll need to echo this. They also have Broadcom NICs with TOE + iSCSI offload. I use some Dell blades with a dual head Sun 7410 system and that runs Citrix XenServer running Debian squeeze VMs plus some windows VMs. The blades are built to have redundant NICs and room for up to two more network types. Whether it's ethernet, fiber channel, ininiband, etc. Plus the network modules in the blade chassis can be switches themselves. Plus the range of product options is pretty good too.
    • by SaDan ( 81097 ) on Thursday June 07, 2012 @01:52PM (#40247069) Homepage

      Or just get a Power Router running Mikrotik OS (Linux based)

      http://www.mikrotikrouter.com/ [mikrotikrouter.com]

    • by hamsjael ( 997085 ) on Thursday June 07, 2012 @04:37PM (#40249175) Homepage

      Try common off-the-shelf PC hardware.

      We have been running OpenBSD on old AMD dual-core MBs for quite some time now. The machines are fitted with an Intel quad-port GbE adapter, but otherwise they're completely standard PCs. We have a bunch of these MBs and every component is easily replaceable. We have two identical machines running side by side, so when it's time to upgrade, we yank the cables from one box to the other. We have been contemplating using CARP for failover, but I'm a firm believer in simple things (the importance of KISS can't be overstated).

      Throughput and stability are great. We do a lot of webhosting and have a lot of S2S IPsec tunnels.

      Furthermore, the OpenBSD boxes can do some tricks that the trained monkeys with their Checkpoint, Cisco, Juniper and so on at our customers' sites typically have never heard of (like port-based IPsec routing, for example).

      If you have the know-how, an "enterprise" firewall with all the service agreements, licensing costs and other thievery is just money out the window.
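      For reference, the CARP setup being contemplated is only a couple of ifconfig lines on OpenBSD. A sketch with a placeholder interface, password, and address:

```
# Master box; the backup runs the same commands with advskew 100,
# so it only claims the shared IP when the master stops advertising
ifconfig carp0 create
ifconfig carp0 vhid 1 pass mypassword carpdev em0 advskew 0 \
        203.0.113.1 netmask 255.255.255.0
```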

  • by Anonymous Coward on Thursday June 07, 2012 @09:55AM (#40243699)

    Xeon/Opteron-based platform. You want enterprise, you order enterprise: something with ECC, and you could probably use memory mirroring on such a networking platform. Then get the latest Intel NICs, which have ECC throughout the NIC's components (not sure if that's a new feature, or only newly advertised).

    Then choose a major vendor: IBM/HP/Dell/Supermicro.

  • by alen ( 225700 ) on Thursday June 07, 2012 @09:59AM (#40243749)

    checkpoint, xangati and a bunch of others i've seen use linux. have you even looked?

  • Supermicro (Score:4, Informative)

    by BaronAaron ( 658646 ) on Thursday June 07, 2012 @10:00AM (#40243761)

    I've used Supermicro equipment for years. Their 1U Atom based systems work great for firewalls, routers, or any other kind of Linux network device. Low power, mostly fanless (power supply has a fan), expansion slots, decently priced. You can go up the line to full blown Xeon based systems with all the redundancy you need.

    Their support is good also. You get to talk to knowledgeable people who speak English.

    Supermicro website [supermicro.com]

    • Re:Supermicro (Score:3, Insightful)

      by Anonymous Coward on Thursday June 07, 2012 @10:05AM (#40243843)

      Dude said enterprise. Supermicro does not provide enterprise support, they have fine phone support but replacements are slow to arrive and unreliable. Hell their build quality is dodgy at best. (Stuff may not fit identically unit to unit, poorer quality fans, etc) I like them a lot, used them for a 400 server build a couple years back, the cost/value is fantastic, but they are not "enterprise" by any stretch. Just reasonably priced Chinese server gear.

    • by pnutjam ( 523990 ) <slashdot&borowicz,org> on Thursday June 07, 2012 @03:03PM (#40247989) Homepage Journal
      Looking at their website, I don't see any pricing or suppliers that actually sell the Atom servers, but they look interesting.
    • by Skapare ( 16644 ) on Thursday June 07, 2012 @03:13PM (#40248127) Homepage

      Supermicro does make great stuff, but I haven't found anything they make to be suitable for a network switch. The standard model here is 1U rack space, a flash device for the OS (preferably internally removable, like maybe CF or SDHC on the board inside), and 16 gigabit ethernet ports (a couple of them being ten-gigabit would be a plus, and fiber a plus-plus). Also, a leaner CPU that runs cool, like ARM, MIPS, or PPC, would be great (but this is outside Supermicro's current area of expertise). So we are talking about a single board with all the ports right on it, and everything accessible with open-source kernel-tree drivers.

  • by Anonymous Coward on Thursday June 07, 2012 @10:00AM (#40243771)

    If you're looking for very high-throughput switches and network equipment running Linux, I heard about Arista a while back, which might be what you want:
    http://en.wikipedia.org/wiki/Arista_Networks
    http://www.aristanetworks.com/

    Their switches essentially run a standard Linux userspace with extra daemons controlling the switching hardware.

  • by acoustix ( 123925 ) on Thursday June 07, 2012 @10:01AM (#40243793)

    I have a friend who operates a small ISP in rural Iowa. I believe he's using ImageStream [imagestream.com] routers. Just a quick look at their lineup and I'm guessing that they can cover small to mid size businesses. They claim to be able to replace Cisco 3945 and 7206 routers. I'm not sure about hardware redundancy though.

  • Try ALIX? (Score:4, Informative)

    by guises ( 2423402 ) on Thursday June 07, 2012 @10:09AM (#40243903)
    ALIX boards can run Linux or FreeBSD (Monowall, pfSense) and support PoE, so you can set up your own redundant power system. For board redundancy, just use two routers.

    Actually, the Soekris boards seem to be similar - they both use x86 CPUs.
  • by laptop006 ( 37721 ) on Thursday June 07, 2012 @10:09AM (#40243905) Homepage Journal

    Pretty much all the software devices I've seen have either been a rebadged Dell or Supermicro, with the top end running custom cases and the low end doing whitebox.

    In terms of "real" networking kit though, there is a bunch of switches that run linux:

    Arista (everything)
    Extreme (everything running XOS, which is all current models)
    Cisco (everything running IOS XE, the only switch being the 4500-X)

    All Juniper devices that run JunOS are FreeBSD, this includes both the EX and QFX switch lines, as well as their SRX firewalls.

    Also most of the openflow-aimed switches run Linux, eg http://www.pica8.com/ [pica8.com]

    • by Hydrian ( 183536 ) on Thursday June 07, 2012 @11:53AM (#40245391) Homepage

      Just as a point of reference, the Juniper Secure Access (SA) switched from BSD to Linux in firmware >= 7.x.

    • by pjr.cc ( 760528 ) on Thursday June 07, 2012 @07:22PM (#40251155)

      While this is true, there's a fundamental difference between a Linux box with a quad-port card and, say, a Cisco or Juniper with four 1-gig network ports. The primary purpose of the OS (BSD or Linux) on these devices is to:
      1) store configuration
      2) provide a management interface
      3) program ASICs

      If, for example, you took a whitebox, shoved two quad-port 1-gig network cards in it and installed JunOS on it, it would be nothing like an SRX210 - same port count, even the same capabilities, but what you don't have is a bunch of ASICs that drive the network, and that is fundamentally different. On these devices the underlying OS doesn't actually provide a lot of the firewalling or routing capabilities, and none of the switching; this is all handed off to dedicated hardware, and the underlying OS just provides a way of programming that.

      • by pjr.cc ( 760528 ) on Thursday June 07, 2012 @07:32PM (#40251245)

        Actually, this is even true of the general consumer-focused firewall/routers you get down at the shop for $50. Take the TP-Link TL-WR1043ND (http://www.tp-link.com.au/products/details/?categoryid=238&model=TL-WR1043ND): internally it's a 6-port switch, entirely ASIC-driven and programmed from the OS (if you're running OpenWrt you can run swconfig and play with the switch config). The switch does VLANing and everything you expect from a basic switch. So everything layer 2 is done in ASICs...

        One of the ports on the switch (the 6th) is directly connected to the Linux OS sitting inside, and the switch treats the Linux OS as just another connection.

        Layer 3 on these devices, however, *IS* driven by the Linux OS inside: firewalling, routing, etc. On enterprise kit, a lot of that is also moved into ASIC form and provided purely in silicon as well.
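        The swconfig poking mentioned above looks roughly like this on OpenWrt; the switch device name and port layout vary per board, so these values are illustrative:

```
swconfig dev switch0 show                             # dump the ASIC's current state
swconfig dev switch0 vlan 1 set ports "0 1 2 3 5t"    # LAN ports plus tagged CPU port
swconfig dev switch0 set apply                        # program the switch
```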

    • by rheum101 ( 601693 ) on Friday June 08, 2012 @03:12PM (#40261131)
      /* Please don't comment out words such as FUCK as this fools the curse / swear filter */
  • by jimicus ( 737525 ) on Thursday June 07, 2012 @10:16AM (#40243989)

    There's plenty of options, but relatively few that an individual might be able to purchase for a pet project or for a small number of prospective clients.

    Off the top of my head, Dell offer an OEM scheme [dell.com] whereby they'll rebrand one of their servers with your logo and install your software on it before shipping it out to your customer; another company called NEI [nei.com] will do something similar. I've actually got an NEI box right next to me now - I'm the customer of a company that uses them.

  • by ZonkerWilliam ( 953437 ) on Thursday June 07, 2012 @10:24AM (#40244099) Journal
    HP's ProLiant line is still pretty good and supports *nix.
    • by unixisc ( 2429386 ) on Thursday June 07, 2012 @11:18AM (#40244877)
      It seems the OP was talking about networking gear. But if he was talking about servers as well, then HP's Integrity servers would be even better, since one knows that they are enterprise-class and would scale well. Granted, the OS options are pretty thin here: on Linux there's only Debian, and on BSD there's only FreeBSD. But the good thing about that is that it forces the company to stick to FOSS like PostgreSQL, which in the long run ensures that it will be around regardless of support. The temptation to switch to Windows Server 2008 or Oracle Linux or other such things is eliminated.
  • by ggendel ( 1061214 ) on Thursday June 07, 2012 @10:25AM (#40244117)

    We've had good results with boxes from Penguin Computing. We get boxes with redundant power supplies, redundant NICs, and RAID. We've spent a lot of time qualifying these boxes before deploying them to our customers and currently have a lot of them in the field.

  • by multipartmixed ( 163409 ) on Thursday June 07, 2012 @10:28AM (#40244161) Homepage

    ...but I use Sun Microsystems hardware for this task.

    The X2100, X4100 series servers more than meet my needs, and are available on the used market for a song these days.

    The lights-out management works great, the rackmount kits and cable management arms are first-class, the hardware is well-made, and they look cool. Heck, they're even certified to run RHEL 5 or so.

    Best of all - buying used Sun gear and putting Linux on it pisses off Larry Ellison. What more could you ask for?

    • by unixisc ( 2429386 ) on Thursday June 07, 2012 @11:08AM (#40244743)

      Would putting Oracle Linux on it piss off Larry? Since Oracle Linux is rebranded Red Hat Linux. Or does Oracle Linux not support Sparc, even though Red Hat does? That would be too funny.

      Or were you talking about putting a non-Oracle Linux, such as either Red Hat itself, or something like Debian or something else? If you put on it BSD flavors like OpenBSD, pFSense or Monowall, that would be like dragging it from its SVR4 roots back to its BSD roots, and would be even funnier.

  • My Day Job. (Score:5, Informative)

    by cheetah ( 9485 ) on Thursday June 07, 2012 @11:00AM (#40244625)

    Ok first thing first, I work for ImageStream as the Technical Support manager. So I might have a slightly biased viewpoint when it comes to the place I have been working for the last 16 years... But we have been doing Linux Based networking for the last 14 years.

    What the OP wants to do is rather difficult for a few reasons. First, after shipping thousands of Linux based routers I can tell you that redundant power supplies that fit into standard PC hardware have a much higher failure rate than a standard Power Supply. Granted, if you have a failure you still have a functional power supply(which is now working twice as hard and is even more likely to fail).

    Second, standard PC hardware just doesn't support multiple redundant components. Sure you can get redundant power supplies, but redundant buses or Cpu's your talking different about a totally different class of hardware(see below).

    Third, If you truly have an Enterprise application, and your asking about hardware to support your application you are already in over your head. Sorry it's just the truth. The OP is talking about building a custom solution for a mission critical application and they have to ask on slashdot about hardware solutions. What happens when(not if) the OP has a problem. The real reason that many people buy our(ImageStream's) hardware is for the support. If something doesn't work they don't have try and troubleshoot a strange Pci bus condition or an obscure Linux Kernel issue that you only see when you have +5,000 networking interfaces in a system. It's one thing if your a Google and you want to build something that just doesn't exist like the OpenFlow switches they are using in their Gscale network. But for a normal organization you are going to spend money and time to develop your custom solution and in the end if anything doesn't work, you will spend more time fixing it.

    Now if the OP still wants to do this... I would look at an ATCA (AdvancedTCA ) chassis. You can get support for a redundant dual loop back plane, multiple CPU cards, redundant power supplies and in most cases a out of band management module for the chassis. But this is VERY costly hardware. If your not budgeting at least $20k in hardware your likely not going to end-up with anything that had real redundancy.

  • by Chirs ( 87576 ) on Thursday June 07, 2012 @11:03AM (#40244677)

    Current state-of-the-art in off-the-shelf ATCA gear (chassis, switch cards, compute cards, etc.) provides redundant 40-gigabit backplane connectivity on the data fabric. It's available with Linux support.

    It's telco-grade stuff, so redundant power supplies, redundant fans, redundant networking, redundant shelf management, etc.

    You're going to pay for it though.

  • by jon3k ( 691256 ) on Thursday June 07, 2012 @11:09AM (#40244745)
    Arista Networks. 10GbE, insanely low latency, insanely low per-port cost, and last I checked it was running a Fedora kernel and userland.
  • You don't define precisely what you mean by these things.

    * If we're talking about switches, forget it. Cisco does it better and faster, easier to manage, with more robust hardware and a better service plan (limited lifetime warranty on all fixed configuration switches!). A non-Cisco switch doing anything of value on your network is a surefire way to convince me that you are bush league.

    * Routers - it really depends. What are you going to do? Just route traffic between LAN interfaces? A Cisco L3-capable switch will probably be the fastest for this job, considering that many of its traffic-routing tasks can be done in hardware that has been made to spec. But if you're looking to stick with Linux, you can configure a Linux server with the hardware you require and load it up with the network protocols you need it to run. A Linux server can certainly run OSPF or BGP. However, what else do you need? Do you also need a firewall, a VPN concentrator, an intrusion detector, a WAN optimizer, a small phone system? If you need those things as well, a hardware router will do them all at once in addition to its routing tasks, with a better performance:price ratio. Configuring it is not hard to learn. If you don't have time, you can always phone someone else who's contractually obligated to fix it.
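    Getting a routing daemon going on such a Linux server takes very little. As a sketch, a minimal OSPF configuration for Quagga's ospfd (the router ID, networks and area below are made-up placeholders, not from any real deployment):

```
! Hypothetical /etc/quagga/ospfd.conf -- all addresses are placeholders.
router ospf
 ospf router-id 192.0.2.1
 network 10.0.0.0/24 area 0.0.0.0
 network 10.0.1.0/24 area 0.0.0.0
```

    Quagga's vtysh then gives you a Cisco-like CLI on top of the daemons, which softens the learning curve for network people.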

    * Firewall - this is wide open. Every single piece of firewall software seems to approach things in a totally different way, especially in terms of management interfaces. I would look around for the one that communicates to you in the way you find most intuitive, and then buy the gear that runs that. While I know Linux on a server will have some powerful firewalling capabilities, I simply can't use most of the Linux-based management packages because they just don't seem to think the way I do. Hopefully this is remedied soon, because most firewall vendors are incredibly overpriced and, in the case of Cisco especially, occasionally hard to even obtain at all.

    I'm no Cisco fanboy, although I do rely on them for my income (full disclosure). I also don't want to be a Negative Nancy, as I understand that not everyone warms up to the whole "you should be grateful to have our logo in your rack" attitude you get from Cisco...I certainly don't. But there is a reason beyond simple groupthink that causes people to buy their stuff - frankly, there just is no serious alternative when it comes to switches or multi-function routers.

  • by Quick Reply ( 688867 ) on Thursday June 07, 2012 @11:48AM (#40245331) Journal

    Here is something different to all the other experts.

    It is absolutely useless to have redundant hardware (e.g. dual PSUs, dual CPUs, dual motherboards) in the same computer. You will never be able to protect 100% against a hardware failure: the redundant components invariably share hardware so they can interconnect, and that won't protect you from things like a short circuit or power surge, which would take out everything up to the UPS. And if a component does fail, you will have to take the machine offline anyway to repair it and restore the redundancy.

    You are far better off getting two (or more) completely separate servers, geographically diverse if possible, which use software to provide redundancy. If one goes down, the other(s) would be powerful enough to handle all the load, and when everything is rosy, it just load-balances.

    The real-world difference: a $5,000 server has identical specs to a $20,000 one minus all the redundant PSUs, etc. You are better off buying two $5,000 servers ($10,000 total) and setting them up as redundant copies of each other, so you truly have two COMPLETELY separate sets of hardware (geographically separate too, if possible). As a bonus you get twice as much computing power when both servers are working (or can scale down power draw when it's not needed), and if you need to pull one down for maintenance, you don't have to shut off the whole thing.

    If you are into dual-PSU equipment in addition to load balancing/failover between servers that also have internal redundancy, that is pointless: you should already be able to cope with the complete failure of a "redundant" server, and the window for another server to fail during the time it takes to replace the defective part is not very large.

    The only exception to this is hard drives. Redundant drives make sense, not just because of their high failure rate, but because a drive failure is a lot more work to recover from (other components are just a straight hardware swap), so the redundancy saves extra work in the long run.
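    If you do mirror the drives as suggested, Linux software RAID makes the redundancy nearly free. A sketch only; it needs root, and the device names are hypothetical:

```
# Sketch: build a RAID1 mirror from two spare partitions (placeholders),
# put a filesystem on it, then watch the initial resync.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mkfs.ext4 /dev/md0
cat /proc/mdstat
```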

    For a smaller environment where a small amount of downtime is acceptable, you can even have a cold server: an exact clone of the main server, with all the software set up but powered off, ready to take over if there is a fault with the main server. There is no redundancy or failover with this, but then again, in a smaller environment your app might not support any kind of redundancy anyway. With a cold server, just turn off the faulty server, switch on the cold server, restore the latest data set, and off you go. Microsoft doesn't require a separate license for cold servers either.

    • You nailed it, sir.
      I am running a lot of services off standard PC hardware. Modern PCs are insanely high-powered. I'm running a fair number of AMD t1100 boxes (6 cores and 16 GB DDR3 RAM with ECC) with SSDs in a mirror as DB servers. Of course this kind of setup requires extra care to test backup/restore procedures.
      On the positive side, any component in these boxes is extremely easy to replace.
      No "4 hour, on site" service contract beats the ability to pull a standard PSU, motherboard or RAM stick off the shelf and stick it into the server!
      And best of all, these boxes are dirt cheap :-)
      Of course you will never get the same performance from a single box with this setup, but in a lot of cases (at least for us) that is not necessary.
    • Yeah, I was wondering where this was going to be in this thread. Linux isn't the right software for a switch because the right hardware doesn't exist. But it's good software for a router. A router is usually a good candidate for duplication and hot failover, as opposed to a switch, so this is perfectly good advice.

    • In principle I agree with you, but take exception to your dismissal of dual PSUs.

      All our servers run affordable dual PSU units, with single backplanes and modular PSU trays. These fit into standard ATX PSU bays so special cases aren't needed. These weren't purchased due to anticipating PSU module failure, but upstream power source failure. We can power down any one UPS in our server room without affecting any servers. Given the reliability of UPSs and the occasional need to move cables, etc, this is a definite bonus for us.

  • However, the company used FreeBSD, not Linux.

    Still, it was one of the best routers the company I worked for (a 100,000+ employee hi-tech company) ever installed. We had mostly Cisco gear, but the FreeBSD-based routers (they used some special motherboards) were a pleasure to admin and came with some service-level routing capabilities as an added bonus. Performance was stellar for the time.

  • by dougsk ( 677406 ) on Thursday June 07, 2012 @12:23PM (#40245801)

    These guys make the hardware that $VENDOR rebrands and sells as an appliance.

    http://www.nexcom.com/Products/network-and-communication-solutions/mainstream-appliance [nexcom.com]

  • by Anonymous Coward on Thursday June 07, 2012 @01:08PM (#40246401)

    Go with the cheap router and buy TWO or more.

    Deploy using VRRP or other active/standby or active/active configuration.
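    On Linux, the usual implementation of that VRRP suggestion is keepalived. A hypothetical config sketch for the master (interface name, VRID, password and the virtual gateway address are all placeholders; the standby box is identical apart from `state BACKUP` and a lower priority):

```
# Hypothetical /etc/keepalived/keepalived.conf on the primary router.
vrrp_instance VI_LAN {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 150          # standby uses a lower value, e.g. 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass s3cret
    }
    virtual_ipaddress {
        192.0.2.254/24    # the gateway address clients point at
    }
}
```

    When the master dies, the standby sees the VRRP advertisements stop and takes over the virtual address; clients never change their gateway.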

  • by Anonymous Coward on Thursday June 07, 2012 @01:31PM (#40246717)

    This probably isn't the right path for the OP, but throwing this out since this is an option that might be suitable for some readers.

    More "industrial grade" than "enterprise grade", but if you need a flexible high-slot-count solution you may want to look into PICMG 1.3 (System Host Board) based hardware. Instead of a motherboard with PCIe expansion slots, there is a passive backplane consisting of a system slot and some number of PCI and/or PCIe slots (anywhere from 1 to 20 depending on the particular backplane). The system slot takes a "Single Board Computer" that performs all of the "motherboard" functions, with options ranging from Atom to dual Xeon processors to suit most processing needs. Since the hardware is really nothing more than standard PC components on a card instead of a motherboard, just about any PC OS is supported.

    If you go with a 3U or 4U chassis, you can easily find redundant power supply options, and service is also easier (you can swap out the processor card as easily as any other card). The only "difficult" maintenance is a backplane failure, but even that is normally simpler to deal with than on a conventional motherboard layout. There is no bus-level redundancy (though with a "split backplane" you can actually have two independent units in a single chassis... but you are better off with two separate chassis anyway). You can easily put together a "spares kit" of processor cards, backplanes, and network cards.

    These systems are mainly used in industrial settings, so they tend to be more rugged than typical systems. Due to the level of customization, you would also end up spending quite a bit of time selecting a configuration and doing testing. Depending on the number of systems you anticipate needing, this might be more effort than you want to spend.

    Many vendors to choose from, if you are interested in looking into the option here are a few starting points:

    http://www.trentontechnology.com/
        (look under Products: Board Products). They produce high-performance processor cards (single and dual socket "Core" and Xeon).

    http://www.onestopsystems.com/
        turnkey systems and some interesting PCIe bus extension products if you want to share a rack of cards

    http://www.cyberresearch.com/
        a wide array of cards, backplane, chassis options including "lower-power" cards (celeron/atom) as well as higher-end.

  • by Skapare ( 16644 ) on Thursday June 07, 2012 @02:38PM (#40247691) Homepage

    Hover over the Products tab and you get choices for the various product-line numbers. But this is obscurity to the public market. The marketing director might know exactly what all those numbers mean, but people who are new to this company will not. That's not to say they shouldn't list their products by number somewhere. But they also need to list their products by the functions they perform and the problems they solve, so that new customers can go right to the correct pages. Potential customers won't stay if they have to navigate in and out of different pages one by one; they would be better off scrolling than doing that.

  • by pjr.cc ( 760528 ) on Thursday June 07, 2012 @07:07PM (#40251047)

    Don't try to beat the established companies at switching with Linux on commodity hardware - there just isn't a good reason to. I love JunOS, ScreenOS and IOS - they kick arse... I also like what Huawei do (they are a little cheaper, but on the switching side they're very good). I've been doing networking for 15 years as a job, and I've been doing Linux since '92.

    However, I'm also very VERY keen on Linux on the routing side... I've even written my own firewall/routing software for Linux. At layer 3, Linux has one advantage Cisco, Juniper (ScreenOS and JunOS), and basically everyone else cannot give you: adaptability. Just about any 1RU server capable of supporting either 8 1 Gbps NICs (2x4 PCIe) or 2-4 10 Gbps NICs (either 1x2 or 2x2 PCIe) is fantastic. Modern CPUs and buses really don't change much between vendors, only between generations, so you shouldn't bother looking for "which has the best bus" because they all do (Dell, IBM, HP, it doesn't matter). If you can get a server with a serial LOM (not just a network-connected, web-GUI-based piece of nastiness, because you DO want out-of-band management) you'll be laughing. Generally speaking, most x86 hardware will have around the same life expectancy as dedicated hardware; by that I mean a Dell server with redundant power supplies will have about the same uptime as a Juniper SRX650 with dual power supplies. The one thing you'll probably miss out on is hot-swappability.
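    The parent's slot sizing can be sanity-checked with back-of-the-envelope arithmetic. A sketch assuming PCIe 2.0 at roughly 500 MB/s of usable bandwidth per lane per direction (the usual estimate after 8b/10b encoding; real throughput varies with generation and protocol overhead):

```python
# Rough one-way bandwidth of a PCIe 2.0 slot vs. the NICs behind it.
# The 500 MB/s/lane figure is an approximation; treat all results as such.

LANE_GBPS = 500 * 8 / 1000.0  # ~4 Gbit/s per lane, per direction

def slot_capacity_gbps(lanes):
    """Approximate usable one-way bandwidth of a PCIe 2.0 slot."""
    return lanes * LANE_GBPS

# An x4 slot feeding four 1 Gbit/s ports:
print(slot_capacity_gbps(4))  # ~16 Gbit/s, comfortable headroom over 4 Gbit/s
# An x8 slot feeding two 10 Gbit/s ports:
print(slot_capacity_gbps(8))  # ~32 Gbit/s, covers 20 Gbit/s of NIC capacity
```

    In other words, a couple of ordinary PCIe 2.0 slots are not the bottleneck for the NIC counts discussed above, which is why the choice of server vendor matters so little here.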

    Now you plug that machine into your switch, etherchannel and VLAN-trunk it, and you have an amazing device. What you do with it then is entirely up to you, and this is generally a harder decision than the hardware: what you'll put on it. You can go with a real firewall GUI (such as Vyatta) or you can do something far more interesting - I recommend Devil-Linux personally, as it's the most flexible of the lot without being a pain to maintain (whereas CentOS, Ubuntu, Fedora, whatever, are not good choices for networking equipment, because there is a lot of config to manage on the machine side - very bad for networking). One reason I don't like most Linux firewall distros is that they tend to limit you, and if you're going to accept that, go get a Juniper NetScreen/SRX - they're just not that expensive. (There is one exception, and that's OpenWrt: it runs on x86 and has almost every component a normal Linux distro has.) It's also worth avoiding hard drives (except if you're going to put a network cache in there), and there are good options out there for doing just that.

    Linux's most valuable asset is its ability to do unbelievably fantastic things at the network layer and then be adapted easily. With vendor enterprise kit you get IPv4, IPv6, routing protocols (IS-IS, OSPF, BGP, RIP, plus EIGRP for Cisco), policy-based routing, some network services (DHCP, RAs, etc.), and firewalling/load balancing/VPN depending on the device. With Linux you get all this and a hell of a lot more in one device; it is well worth your time checking out the younger and more interesting routing protocols (like Babel, OLSR, etc. - there are a few). The fun is bringing it all together.
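    The policy-based routing mentioned here is just a couple of iproute2 commands on Linux. A sketch only (needs root), with made-up addresses, interface and table number:

```
# Sketch: send traffic sourced from 10.0.0.0/24 out a second uplink
# via its own routing table. All values are placeholders.
ip route add default via 198.51.100.1 dev eth1 table 100
ip rule add from 10.0.0.0/24 lookup 100
ip route flush cache
```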

    There is one downside to all this: too many options and a lot to learn. Do you want a network device that will do:
    1) policy-based routing
    2) IPv4 and IPv6 firewalling
    3) load balancing
    4) routing protocols
    5) VPNs

    1 and 2 come from the same place, so you'll be quite OK with those; the rest, though, is up to you. Each has 15 different options from 15 different people, and it takes some experimenting to know which is best for you. You'll also find that none of them configure or look anything like one another, so you will be learning four distinctly different software stacks with four distinctly different configuration paradigms.
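    For the firewalling item, the stock building block on Linux is netfilter/iptables. A minimal, hypothetical IPv4 ruleset in iptables-restore format for a box forwarding one LAN - an illustration with placeholder interfaces and subnets, not a complete policy, and IPv6 needs a parallel ip6tables ruleset:

```
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p tcp --dport 22 -s 10.0.0.0/24 -j ACCEPT
-A FORWARD -i eth1 -o eth0 -s 10.0.0.0/24 -j ACCEPT
-A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
COMMIT
```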

    Personally, I don't see that as an issue for myself; in an organisation it can be a bit harder.
