
Cisco Introduces Rackmount Servers

1sockchuck writes "After shaking up the market for blade servers, Cisco Systems is launching a line of rackmount servers. But the company says its ambitions are more targeted than a full-scale 'all your racks are belong to us' assault on the volume server market. Cisco sees its 1U and 2U C-Series rackmount servers as an entry point to its Unified Computing System vision for companies that have built their data centers on rackmount servers instead of blades. It also thinks many customers will like the expanded memory capacity Cisco has built around Intel's Xeon 5500 'Nehalem EP' processor."

  • Re:Sorry Cisco (Score:5, Interesting)

    by swb ( 14022 ) on Wednesday June 03, 2009 @10:47PM (#28205155)

    Is Cisco actually designing the motherboards, or is it, as with many HP servers, just rebadging boards from ServerWorks or the like?

    I would guess that whatever makes these special has nothing to do with system specs and everything to do with software, loaded either into the hardware or onto the hosts, that drives the networking.

  • Re:Take that, HP! (Score:5, Interesting)

    by BBTaeKwonDo ( 1540945 ) on Wednesday June 03, 2009 @11:04PM (#28205265)
    When a company has over 30 billion dollars in liquid assets (Excel warning) [cisco.com], entering a market closely related to the one it's already in doesn't qualify as ballsy, even if said market has competitors.
  • by TD-Linux ( 1295697 ) on Wednesday June 03, 2009 @11:19PM (#28205355)

    Seriously, Cisco? Yet another boring Xeon server? There are so many out there I can't tell the difference.

    You could have done something unique and interesting... thrown a couple of ARM Cortex chips into an ultra-low-power 1U server... and made it completely redundant, just for kicks. Or you could have integrated something you're good at, like, well... I guess those options are getting slimmer.

    Anyway, cheers for yet another undistinguished product entering a crowded market aimed at legacy users with falling demand.

  • by symbolset ( 646467 ) on Wednesday June 03, 2009 @11:59PM (#28205575) Journal

    I'm not Bill Gates. 640K might have been enough for anybody back then, but if he had only said "for now", we wouldn't be having this talk. I have opinions and I'll share them. Most of the time after a few years the market agrees with me.

    We're in the technology singularity. Stuff has already gotten silly and it's about to get absurd.

    Long before the aforementioned RAM quantity becomes a bottleneck for 99.9% of uses, you're going to need faster RAM, a faster CPU (or more CPUs) to talk to it, and more channels to talk to it over. We're half a year away from 8 cores per CPU and at most 9 months away from 12, and those platforms are going to come with more RAM channels, and hence even more RAM per server, even before you consider that DIMMs are going to hit 16GB soon, likely much sooner. Between now and then we'll need faster interconnects for inter-node communications, faster storage like this [engadget.com], and faster networking like FCoE (tomorrow, literally). As much as I hate the waste of throwing out year-old servers, software makers are making it an imperative by insisting on licensing that defeats the technology value proposition. It may not even be wasteful, since each server increment does twice as much with half the power. People who run this stuff are well paid to frequently replace the hardware that lives under these limits, because the software costs at least 4 times as much as the hardware.

    /and yes, if you use open software you don't have this problem - but you're usually paying per server for support, and that amplifies the incentive to throw out your old gear every year.
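    A toy illustration of that licensing math, with made-up numbers: the 4x software-to-hardware ratio is the one claimed above, but the prices, server counts, and annual-license assumption are all hypothetical.

    # Hypothetical numbers illustrating the claim above: per-server
    # software licensing dominates, so consolidating onto fewer, newer
    # boxes can pay off even if discarding old hardware feels wasteful.
    hardware_per_server = 10_000                   # USD, made up
    software_per_server = 4 * hardware_per_server  # the 4x ratio claimed above

    old_servers = 10
    new_servers = 5  # assume each new box does the work of two old ones

    keep_old = old_servers * software_per_server   # annual licenses alone
    refresh = new_servers * (hardware_per_server + software_per_server)

    print(f"Keep old gear: ${keep_old:,} per year in licenses")
    print(f"Refresh:       ${refresh:,} for new hardware plus half the licenses")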

    The economic contraction has turned out to be the harsh winter that brings forth a summer of great fruit. Everybody in the trade is emptying their cupboard of innovation in the hope of gaining market share, rather than holding it in reserve for a rainy day. Because it's raining now.

    What we need now are services that need this extra gear. If somebody doesn't come up with them soon, Google's going to shrink down to 90 individual racks in somebody else's datacenters, three per geographic area.

    //And no, we're not dumb enough to burn these cycles running the server version of Vista. We get paid to be useful.

  • by Bluecobra ( 906623 ) on Thursday June 04, 2009 @12:27AM (#28205701)

    I think this is a great thing for Cisco. Okay, so nobody will buy their servers for regular stuff, but they will buy CallManager servers and the like. At work we have 3 Cisco servers that are re-branded IBM boxes: one for our Unity voicemail system and two for CallManager. When there are hardware issues, I need to call Cisco, who then calls IBM to fix it. From a support perspective, it would be a huge benefit to actually MAKE the servers you are supporting; that way, support requests get processed more efficiently. Cisco doesn't just have IBM servers either, they have HP as well, so that's two vendors they'd no longer need to deal with for support.

  • by Euzechius ( 600736 ) on Thursday June 04, 2009 @05:10AM (#28206895)

    I work for Cisco, so this post is biased.

    If you want to know more, see this overview of the Intel Nehalem 55xx architecture [zdnetasia.com].

    It explains that a server built on the Intel Nehalem 55xx processor can support 3, 6, or 9 DIMMs per socket, corresponding to memory bus speeds of 1333, 1066, or 800MHz respectively. The 800MHz configuration is not often implemented; it would give you 144GB (9 DIMMs x 2 sockets x 8GB) in a dual-socket system.

    What Cisco did is develop a patented "memory switch" that presents up to 4 DIMMs to the processor as 1, multiplying the allowed RAM by four. If the memory is running at 1066MHz, this gives you 48 DIMMs; at 800MHz it would allow up to 72 DIMMs in one server, though that configuration has not been implemented.
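    A back-of-the-envelope sketch of that capacity math; the DIMM counts, bus speeds, and 8GB module size are the figures quoted in this comment, and the 4:1 switch ratio is as described above, not taken from a Cisco spec sheet.

    # Nehalem-EP memory capacity, per the numbers quoted above.
    SOCKETS = 2   # dual-socket server
    CHANNELS = 3  # memory channels per Nehalem-EP socket
    DIMM_GB = 8   # largest common DIMM size at the time

    # DIMMs per channel vs. the bus speed the platform drops to
    dimms_per_channel = {1333: 1, 1066: 2, 800: 3}

    for mhz, per_channel in dimms_per_channel.items():
        native = SOCKETS * CHANNELS * per_channel
        switched = native * 4  # the 4:1 "memory switch" fan-out described above
        print(f"{mhz} MHz: {native} DIMMs ({native * DIMM_GB} GB) native, "
              f"{switched} DIMMs ({switched * DIMM_GB} GB) with the switch")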

    Where would you ever need this kind of memory?

    * Running VMware ESX, XenServer, etc.: assuming 3-4GB per VM, imagine 96 VMs per physical box
    * Running a 300GB MySQL database entirely out of RAM, without needing a high-end machine

    Also, the price per GB of memory is not linear: an 8GB DIMM currently costs far more than four 2GB DIMMs. So if you don't yet need the full 384GB, you can fill the 48 DIMM slots with 2GB modules and get a 96GB server at a much lower price.
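    A sketch of that cost comparison; the per-DIMM prices below are purely illustrative stand-ins chosen to show the shape of the tradeoff, not actual 2009 street prices.

    # Illustrative only: DIMM prices are hypothetical, picked to show the
    # non-linear price-per-GB curve described above.
    prices = {2: 25, 4: 70, 8: 400}  # USD per DIMM, keyed by capacity in GB

    SLOTS = 48  # slots available with the memory switch at 1066MHz
    for gb, price in prices.items():
        total_gb = SLOTS * gb
        total_cost = SLOTS * price
        print(f"{SLOTS} x {gb}GB = {total_gb:3d}GB for ${total_cost:,} "
              f"(${total_cost / total_gb:.2f}/GB)")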

    There are also a lot of other features that are genuinely different from, and better than, the competition, such as centralized management for up to 320 servers. In larger enterprise environments, customers can also consolidate their SAN and LAN networks using the open FCoE standard.

    Please check it out at Cisco - Unified Computing System [cisco.com]

  • by speculatrix ( 678524 ) on Thursday June 04, 2009 @05:16AM (#28206923)
    I'm surprised Cisco didn't simply buy Sun Microsystems; both have a reputation for making expensive, over-engineered hardware.

    It's only a small step for Linksys to move from making NASes and media players/extenders to PCs, so I expect we'll see a Linksys version of the small Eee-style desktops.
  • by TheSunborn ( 68004 ) <mtilstedNO@SPAMgmail.com> on Thursday June 04, 2009 @07:01AM (#28207329)

    How much extra latency (if any) does the switch add? It's not the first time someone has tried something like this, but the latency is normally bad enough that you might just want to buy another server instead. (Unless it's a database server, because even slow RAM is much faster than disk. :})
