
Build an Open Source SSL Accelerator

Amin Zelfani writes "SSL accelerators like Big-IP 6900 from F5 Networks typically carry a $50k or more price tag. An article over at o3magazine.com shows you how to build an SSL accelerator that's on par with the commercial solutions, using Open Source projects. SSL Accelerators offload the encryption / decryption process from web servers, reducing load and reducing the number of certificates needed."
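For context on what "offloading" means here: the accelerator terminates TLS itself and speaks plain HTTP to the backends, so the handshake and crypto happen on one box and one certificate. A minimal sketch of the terminating side using Python's stdlib `ssl` module (the cert path and handler wiring are placeholders, not from the article):

```python
import socket
import ssl

# The accelerator terminates TLS here; backends behind it see plain HTTP.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
# One consolidated certificate lives on the accelerator instead of one
# per web server (placeholder path, commented out for the sketch):
# ctx.load_cert_chain("/etc/ssl/site.pem")

def handle(raw_sock: socket.socket) -> bytes:
    """Do the expensive handshake + decryption, return the plaintext request."""
    tls = ctx.wrap_socket(raw_sock, server_side=True)
    return tls.recv(65536)
    # ...a real proxy would forward this over plain TCP to a backend
    # and relay the response back over `tls`.
```

In the article's setup, nginx plays this role natively; the sketch just shows where the work moves.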
  • Re:Huh? (Score:5, Informative)

    by Trepidity ( 597 ) <delirium-slashdo ... h.org minus city> on Wednesday April 15, 2009 @04:38PM (#27590613)

    Partly the article is quoting prices on a whole box, not just the SSL acceleration. The Big-IP 6900 mentioned in the summary, for example, is a dual-core rackmount server with 10GigE, and hardware SSL and compression. Presumably much of that money you're paying is going for the actual server, not just the SSL-accelerating coprocessor. Of course, you're probably also paying a markup for buying a specialty server of that sort, rather than slapping an SSL accelerator in a server from a commodity vendor.

  • uh (Score:4, Informative)

    by anthonyclark ( 17109 ) on Wednesday April 15, 2009 @04:39PM (#27590621)

    You *do* know that an F5 BIG-IP is more than an SSL accelerator? It's a load balancer with lots of cool features.

    I guess you could duplicate the features of an F5 with nginx and friends, but it'd probably take a developer more than $50k worth of time to do it.

  • Re:Huh? (Score:2, Informative)

    by Anonymous Coward on Wednesday April 15, 2009 @04:39PM (#27590629)

    Actually you forgot to mention that most licensing systems require multiple licenses per 'machine'. One of the advantages of using one of these SSL accelerators, besides offloading the work, is being able to consolidate certs onto one machine for many front-edge machines.

  • Re:Huh? (Score:5, Informative)

    by upside ( 574799 ) on Wednesday April 15, 2009 @04:49PM (#27590789) Journal

    The BIG-IP does load balancing, active-active clustering, routing, packet manipulation using scripts, etc. It's extortionately priced, but it's very powerful and very user-friendly.
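The load-balancing piece the parent mentions is conceptually simple at its core. A minimal round-robin backend picker (backend addresses are made up) looks like:

```python
import itertools

# Minimal round-robin backend selection -- the simplest of the balancing
# methods a BIG-IP (or nginx's upstream module) offers.
class RoundRobin:
    def __init__(self, backends):
        self._cycle = itertools.cycle(backends)

    def pick(self):
        return next(self._cycle)

lb = RoundRobin(["10.0.0.1:80", "10.0.0.2:80", "10.0.0.3:80"])
picks = [lb.pick() for _ in range(4)]
print(picks)
# → ['10.0.0.1:80', '10.0.0.2:80', '10.0.0.3:80', '10.0.0.1:80']
```

The commercial value is in everything around this loop: health checks, session persistence, failover, and the scripting hooks.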

  • Re:uh (Score:4, Informative)

    by deraj123 ( 1225722 ) on Wednesday April 15, 2009 @06:25PM (#27592001)

    but I guess it'd take a developer more than 50k worth of time to do it.

    He wasn't trivializing. He was, in a somewhat roundabout way, saying that 50k is a lot cheaper than what it would cost to implement the same solution yourself. The summary (don't know about the article, didn't read it) was trivializing the difficulty, the GP was refuting the summary.

  • by Goyuix ( 698012 ) on Wednesday April 15, 2009 @06:40PM (#27592113) Homepage

    Apache is only half the problem at best; the real issue is the lack of compliant clients at a significant level. Server Name Indication (SNI, the TLS extension that allows virtual hosts behind SSL/TLS connections) has been supported in Firefox since v2, I believe, and in Internet Explorer 7 (though I think only on Vista, for some reason). I have no idea what Safari, Opera and other browsers and platforms might support.
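For anyone unfamiliar with it: SNI is just an extra hostname field the client puts in the TLS ClientHello, so the server knows which certificate to present before the handshake finishes. Python's stdlib shows whether the local TLS stack supports it (an illustration only, not tied to any browser above):

```python
import ssl

# Whether the linked TLS library supports Server Name Indication at all.
print(ssl.HAS_SNI)  # True on any modern build

ctx = ssl.create_default_context()
# Passing server_hostname= is what places the SNI extension in the
# ClientHello, letting one IP address serve many HTTPS virtual hosts:
# sock = ctx.wrap_socket(raw_tcp_sock, server_hostname="www.example.com")
```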

  • by Anonymous Coward on Wednesday April 15, 2009 @07:33PM (#27592605)

    Hmm, why no mention of nginx's thread limitations? By design, nginx does not use threads and as a result has performance issues scaling beyond one CPU or core. Those limitations will become apparent on certain real world workloads and with realistic tests. Those are important issues and this piece, like many nginx discussions, glosses over them. It also disingenuously tries to compare nginx to commercial solutions.

    I like nginx a *lot* and have tested and deployed it in many different situations. But it is not always the best choice, and in some cases is a poor choice.

    When I rolled out some new nginx services 6 months ago, nginx was only being developed by one person. Again, not a showstopper for everyone, but it would be for some... and very much worth mentioning in an article that compares nginx to commercial solutions. Nginx is great at some things, but it is still maturing.
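The scaling point above comes from nginx's design: each worker is a single-threaded event loop, so one worker can only saturate one core, and you scale by running one worker per core. A toy version of such a loop using Python's `selectors` (a loopback echo, purely illustrative and nothing like nginx's actual code):

```python
import selectors
import socket

# One process, no threads: readiness events multiplex every connection.
# nginx runs one such loop per worker process, one worker per core.
sel = selectors.DefaultSelector()

srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen()
srv.setblocking(False)
sel.register(srv, selectors.EVENT_READ)

cli = socket.create_connection(srv.getsockname())  # stands in for a browser
cli.sendall(b"ping")

sel.select(timeout=1)                # listening socket readable: new client
conn, _ = srv.accept()
conn.setblocking(False)
sel.register(conn, selectors.EVENT_READ)

sel.select(timeout=1)                # client data has arrived
conn.sendall(conn.recv(1024))        # echo it back without ever blocking

data = cli.recv(1024)
print(data)  # → b'ping'
```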

  • by Anonymous Coward on Wednesday April 15, 2009 @10:29PM (#27593705)

    What Sun DOESN'T tell you is that each of those eight 1.2 GHZ "cores" are actually 4 threads (read: cores)... running at 300mhz each.

    So what you REALLY get is.... 32 cores, each running 1 thread (total of 32 threads) at a speed of 300mhz. They just group four of them together and market them as a core.

    No, you're wrong.

    On a T2, you can have 8 or 4 cores, each with a floating-point pipeline and two integer pipelines (the T1 had one int pipeline per core and one FPU per chip). Each (real, 1.2GHz) int pipeline is fed by four hardware threads. The hardware threads let the processor quickly service the next process whenever anything stalls. The OS can't do anything about these stalls and only sees them as busy time, as if the processor were really doing something. The 4-to-1 ratio gives the system pretty good odds of keeping all 16/8 int units busy all the time.

    If the OS is executing more than 16/8 processes concurrently, then per-process speed will obviously be less than 1.2GHz. That's no different from running four busy processes on a 1GHz Pentium: ignoring superscalar execution, each one will run at about 250MHz. Any number of concurrent processes above your number of real cores x int units is just eking out better efficiency, not real processing power.

    There is a lot to gain from better efficiency though, because typical processors spend a lot of time doing nothing even when the OS sees 100% busy.

    I don't know where this stupid 300MHz myth started (Oracle?), but it's not why single-threaded performance on the T1/T2 lags behind other processors. The reason is that other UltraSPARC chips are superscalar and mostly have a faster clock rate. That is the real trade-off for better efficiency: laying out all the resources horizontally, as opposed to stacked vertically.

    Here's an analogy:
    Niagara chips work like queuing up to four customers per cashier, but if any one of them stalls (price check!) the cashier simply starts on the next one immediately. You'll put one in each lane before this happens, though.

    Other modern chips are like sticking two cashiers (and baggers) per lane, but strictly working with one customer at a time. A stall can easily hold up multiple workers. You have fewer lanes, but they can be really, really fast - except in practice, your customers do really stupid things all the time, like handing you a cart full of tagless items. For this reason, your workers need to be extra fast to average out all the worst case scenarios.
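The parent's efficiency argument fits a toy model (the stall probability is invented, not a T2 measurement): a pipeline fed by one thread idles whenever that thread stalls, while one fed by four idles only when all four stall in the same cycle.

```python
import random

random.seed(0)
STALL_P = 0.3      # invented per-cycle stall probability
CYCLES = 50_000

# Utilization with one thread: busy whenever that thread isn't stalled.
one_way = sum(random.random() >= STALL_P for _ in range(CYCLES)) / CYCLES

# Utilization with 4 hardware threads per pipeline (Niagara-style):
# the pipeline idles only if all four threads stall the same cycle.
four_way = sum(
    any(random.random() >= STALL_P for _ in range(4)) for _ in range(CYCLES)
) / CYCLES

print(round(one_way, 2), round(four_way, 2))  # roughly 0.7 vs 0.99
```

With these made-up numbers the idle fraction drops from 30% to about 0.3^4 ≈ 0.8%, which is the "better efficiency, not real processing power" point in a nutshell.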

