Building a Better Webserver 286
msolnik writes: "The guys over at Ace's Hardware have put up a new article going over the basics, and not-so-basics, of building a new server. This is very informative; I think everyone should devote 5 minutes and a can of Dr Pepper to this article."
What all companies should do (Score:4, Insightful)
Actually a very interesting article. To be honest, in my one year of building webserver applications I haven't gone through a process like this once. Usually we make a rough guess about how the application has performed (or, more usually, underperformed) on existing servers, and just scale up by some percentage. As you can imagine, this is hardly realistic. Thanks for the read!
New Webserver? (Score:3, Troll)
(Maybe they just sent this in so they could test it? That's the plan.)
Re:New Webserver? (Score:1)
Re:New Webserver? (Score:1)
doh, I wish I learned how to copy and paste properly
Regarding load on their server [aceshardware.com]: I really wanna see today's numbers...
Re:New Webserver? - not good (Score:2)
Maybe they need to adjust their constants.
It is those d*mn modem users that drive up the RAM use. They stay connected longer on their GET and tie up resources longer.
Re:New Webserver? - not good (Score:2)
Re:New Webserver? - not good (Score:5, Informative)
One thing that does seem to work against the onslaught is a throttling webserver [acme.com]. If you haven't got the bandwidth etc to serve a sudden onslaught of requests, probably the best thing to do is to just start 503'ing -- at least people get a quick message 'come back later' instead of just dead air.
Re:New Webserver? - absolutely (Score:3, Informative)
Yes, a throttling server is a great idea. If you recognize that there will always be a load too high for you to handle (10 requests per minute for my site, yes minute, it is a physical device), then you must decide either to deal with the excess load gracefully or let it crush your machine.
Consider a typical web server. When it gets overloaded it slows down, each request takes longer to handle, there are more concurrent threads, overall efficiency drops, each request takes longer still... welcome to the death spiral. (on my site-which-must-not-be-named-lest-it-be-slashdotted...)
The key decision is to determine how many concurrent threads you can handle without sacrificing efficiency and then reject enough traffic to stay under that limit.
This is where optimism comes in and bites you in the ass. You remember that every shunned connection is going to cost you money/fame/clicks, whatever, so you set the limit too high and melt down anyway.
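To make that concrete, here's a rough C sketch of the admission-control idea. The cap, the response text, and handle_request() are all made up for illustration; a real server would size the limit from load testing and call something like this from its accept loop.

#include <unistd.h>

#define MAX_ACTIVE 64           /* hypothetical cap, found by load testing */

static int active = 0;          /* requests currently being worked on */

/* Hypothetical stand-in for the real request handling. */
static void handle_request(int fd) { (void)fd; }

/* Called for every freshly accepted connection. */
static void admit_or_reject(int client_fd)
{
    if (active >= MAX_ACTIVE) {
        /* Over the limit: fail fast with a 503 instead of joining the
         * death spiral of ever-slower concurrent requests. */
        static const char busy[] =
            "HTTP/1.0 503 Service Unavailable\r\n"
            "Retry-After: 60\r\n"
            "Content-Type: text/plain\r\n"
            "\r\n"
            "Too busy, come back later.\n";
        write(client_fd, busy, sizeof(busy) - 1);
        close(client_fd);
        return;
    }
    active++;
    handle_request(client_fd);
    active--;
    close(client_fd);
}

The interesting part is the first branch: shedding the excess cheaply is what keeps the machine under the load it can actually sustain.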
Re:New Webserver? - not good (Score:3, Interesting)
Not quite as elegant a solution, but it's nice for preventing your web server from taking all of your bandwidth (if, say, you run it off your cable modem, and wish to continue gaming...).
Re:New Webserver? (Score:1)
Unfortunately it doesn't seem to have stood up long enough for me to read the article. I suppose I'd better put my can back in the fridge...
Sigh, and I was hoping I could use it to justify a quad Xeon server with 4GB of RAM as the next web app's server on our 8 user LAN....
Re:New Webserver? ooh, ooh, it's up again!!! (Score:1)
Aha! Real-time load-dependent hardware upgrades! That's gotta be the plan!
Now let's just see...
Re:New Webserver? (Score:2)
Looks like they might need to revisit their approach to building a better webserver.
It is hard to say if we have maxed out their bandwidth or maybe given the server a real-life lesson in load.
I suspect the article might get a rewrite.
Unfortunately I wasn't able to get past the first page, but methinks the next article would introduce additional servers and some load balancing.
[Slashdot Seal Of Death]
Re:New Webserver? (Score:1)
Sweet and bitter irony, isn't it?
Re:New Webserver? (Score:2, Informative)
Most people are unlikely/too lazy to follow the comment link above so I've repeated the first part of the response below:
we never need more than 2 GB......8) (Score:2)
Memory: max 2Gb, half already used (a 4x increase over the old memory). That may sound like a lot, but remember "we will never need more than 640Kb" -- 50% is already in use, and the site is hardly "not growing."
Processor: 500 MHz, now 25% used, but no extra processors are possible. (I know 1 Sun MHz != 1 Athlon MHz, but 25% load is nowhere near idle.)
They can work around these limitations by adding an extra server and moving some functions onto it, but they stated that in their case an extra server would be an extra point of failure.
In other words, if they keep developing their site we will see such an article again in one or two years. Guess that one will be about load balancing on cheap (Sun or x86) hardware.
I am a little bit surprised they didn't use x86 hardware, since that is what they review all the time. They looked further than you would expect.
Re:New Webserver? (Score:1)
Best webserver to generate traffic (Score:4, Funny)
Instant traffic to your site, no advertising!
Re:Best webserver to generate traffic (Score:2, Informative)
looks good so far... (Score:1, Redundant)
Seems to me that these guys might be onto something here...or maybe they just really know what they're talking about...
Re:looks good so far... (Score:2)
Quick page, good read (Score:1)
It was a good read, and I wish we could do something vaguely similar with our web servers here. Not that we get the server load to demand such improvements at the moment, but I figure it's best to spend the money early on and get a good setup going that can handle high volumes, so you're not caught with your pants down when things take off. It's unfortunate bean counters never think this way.
Of course I don't think I'll be taking this approach at home - even if it would be fun to have a Sun Blade sitting in the living room purring along answering the 1 or 2 web hits we get a day.
Re:Quick page, good read (Score:5, Insightful)
but I figure it's best to spend the money early on, get a good setup going that can handle high volumes
Throwing money at the problem is exactly the WRONG approach. You need to start by spending time PLANNING and RESEARCHING the best way to do things.
For example, say you are setting up a dynamic site like /. that serves 100 pages/second. It obviously needs to be dynamic, so you need a database to store all the comments in.
There are two ways to do this. One is to serve content straight out of the database, but this means that for every page you serve up there has to be a database query (and the database queries are the expensive part in terms of the time it takes to serve a page). The other is to serve the articles as static pages which are regenerated every minute or so by a process on the database machine and pushed down to the web server, which serves them up as static files.
The advantage of this is that instead of a database query for every one of those 100 pages per second, you end up with maybe 10 queries per minute to populate the static pages. Sure, your site is no longer 100% dynamic, but it is a whole lot faster, and you have saved thousands of dollars to boot!
This is just one small off-the-top-of-my-head example of where PLANNING should come way before spending any money.
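A minimal sketch of that regeneration loop, in C just for illustration; fetch_front_page_html() is an invented stand-in for whatever database query renders the page, and a real site would more likely hang this off cron or the app server than run a standalone daemon.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Invented stand-in for the expensive database work: one query per
 * cycle renders the whole front page as HTML. */
static char *fetch_front_page_html(void)
{
    return strdup("<html><body>front page goes here</body></html>");
}

int main(void)
{
    for (;;) {
        char *html = fetch_front_page_html();
        if (html != NULL) {
            FILE *out = fopen("htdocs/index.html.tmp", "w");
            if (out != NULL) {
                fputs(html, out);
                fclose(out);
                /* atomic swap so readers never see a half-written page */
                rename("htdocs/index.html.tmp", "htdocs/index.html");
            }
            free(html);
        }
        sleep(60);   /* one query per minute instead of one per request */
    }
}

The web server then just hands out htdocs/index.html as an ordinary static file.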
Re:Quick page, good read (Score:2)
More sophisticated caching needed.
Re:Quick page, good read (Score:5, Insightful)
Actually, that's a terribly wasteful way to go. If you build on an easily-scalable infrastructure, then you can pretty much purchase capacity as it's needed, which not only frees up capital for longer, it means you end up spending a lot less, since the price of computers is always dropping and the performance is always going up.
steve
Offtopic XBox server (Score:2, Funny)
Note this article [icrontic.com] for information on connecting USB keyboards and mice, what shorting the debug pins does on the keyboard, and replacing that measly ATA33 hard drive cable with an ATA100 (surprise, surprise: it actually increased performance :) ).
Re:Offtopic XBox server (Score:1, Offtopic)
*Congratulations screen: you can now type in Swedish Chef! Bork, Bork, Bork!
Re:Offtopic XBox server (Score:2)
Anyway, it's not halfway there, more like .02% there. No idea yet how to run random bits of code on it; Microsoft obviously will have put all sorts of hurdles there. And then you have to reverse engineer all the libraries and the OS (since they're statically linked) and figure out how to talk to the hardware, with all the little differences from normal PCs they've added. Long way to go.
Quick Guide to Building a Web Server (Score:1, Troll)
talking about better web server. (Score:2, Funny)
Good article, but... (Score:5, Interesting)
Why not get the server version - Netra X1? (Score:2, Informative)
Re:Why not get the server version - Netra X1? (Score:2)
...well, you won't get drivers for the smart card reader anyway, but that's not the point.
Re:Why not get the server version - Netra X1? (Score:2)
I don't know about Solaris 7, but Solaris 8 comes not only with drivers, but a neat smartcard GUI utility and some good developer libs for doing even more with the interface.
Re:Why not get the server version - Netra X1? (Score:2)
I usually... (Score:2, Funny)
Re:I usually... (Score:1)
steve
Am I the only one... (Score:2, Funny)
Re:Am I the only one... (Score:1)
Re:Am I the only one... (Score:1)
Update (Score:2, Redundant)
Their site is slowing down under the load...
Down already? (Score:2, Funny)
Devote my time? (Score:2)
the Ultimate Webserver is... (Score:4, Insightful)
good but... they discounted x86 too fast (Score:2, Informative)
Re:good but... they discounted x86 too fast (Score:1)
A lot of the extra money that you spend on "big iron" hardware is spent getting tremendous amounts of I/O to the various CPUs. For something like a database server, where your app pretty much has to run on a single machine, that's great. For something as simple as web serving, which is extremely easy to cluster, you're wasting your money. Ten $2,000 Intel-based machines will deal out far more than one $20,000 Sun/IBM/Alpha.
In fact, when one company was doing an embedded solution based on the Strong-ARM chips, just for fun, they used ten of them to dish out over a million web pages per *minute* - and that was with StrongARMs.
steve
Just a can (Score:3, Funny)
New Webserver For 21st Century Goes Down (Score:1)
Therein, a stress test for the folks at Ace's Hardware.
What have we learned? (Score:2, Funny)
If your website is dynamically generated from a database, and your name isn't Amazon.com, don't let Slashdot link to you.
A single $999 box isn't going to stand up to Slashdot, unless every page is static.
Re:What have we learned? (Score:2)
Don't you mean, "...unless each page really says *Server too busy*"?
building a web server??? (Score:2, Troll)
Slashdotted (Score:1)
argh, server performance vs BANDWIDTH (Score:4, Insightful)
Re:argh, server performance vs BANDWIDTH (Score:2, Funny)
You've obviously never worked with Java.
Re:argh, server performance vs BANDWIDTH (Score:3, Insightful)
Tuning a web server is also a bit of an art - most default settings don't take full advantage of the hardware; they throw out Too Busy messages before the CPU/memory is fully utilised. Parameters such as listen queues and worker threads need to be increased to accept more connections. Of course, this can lead to overtuning, where the parameters are set too ambitiously and the server bites off more than it can chew, and chokes.
Modern web servers on modern hardware can serve a frightening number of flat HTML pages per second - the real problems stem from poorly optimised dynamic code, usually to do with databases. Sure, it's cute to have the site navigation automatically generated from a database query, but it's insanely inefficient. It'll work great under normal light loads, but when you get linked from Slashdot, you're dead.
Re:argh, server performance vs BANDWIDTH (Score:2)
Re:argh, server performance vs BANDWIDTH (Score:2)
Inefficient DB usage (Score:2)
Phillip.
Compression for dialup connections??? (Score:4, Informative)
Except that 56 Kbps modems get 5 KBps throughput by compressing the data! If the client and server compress, the modems won't be able to; the net effect is lots of extra work on the server side, and probably no increased throughput for the modem user.
The server might or might not see a decrease in latency, and in the number of sockets needed simultaneously; it depends on how much it can "stuff" the intermediate "pipes". The server will see an overall decrease in bandwidth needed to serve all the pages.
Ironically, broadband customers (who presumably don't have any compression between their clients and Internet servers) will see pages load faster. (And the poor cable modem providers from the previous story will be happy.)
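To get a feel for what server-side compression buys on markup-heavy pages, here's a small zlib sketch (plain zlib format rather than mod_gzip's gzip framing; the sample markup is made up, and you compile with -lz):

#include <stdio.h>
#include <string.h>
#include <zlib.h>

int main(void)
{
    /* Made-up, highly repetitive markup standing in for a real page. */
    char page[8192] = "";
    for (int i = 0; i < 100; i++)
        strcat(page, "<tr><td class=\"row\">benchmark result</td></tr>\n");

    uLong  src_len = (uLong)strlen(page);
    Bytef  dst[16384];
    uLongf dst_len = sizeof(dst);

    /* Even the cheapest compression level shrinks HTML dramatically. */
    if (compress2(dst, &dst_len, (const Bytef *)page, src_len,
                  Z_BEST_SPEED) != Z_OK) {
        fprintf(stderr, "compress2 failed\n");
        return 1;
    }
    printf("%lu bytes -> %lu bytes\n",
           (unsigned long)src_len, (unsigned long)dst_len);
    return 0;
}

Tag-heavy HTML like that compresses to a small fraction of its size; the open question above is how much of that win a dialup user's modem was already getting from its own link-level compression.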
Re:Compression for dialup connections??? (Score:3, Informative)
mod_gzip is your friend.
Re:Compression for dialup connections??? (Score:2, Informative)
Re:Compression for dialup connections??? (Score:5, Informative)
Also dynamic vs static built dictionary (Score:2)
Phillip.
Re:Compression for dialup connections??? (Score:2)
Undernet coder-com was working on an idea to add to ircu.d that would (on multi-proc machines) use one processor for the IRC functions and the other, usually relegated to everyday mundane functions (running the ssh server, typical processes), to compress data going from server to server, to reduce the lag time on some of the long jumps like *.eu to *.us. This was in the wake of the 3Gbps DDoS attacks on the system, which caused several servers to delink. (we miss you irc2.att.net)
So compression server-side has lots of uses, not just for modem lusers. When the vast majority of what you're transferring is conversational text, compression works wonders.
~z
Compression, Caching, GIFs/JPGs (Score:2)
How disappointing (Score:1)
I was hoping for an article discussing how to design better webserver software -- something which would have been very interesting, since it has been ages since I saw a fresh take on that.
Instead: another article on piecing together hardware. *sigh*
Re:How disappointing (Score:1)
Sigh, just more Sun drenched propaganda.
What a waste of a good can of sugary gut-rot.
Why Sun? (Score:3, Insightful)
SPARCs come from Sun, everybody makes a PC - so guess which is cheaper? We see some reasons why they went for the Blade (a nice machine, but rather more expensive than a couple of PCs).
Please get this right: I'm no x86 fan, but I love the competition running all the way down the line -- processors, chipsets, motherboards, etc. That has got to ensure that unless you really want the 2GHz Pentium 4, you have plenty of choice.
As for reliability, I don't know the Blade, but the SPARC 20s used to give some headaches over their internal construction. It always seemed a little complicated with the daughter boards and they seemed to lose contact after machines were moved around.
Re:Why Sun? (Score:5, Insightful)
I am amazed at how people buy into the myth of cheap PCs. Yes, if you are technically oriented and are not running critical applications, a cheap PC will be OK. On the other hand, I have been involved with several enterprises in which my employer insisted on going with cheap PCs at the expense of short- and long-term productivity. One certainly cannot get a server-class PC for $500, and there are few if any available for $1000. I would not say that a Blade would make a good office machine, but it seems to be a good choice for a server.
Re:Why Sun? (Score:2, Informative)
You mean people like Google [google.com] who run their highly-regarded search engine/translator [google.com]/image indexer [google.com]/Usenet archive [google.com] on a server farm of 8,000 inexpensive [internetweek.com] PCs [google.com] with off-the-shelf Maxtor 80GB IDE HDs?
Re:Why Sun? (Score:2)
Of course, if you ask me, the article was just an x86 hardware-review site's attempt to justify using non-x86 hardware on their new server.
Re:Why Sun? (Score:2)
I think the software development costs (they'd already done a lot of work for a platform they knew, apparently with some specific third party tools not available for Solaris/x86) were their biggest consideration. (They also mention "sparse hardware support" for Solaris/x86.)
Oddly, their OS choices for x86 seemed to be Windows 9x and Solaris; no mention of NT/2000/XP, let alone Linux or *BSD.
Re:Why Sun? (Score:2, Interesting)
Why did I choose SPARC? Well, it's a tad quieter than an x86 box, smaller, and (and this is the point) it uses up a lot less power. The SS10 ships with a 65 watt PSU (at least mine did). Considering you can get these things for 25 to 65 dollars they are a bargain (I paid $25 for the SS10 and $65 for the SS20). Anyhoo, I kept my SS10 running for 30 minutes on a 300 VA UPS when the power went out last week - I doubt it's drawing more than 25 watts peak. The software is still free since it runs Debian Linux well (and you can get Solaris for it too for free).
Plus, I have the added advantage that for some reason Sun equipment is like a geek's dream - they look kinda cool sitting on the table next to the cable connection. Everyone who has ever come by has to comment on them somehow - either "what's that" or "wow, you have one of those?". Don't get me wrong - they're slow (the SS10 has a cacheless microSPARC in it), but the SS10 seems to keep up with the 4 megabit cable connection okay.
Re:Why Sun? (Score:2)
So Ace probably had to make a decision. Could we (A) use this new Windows NT with IIS 2.0 and make it work with questionable-quality x86 hardware? Or could we (B) use a standard UNIX variant on standard UNIX hardware and purchase the software tools to build our site? Or (C) have a third-party company make the decision for us (in which case they would obviously choose UNIX)?
Now in 2001 the x86 market has changed; it's now the opposite of the past. In the old days it was hard to find any UNIX for x86 besides Skunkware. Anyway, they have invested in Solaris, and I assume they do not own much of the software if a third party wrote it or it was Sun-specific. I assume it would be relatively easy to port it to Linux, but why go through the effort? This is why they went with Sun. Also, even in 1997 the old Sun workstation was loaded with features you still could not get with a PC at the same price. I do agree that a dual-processor x86 system might make better sense than a single SPARC system; SMP machines appear to work so fluidly under heavy loads. But I guess they had their reasons. I would have spent more bucks for a dual SPARC system if I were them, but they are a lot more technical than I am.
Also, Pentium 4s are unreliable and have high-latency RDRAM. The latency wouldn't affect regular work performance running PC apps or games, but on a server it will cripple it. Remember it is not just the speed of the RAM but also its responsiveness when a lot of users hit it with requests. Also, the Pentium 4s get really hot, which is another reason why they chose a Sun. If a fan dies and the board and CPU burn, then the shit hits the fan, and a few days of downtime could cost Ace most of its customers. That's really bad, and reliability is important.
Re:Why Sun? (Score:2, Informative)
About PCs having more competition: it's not hard to argue that the competition isn't really what it seems to be - most of it is in price and how fast Quake will play. If Intel's processor is a little bit slower than AMD's, the fact that it still goes into most OEM computers will keep Intel alive. If Sun does not stand up to the competition with their processors, motherboards, and other components, people will leave them for something better, and Sun will be down the hole. They *have* to be better to survive - there's not much forcing people to stick with them.
They're also a lot more solid in their roots (Sun servers have been around forever, so they've had a lot of time to work on tweaking things and getting processors to work well for their applications), and Sun's support generally ranges from fairly good to downright amazing, from what I've heard (not that I've needed it).
But in the end, it's a lot different from PC hardware, and it can sometimes take a bit of getting used to.
ode to SPARCstation 20 (Score:4, Interesting)
I remember using a Sun evaluation model at Rice many years ago... the machine had two 150 MHz HyperSPARC processors (though 4 were available for more $$), a wide SCSI + fast ethernet card, two gfx cards for two monitors, and some sort of strange high-speed serial card (for some oddball scanner, I think). Not to mention 512 MB of RAM, in 1994! The machine was a pretty decent powerhouse and sooo small! I sort of wish the concept would have caught on, given how large modern workstations are in comparison. Heck, back then an SBUS card was about 1/3 the size of a modern 7" PCI card.
Then there's the other end of the spectrum... one department bought a Silicon Graphics Indigo2 Extreme in 1993. The gfx cardset was three full-size GIO-64 cards (64 bit @ 100 MHz = about 800 MB/sec), one of which had 8 dedicated ASICs for doing geometry alone. 384 MB of RAM on that beast. Pretty wild stuff for the desktop.
Ahh, technology. I love you!
Uh oh... (Score:2)
*sigh* Probably because we've seen enough of it in the past...
Confusing the issues (Score:4, Informative)
In the part about databases and persistent connections they confuse the issues more than a bit. The real problem is not too many processes, which automatically makes threads look better, but the symmetry among processes -- every process must be able to serve any request, so all processes end up with database connections. This is a problem particular to Apache and Apache-like servers, not a fundamental issue with processes versus threads.
In my server (fhttpd [fhttpd.org]) I have used a completely different idea -- processes are still processes, however they can be specialized, and requests that don't run database-dependent scripts are directed to processes that don't hold database connections, so reasonable performance is achieved if the webmaster defines different applications for different purposes. While I haven't posted any updates to the server's source in the last two years (I was rather busy at the job I am now leaving), even the published version 0.4.3, despite lacking the clustering and process-management mechanism I am working on now, performed well in situations where "lightweight" and "heavyweight" tasks were separated.
Re:Confusing the issues (Score:2)
The webserver doesn't increase much in size, so having it serve static pages isn't a waste.
Since the app itself persists, it can do what it wants with DB connections. The app can be threaded or non-threaded.
In these sorts of circumstances, some sort of buffering would help a lot too - something that sucks the page from your servers at 100Mbps and trickles it down to some poor 9.6 dial-up user. That way the number of persistent DB/app connections at your server doesn't really go up, even when the number of persistent users connecting to your site does.
Re:Confusing the issues (Score:3, Informative)
Other than that, FastCGI is a good idea.
Re:Confusing the issues (Score:2)
Great new webserver, but... (Score:2, Interesting)
500 Servlet Exception
java.lang.NullPointerException
at BenchView.SpecData.BuildCache.(BuildCache.java:96
at BenchView.SpecData.BuildCache.getCacheOb(BuildCac
at BenchView.SpecData.BuildCache.getLastModified(Bui
at BenchView.SpecData.BuildCache.getLastModifiedAgo(
at _read__jsp._jspService(/site/sidebar_head.jsp:60)
at com.caucho.jsp.JavaPage.service(JavaPage.java:87)
at com.caucho.jsp.JavaPage.subservice(JavaPage.java:
at com.caucho.jsp.Page.service(Page.java:474)
at com.caucho.server.http.FilterChainPage.doFilter(F
at ToolKit.GZIPFilter.doFilter(GZIPFilter.java:22)
at com.caucho.server.http.FilterChainFilter.doFilter
at com.caucho.server.http.Invocation.service(Invocat
at com.caucho.server.http.CacheInvocation.service(Ca
at com.caucho.server.http.HttpRequest.handleRequest(
at com.caucho.server.http.HttpRequest.handleConnecti
at com.caucho.server.TcpConnection.run(TcpConnection
at java.lang.Thread.run(Thread.java:484)
Resin 2.0.2 (built Mon Aug 27 16:52:49 PDT 2001)
smoke test (Score:2, Funny)
kind soul links us from slashdot
looks like we eat crow
other factors (such as the router) (Score:4, Interesting)
Now, I realize modern hardware (Cisco 3660 and 7x00 series, and pretty much any Juniper) can route several T3s (at 45 Mbps each direction) worth of data, but older routers and minimally configured routers do exist.
There are MANY bottlenecks in hosting a website: server daemon, CPU, router, routing and filtering methods, latency and hops between the server and the Internet backbones, overall bandwidth throughput, and much more.
It's not as simple as "lame server, overloaded CPU, should have installed distro version x+1".
Argh: not such a good webserver.... (Score:2)
Mebbe they really needed a v880 or summat before they started getting posted on /. :)
it's the BANDWIDTH (Score:5, Informative)
Re:it's the BANDWIDTH (Score:2)
Almost every server I've ever seen using JSP is dog slow. They have what look like very nice reasons for using it, but it sure doesn't look like they quite work out in practice.
Anyone know why?
D
Java Vs. PHP...multithreaded vs multiprocess? (Score:2, Interesting)
Speed shouldn't be the reason you switch to Java. If anything, I've found that PHP has been faster for simple web applications and page serving (and loads faster to develop applications with), while Java stands out as being more robust and stable.
fork'd childs use too much mem..... (Score:3, Interesting)
This means an Apache web server using keepalives will need to have more child processes running than connections. Depending upon the configuration and the amount of traffic, this can result in a process pool that is significantly larger than the total number of concurrent connections. In fact, many large sites even go so far as to disable keepalives on Apache simply because all the blocked processes consume too much memory.
::end quote::
Let's see, anyone here hear of COW (copy on write)? Linux uses this idea to save time on fork'd child processes; they get the
The only setback is that when a process forks a child, its current time slice is cut in half, with half given to the child, so the main proc will run aground if too many requests come in and the server has more processes to worry about.
-ShadoeLord
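A toy C sketch of why those forked children are cheaper than the process count suggests: Linux maps the parent's pages into each child copy-on-write, so nothing is actually duplicated until somebody writes. The 64 MB table and the child count here are invented for illustration.

#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

#define TABLE_BYTES (64 * 1024 * 1024)   /* hypothetical shared, read-mostly data */
#define CHILDREN    8

int main(void)
{
    char *table = malloc(TABLE_BYTES);
    if (table == NULL)
        return 1;
    memset(table, 'x', TABLE_BYTES);     /* parent touches every page once */

    for (int i = 0; i < CHILDREN; i++) {
        if (fork() == 0) {
            /* Child: reading costs no extra RAM -- these are still the
             * parent's pages. Only a write here would trigger copies. */
            volatile char c = table[TABLE_BYTES - 1];
            (void)c;
            _exit(0);
        }
    }
    while (wait(NULL) > 0)
        ;                                /* reap all the children */
    free(table);
    return 0;
}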
Multithread Apache (Score:3, Informative)
When I discussed this issue with Thau (or to be precise, he did most of the talking) he gave the reason for using processes over threads as the awful state of the then pthreads packages. If Apache was to be portable it could not use threads. He even spent some time writing a threads package of his own.
I am tempted to suggest that, rather than abandon Apache for some Java server (yeah, let's compile all our code to an obsolete bytecode and then try to JIT-compile it for another architecture), it should not be a major task to replace the Apache hunt group of processes with a thread loop.
The other reason Thau gave for using processes was that the scheduler on UNIX sux and using lots of threads was a good way to get more resources, err quite.
Now that we have Linux, I don't see why the design of applications like Apache should be compromised to support obsolete and crippled legacy OSes. If someone wants to run on a BSD Vaxen then they can write their own web server. One of the liabilities of open source is that once a platform is supported, the application can end up supporting the platform long after the OS vendor has ceased to. In the 1980s I had an unpleasant experience with a bunch of physicists attempting to use an old MVS machine, despite the fact that the vendor had obviously ceased giving meaningful support at least a decade earlier. In particular they insisted that all function names in the Fortran programs be limited to 6 characters since they were still waiting for the new linker (when it came, it turned out that for functions over 8 characters long it took the first four characters and the last four characters to build the linker label... lame, lame, lame).
Re:OSDN: Please read this (Score:1)
aceshardware.com _JUST_ fell over. I guess it couldn't keep up with
Re:OSDN: Please read this (Score:2, Interesting)
Funny. Our next-closest competitor spent several million dollars on Sun hardware, with everything done in Java. We spent less than $40,000 on some dual-proc Intel machines, doing everything with Postgres, Perl, and Apache. The result? Our servers have many times the capacity that theirs do, and they're almost completely out of business.
steve
Re:OSDN: Please read this (Score:4, Informative)
"Real Multithreading" considered harmful (Score:2)
It's pretty easy to just do:
...
/* classic single-threaded multiplexing loop */
for (;;) {
    n = select(...);                                  /* block until some fd is ready */
    perConnStructPtr = getPerConnPtrByFd(anActiveFd); /* look up that connection's state */
    /* service the ready descriptors without blocking, then loop back */
}
after all.
Re:"Real Multithreading" considered harmful (Score:2)
This is not really multithreading. The correct term is multiplexing. See W. Richard Stevens' books APUE [amazon.com] and UNP [amazon.com].
Re:"Real Multithreading" considered harmful (Score:2)
And then stick in a call to 'gethostbyname()' and watch all your multiplexed tasks freeze while the nameserver hangs trying to find a nonexistent hostname.
--jeff
Re:OSDN: Please read this (Score:3, Informative)
In other words, with the right hardware architecture, threads could be very useful for sites such as Ace's Hardware (though they happened to go with a uniprocessor) and Slashdot.
Java threads are also easier to program than C and C++ threads, though not easy. (Manual memory management is hard; thread programming is hard; manual memory management in a threaded program is very hard. I'm not speaking hypothetically on the last point; I've really envied Java programmers the last few weeks.)-:
True (Score:3, Interesting)
Once upon a time, we had one web server that did everything, so it needed to be able to do everything. Now every time we do something new we toss out a new webserver (or 2 or 10 of 'em). And they all basically need to do one thing (webmail, portal, whatnot), do it well, and that's it.
So we've got a whole bunch of Apache servers with a bucketload of Apache processes that basically spend all day doing little more than exec'ing the same CGI over and over (and copying the data around a couple of extra times).
I'm pretty much convinced now that my next step is going to be to franken-meld my CGI with something like mini-httpd [acme.com] so it is a single, persistent app.
I'm certainly not redoing the whole thing in Java though! :)
Re:OSDN: Please read this (Score:1)
Actually Slashdot is usually one of the fastest sites on the Net for me. I frequently use it to test whether my DSL connection works properly. Their scripts/database often get hosed at high loads, though; I wonder what the bottleneck is. But Java as a replacement? Puhleeaze..
Re:OSDN: Please read this (Score:2, Informative)
Also, don't confuse the CGI protocol with short-lived CGI binaries. Slashdot uses mod_perl, which is NOT a short-lived process, but Apache is still a forking server in the 1.3.x branch.
Disk IO on the Blade 100 (Score:3, Insightful)
I don't know what Ace's traffic numbers are normally like, but using a Blade 100 for anything other than a small, personal website is flat-out folly. At a minimum, they should have been using a Netra T1/AC200 ($3k, nicely configured, and a 1U rackmount machine to boot), and I would probably have thought seriously about scrounging a used E250 or E220R off of Ebay.