Facebook VP Slams Intel's, AMD's Chip Performance Claims 370
narramissic writes "In an interview on stage at GigaOm's Structure conference in San Francisco on Thursday, Jonathan Heiliger, Facebook's VP of technical operations, told Om Malik that the latest generations of server processors from Intel and AMD don't deliver the performance gains that 'they're touting in the press.' 'And we're, literally in real time right now, trying to figure out why that is,' Heiliger said. He also had some harsh words for server makers: 'You guys don't get it,' Heiliger said. 'To build servers for companies like Facebook, and Amazon, and other people who are operating fairly homogeneous applications, the servers have to be cheap, and they have to be super power-efficient.' Heiliger added that Google has done a great job designing and building its own servers for this kind of use."
PHP (Score:3, Interesting)
It's because your shitty website doesn't have a single line of compiled code. PHP only goes so far.
Re:PHP (Score:5, Interesting)
A Familiar Tune from Facebook (Score:4, Interesting)
Re:WTF? (Score:5, Interesting)
Looks like it to me; he scoped for cheap and cheerful and got bitten when he realised that sometimes you get what you pay for. What's the point of a quad-core server CPU without the high-bandwidth buses of server-grade hardware?
In the concurrent DNS/Kaminsky thread, I saw a reference that Facebook's DNS TTL is low. A quick investigation reveals that they have a 30-second TTL and are using DNS round-robin for their load balancing.
He's nothing but a blame-shifting cretin.
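For what it's worth, round-robin DNS is about as simple as load balancing gets: the nameserver hands out the same set of A records in rotating order, and the low TTL forces clients to re-resolve often so a dead host falls out of use quickly. A minimal sketch of the rotation idea (addresses are made up, not Facebook's):

```python
from itertools import cycle

# Hypothetical pool of web-server addresses behind one hostname.
POOL = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
_rotation = cycle(POOL)

def resolve():
    """Each 'DNS query' returns the answer list starting at the next address,
    so clients that take the first answer spread across the pool."""
    first = next(_rotation)
    i = POOL.index(first)
    return POOL[i:] + POOL[:i]

print(resolve())  # -> ['10.0.0.1', '10.0.0.2', '10.0.0.3']
print(resolve())  # -> ['10.0.0.2', '10.0.0.3', '10.0.0.1']
```

Real resolvers and nameservers add caching and randomization on top of this, which is exactly why the TTL matters: a 30-second TTL bounds how stale any client's cached ordering can get.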
Re:Facebook's application is poorly coded (Score:5, Interesting)
One of the server techs from Twitter was at SXSW two years ago and gave some details about how their backend servers worked. If I remember correctly (there were four sites on the panel, so I may be confusing them with someone else), the original code was written in Ruby on Rails, which did not scale well to the multi-server setup they had. They have spent a lot of time improving their code over the years, but from what I could tell, the initial implementation wasn't the most thought-out thing in the world.
Re:Sun.... (Score:3, Interesting)
Re:Well I suppose... (Score:4, Interesting)
I was just going to say that. If Facebook et al. are not looking at Sun's CoolThreads servers, they're idiots. A T5240 delivers a whopping 128 hardware threads in 2U of rack space.
Re:WTF? (Score:3, Interesting)
How can you be blamed for finding an acceptable solution when there simply isn't one available? He is a software developer, not a hardware one. Not everybody can just go out and design their own servers like Google does. He's saying he's been tripped up by the fact that the server manufacturers aren't delivering on their promises; hardly something he should be blamed for. Your attempts to read more into his comment about "not being cheap" and compare it to the false words of a politician seem like a pretty big stretch.
If you read the entire article, he not only doesn't say that his decisions have led to disasters, but instead says that his infrastructure development decisions have led to very smooth transitions, even when Facebook rolls out big, new features like the customized home page URLs. He is only voicing his disappointment in saying that the servers aren't living up to the hype, and that he is still looking for a better solution.
I will say that his comment about not being cheap seems to be in direct conflict with the rest of his argument, since his criticism of AMD and Intel revolves around the fact that their products need to be cheaper. Seems a bit counter-intuitive.
And yet... (Score:5, Interesting)
Every major server vendor has jumped on the "look how efficient and cheap we are" bandwagon. Three years ago, the tier-one vendors by and large wouldn't design systems without forcing even the cheap configuration to include parts that facilitate the purchase of redundant add-ons (e.g. power distribution cards designed for dual power supplies, whether or not a second supply was bought). They would always put a high-end storage controller on the planar, and always burden their "entry" platform with expensive components to make it easier to option up.
Now we have tons of "internet scale," or "cloud," or whatever buzzword you feel like. These designs stress energy efficiency and low-cost components, with sales and management strategies targeted at thousands of servers (e.g. IBM iDataPlex, HP SL6000). Basically, precisely what he prescribes, though probably not as cheap as he wants. The incentive he offers is that vendors should work at zero margin, which is not particularly compelling for companies to pursue. Google's situation works because they brought design in-house and thus have fewer middlemen. Honestly, from all the rumours I hear, it's the logical thing to do when your server consumption is larger than some respectable computer companies' entire production. If he thinks his volume of servers is high enough to pull a Google, by all means do it. Otherwise, be prepared for people not jumping at the chance to hand him their designs at zero margin.
Of course, if he is calling them out on performance per watt while avoiding non-x86 solutions, including ARM, that might be a fair criticism. However, I don't think company forays into "exotic" architectures have panned out in the market recently. Sun's Niagara, despite all the worthy praise, couldn't attract the mass market required to subsidize it for those who benefited most from it. Last year IBM seemed to be saying the Cell architecture would light the world on fire, but has been a lot quieter about it now. The lesson their business leaders have probably taken in is that while these things have their target market, that market isn't worth the expense of developing products the larger market refuses; better to leverage commonly accepted building blocks and do the best they can for that niche, even if it means skipping the "perfect" solution. Sure, IBM still sells plenty of POWER, but I haven't heard it *particularly* praised on performance per watt the way I hear about Niagara, Cell, and ARM, and if not for POWER's legacy it would probably be stillborn in the market today. The PA-RISC-to-Itanium decision probably sank HP's HP-UX product line faster than banking on the legacy of PA-RISC installs would have, and it seems IBM won't make that mistake, but at the same time I don't hear much about *new* POWER customers.
Re:WTF? (Score:3, Interesting)
You can better identify your bottlenecks by benchmarking. Facebook's scalability is likely not as cpu-bound as predicted, thus the dude's angst on discovering that CPU upgrades weren't a silver bullet.
In your case, you haven't looked past the RAID configuration for the root cause of your performance issues. Without benchmarking you don't really know whether the problem was the filesystem, the block size, the stripe size, or a caching tunable.
Systems architecture isn't as easy as PC builders would have you believe.
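To put that concretely: even a crude timing harness beats guessing, as long as you measure each layer separately before blaming the RAID card. A rough sketch of the approach (the workloads here are illustrative stand-ins, not a real storage benchmark):

```python
import os
import tempfile
import time

def bench(label, fn, repeat=3):
    """Time fn() a few times and keep the best run, to reduce noise."""
    best = float("inf")
    for _ in range(repeat):
        start = time.perf_counter()
        fn()
        best = min(best, time.perf_counter() - start)
    print(f"{label}: {best * 1000:.2f} ms")
    return best

def cpu_work():
    # Stand-in for a CPU-bound step.
    return sum(i * i for i in range(200_000))

def io_work():
    # Stand-in for an I/O-bound step: write 1 MB and force it to disk.
    with tempfile.NamedTemporaryFile() as f:
        f.write(b"x" * 1_000_000)
        f.flush()
        os.fsync(f.fileno())

cpu_time = bench("cpu", cpu_work)
io_time = bench("io", io_work)
# If io_time dominates, faster CPUs won't help and the storage
# stack (filesystem, stripe size, cache settings) is the place to dig.
```

Real tools like fio or iostat do this far better, but the principle is the same: isolate one variable at a time and measure.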
Re:Facebook's application is poorly coded (Score:5, Interesting)
Given the quality of Facebook's developer API (it's horrible), I'd be amazed if the back-end of the actual site wasn't poorly written.
Re:Sounds like a bunch of excuses to me (Score:3, Interesting)
Actually, Google got all three of those in their system-level design (when cheap is measured per CPU). What they didn't get was per CPU reliability. That's pretty miserable by the standards of commercial servers. Luckily, all Google software is architected, designed, and written to work around frequent hardware failures, so that's ultimately covered.
Re:WTF? (Score:4, Interesting)
Uhhh, correct me if I'm wrong. I've been looking at aftermarket bolt-on parts for my car. The headers claim to increase fuel mileage; so do the spark plugs, the air filter, the tires, and a turbocharger. The glass-pack mufflers and the resonator too. Oh yeah, the aerodynamic rims, the hood, and the spoiler. Don't forget the carbon-fiber body panels. Taken all together, those MPG gains add up to about 150 MPG. You're saying I may not see that much improvement on my 1968 Chevy Malibu? It's just hype? Man, you just saved me about $5,000!!!
Re:WTF? (Score:1, Interesting)
At the end of your long comment you said, "If I was Intel/AMD I'd be chiming in right about now and opening a dialog with Facebook and looking to see what the issues are. Facebook is a big customer with huge name recognition and you want to be able to use them as an example of your solution delivering the promised performance for your marketing."
You know, the funny thing is that back in the day, let's say the Pentium 1, 2, and 3 days, AMD had awesome products and no one was buying, because nobody had any idea who these guys were. (For example, AMD was first to ship a 1 GHz CPU, and it was faster per clock than Intel's offerings.) So AMD went out of their way to work with customers to get sales, and they slowly built up a name for themselves. It's really funny that after they reached the top (remember Athlon vs. Pentium, when AMD was without a doubt better), they "forgot" to do the things that got them there, and now wonder why they are dying.
Re:WTF? (Score:3, Interesting)
No, he just found that RAID controllers suck. Which they do, universally, all the time. The only ones that actually perform decently are the ones in external SAN boxes, and inside they are typically servers with software RAID...
Re:WTF? (Score:5, Interesting)
One of the fun toys Intel has to play with is a complete system simulator, which models every single component in a computer for early testing. This lets them vary parameters that aren't feasible yet while they're working on their design goals. A few years ago they ran a test: what happens to system performance if you make the CPU infinitely fast? They adjusted the simulator so that every CPU operation took zero simulated time and ran their benchmark suite. It ran only twice as fast (in simulated time) as before.
A CPU-bound workload can quickly become a RAM-speed bound or a disk-speed bound workload if you make the CPU faster but don't upgrade anything else.
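That 2x result is exactly what Amdahl's law predicts: if zeroing out CPU time only halves the runtime, the CPU accounted for only half of the original time, and memory and I/O ate the rest. A quick sketch (the 0.5 fraction matches the anecdote above; the 10x figure is just illustrative):

```python
def amdahl_speedup(accelerated_fraction, factor):
    """Overall speedup when only one fraction of the workload is
    accelerated by the given factor (Amdahl's law)."""
    return 1.0 / ((1.0 - accelerated_fraction) + accelerated_fraction / factor)

# An infinitely fast CPU on a workload that is 50% CPU-bound:
print(amdahl_speedup(0.5, float("inf")))  # -> 2.0

# Even a 10x faster CPU then yields well under 2x overall:
print(amdahl_speedup(0.5, 10))  # -> ~1.82
```

The practical upshot is the complaint in the summary: a new CPU generation's headline gains only show up if the rest of the system (RAM bandwidth, disk, network) isn't already the bottleneck.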