Slashdot Asks: What's Your View On Benchmark Apps?
There's no doubt that benchmark apps help you evaluate different aspects of a product, but do they paint a complete picture? Should we rely entirely on benchmark apps to assess the performance and quality of a product or service? Vlad Savov of The Verge makes an interesting point. He notes that DxOMark (a hugely popular camera-testing benchmark) rates the HTC 10's camera sensor as equal to that of Samsung's Galaxy S7, yet in real-life shooting the Galaxy S7's shooter delivers far superior results. "I've used both extensively and I can tell you that's simply not the case -- the S7 is outstanding whereas the 10 is merely good." He offers another example: If a laptop or a phone does well in a web-browsing battery benchmark, that only gives an indication that it would probably fare decently when handling bigger workloads too. But not always. My good friend Anand Shimpi, formerly of AnandTech, once articulated this very well by pointing out how the MacBook Pro had better battery life than the MacBook Air -- which was hailed as the endurance champ -- when the use changed to consistently heavy workloads. The Pro was more efficient in that scenario, but most battery tests aren't sophisticated or dynamic enough to account for that nuance. It takes a person running multiple tests, analyzing the data, and adding context and understanding to achieve the highest degree of certainty. The problem is -- more often than not -- gadget reviewers treat these values as the most important signal when judging a product, which in turn also influences many readers' opinions. What's your take on this?
You also have to consider cheaters (Score:3)
Re: (Score:2)
Wait, how would that work? I mean, all the name->IP translation happens locally, and only IP addresses are sent out... unless they do deeper packet inspection, which seems like a high cost.
I suppose they could parse the HTTP request headers... or listen for the DNS queries?
Re: (Score:2)
Wait, how would that work? I mean, all the name->IP translation happens locally, and only IP addresses are sent out...
When you go to http://www.google.com/ [google.com], your browser sends a header saying:
Host: www.google.com
When you go to http://206.111.13.26/ [206.111.13.26], that's not sent.
I suspect the speedtest site was something like HisProvidersName.speedtest.net, and maybe it faked the result if it got a connection from an IP within that provider's range.
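For illustration, here's a minimal sketch of the Host-header difference (the hostname and IP below are placeholders, not real speedtest endpoints):

# Sketch: what a server sees depending on how you typed the address.
import socket

def send_get(ip, host_header=None):
    """Send a bare HTTP/1.1 GET, optionally with a Host header."""
    request = "GET / HTTP/1.1\r\n"
    if host_header:
        # A browser visiting http://www.example.com sends this line;
        # visiting http://93.184.216.34 directly sends no hostname at all.
        request += "Host: {}\r\n".format(host_header)
    request += "Connection: close\r\n\r\n"
    with socket.create_connection((ip, 80), timeout=5) as conn:
        conn.sendall(request.encode("ascii"))
        return conn.recv(4096)

A server (or a middlebox at the provider) keying on that Host header could therefore recognize "provider-branded" speedtest requests without doing any DNS snooping at all.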
Re: (Score:3)
Case in point: ADSL line speed. I've had several different ADSL providers, and living somewhat far out, the speed is consistently bad, sometimes awful. But if I try one of the many 'ADSL speed test' websites, the results are always in line with the promised speed.
Not every place you visit (in fact, likely most places) will fully saturate your downstream link. They might have the bandwidth to do so, but they ration it on a per-session (sometimes per-IP) basis so that everybody who happens to access the site can get a reasonable speed. (By the way, this is the principle that so-called "download accelerators" take advantage of -- they combine multiple sessions into one. But they won't get around per-IP rationing unless you can do something like multipath TCP.)
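As a rough sketch of that accelerator trick (the URL is hypothetical, and the server must support HTTP Range requests and advertise Content-Length):

# Sketch of a "download accelerator": several HTTP sessions, each fetching
# a different byte range, so a per-session cap is multiplied by the number
# of sessions. This does nothing against a per-IP cap.
from concurrent.futures import ThreadPoolExecutor
import urllib.request

URL = "http://example.com/big-file.bin"   # hypothetical file
PARTS = 4

def total_size(url):
    req = urllib.request.Request(url, method="HEAD")
    with urllib.request.urlopen(req) as resp:
        return int(resp.headers["Content-Length"])

def fetch_range(url, start, end):
    req = urllib.request.Request(url, headers={"Range": f"bytes={start}-{end}"})
    with urllib.request.urlopen(req) as resp:
        return resp.read()                # one TCP session per range

size = total_size(URL)
chunk = size // PARTS
ranges = [(i * chunk, size - 1 if i == PARTS - 1 else (i + 1) * chunk - 1)
          for i in range(PARTS)]

with ThreadPoolExecutor(max_workers=PARTS) as pool:
    pieces = list(pool.map(lambda r: fetch_range(URL, *r), ranges))

data = b"".join(pieces)                   # reassemble the ranges in order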
The Benchmark Lifecycle (Score:4, Insightful)
A good benchmark -- in cameras, CPUs, GPUs, cars, anything really -- is ideally a set of tests which contains a random sampling of real-world scenarios. In the beginning, the benchmark is good precisely because the vendors are unaware of it and don't spend a bunch of time trying to optimize for it specifically.
Once a benchmark becomes popular, companies try to make their product better for the benchmark ("See PHB! I increased our PCBench score by 10%!") but CAN ultimately end up doing so in a custom way that doesn't represent real-world performance (e.g. Volkswagen). Because the company is now optimizing for one specific use case, the benchmark is no longer random and thus no longer representative of real-world use.
Enter a new benchmark that better mirrors real-world performance, and the cycle begins anew.
Re: (Score:2)
I have a friend working on.... a popular web browser. They test JS performance (their own and competitors') all the time against benchmarks. In theory, those benchmarks are derived from looking at the 1000 most popular sites (according to some site-ranking algorithm). If that's true, then that seems to be a valid(ish) benchmark. I mean, those 1000 sites probably account for the vast majority of traffic, and other sites probably model themselves after those 1000 sites.
Re: (Score:2)
I view benchmarks like I view performance review numbers. You cannot show improvement if you cannot compare to past metrics, so you collect metrics even if they are poor choices. For example, you can measure a software engineer against SLOC. It is not a great measure of productivity (and many people can attest to why), but it is a measure that is readily available by looking at SCM. Having a bad measure is better than having no measure. Over the years, the SLOC measure may get tweaked in terms of how it is calculated to prevent software engineers from gaming the system too much. Perhaps credit will be added for code reviews and penalties for build breaking. Eventually you will have a number that reflects some level of work done by the software engineer, but not necessarily a linearly scalable number that can accurately reflect productivity. But it is still better than no number.
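To show how cheaply that raw measure falls out of SCM, here's a minimal sketch (assuming a git repository; the parsing is deliberately simplified):

# Rough sketch: lines added per author, straight from git history.
# This is exactly the kind of cheap, gameable number described above.
import subprocess
from collections import Counter

def sloc_added_per_author(repo="."):
    out = subprocess.run(
        ["git", "log", "--pretty=%aN", "--numstat"],
        cwd=repo, capture_output=True, text=True, check=True,
    ).stdout
    counts, author = Counter(), None
    for line in out.splitlines():
        parts = line.split("\t")
        if len(parts) == 3:
            if parts[0].isdigit():            # skip binary files ("-")
                counts[author] += int(parts[0])
        elif line.strip():
            author = line.strip()             # a %aN line starts each commit
    return counts

Padding a commit with generated code moves an author straight up this ranking, which is the gaming problem in a nutshell.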
Complete fallacy.
No number is almost always better than an uninformative number. SLOC is a great example of this. You actively do not want engineers to be contributing lots of lines of code -- that's how you end up with the Facebook app: 17,000 classes doing... basically nothing much, and no one who understands how any of it works.
You actively do not want to use that kind of measure, because it bears exactly 0 correlation to an engineer being productive and/or useful.
Re: (Score:2)
Re: (Score:2)
Once a benchmark becomes popular, companies try to make their product better for the benchmark ("See PHB! I increased our PCBench score by 10%!") ...
Slight tangent from this: when management of any kind starts running the benchmarks / tests / security scanners / etc., watch out! Suddenly there's a huge red flag that must be fixed immediately, and it's just an internal-only static site with a self-signed cert.
Re: (Score:2)
Then you'd best stick with $20 crack whores. Sure, you're paying for regret and an STD, but you can be pretty sure that you're getting what you paid for.
Only two benchmarks are important (Score:1)
Boot-up time and Photoshop filters. Use a BitTorrent client to measure internet speeds. "Speed test" web sites are bogged down by traffic.
Build to the benchmark (Score:2)
Re: (Score:2)
You get what you measure. Unfortunately, my use cases and the majority's are not the same.
Re: (Score:3)
Companies have been known to take this even further. You can probably find plenty of compilers that have something like "if (this_looks_like_benchmark_x) emit_special_code_for_benchmark_x". I know for a fact that the old Sun compiler could detect a matrix multiply and would emit hand-tuned, parallelized assembly when it detected one.
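As a sketch of the loop shape such a pattern matcher keys on (Python stands in here for the compiled code in the anecdote, and which exact variant a given compiler blesses is an assumption):

# Two triple-nested loops computing the same C = A x B. A matcher keyed
# to the first shape could swap in hand-tuned assembly; the second, with
# a different stride through memory, falls through to generic code.
def matmul_ijk(A, B, C, n):
    for i in range(n):
        for j in range(n):
            for k in range(n):
                # inner index k walks down a column of B: stride n
                # in row-major layout
                C[i][j] += A[i][k] * B[k][j]

def matmul_ikj(A, B, C, n):
    for i in range(n):
        for k in range(n):
            for j in range(n):
                # same arithmetic, but inner index j walks along rows
                # of B and C: unit stride
                C[i][j] += A[i][k] * B[k][j]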
Vendors will always play games with benchmarks and customers will always read things into benchmarks that aren't true. That's not to say that benchmarks aren't useful but, if
Re: (Score:2)
Maybe. But it's very dishonest. A simple matrix multiply is a triple-nested loop, and when the compiler detects that loop with a certain stride through memory, it drops in the fast stuff. The exact same loop with a different stride through memory didn't trigger any special optimization and, as expected, the performance dropped by at least an order of magnitude. So, in the context of benchmarks, it's cheating: The benchmark does not represent the capabilities of the machine or compiler on any workload t
Re: (Score:2)
Umm. I _want_ my compiler writers to cheat. OK, I may not get the full benefits if I don't know all the cheats, but I'll trigger enough of them to make the system faster.
A compiler that knows how to make code execute faster? Sounds fucking ideal to me.
Re: (Score:2)
Actually, no. What you'll get is the compiler and processor that are super fast for a tiny fraction of your code and slow as a log truck going uphill the rest of the time, instead of the CPU/compiler that is twice as fast for 100% of your code. You will lose big on that deal.
Benchmark tools? (Score:2)
So... who are the "tools" -- the shysters creating the benchmarks or the rubes consuming them?
Benchmarks are useless in reviews (Score:1)
That's the conclusion I've mostly come to, at least for complete consumer products.
When I look at the latest Dell, Apple, etc. desktop or laptop, I already see the figures available from the maker, and often there's at least a few choices in terms of CPU, RAM, or SSD options. The only way performance would differ considerably from one item to another is if one OEM made a major error.
On the other hand, there are things that are hard to tell from the spec sheet that make a huge difference for me:
Is
Re: (Score:2)
Does the case feel like it will fall apart on the first tweak?
I've bought dozens of PCs, for myself and others. I have carried a laptop on a bicycle for years, including on snow and ice, and have fallen multiple times.
I've never replaced a desktop, or even a laptop, because of a broken case. Even the so-called "cheap plastic" laptops are more than durable enough for a lifespan of 3-10 years. And even in the unlikely event of a break, the laptop will most likely continue to work just fine, so the problem would be only cosmetic.
Too old and too slow, or broken disp
Re: (Score:2)
Not sure which laptops you've bought or how they've been dropped, but apparently you've not worked on others' stuff much -- people break shit in some really horrible ways. Cracks in the case around the display, particularly near the hinge, are notably problematic, as are cracks around the keyboard. It doesn't take much of a crack for things to start not working properly.
Re: (Score:2)
Not sure which laptops you've bought
Mostly cheap ones.
or how they've been dropped,
That's my point: they haven't. Or they were in their protective bag when it happened.
My view is: (Score:3)
Reviews (Score:3)
Says who? The reviewers' "objective" opinion? These are the same guys who say a $10,000 audio cable produces "warmer" sound than a $5 one.
Re: (Score:2)
Just don't say nothing bad about my $200 Shakti Electromagnetic Stabilizer Stone.
http://www.musicdirect.com/p-7... [musicdirect.com]
It's a benchmark, not God's Score (Score:3)
You are not looking at God's manual for existence, to check a score, like some kind of video game.
It's just the results from a test - helpful, but not perfect. Luck, design for the test, and many other factors may affect it.
If all you do is look at the benchmark, you deserve to be screwed over. Doing so is like looking at new lawyers' grades in law school and making the highest scorer a partner right off the bat.
The limitations of testing (Score:2)
If you want to use a test result, you must first understand what the test is measuring. It isn't ever going to be as simple as "Laptop A got 536 and laptop B got 642, therefore laptop B is better at everything." The same thing applies to medical diagnostic tests, academic tests, and product quality tests. Unfortunately, this is hard. Because statistics is hard. And science is hard.
Sorry. :-(
Benchmark = a standard or point of reference (Score:3)
Benchmark: a standard or point of reference against which things may be compared or assessed.
Yes, benchmarks do a good job of comparing two pieces of hardware, especially tests which involve the entire system. I use benchmarks all the time for hardware comparison and system optimization/overclock comparison. Without benchmark tools we couldn't effectively compare changes to settings or hardware speed -- specifically raw CPU, raw GPU, raw RAM, and raw disk I/O speeds.
Benchmark tools also help determine system stability by pushing the hardware to its limit and taking it to its thermal throttling point.
Do vendors ship custom hardware to reviewers to cheat on benchmarks? Yes.
Will these cheats show up in reviews on Newegg, Amazon, and Tom's Hardware when they can't be replicated? Yes.
So please, benchmark away. Publish the results. Keep the data in a table for all to view. Benchmarks keep everyone honest in the end.
Re: (Score:2)
Yes, benchmarks do a good job of comparing two pieces of hardware, especially tests which involve the entire system.
No, they usually don't. Doing a "full system test" is almost certainly not going to give you useful information. How do you weigh individual results into a final result? How do you know the vendor hasn't included special cheat modes into the hardware/software to skew the benchmark? How do you know the benchmark is even testing what it claims to be testing?
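To make the weighting problem concrete: a composite score is usually just a weighted mean of sub-scores, and the weights are an editorial choice. A sketch (all sub-scores and weights below are invented):

# Invented sub-scores and weights, showing how much the editorial
# weighting decides the "winner" of a full-system benchmark.
import math

def composite(scores, weights):
    """Weighted geometric mean, a common way to combine sub-scores."""
    total = sum(weights.values())
    return math.exp(sum(w * math.log(scores[k]) for k, w in weights.items()) / total)

laptop_a = {"cpu": 640, "gpu": 300, "disk": 900}
laptop_b = {"cpu": 520, "gpu": 700, "disk": 450}

cpu_heavy = {"cpu": 3, "gpu": 1, "disk": 1}
gpu_heavy = {"cpu": 1, "gpu": 3, "disk": 1}

print(composite(laptop_a, cpu_heavy), composite(laptop_b, cpu_heavy))  # A wins
print(composite(laptop_a, gpu_heavy), composite(laptop_b, gpu_heavy))  # B wins

The same two machines swap places depending on the weights chosen, and the published score rarely tells you which weighting was used.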
Without benchmark tools we couldn't effectively compare changes to settings or hardware speed -- specifically raw CPU, raw GPU, raw RAM, and raw disk I/O speeds.
Comparing "raw" anything is probably not useful either. Discovering that increasing the CPU speed by 10% increases a benchmark score by 10% is almost
Re: (Score:2)
So does "while(true);". That doesn't make it a useful benchmark.
This actually just gets put in the L1/L2 cache of the CPU. [stackoverflow.com]
In general, if I use a benchmark like Cinebench [maxon.net], it correlates with real-world performance in programs like Final Cut, Adobe Premiere, and After Effects for video rendering.
In all my years of benchmarking and overclocking, I have not found anything suspicious. Years ago there was the whole Intel vs. AMD benchmark brouhaha, where benchmarks favored Intel due to compiler optimizations favoring Intel hardware, but the CPU wars are long over. AMD lost and n
This has been an issue since forever (Score:2)
Systematic review is very important; however, in most cases, the system used to review is not complex enough to effectively qualify what's being reviewed.
It's like any system used to summarize data: fundamentally you're going to get a flawed diagnosis, because it's summarized. Unless you're dealing with a huge amount of data, and the analysis thereof, the answer is almost always "it depends".
And then there is the 'bias review' introduced in a lot of these benchmark tools. It's why open source benchmark meth
Trust (Score:4, Insightful)
DxOMark is indeed a perfect example (Score:2)
DxOMark is indeed a perfect example of elaborate benchmarking and what can go wrong with it. To make a streamlined and objective test, they only measure the few things that are the easiest to measure objectively across various cameras. In the end they seem to just combine these test scores and come up with a number that makes no sense if you look at real-life performance, since not only do they not measure a multitude of things that also affect performance, but in addition, the way they combine the things they
Necessary but not sufficient (Score:2)
Benchmarks are a necessary, but not sufficient, way to test things.
The reason for benchmarks is simple: you want a scientifically repeatable test that can be used to compare things with each other. This limits the benchmark's utility as a real-world test because it's inherently limited in what it can test. All it gives you is how your thing measures up against all the other things out there. And yes, benchmarks will be gamed, no matter the field (see VW, Mitsubishi and everyone else with diesel engines). However,
Norton SI (Score:2)
Which benchmark? (Score:2)
3dmark? pretty pictures
iobench? now that's useful
There is a silver lining (Score:2)
Benchmarks are great tools since they are repeatable and give you a picture of what your hardware, phone, etc. is capable of.
However, I've learned never to rely on benchmarks alone, as they normally don't mimic real-world usage scenarios.
TL;DR: great for reference and stress testing, bad for predicting real-world usage.