There's no doubt that benchmark apps help you evaluate different aspects of a product, but do they paint a complete picture? Should we rely entirely on benchmark apps to assess the performance and quality of a product or service? Vlad Savov of The Verge makes an interesting point. He notes that DxOMark (a hugely popular benchmark for testing cameras) rates the HTC 10's camera sensor as equal to Samsung's Galaxy S7's; in real-life shooting, however, the Galaxy S7's shooter delivers far superior results. "I've used both extensively and I can tell you that's simply not the case -- the S7 is outstanding whereas the 10 is merely good."

He offers another example: if a laptop or a phone does well in a web-browsing battery benchmark, that only suggests it would probably fare decently under bigger workloads too. But not always. My good friend Anand Shimpi, formerly of AnandTech, once articulated this very well by pointing out how the MacBook Pro had better battery life than the MacBook Air -- which was hailed as the endurance champ -- when the use changed to consistently heavy workloads. The Pro was more efficient in that scenario, but most battery tests aren't sophisticated or dynamic enough to account for that nuance. It takes a person running multiple tests, analyzing the data, and adding context and understanding to achieve the highest degree of certainty.

The problem is -- more often than not -- gadget reviewers treat these scores as the most important signal when judging a product, which in turn shapes many readers' opinions as well. What's your take on this?