Over the last four years, I’ve been asked repeatedly, “What is the best way to compare the browsing performance of one mobile processor platform versus another?” My answer was always long, because no single metric or test does the job. Instead, you need a combination of benchmarks to cover the whole browsing experience.
Most benchmarks fall into one of the following categories:
Single-Function benchmarks: SunSpider, V8, GUIMark, etc.
Synthetic-Workload benchmarks: BrowserMark, Vellamo, etc.
These benchmarks arbitrarily combine the results of multiple tests that each measure an individual processing aspect (e.g., DOM, layout, rendering) running artificial workloads, not real web content.
Live Page Load Tests:
We’ve all seen YouTube videos of people holding devices side by side and loading webpages. But these tests aren’t repeatable: live network conditions vary from run to run, so no two loads see the same connection.
There has simply not been a browsing benchmark that uses real webpages and mimics real network conditions between a client device and a web server…until today.
The BrowsingBench benchmark is the result of more than a year and a half of intense collaborative development among EEMBC members, including major SoC providers, solution developers, and industry experts. Read the announcement here. We first met in November 2009, when we recognized the need for a single benchmark that could reproduce live web-browsing conditions. Numerous weekly conference calls and several face-to-face meetings later, we now have a benchmark that all member companies count as their own, with measurement techniques everyone agrees are fair and unbiased.
As far as I know, this is the only benchmark that mimics the real-world user experience without sacrificing the repeatability and reliability required of a good benchmark. To that end, it uses no synthetic workloads; instead, it deploys actual content from real-world websites. It hosts the page content on separate web servers rather than in the local memory of the Device Under Test (DUT). It goes even further by creating network conditions that make the web server appear to the DUT just like a live Internet website, controlling the latency and bandwidth of the network that connects the server to the device.
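To see why controlling latency and bandwidth matters, consider a simple back-of-the-envelope model of fetching one resource: one round trip to request it, plus the time to stream its bytes over the link. (This is my own illustrative sketch, not the model BrowsingBench itself uses; the function name and numbers are hypothetical.)

```python
def transfer_time(size_bytes, rtt_s, bandwidth_bps):
    """Rough time to fetch one resource: one round trip to issue the
    request, plus the transmission time of its bytes at link bandwidth."""
    return rtt_s + (size_bytes * 8) / bandwidth_bps

# A 100 KB image over an emulated 3G-like link (200 ms RTT, 1 Mbit/s)
# versus an unshaped LAN (1 ms RTT, 100 Mbit/s):
slow = transfer_time(100_000, 0.200, 1_000_000)    # 1.0 s
fast = transfer_time(100_000, 0.001, 100_000_000)  # 0.009 s
```

The two-orders-of-magnitude gap shows why serving pages from an unshaped LAN (or from the device's own memory) would measure a workload no real user ever experiences.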
An added benefit of a client-server benchmark is that the server can take the measurements. Previous benchmarks all rely on the client to measure its own processing time, a method fraught with inaccuracies due to device constraints and non-uniform timers. Because the server PC is independent of the DUT and is typically a more powerful, more reliable machine, its measurements are not only dependable but also impartial and agnostic to the DUTs.
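The idea can be sketched as follows: because every resource request passes through the server, the server's own clock can bracket a page load from the first request for the HTML to the completion of the last sub-resource response. This is a simplified illustration of the principle, not EEMBC's actual timing definition; the log format and timestamps are hypothetical.

```python
def page_load_time(request_log):
    """Approximate page-load time from the server side: the span from the
    arrival of the first request to the completion of the last response.
    request_log is a list of (request_start_s, response_end_s) pairs,
    all taken from the server's single clock."""
    first_request = min(start for start, _ in request_log)
    last_response = max(end for _, end in request_log)
    return last_response - first_request

# Hypothetical log for one page load: HTML, then CSS and two images.
log = [(0.00, 0.05), (0.06, 0.12), (0.07, 0.40), (0.08, 0.55)]
print(page_load_time(log))  # 0.55
```

Because every timestamp comes from one independent machine, there is no need to trust each DUT's timer resolution or clock behavior.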
We even had compliance tests developed to flag browsers that do not faithfully reproduce a web page. This was an important consideration for the working group: we’ve all seen limited-feature browsers that render a page quickly but skip key processing steps, producing incorrect page layouts, missing pictures, or wrong fonts and colors. The final benchmark includes a compliance test suite that produces a compliance score; used together with the performance score, it enables fairer comparison between different platforms.
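One dimension of such a check is easy to illustrate: a browser that "wins" by never requesting a page's images or stylesheets can be caught by comparing what it fetched against what the page requires. This toy scoring function is my own illustration of that one dimension; the real compliance suite also verifies layout, fonts, and colors, and its scoring formula is defined by EEMBC, not shown here.

```python
def compliance_score(expected_resources, fetched_resources):
    """Fraction of a page's expected resources the browser actually
    requested; a corner-cutting browser that skips images or stylesheets
    scores below 1.0."""
    expected = set(expected_resources)
    fetched = set(fetched_resources)
    return len(expected & fetched) / len(expected)

expected = ["index.html", "style.css", "logo.png", "photo.jpg"]
fetched = ["index.html", "style.css", "logo.png"]  # skipped photo.jpg
print(compliance_score(expected, fetched))  # 0.75
```

Reporting this alongside the performance score keeps a fast-but-incomplete render from looking like a win.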
For more than a decade, TI has been driving mobile innovation with OMAP technology. Our commitment goes beyond producing best-in-class mobile processing platforms to include participating in critical activities like the development of BrowsingBench that benefit the industry as a whole.
As chair of the working group behind BrowsingBench, I can tell you our work is not finished. We’ve already started Phase II development, an endeavor that will bring our industry still closer to efficiently, accurately, and fairly assessing the full power and performance of applications processors.
BrowsingBench is available now for EEMBC members and for non-member licensing. Contact EEMBC for access.
We would love to hear from you – your ideas for Phase II development and your feedback on the Phase I implementation.