JSBench

This page has been copied from the original at Purdue, but not yet updated. Don't mind the dust.

Suite

The JSBench Suite is a suite of JavaScript benchmarks created by sampling real JavaScript-utilizing web pages.

The current JSBench Suite is 2013.1. We will update this suite occasionally to add new pages or address bugs in JSBench. Results of this suite are not comparable to results of previous releases of the JSBench Suite.

To run this suite live in your current browser, click here.

To download it for offline use, click here. Not all included benchmarks can be run without a browser, and not all non-browser JS environments can run this benchmark suite. You will need to adjust either the suite or your environment; if the former, please contact Gregor Richards so that we may integrate your changes into future versions of the suite. Note that because not all included benchmarks can be run without a browser, the results from a non-browser run are not comparable to the results from a run in a browser.
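For non-browser runs, the usual adaptation is to stub out the browser globals the sampled code expects before loading a benchmark. The following is only a minimal sketch, assuming a Node.js shell; the stubbed names and file paths are illustrative and not part of JSBench itself, and any given benchmark may touch other browser APIs that would need additional stubs.

    // shim.js: illustrative stubs for browser globals (not part of JSBench itself)
    globalThis.window = globalThis;
    globalThis.navigator = { userAgent: "jsbench-shell" };
    globalThis.document = {
      getElementById: () => null,
      createElement: () => ({ style: {}, setAttribute: () => {} }),
      addEventListener: () => {},
    };
    // With the shim preloaded, a benchmark script could then be run as, e.g.:
    //   node --require ./shim.js path/to/benchmark.js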

The current benchmark suite includes code sampled from the following web pages:

Older versions of the benchmark suite are archived.


About

JSBench, part of the DynJS project at Purdue and now the University of Waterloo, is a new approach to JavaScript benchmarking. Traditionally, benchmarks are made by writing long-running but arbitrary programs which the author hopes will exercise the same parts of the language engine as real-world code does, in the same ways. This technique creates reliable, portable benchmarks, but it is difficult to evaluate their legitimacy with respect to real code. Another technique, used in projects such as the DaCapo benchmark suite for Java, is to adapt real-world code for use as benchmarks. We expand on that idea by automating the process of adapting real-world code with the JSBench record-replay framework. Our benchmarks are generated from real JavaScript-utilizing web pages. They aim to be representative by using the real, original code, modified only to remove human interaction and nondeterminism.


FAQ

How does it work?

The JSBench tool acts as a proxy and records the behavior of a human interaction with a web site to a trace. The JavaScript component of that trace is extracted and arranged to run in a predictable, reliable way. More details are available in the paper.
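To give a flavor of the record-replay idea (this is a simplified sketch, not JSBench's actual implementation), nondeterministic inputs observed during recording can be logged and then fed back verbatim during replay, so the sampled page code behaves identically on every run:

    // Hypothetical replay stubs; the recorded values below are made up.
    var trace = { randoms: [0.42, 0.17], times: [1351000000000, 1351000000016] };
    var r = 0, t = 0;
    Math.random = function () { return trace.randoms[r++]; };
    Date.now    = function () { return trace.times[t++]; };
    // The extracted page code is then invoked in the recorded order, e.g.:
    //   replayEvent(handlers.onload);   // hypothetical helper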

Why these sites?

There are three qualifiers for a site to be chosen for the JSBench suite: First, it must work well with the JSBench software; unfortunately, the process of record and replay will never be perfect, so unsupported sites are possible. Second, it must exhibit enough behavior that the time spent in the site's code overwhelms the time spent in the JSBench framework. Third, it must be a major web site, representative of the state of the web.

Why these browsers?

In principle any site can be recorded and replayed with any combination of browsers, but in practice differences can arise due to language incompatibilities between them. Firefox, Safari (WebKit), Chrome and Opera are generally selected for recording because sites typically present standards-compliant code to these browsers. Since all modern browsers are fairly standards-compliant, this code will work on all of them. Internet Explorer is not used for recording because, although Internet Explorer 9 and later are quite standards-compliant, many sites continue to present Internet-Explorer-specific code to it, which would fail on other browsers.

So which browser is best?

The role of the JSBench Suite is to create a more realistic set of JavaScript benchmarks. If you would like to know the results of running the suite in every major browser, you are by all means free to run it yourself.

What do you mean by “realistic”? What makes the other benchmark suites unrealistic?

JavaScript engines may have many different performance goals, but a seemingly obvious one is to run real JavaScript-utilizing web pages as smoothly as possible, that is, with the greatest JavaScript performance, so that code execution does not interrupt the use of the site itself. However, it is quite difficult to quantitatively measure a browser's performance on the JavaScript code of a web site, as there are innumerable other factors: networking, image loading and layout, to name just a few. JSBench isolates a single component, namely the code, and reproduces it in a predictable way.
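As a concrete illustration, the replayed code can be timed on its own, so the measurement reflects only JavaScript execution and none of the other factors above. This is a hedged sketch, not JSBench's harness; runReplayedTrace is a hypothetical stand-in for the extracted, deterministic page code:

    function runReplayedTrace() { /* extracted, deterministic page code */ }

    var start = performance.now();
    runReplayedTrace();
    var elapsed = performance.now() - start;
    console.log("JavaScript time: " + elapsed.toFixed(2) + " ms");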

Other benchmark suites are written from scratch or ported from benchmark suites in other languages. Therefore, although they certainly exercise the JavaScript engines, those tests may not reflect how real JavaScript code is written, and so may not reflect how the engines will perform on real code.

It is perfectly reasonable to wish to know an engine's performance on either style of code. Our claim is that the JSBench Suite is a more relevant measure of an engine's performance on real web pages and nothing more.

Isn't measuring JavaScript alone unrealistic?

This depends on what you're trying to measure. The JSBench Suite aims only to measure the performance of the JavaScript engine, not the browser as a whole.

One possibility that the record-replay style allows for is replaying a part of the trace other than the JavaScript code, such as the layout of the page and DOM interaction. However, this has not yet been investigated.


Acknowledgements and Credits

The JSBench tool and infrastructure are copyright © 2012-2013 Purdue University, written by Gregor Richards, and released under the terms of the simplified BSD license. Sampled code is copyright by its respective owners.

The authors thank Ben Livshits and Ben Zorn at Microsoft Research for their input, discussions and feedback during the development of the JSBench tool, as well as Brendan Eich, Andreas Gal and others at Mozilla for participating in and supporting its research and development, and Filip Pizlo of Apple for feedback. This material is based upon work supported by the National Science Foundation under Grant Nos. 1047962 and 0811631, by a SEIF grant from Microsoft Research, and by a PhD fellowship from the Mozilla Foundation.
