I work on a JavaScript-heavy application and would definitely like to see performance improve. My applications deal with a ton of data and struggle a bit on the typical computer at my workplace.
The IE string result is curious. Any idea why it’s happening?
Can you better define what each of the benchmarks means? For example, is ‘access’ proper I/O or parsing the DOM tree?
If ‘string’ means ‘parsing any stringified data’, then it’s probably involved in building the DOM tree from the [X]HTML, which would make it one of the more important aspects of the benchmark.
What does ‘3d’ mean? z-positions? Proper 3d rendering? Making buttons look roundy??
The point is that these tests appear to cover a significant part of the JavaScript libraries but fail to identify which ones are used more often on the web. I would hope that most browsers’ implementations of JavaScript are optimized for the more common web use-cases.
A benchmark set up to measure the performance of JavaScript “as it is commonly used on the web” might show us more useful results. Properly identifying the benchmark terms would be a good first step, at the minimum.
How come you didn’t mention Tamarin? It’s been making huge news lately.
FYI, Adobe donated a JIT compiler for ActionScript (a very close relative of JavaScript) to Mozilla. It’ll be a major feature of Mozilla 2 and of JavaScript 2 support.
@James Justin Harrell: At this point Tamarin isn’t relevant, as it cannot run the majority of the JavaScript available on the Internet. It may be interesting in the future, but it’s worth waiting until it is a viable technology before making noise about it.
@Freiheit: The benchmarks are purely about JavaScript. There is no DOM access, rendering or network access.
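To make that concrete, here is a minimal sketch of what a pure-JavaScript micro-benchmark looks like: no DOM, no rendering, no network, just the script engine doing work. This is not SunSpider’s actual harness; the test names and bodies are illustrative placeholders.

```javascript
// Time a function and report elapsed milliseconds (illustrative only;
// a real harness would run many iterations and discard outliers).
function time(label, fn) {
  var start = Date.now();
  fn();
  return { label: label, ms: Date.now() - start };
}

// A "string"-style test: building up a large string in a loop.
function stringTest() {
  var s = "";
  for (var i = 0; i < 20000; i++) {
    s += "x";
  }
  return s.length;
}

// A "math"-style test: pure arithmetic in a tight loop.
function mathTest() {
  var acc = 0;
  for (var i = 1; i < 200000; i++) {
    acc += Math.sqrt(i);
  }
  return acc;
}

var results = [time("string", stringTest), time("math", mathTest)];
results.forEach(function (r) {
  console.log(r.label + ": " + r.ms + "ms");
});
```

Note that nothing here touches `document` or the layout engine, which is exactly the point of this style of benchmark.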
Anyway, does performance in such a small (yet important) part of the browser matter? In the end, what counts is how many seconds it takes from the moment I press Enter until the content is actually rendered on the screen. In that respect, I believe the rendering process is what needs more attention.
Why measure just JavaScript? Measure everything it takes to put a webpage on my monitor. You are seeing with the eyes of a developer; as you said, programmers should see their software as the user would see it.
@Mark Rowe:
If I follow you right, this is disconnected from how JavaScript is used in the browser. For example, if I have JavaScript change the style on a particular element, it is just fiddling with some variable that the browser provided for it. That kind of access is no different from setting some arbitrary value; JavaScript is just doing some processing and sticking the result somewhere.
Thanks for the clarification. I abhor JavaScript, but I’m kind of interested in this discussion because it shows how better coding of the JS engine can make a browser better.
The results more or less bear out my own experiences. I’ve found Opera to be the fastest gun in the West with regard to both rendering and script processing. Sadly, Firefox is the worst. Hitting sites like the ExtJS libraries or scriptaculous’s site, Firefox on both Linux and Windows stutters and struggles. Even with plain HTML/CSS it is terrible. I have a page that uses a UL/LI vertical menu with a little opacity applied and position:fixed. I get “tearing” when the page is scrolled on a machine with 4GB of RAM and a quad-core CPU! Other browsers are smooth as butter.
I tested Firefox 3 beta 1 the other week and it was noticeably faster, but it still stuttered and tore. I hope they can improve. As it stands, I use FF for tools like Firebug but Opera for browsing.
Jeff: If you’re taking requests, I’d be curious to see how the latest WebKit nightly build (http://nightly.webkit.org/) stacks up against the other browsers on Windows.
Hoffmann: This benchmark focusses on one part of the functionality of the web browser. There are other benchmarks that cover DOM access, page loading and rendering. A benchmark that covers every piece of functionality in a web browser provides information that is incredibly complex to analyse and thus very difficult to use in a practical fashion to optimise the browser.
Freiheit: Setting a variable and setting a property of a DOM object appear syntactically similar but are vastly different in terms of implementation. Setting a property on a DOM object typically results in the rendering engine being required to update the layout, repaint a portion of the screen, etc. The performance of these operations is obviously of great interest to developers, both of web applications and of browsers, but it is not the focus of SunSpider. There are other benchmarks that cover these areas.
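Here is a sketch of that distinction: two assignments that look identical can have wildly different costs, because a DOM property set is really a call into the rendering engine. A plain object stands in for a DOM element below, with a hand-written setter simulating the hidden layout work (the actual cost in a browser depends on the page).

```javascript
var plain = { width: 0 };

// Stand-in for a DOM element: assigning to "width" runs a setter,
// just as assigning to element.style.width invokes engine code.
var fakeElement = {};
var relayoutCount = 0;
Object.defineProperty(fakeElement, "width", {
  set: function (value) {
    this._width = value;
    // In a real browser, the engine might recompute layout and
    // repaint part of the screen here. We just count invocations.
    relayoutCount++;
  },
  get: function () {
    return this._width;
  }
});

plain.width = 100;       // just a memory write
fakeElement.width = 100; // runs the setter: "layout" work happens too

console.log("relayouts triggered: " + relayoutCount); // → 1
```

The syntax is identical in both assignments; only the second one hides extra work behind the property, which is why pure-JavaScript benchmarks and DOM benchmarks measure genuinely different things.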
Just like Fred, I have seen some visually appealing graphs in your posts. Can you please share what you use to generate these graphs? Is it a charting tool from a spreadsheet, perhaps Excel?
In my testing of Firefox 2.x and IE 6.x on the AJAX applications I have developed for my client, I have found that Firefox is 2 to 5 times faster for real world operations that load XML data, transform it via XSLT, and render lots of DOM elements (DIVs, IMGs, TABLEs, etc.) as a result. My client uses IE only but I develop using Firefox and when I test under both environments I really notice the difference in performance. IE 6 sucks. I have not tested under IE 7 since my client has yet to upgrade to it.
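For readers unfamiliar with the kind of operation described above, here is a rough sketch of an XML-plus-XSLT rendering step as it had to be written for both browser families in that era. `XSLTProcessor` is the Mozilla/WebKit API, while IE 6 relies on MSXML’s `transformNode`; the function name and arguments are placeholders, not code from the commenter’s application.

```javascript
// Transform an XML document with an XSLT stylesheet and put the
// result into a container element. Browser-only APIs are feature-
// detected so the function degrades cleanly elsewhere.
function renderReport(xmlDoc, xslDoc, container) {
  if (typeof XSLTProcessor !== "undefined") {
    // Firefox / Safari path: build a document fragment.
    var proc = new XSLTProcessor();
    proc.importStylesheet(xslDoc);
    var fragment = proc.transformToFragment(xmlDoc, document);
    container.appendChild(fragment);
  } else if (xmlDoc.transformNode) {
    // IE 6 / MSXML path: transformNode returns an HTML string.
    container.innerHTML = xmlDoc.transformNode(xslDoc);
  } else {
    throw new Error("No XSLT support in this environment");
  }
}
```

Both branches end with the engine building lots of DOM nodes, which is exactly the layout-and-paint work that a pure-JavaScript benchmark like SunSpider deliberately leaves out.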