 

How to get Render Performance for JavaScript-based Charting Libraries?

To preface, I am pretty new to programming in JavaScript, but I have been working with various libraries for a while now. I've been tasked with gathering performance metrics for various charting libraries (e.g. AmCharts, HighCharts, SyncFusion, etc.) to find the fastest and most flexible. I've tried JSPerf, but it seems to give me metrics for the code execution rather than for the actual rendered chart, which is the metric we want (i.e. what the user experience will be). I've also tried performance.now(), both within the JavaScript code in the header and wrapped around the tags where the charts are displayed, but neither method is working.

What is the best way to get these performance metrics based on rendering?

asked Feb 13 '15 by Steve Thompson


2 Answers

Short Answer:

Either:

  1. Start your timing right before the chart code executes and set up a MutationObserver to watch the DOM, stopping the timer once the mutations have settled (a minimal sketch follows this list).
  2. Find out if the charting library has a done() event. (Be cautious, as this can be inaccurate depending on the implementation/library: "done" could mean visually done while background work is still being performed, which can make interactivity jumpy until the chart is completely ready.)
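
For example, here is a minimal sketch of the MutationObserver approach. It assumes the chart renders into a container with id "chart", the 200 ms quiet period is arbitrary (tune it for your tests), and renderChart() is just a placeholder for however the library under test draws the chart:

var container = document.getElementById('chart');
var settleTimer = null;
var start;

// Each burst of DOM mutations resets the timer; once no mutations arrive
// for 200 ms we treat the chart as visually complete and stop the clock.
var observer = new MutationObserver(function () {
  clearTimeout(settleTimer);
  settleTimer = setTimeout(function () {
    observer.disconnect();
    console.log('Rendered in ' + (performance.now() - start).toFixed(1) + ' ms');
  }, 200);
});

observer.observe(container, { childList: true, subtree: true, attributes: true });

start = performance.now();
// renderChart(container, data); // placeholder: call into the library under test here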

Long Answer :

I'm assuming your test data is quite large, since most libraries can handle a couple of thousand points without noticeable degradation. Measuring performance for client-side charting libraries is really a two-sided issue: rendering time and usability. Rendering time can be measured as the duration from when a library starts interpreting the dataset to when the chart is visually rendered. Depending on each library's interpretation algorithm, your mileage will vary with the size of the data. Say library X uses an aggressive sampling algorithm and only has to draw a small percentage of the dataset: performance will be extremely fast, but the result may or may not be an accurate representation of your data, and interactivity at a finer grain of detail could be limited.

Which leads me to the usability and interactivity aspect of performance. We're using a computer, not a chart on a piece of paper; it should be as interactive as possible. As the amount of interactivity goes up, though, the browser can become susceptible to slowdown depending on the library's implementation. What if each of your million data points were an interactive DOM node? A million interactive DOM nodes would surely crash the browser.

Most of the charting libraries out there handle the tradeoff between performance, accuracy, and usability differently; which one is best for you all depends on the implementation.

Plug/Source: I am a developer at ZingChart, and we deal with customers who have large datasets all the time. We also built this comparison, which is pretty relevant to your tests: http://www.zingchart.com/demos/zingchart-vs/

answered Nov 12 '22 by mike-schultz


My method is really basic. I store the current time in a variable before the code block runs, then call console.log() at the end of the block with the difference.

var start = +new Date(); // current time as a numeric timestamp (ms)
//do lots of cool stuff (e.g. build and render the chart)
console.log('Rendered in ' + (new Date() - start) + ' ms');

Very generic, and it does what it says on the tin. If you want to measure each section of code you would have to create a new timing variable for each one. Yes, the calculation itself takes time, but it is minuscule compared to what the code I want to measure is doing. Example in action at the jsFiddle.
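
If you want to time individual sections without juggling variables, one option is the User Timing API (performance.mark() / performance.measure()); here is a sketch, with mark names that are purely illustrative:

performance.mark('parse-start');
// ... section one, e.g. preparing the dataset ...
performance.mark('parse-end');
// ... section two, e.g. calling the library's render function ...
performance.mark('render-end');

performance.measure('parse', 'parse-start', 'parse-end');
performance.measure('render', 'parse-end', 'render-end');

performance.getEntriesByType('measure').forEach(function (m) {
  console.log(m.name + ': ' + m.duration.toFixed(1) + ' ms');
});

Note that, like the snippet above, this still measures script execution rather than when the pixels actually hit the screen.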

answered Nov 12 '22 by wergeld