
Website Performance Testing: How best to approximate computer performance?

I have some browser-intensive CSS and animation on my webpage, and I'd like to determine whether the user has a fast PC or not so I can scale things accordingly to provide the best experience.

I am using http://detectmobilebrowser.com's script to detect all mobile devices, and I am going to include the clause /android|ipad|ipod|playbook|silk/i.test(a) to include all tablet devices as well.
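For illustration, here is a hedged sketch of how that tablet clause could sit alongside the detectmobilebrowser.com check; the script's own (much longer) regex is elided, and `isMobileByScript` is just a placeholder for its result:

```javascript
// Placeholder sketch: `isMobileByScript` stands in for the result of
// detectmobilebrowser.com's own regex test (its real regex is far longer).
var ua = navigator.userAgent || navigator.vendor || window.opera;

var isMobileByScript = false; // imagine the script's test of `ua` here
var isTablet = /android|ipad|ipod|playbook|silk/i.test(ua); // clause from the question

var isHandheld = isMobileByScript || isTablet;
// `isHandheld` would then feed into the decision to tone effects down.
```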

However, this doesn't (and can't) really address the actual hardware; it doesn't go very far toward painting a picture of what I'm looking for.

An iPhone 4S, for example, will be quite a lot more capable than many of the devices matched by the mobile user-agent detector, and this provides no way for it to set itself apart. Somebody might run Google Chrome on a Pentium II machine (somehow) and want to view my page. (This person probably does not have an iPhone 4S.)

Obviously, to get a real sense of this I'll have to do some actual performance testing, and as with performance testing for any kind of application, it makes sense to test only the kinds of tasks the application actually performs.

Even with this in mind, I feel it would be difficult to obtain any reasonably accurate numbers before the performance-testing routine has taken too long and the user has become impatient. That approach would only be acceptable if I didn't need the very first impression to be perfect, and unfortunately I do. So I can't get away with measuring performance "after the first run" and adjusting the parameters later.

So what I've got left is basically to perform a similar task on initial page load, in a way that depends on browser rendering and processing speed, without presenting anything to the user (so that, as far as they can tell, the page is still loading). Then, preferably within a second or two, I'd have numbers accurate enough to set parameters for the actual page to animate and present itself in a pleasing manner that doesn't resemble a slideshow.

Maybe I could place a full-page white <div> over my test case so the user can't see what's going on, and hope the browser isn't smart enough to skip doing all the work.
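Roughly what that could look like, as a sketch only: a white overlay hides a test element that is repeatedly moved and reflowed, and the elapsed time picks an effect tier. The loop count, thresholds, and tier values below are all invented for illustration.

```javascript
// Hypothetical sketch: benchmark layout-heavy work behind a white overlay
// during initial load, then pick an effect level from the timing.
// Call after the DOM is ready (e.g. on DOMContentLoaded).
function benchmarkEffects(callback) {
  var overlay = document.createElement('div');
  overlay.style.cssText =
    'position:fixed;top:0;left:0;width:100%;height:100%;background:#fff;z-index:9999;';
  document.body.appendChild(overlay);

  var testBox = document.createElement('div');
  testBox.style.cssText = 'position:absolute;width:50px;height:50px;background:#fff;';
  overlay.appendChild(testBox);

  var start = Date.now();
  for (var i = 0; i < 200; i++) {
    // Force style/layout work the browser can't trivially skip.
    testBox.style.left = (i % 100) + 'px';
    testBox.style.top = ((i * 7) % 100) + 'px';
    void testBox.offsetWidth; // reading offsetWidth forces a reflow
  }
  var elapsed = Date.now() - start;

  document.body.removeChild(overlay);

  // Map elapsed time onto a rough tier: 2 = fancy, 1 = moderate, 0 = minimal.
  // The 50 ms / 200 ms cutoffs are made-up numbers.
  var level = elapsed < 50 ? 2 : elapsed < 200 ? 1 : 0;
  callback(level, elapsed);
}

benchmarkEffects(function (level) {
  // e.g. initPage({ effectLevel: level });  // hypothetical page-init call
});
```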

Has anybody ever done this?

I know people are going to say, "you probably don't need to do this", or "there's gotta be a better way" or "reduce the amount of effects".

The reason for doing any of the things I'm doing on the page is so that it looks good. That's the entire point. If I didn't care about that as much, this question wouldn't exist. The goal is to give the JavaScript the ability to determine enough parameters to provide an awesome experience on a powerful computer, and also a passable experience on a less capable one. When more power is available, it should be harnessed. So hopefully that explains why such suggestions are not valid answers to the question.

asked Jul 17 '12 by Steven Lu


2 Answers

I think this is a great question because it puts the user's experience first and foremost.

Several ideas come to mind:

  • Microsoft has published many tests demonstrating the performance of IE 9 and 10. Many of these tests focus on graphic performance, such as this one, which appears to use this JavaScript file to measure performance. There may be some code/concepts you can use.

  • A media-intensive page probably takes a few seconds to load anyway, so you have a little breathing room if you begin your tests while the rest of the content loads. For example, initiate AJAX/image requests, run your tests, and then handle the responses (see the sketch after this list).

  • To test graphic performance, what about using a loading graphic as the performance test? I'm not usually a fan of "loading" screens, but if the site may take a few seconds to load, and the end result is better UX, then it isn't a bad idea.

  • The white screen idea may work if you draw a bunch of white shapes on it (not sure if any engines are smart enough to optimize this away because it is the same color).
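A rough sketch of the "test while the rest loads" idea from the list above, overlapping a real asset request with a short benchmark; `hero.jpg`, `runGraphicsBenchmark()`, `startPage()`, and the 150 ms threshold are all hypothetical:

```javascript
// Rough sketch: kick off a real asset request, run the benchmark while the
// request is in flight, and only start the page once both are done.
var score = null;
var imageLoaded = false;

function maybeStart() {
  if (score !== null && imageLoaded) {
    startPage(/* fancy: */ score < 150); // hypothetical init; 150 ms is made up
  }
}

// 1. Start loading an asset the page needs anyway.
var img = new Image();
img.onload = img.onerror = function () { imageLoaded = true; maybeStart(); };
img.src = 'hero.jpg'; // placeholder asset URL

// 2. Run the benchmark while the request is in flight.
var start = Date.now();
runGraphicsBenchmark();        // whatever graphics test you settle on
score = Date.now() - start;
maybeStart();
```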

Ultimately, I would err on the side of better performance and lower fidelity, and of a less accurate (but fast) test rather than making the user wait too long.

answered Nov 13 '22 by Tim M.


Rather than measuring the user's CPU performance once and determining how many fancy visual effects to use from that, I would measure the amount of time taken by the CPU-intensive bits every time they execute (using new Date()), compare that to expected minimum and maximum values (which you will have to determine), and dynamically adjust the "effect level" up and down as appropriate.

Say the user starts a program in the background that eats a lot of CPU time. With this approach, your page will automatically tone down the visual effects to save CPU cycles, and when the background program finishes, the fancy effects will come back. I don't know whether your users will like this effect (but I am sure they will like the fact that their browser stays responsive when the CPU is overloaded).
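A minimal sketch of that feedback loop, timing each animation step and nudging an effect level up or down; the level range, thresholds, and `runExpensiveEffects()` are invented for illustration:

```javascript
// Sketch of dynamically adjusting the effect level from measured timings.
var effectLevel = 3;          // start optimistic: 3 = all effects on
var MIN_LEVEL = 0, MAX_LEVEL = 3;

function animationStep() {
  var start = new Date();     // as suggested above; performance.now() also works in newer browsers

  runExpensiveEffects(effectLevel);  // hypothetical: the CPU-intensive work

  var elapsed = new Date() - start;
  if (elapsed > 40 && effectLevel > MIN_LEVEL) {
    effectLevel--;            // too slow for ~25 fps, tone things down
  } else if (elapsed < 10 && effectLevel < MAX_LEVEL) {
    effectLevel++;            // plenty of headroom, bring effects back
  }
}

setInterval(animationStep, 16); // or requestAnimationFrame where available
```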

answered Nov 13 '22 by Alex D