In our company we combine every JavaScript file into one big (around 700 kB but growing) minified and gzipped JavaScript file. I am trying to assess the performance difference between using one big JavaScript file for every page (minified and gzipped) and using several JavaScript files, one for each page.
An obvious difference is that the big JavaScript file can be cached by the browser after it has been loaded on the first page request and creates little overhead thereafter, whereas with several per-page files there will be at least one uncached GET request for each different page. So I would be trading a slower first page load for slower first loads of each subsequent page.
In order to find out when the slow initial page load (with one big JavaScript file) becomes problematic enough to justify the work of breaking the combined file into smaller ones and changing our build procedure, I would like to find out how long it takes for the code to be parsed, so I can estimate the total loading and parsing time.
So far my approach has been to add a script tag to a test page that records the current time, then appends a biggish script element, and measures the time again afterwards, like so:
var head = document.getElementsByTagName('head')[0];
var script = document.createElement('script');
script.setAttribute('type', 'text/javascript');
script.src = 'path/700kbCombineFile.js';
var start_time = new Date().getTime(); // global, so the loaded file can read it
head.appendChild(script);
At the end of 700kbCombineFile.js, I appended:
console.log(new Date().getTime() - start_time);
Then I subtract the network transfer time obtained from Firebug, which gives approximately 700 ms for a 700 kB file and about 300 ms for a 300 kB file.
Does this approach make sense? If not, why? Is there a better way or any tools for doing this?
The easiest way to track execution time is to use a Date object. Date.now() returns the number of milliseconds elapsed since the Unix epoch, so we can store the value before and after the function to be measured runs and take the difference of the two.
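For example, a minimal sketch along those lines (someFunction is a placeholder for whatever code you want to measure):

var start = Date.now();  // milliseconds since the Unix epoch
someFunction();          // the code whose execution time we want to measure
var elapsed = Date.now() - start;
console.log('Execution took ' + elapsed + ' ms');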
I think console.time("Parsetime") and console.timeEnd("Parsetime") give you a more accurate measurement than the Date object.
Article about JavaScript time accuracy
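Applied to the question's setup, a minimal sketch might look like this (the file name is taken from the question; note that the label passed to console.time and console.timeEnd must match):

console.time("Parsetime"); // start the timer just before the script is added

var script = document.createElement('script');
script.src = 'path/700kbCombineFile.js';
document.getElementsByTagName('head')[0].appendChild(script);

// ...and at the very end of 700kbCombineFile.js:
console.timeEnd("Parsetime"); // logs something like "Parsetime: 712ms"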
The amount of time taken to parse the JavaScript will vary significantly between browsers.
What you've got is the best method I can think of to measure the parse time; however, I would question whether this is the best measurement for judging which approach is more effective.
Personally, I'd consider the time taken for the load event on the window to fire, measured from the time the page was requested, a better measurement for deciding which approach to go for.
The time taken to download the page is important, and the assumption that a file stays cached after its first load only holds in a perfect world.
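As a minimal sketch, assuming the Navigation Timing API is available in the browsers you test, you could measure that without injecting your own timestamps:

window.addEventListener('load', function () {
    var t = performance.timing;
    // time from the start of navigation until the load event begins firing
    console.log('Page loaded in ' + (t.loadEventStart - t.navigationStart) + ' ms');
});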