 

JavaScript object code caching: which of these assertions are wrong?

Because I have been around engineers for so many years, I know that if I don't provide context, I'm just going to get a hundred answers of the form "What are you trying to accomplish?" So I'm going to give the background that motivates my question. But don't confuse the background context for the question I am asking, which is specifically about the JavaScript semantics that make object code uncacheable between page requests. I am not going to give marks for advice on how to make my webapp faster; that is tangential to my question, which will probably only be answerable by someone who has worked on a JavaScript compiler, or at least on a compiler for a dynamic language.

Background:

I am trying to improve the performance of a web application. Among its many resources, it contains one enormous JavaScript file with 40k lines and 1.3 million characters pre-minification. Post-minification it's still large, and it still adds about 100ms to the window.onload event when synchronously loaded, even when the source is cached client-side. (I have conclusively ruled out the possibility that the resource is not being cached, by watching the request logs and observing that it is never re-requested.)

After confirming that it's still slow after being cached, I started doing some research on JavaScript caching in the major browsers, and have learned that none of them cache object code.


My question is in the form of some hypothetical assertions based on this research. Please object to these assertions if they are wrong.

  1. JavaScript object code is not cached in any modern browser.

    "Object code" can mean anything from a byte code representing a simple linearized parse tree all the way to native machine code.

  2. JavaScript object code in a web browser is difficult to cache.

    In other words, even if you're including a cached JS source file via an external script tag, there is a linear cost to including that script on a page, even if the script contains only function definitions, because all of that source needs to be compiled into object code.

  3. JavaScript object code is difficult to cache because JS source must be evaluated in order to be compiled.

    Statements have the ability to affect the compilation of downstream statements in a dynamic way that is difficult to statically analyze.

    3a. (3) is true mostly because of eval().

  4. Evaluation can have side effects on the DOM.

  5. Therefore, JavaScript source needs to be compiled on every page request.
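To make (3) and (3a) concrete, here's a small hypothetical illustration (the names are mine, not from any real codebase): the engine cannot know what `greet` will be until the string is evaluated at runtime, so the downstream call can't be fully compiled, let alone cached, ahead of time.

```javascript
// The string could come from anywhere (network, user input), so no
// static analysis of this file can see the eventual body of `greet`.
var source = 'greet = function () { return "hi"; };';

var greet;    // declared here, but only defined when eval() runs
eval(source); // assigns to the outer `greet` (works in strict mode too,
              // since this is an assignment, not a declaration)

console.log(greet()); // "hi"
```

The assignment-through-eval here is deliberately the mild case; eval can also declare new bindings, which is even harder to analyze statically.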

Bonus question: do any modern browsers cache a parse tree for cached JS source files? If not, why not?

Edit: If all of these assertions are correct, then I will give the answer to anyone who can expound on why they are correct, for example, by providing a sample of JS code that couldn't be cached as object code and then explaining why not.

I appreciate the suggestions on how to proceed from here to make my app faster, and I mostly agree with them. But the knowledge gap that I'm trying to fill is related to JS object code caching.

asked Sep 19 '12 by masonk



1 Answer

You're right in that it's dynamically compiled and evaluated.
You're right that it must be.

Your recourse isn't in trying to make that compile time smaller.
It needs to be about loading less to begin with, doing the bare-minimum to get the user-experience visible, then doing the bare minimum to add core functionality in a modular fashion, then lazily (either on a timer, or as-requested by the end-user) loading in additional features, functionality and flourishes.

If your program is 10,000 lines of procedural code, then you've got a problem.
I'm hoping it's not all procedural.

So break it up. It means a slower 1st-page load. But on subsequent requests, it might mean much faster response-times as far as what the user perceives as "running", even though it will take longer to get to 100% functional.
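One way to read "do the bare minimum first" in code is a lazy initializer. This is a minimal sketch (the helper and the feature names are invented for illustration): the expensive setup runs at most once, and only when the user actually reaches for the feature.

```javascript
// Wrap an expensive initializer so it runs at most once, on first use.
function lazy(init) {
  var value, initialized = false;
  return function () {
    if (!initialized) {
      value = init();
      initialized = true;
    }
    return value;
  };
}

// Imagine this stands in for booting a large feature module:
var heavyFeature = lazy(function () {
  return { name: "gallery", ready: true };
});

// Nothing is paid at page load; the cost lands on first use,
// and later calls return the same memoized object.
console.log(heavyFeature().name);              // "gallery"
console.log(heavyFeature() === heavyFeature()); // true
```

The same shape works whether the deferred work is building a widget or injecting a script tag to fetch a whole module on demand.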

It's all about the user's perception of "speed" and "responsiveness", and not about the shortest line to 100% functional.

JavaScript, in a single-threaded format, can't both do that and be responsive.
So be responsive first.

PS: Add a bootstrap. An intelligent bootstrap. It should be able to discern which features are needed.
RequireJS is for loading dependencies.
Not for figuring out what your dependencies are.

An added benefit -- you can set a short-term cache on the bootstrap, which will point to versioned modules. How is this a benefit? Well, if you need to update a module, it's a simple process to update the version in the bootstrap. When the bootstrap's cache expires, it points at the new module, which can have an infinite cache lifetime (because it's got a different name, versioned or timestamped).
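A sketch of that versioned-bootstrap idea (the module names, versions, and paths below are invented for illustration): the bootstrap itself is served with a short cache lifetime, and everything it points to is immutable, so those URLs can be cached with a far-future expiry.

```javascript
// Lives in the short-lived bootstrap file. Each URL is immutable:
// a changed module gets a new versioned name, never an overwrite.
var manifest = {
  core:    "/js/core-3.1.0.min.js",
  gallery: "/js/gallery-1.4.2.min.js"
};

function moduleUrl(name) {
  if (!Object.prototype.hasOwnProperty.call(manifest, name)) {
    throw new Error("unknown module: " + name);
  }
  return manifest[name];
}

console.log(moduleUrl("gallery")); // "/js/gallery-1.4.2.min.js"
```

Shipping a new gallery means bumping one line of the manifest; every client picks it up as soon as its short-lived copy of the bootstrap expires.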

answered Oct 17 '22 by Norguard