I'm developing a JavaScript-heavy web application; heavy as in, without JavaScript, the whole application is useless. I'm currently using RequireJS as my module loader, and the r.js tool to optimize my JS into a single file in production.
Currently, in production my markup looks something like this:
<script src="/js/require.js"></script>
<script>
  require.config({
    // blah blah blah
  });
  require(['editor']); // Bootstrap the JavaScript code.
</script>
However, this loads the JavaScript asynchronously, which leaves the page rendered but unusable until the JavaScript has loaded; I don't see the point of that. Instead, I'd like to load the JavaScript synchronously, like so:
<script src="/js/bundle.js"></script><!-- combine require.js, config and editor.js -->
This way, when the page is rendered, it is usable. I've read that all modern browsers support parallel downloads, which leads me to believe that most of the advice on the Internet telling you to avoid this approach because it blocks parallel downloads is outdated.
Yet: in development, I want to insert the uncombined files as several script tags, rather than as the single minified file:
<script src="/js/require.js"></script>
<script>/* require.config(...); */</script>
<script src="/js/editor-dep-1.js"></script>
<script src="/js/editor-dep-2.js"></script>
<script src="/js/editor.js"></script>
... yet this seems so fiddly in RequireJS (use r.js to produce a fake build just to get a list of the dependencies of editor.js) that it feels wrong.
My question(s) are therefore as follows:

Is my approach of loading the JavaScript synchronously wrong?
Am I right in my assumption that the advice around synchronous <script /> tags is outdated?

Short answer: yes, it is wrong. You use require.js to first load all your dependencies, and then, once all of them are loaded, you run the code that depends on the things you loaded.
If your page is unusable until after your require-wrapped code runs, the problem is not require, but your page: make a page that is minimal and indicates it is still loading, with nothing else (visible) on it (use CSS display: none on elements that shouldn't be used until the JS finishes, for instance), and enable/show the actual functional page elements only once require is done and your code has set up all the necessary UI/UX.
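A minimal sketch of that pattern (the element ids here are placeholders, not something from the original question):

<div id="loading">Loading…</div>
<div id="app" style="display: none"><!-- the real UI --></div>
<script src="/js/require.js"></script>
<script>
  require.config({ /* ... */ });
  require(['editor'], function () {
    // Only runs once editor.js and all of its dependencies have loaded.
    document.getElementById('loading').style.display = 'none';
    document.getElementById('app').style.display = '';
  });
</script>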
Take a moment to think about why you are using RequireJS in the first place. First, it helps manage your dependencies, avoiding a long list of script tags that must be in precisely the right order; you could argue that only becomes unmanageable when a large number of scripts are involved. Second, it loads scripts asynchronously. Again, with a large number of scripts this can greatly reduce load times, but the benefit is smaller when only a few scripts are used.
If your application only uses a few JavaScript files, you might decide that the overhead of setting up RequireJS properly is not worth the effort; its benefits only become obvious when a large number of scripts are involved. If you find yourself wanting to use a framework in a way that feels "wrong", it helps to step back and ask whether you need the framework at all.
Edit:
To solve your problem with RequireJS, initially set your main content area to display: none, or better yet display a loading spinner animation. Then, at the end of your main RequireJS file, simply fade the content area in.
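Something like this at the end of the main module, for instance (a sketch assuming jQuery is mapped as an AMD module and that #spinner and #content are the relevant elements; none of that is specified above):

define(['jquery'], function ($) {
  // ... set up the application ...

  // Everything is wired up: swap the spinner for the real UI.
  $('#spinner').hide();
  $('#content').fadeIn();
});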
I decided to take ljfranklin's advice and do away with RequireJS completely. I personally think AMD is doing it all wrong and that CommonJS (with its synchronous behaviour) is the way to go; but that's for another discussion.
One thing I looked at was moving to Browserify, but in development each compilation (as it scans all your files and hunts down require() calls) took far too long for me to deem acceptable.
In the end, I rolled my own bespoke solution. It's basically Browserify, except that it requires you to specify all your dependencies up front, rather than having Browserify figure them out itself. That means compilation takes just a few seconds rather than 30.
That's the TL;DR. Below, I go into detail as to how I did it. Sorry for the length. Hope this helps someone... or at least gives someone some inspiration!
Firstly, I have my JavaScript files. They are written à la CommonJS, with the limitation that exports isn't available as a "global" variable (you have to use module.exports instead). E.g.:
var anotherModule = require('./another-module');
module.exports.foo = function () {
  console.log(anotherModule.saySomething());
};
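For completeness, a hypothetical ./another-module.js that the snippet above requires might look like this (it isn't shown in the original):

module.exports.saySomething = function () {
  return 'something';
};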
Then, I specify the in-order list of dependencies in a config file (note js/support.js, which saves the day later):
{
  "js": [
    "js/support.js",
    "js/jquery.js",
    "js/jquery-ui.js",
    "js/handlebars.js",
    // ...
    "js/editor/manager.js",
    "js/editor.js"
  ]
}
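The original doesn't show how this config is read; in a Grunt build it would be something along these lines (the file name bundles.json is an assumption):

// Inside the Gruntfile's module.exports = function (grunt) { ... }
var bundles = grunt.file.readJSON('bundles.json'); // hypothetical file name
// bundles.js is the ordered list used below, both for the development
// <script> tags and for the production minification step.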
Then, in the compilation process, I map all of my JavaScript files (in the js/ directory) to the form:
define('/path/to/js_file.js', function (require, module) {
  // The contents of the JavaScript file
});
This is completely transparent to the original JavaScript file, though; below we provide all the support for define, require and module etc., such that, as far as the original JavaScript file is concerned, it just works.
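As an example, after the build step the editor module from earlier ends up looking roughly like this (with the Gruntfile below, the path key is the file's path inside the build directory):

define('build/js/editor.js', function (require, module) {
  var anotherModule = require('./another-module');
  module.exports.foo = function () {
    console.log(anotherModule.saySomething());
  };
});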
I do the mapping using Grunt: first copying the files into a build directory (so I don't mess with the originals), and then rewriting each file in place.
// Files were previously in public/js/*; after the copy they live in build/js/*.
grunt.initConfig({
  copy: {
    dist: {
      files: [{
        expand: true,
        cwd: 'public',
        src: '**/*',
        dest: 'build/'
      }]
    }
  }
});
grunt.loadNpmTasks('grunt-contrib-copy');
grunt.registerTask('buildjs', function () {
  grunt.file.expand('build/**/*.js').forEach(function (file) {
    // Rewrite each file in place, wrapping it in a define() call keyed by its path.
    grunt.file.copy(file, file, {
      process: function (contents, filepath) {
        return 'define(\'' + filepath + '\', function (require, module) {\n' + contents + '\n});';
      },
      // support.js provides define() itself, so it must not be wrapped.
      noProcess: 'build/js/support.js'
    });
  });
});
I have a file /js/support.js, which defines the define() function I wrap each file with; here's where the magic happens, as it adds support for module.exports and require() in less than 40 lines!
(function () {
  // Cache of module path -> module object ({ exports: ... }).
  var cache = {};

  this.define = function (path, func) {
    // Run the module body immediately, passing it a local require() and a
    // fresh module object (which is also stored in the cache under this path).
    func(function (id) {
      // Resolve the required id (e.g. './another-module') against the path
      // of the requiring module.
      var other = id.split('/');
      var curr = path.split('/');
      var target;
      other.push(other.pop() + '.js');
      curr.pop();
      while (other.length) {
        var next = other.shift();
        switch (next) {
          case '.':
            break;
          case '..':
            curr.pop();
            break;
          default:
            curr.push(next);
        }
      }
      target = curr.join('/');
      if (!cache[target]) {
        throw new Error(target + ' required by ' + path + ' before it is defined.');
      } else {
        return cache[target].exports;
      }
    }, cache[path] = {
      exports: {}
    });
  };
}.call(this));
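To illustrate how that plays out at runtime (module names made up for the example): define() runs each module body immediately, so a module can only require() things that were defined before it, which is exactly why the config file has to list the scripts in dependency order.

define('build/js/a.js', function (require, module) {
  module.exports.greet = function () { return 'hi'; };
});

define('build/js/b.js', function (require, module) {
  var a = require('./a'); // fine: a.js was defined above
  console.log(a.greet()); // "hi"
});

// Swapping the two define() calls would instead throw:
// "build/js/a.js required by build/js/b.js before it is defined."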
Then, in development, I literally iterate over each file in the config file and output it as a separate <script /> tag; everything synchronous, nothing minified, everything quick.
{{#iter scripts}}<script src="{{this}}"></script>
{{/iter}}
This gives me:
<script src="js/support.js"></script>
<script src="js/jquery.js"></script>
<script src="js/jquery-ui.js"></script>
<script src="js/handlebars.js"></script>
<!-- ... -->
<script src="js/editor/manager.js"></script>
<script src="js/editor.js"></script>
In production, I minify and combine the JS files using UglifyJS. Well, technically I use a wrapper around UglifyJS: mini-fier.
grunt.registerTask('compilejs', function () {
  var minifier = require('mini-fier').create();

  if (config.production) {
    var async = this.async();
    var files = bundles.js || [];

    minifier.js({
      srcPath: __dirname + '/build/',
      filesIn: files,
      destination: __dirname + '/build/js/all.js'
    }).on('error', function () {
      console.log(arguments);
      async(false);
    }).on('complete', function () {
      async();
    });
  }
});
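The wiring of the tasks isn't shown above; presumably they're chained with an alias along these lines (the alias name build is an assumption):

// Copy the sources, wrap them in define() calls, then minify for production.
grunt.registerTask('build', ['copy', 'buildjs', 'compilejs']);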
... then in the application code, I change scripts (the variable I use to house the scripts to output in the view) to just be ['/build/js/all.js'], rather than the array of actual files. That gives me a single
<script src="/js/all.js"></script>
... output. Synchronous, minified, reasonably quick.
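The application code around that isn't shown; the switch presumably looks something like this (an Express-style sketch; the route, view and variable names are assumptions, and the exact bundle path depends on how the build directory is served):

// Choose which scripts the view should output.
var scripts = config.production
  ? ['/js/all.js'] // the single combined bundle
  : bundles.js;    // the ordered list from the config file

app.get('/', function (req, res) {
  res.render('editor', { scripts: scripts });
});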