
How could a synchronous AJAX call cause a memory leak?

I understand the general advice given against the use of synchronous AJAX calls, because synchronous calls block UI rendering.

The other reason generally given is memory leak issues with synchronous AJAX.

From the MDN docs -

Note: You shouldn't use synchronous XMLHttpRequests because, due to the inherently asynchronous nature of networking, there are various ways memory and events can leak when using synchronous requests. The only exception is that synchronous requests work well inside Workers.

How could synchronous calls cause memory leaks?

I am looking for a practical example. Any pointers to any literature on this topic would be great.

asked Jan 16 '13 by Johnbabu Koppolu

People also ask

What is AJAX synchronous call?

A synchronous AJAX call is made when the async setting of the jQuery AJAX function is set to false, while an asynchronous AJAX call is made when it is set to true. The default value of the async setting is true.
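As a sketch of the setting described above (the URL and function name are illustrative; this assumes jQuery is loaded on the page):

```javascript
// Sketch: a synchronous jQuery AJAX call (hypothetical URL; assumes jQuery is loaded).
function loadUserSync(url) {
    var result = null;
    $.ajax({
        url: url,
        async: false,                              // block until the server responds
        success: function (data) { result = data; }
    });
    // Because async is false, result is already populated here.
    return result;
}
```

With `async: true` (the default), `$.ajax` returns immediately and `result` would still be `null` at the `return` statement; the response would have to be handled inside the callback instead.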

Can AJAX be made synchronous?

AJAX can access the server both synchronously and asynchronously: Synchronously, in which the script stops and waits for the server to send back a reply before continuing. Asynchronously, in which the script allows the page to continue to be processed and handles the reply if and when it arrives.
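The control-flow difference can be sketched without a real network by simulating the server reply with a timer (setTimeout stands in for the network round trip; everything here is illustrative):

```javascript
// Simulated async request: setTimeout stands in for network latency.
function fetchAsync(url, callback) {
    setTimeout(function () {
        callback("response from " + url);
    }, 10);
}

var order = [];
order.push("before send");
fetchAsync("http://example.com", function (response) {
    order.push("callback fired");
});
order.push("after send");   // reached immediately: the script was not blocked
// At this point order is ["before send", "after send"];
// "callback fired" is appended only later, when the "network" reply arrives.
```

A synchronous call would instead sit on the send line until the reply arrived, so "after send" would only be logged once the response was in hand.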

Why is it important that AJAX is asynchronous?

Making Asynchronous Calls: Ajax allows you to make asynchronous calls to a web server, so the client browser does not have to wait for all the data to arrive before the user can act again.

Is AJAX synchronous or asynchronous?

Ajax requests are asynchronous by nature, but they can be made synchronous, forcing the code that follows them to wait until the request completes.


2 Answers

If XHR is implemented correctly per spec, then it will not leak:

An XMLHttpRequest object must not be garbage collected if its state is OPENED and the send() flag is set, its state is HEADERS_RECEIVED, or its state is LOADING, and one of the following is true:

It has one or more event listeners registered whose type is readystatechange, progress, abort, error, load, timeout, or loadend.

The upload complete flag is unset and the associated XMLHttpRequestUpload object has one or more event listeners registered whose type is progress, abort, error, load, timeout, or loadend.

If an XMLHttpRequest object is garbage collected while its connection is still open, the user agent must cancel any instance of the fetch algorithm opened by this object, discarding any tasks queued for them, and discarding any further data received from the network for them.

So after you call .send(), the XHR object (and anything it references) becomes immune to GC. However, any error or success puts the XHR into the DONE state, and it becomes subject to GC again. It doesn't matter at all whether the XHR is sync or async. In the case of a long sync request it again doesn't matter, because you would just be stuck on the send statement until the server responds.

However, according to this slide, it was not implemented correctly, at least in Chrome/Chromium in 2012. Per spec, there would be no need to call .abort(), since the DONE state means the XHR object should already be GC'd normally.

I cannot find even the slightest evidence to back up the MDN statement, and I have contacted the author through Twitter.

answered Sep 18 '22 by Esailija


I think that memory leaks happen mainly because the garbage collector can't do its job, i.e. you hold a reference to something and the GC cannot delete it. I wrote a simple example:

var getDataSync = function(url) {
    console.log("getDataSync");
    var request = new XMLHttpRequest();
    request.open('GET', url, false);  // `false` makes the request synchronous
    try {
        request.send(null);
        if (request.status === 200) {
            return request.responseText;
        } else {
            return "";
        }
    } catch (e) {
        console.log("!ERROR");
    }
}

var getDataAsync = function(url, callback) {
    console.log("getDataAsync");
    var xhr = new XMLHttpRequest();
    xhr.open("GET", url, true);
    xhr.onload = function(e) {
        if (xhr.readyState === 4) {
            if (xhr.status === 200) {
                callback(xhr.responseText);
            } else {
                callback("");
            }
        }
    };
    xhr.onerror = function(e) {
        callback("");
    };
    xhr.send(null);
}

var requestsMade = 0;
var requests = 1;
var url = "http://missing-url";
for (var i = 0; i < requests; i++, requestsMade++) {
    getDataSync(url);
    // getDataAsync(url);
}

Apart from the fact that the synchronous function blocks a lot of things, there is another big difference: error handling. If you use getDataSync, remove the try-catch block, and refresh the page, you will see that an error is thrown, because the URL doesn't exist. The question now is how the garbage collector behaves when an error is thrown. Does it clear all the objects connected with the error, or does it keep the error object alive? I'd be glad if someone who knows more about this could write here.
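One way to sidestep part of that question is to make sure your own reference to the XHR is released whether or not send() throws. The sketch below (browser code; the function name is illustrative) uses try/finally for that:

```javascript
// Sketch: release our reference to the XHR even when send() throws.
function getDataSyncSafe(url) {
    var request = new XMLHttpRequest();
    try {
        request.open("GET", url, false);   // synchronous request
        request.send(null);
        return request.status === 200 ? request.responseText : "";
    } catch (e) {
        return "";                         // network error (e.g. missing host)
    } finally {
        request = null;                    // a DONE/errored XHR with no references is collectable
    }
}
```

The finally clause runs on both the return and the throw paths, so the local reference is dropped either way; per the spec quoted in the other answer, an XHR that has reached DONE and has no remaining references is then eligible for collection.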

answered Sep 19 '22 by Krasimir