
Is it possible to clean memory after FileReader?

FileReader seems to consume more and more memory as it is used repeatedly to preload multiple blobs, and it never frees it. Is there any known way to force it to release the consumed memory? Setting the FileReader object and its result property to null doesn't seem to work.

UPDATE:

Here is a code sample (test it on a big file, such as a movie, or you won't notice the effect in the task manager):

<input id="file" type="file" onchange="sliceMe()" />

<script>
function sliceMe() {
    var file = document.getElementById('file').files[0],
        fr,
        chunkSize = 2097152, // 2 MB per chunk
        chunks = Math.ceil(file.size / chunkSize),
        chunk = 0;

    function loadNext() {
       var start, end,
           // vendor-prefixed Blob slicing (Firefox / WebKit, circa 2011)
           blobSlice = File.prototype.mozSlice || File.prototype.webkitSlice;

       start = chunk * chunkSize;
       end = start + chunkSize >= file.size ? file.size : start + chunkSize;

       // a fresh FileReader for every chunk
       fr = new FileReader();
       fr.onload = function() {
          if (++chunk < chunks) {
             // shortcut - in production the chunk is uploaded first and then loadNext() is called
             loadNext();
          }
       };
       fr.readAsBinaryString(blobSlice.call(file, start, end));
    }

    loadNext();
}
</script>

I tried creating a fresh FileReader instance every time, but the problem persists. I suspect it may be caused by the circular nature of the pattern, but I'm not sure what other pattern could be used in this case.

I checked this code in both Firefox and Chrome, and Chrome seems to handle it more gracefully - it purges memory after each cycle and is very fast. The irony of the situation is that Chrome doesn't need this code at all. It's just an experiment to work around the FormData + Blob bug in Gecko 6 and earlier (Bug 649150 - Blobs do not have a filename if sent via FormData).
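
For context, the workaround targets uploads shaped roughly like the sketch below. This is not from the original question; uploadChunk, the '/upload' endpoint, and the chunk naming are all hypothetical. In affected Gecko versions the appended Blob arrived server-side without a filename; browsers that support the three-argument form of FormData.append let you supply one explicitly:

function uploadChunk(blob, index, done) {
    // hypothetical upload step for one chunk (illustrative only)
    var xhr = new XMLHttpRequest(),
        formData = new FormData();

    // In Gecko 6 and earlier (Bug 649150) this Blob has no filename on the server.
    // Browsers supporting the optional third argument can set one explicitly:
    formData.append('chunk', blob, 'chunk-' + index + '.bin');

    xhr.open('POST', '/upload'); // assumed endpoint
    xhr.onload = done;
    xhr.send(formData);
}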

asked Aug 21 '11 by jayarjo

2 Answers

Try it like this instead:

function sliceMe() {
    var file = document.getElementById('file').files[0],
        fr = new FileReader(),
        chunkSize = 2097152,
        chunks = Math.ceil(file.size / chunkSize),
        chunk = 0;

    function loadNext() {
       var start, end,
           blobSlice = File.prototype.mozSlice || File.prototype.webkitSlice;

       start = chunk * chunkSize;
       end = start + chunkSize >= file.size ? file.size : start + chunkSize;

       fr.onload = function() {
          ++chunk;
          //console.info(chunk);
       };
       fr.onloadend = function() {
          // only start the next read once the previous one has fully
          // finished, and stop once all chunks have been read
          if (chunk < chunks) {
             loadNext(); // shortcut here
          }
       };
       fr.readAsBinaryString(blobSlice.call(file, start, end));
    }

    loadNext();
}

Waiting for onloadend keeps your reads from overlapping... (Obviously, you can handle the increment a little more cleanly, but you get the idea...)
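
For reference, loadend fires only after a read has fully settled, success or failure, which is why waiting for it prevents overlap. A minimal standalone sketch of the standard FileReader event order (not from the original answer; the Blob constructor used here postdates it):

var r = new FileReader();
r.onloadstart = function() { console.log('loadstart'); }; // read begins
r.onload      = function() { console.log('load');      }; // success only
r.onerror     = function() { console.log('error');     }; // failure only
r.onloadend   = function() { console.log('loadend');   }; // always fires last
r.readAsBinaryString(new Blob(['hello']));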

answered by Craig

The bug has been marked as INVALID, since it turned out that I wasn't in fact reusing the FileReader object properly.

Here is a pattern which doesn't hog memory and CPU:

function sliceMe() {
    var file = document.getElementById('file').files[0],
        fr = new FileReader(), // a single FileReader, reused for every chunk
        chunkSize = 2097152,   // 2 MB per chunk
        chunks = Math.ceil(file.size / chunkSize),
        chunk = 0;

    function loadNext() {
       var start, end,
           blobSlice = File.prototype.mozSlice || File.prototype.webkitSlice;

       start = chunk * chunkSize;
       end = start + chunkSize >= file.size ? file.size : start + chunkSize;

       fr.onload = function() {
          if (++chunk < chunks) {
             //console.info(chunk);
             loadNext(); // shortcut here
          }
       };
       fr.readAsBinaryString(blobSlice.call(file, start, end));
    }

    loadNext();
}
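
For anyone reading this later: the vendor-prefixed mozSlice/webkitSlice and readAsBinaryString have since been superseded. A rough modern equivalent of the same single-reader pattern (not from the original answer), assuming the standard Blob.prototype.slice and readAsArrayBuffer, both of which postdate this question:

function sliceMeModern() {
    var file = document.getElementById('file').files[0],
        fr = new FileReader(), // one reader, reused for every chunk
        chunkSize = 2097152,   // 2 MB per chunk
        chunks = Math.ceil(file.size / chunkSize),
        chunk = 0;

    function loadNext() {
        var start = chunk * chunkSize,
            end = Math.min(start + chunkSize, file.size);

        fr.onload = function() {
            // fr.result is an ArrayBuffer here
            if (++chunk < chunks) {
                loadNext();
            }
        };
        // standard, unprefixed slice; readAsArrayBuffer avoids building
        // a large binary string in memory
        fr.readAsArrayBuffer(file.slice(start, end));
    }

    loadNext();
}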

Another bug report has been filed (https://bugzilla.mozilla.org/show_bug.cgi?id=681479), which is related, but it is not the culprit in this case.

Thanks to Kyle Huey for bringing this to my attention :)

answered by jayarjo