Is fopen() limited by the filesystem?

Tags: php, fopen, fwrite

I wrote a program in PHP to generate large .sql files for quickly populating very large databases. I started out using fopen() and fwrite(), but when the files got too large the program would return control to the shell and the file would be incomplete.

Unfortunately I'm not sure exactly how large is 'too large'. I think it may have been around 4GB.

To work around this, I had the script echo its output to stdout instead, and redirected it when I called the program like so:

[root@localhost]$ php generatesql.php > myfile.sql

That worked like a charm; my output file ended up being about 10GB.
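For context, the generator boils down to something like this (a minimal sketch; the table name and row count here are made up for illustration):

<?php
// Minimal sketch: write the SQL statements to stdout instead of an
// fopen()'d file, so the shell handles the output file via redirection.
// Table name and row count are hypothetical.
echo "CREATE TABLE test_data (id INT PRIMARY KEY, payload VARCHAR(255));\n";

for ($i = 1; $i <= 100000000; $i++) {
    printf("INSERT INTO test_data VALUES (%d, '%s');\n", $i, md5($i));
}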

My question, then, is: Are fopen() and fwrite() limited by the filesystem in terms of how large a file they are capable of generating? If so, is this a limitation of PHP? Does this happen in other languages as well?

Asked by KeatsKelleher

1 Answer

What's probably occurring is that the underlying PHP build is 32-bit and can't handle file pointers beyond 4GB - see this related question.

Your underlying OS is obviously capable of storing large files, which is why you're able to redirect stdout to a large file.
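As a quick check of which kind of build you have, you can inspect the integer size of your PHP build; PHP_INT_SIZE is 4 on a 32-bit build and 8 on a 64-bit one (a minimal sketch):

<?php
// PHP_INT_SIZE is 4 bytes on a 32-bit build, 8 bytes on a 64-bit build.
if (PHP_INT_SIZE < 8) {
    echo "32-bit PHP build - fwrite() to a single file may fail past ~4GB\n";
} else {
    echo "64-bit PHP build - large file offsets are handled natively\n";
}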

Incidentally, an SQL file is likely to be highly compressible, so you might consider using the compress.zlib fopen wrapper to gzip the file as you write it:

$file = 'compress.zlib:///path/to/my/file.sql.gz';
$f = fopen($file, 'wb');

// just write as normal...
fwrite($f, 'CREATE TABLE foo (....)');

fclose($f);

Your dump will be a fraction of the original size, and you can restore it by simply piping the output of zcat into an SQL client, e.g. for mysql:

zcat /path/to/my/file.sql.gz | mysql mydatabase
Answered by Paul Dixon