Laravel: download file from PHP output buffer vs. private storage folder | security

A user can download query results in CSV format. The file is small (a few KB), but the contents are important.

The first approach is to stream the CSV through the PHP output buffer (php://output):

$callback = function () use ($result, $columns) {
    // Write straight to the response body instead of a file on disk
    $file = fopen('php://output', 'w');
    fputcsv($file, $columns);

    foreach ($result as $res) {
        fputcsv($file, [$res->from_user, $res->to_user, $res->message, $res->date_added]);
    }
    fclose($file);
};

return response()->stream($callback, 200, $headers);
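
($headers is not shown in the question; a minimal sketch of what it might contain, purely illustrative:)

// Hypothetical example of the omitted $headers array
$headers = [
    'Content-Type'        => 'text/csv',
    'Content-Disposition' => 'attachment; filename="results.csv"',
];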

The second approach is to define a new private disk in Laravel's storage system and download the file from there. You could even delete the file after the download:

// in config/filesystems.php, under 'disks'
'csv' => [
    'driver' => 'local',
    'root' => storage_path('csv'),
    'visibility' => 'private',
],

Here is the create/download code:

// storage_path('csv/file.csv') resolves to the same location as the disk root above
$file = fopen(storage_path('csv/file.csv'), 'w');
fputcsv($file, $columns);

foreach ($result as $res) {
    fputcsv($file, [$res->from_user, $res->to_user, $res->message, $res->date_added]);
}
fclose($file);

return response()->make(Storage::disk('csv')->get('file.csv'), 200, $headers);

This variant deletes the file as soon as the download response has been sent:

return response()->download(Storage::disk('csv')->path('file.csv'))
    ->deleteFileAfterSend(true);

What would be more secure? Which is the better approach? I am currently leaning towards the second approach, with the storage disk.

Roman asked Sep 13 '19

3 Answers

Option 1

Reasons:

  • you are not keeping the file, so persisting it to disk is of limited use
  • the data size is small, so download failures are unlikely, and if they do happen, the processing time to recreate the output is minimal (I assume it's a quick SQL query behind the scenes?)
  • keeping the file in storage creates opportunities for it to replicate; an incremental backup or rsync job that you set up in the future could copy the sensitive files before they get deleted...
  • deleting the file from the filesystem does not necessarily make the data unrecoverable

If you were dealing with files that are tens/hundreds of MB, I'd be thinking differently...
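
As a side note, Laravel also ships response()->streamDownload(), which wraps the same callback approach and sets the download headers for you; a minimal sketch (the filename and Content-Type are illustrative):

// Same streaming idea as option 1, but Laravel builds the Content-Disposition header
return response()->streamDownload(function () use ($result, $columns) {
    $out = fopen('php://output', 'w');
    fputcsv($out, $columns);
    foreach ($result as $res) {
        fputcsv($out, [$res->from_user, $res->to_user, $res->message, $res->date_added]);
    }
    fclose($out);
}, 'results.csv', ['Content-Type' => 'text/csv']);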

tanerkay answered Nov 17 '22

Let's think about all the options.

Option 1 is a good solution because you are not storing the file, so it is more secure than the others. But timeouts can be a problem under high traffic.

Option 2 is also a good solution, as long as you delete the file. But you need to create the files with unique names so that parallel downloads don't collide (see the sketch below).
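
A minimal sketch of one way to generate unique names, using Laravel's Str helper (the 'csv' disk is from the question; $csvContents and the filename pattern are illustrative):

use Illuminate\Support\Str;
use Illuminate\Support\Facades\Storage;

// A UUID per request prevents two users from overwriting each other's file
$filename = 'export-' . Str::uuid() . '.csv';
Storage::disk('csv')->put($filename, $csvContents);

return response()->download(Storage::disk('csv')->path($filename))
    ->deleteFileAfterSend(true);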

Option 3 is like option 2, but if you are using Laravel, don't use it. (And think about what happens when two people download at the same time.)

After this explanation: work on option 1 to make it more secure if you are running a single server. But if you are using microservices, you should work on option 2.

I can suggest one more thing to make it secure: create a unique hashed URL. For example, take a timestamp, encrypt it with Laravel, and validate it before serving the download. That way people can't download the file again from their browser's download history.

https://example.com/download?hash={crypt(timestamp+1min)}

If the file is not downloaded within 1 minute, the URL expires.
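
Laravel's temporary signed URLs implement this pattern out of the box, so you don't have to roll your own hash; a minimal sketch (the route name and controller are illustrative):

use Illuminate\Support\Facades\URL;

// Generate a link that stops validating after one minute
$url = URL::temporarySignedRoute('csv.download', now()->addMinute(), ['file' => $filename]);

// routes/web.php -- the 'signed' middleware rejects expired or tampered links
Route::get('/download/{file}', [DownloadController::class, 'show'])
    ->name('csv.download')
    ->middleware('signed');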

Görkem D. answered Nov 17 '22


I think the answer depends on your current architecture and the size of the file to download.

The 1st approach is applicable when:

  • the files are small (less than 10 MB) -- thanks @tanerkay

  • you have a simple architecture (e.g. one server)

Reasons:

  • no download failures -- no need to retry

  • keep it simple

  • no files = no backups, no rsync, and no additional places for the data to be stolen from


The 2nd approach is applicable when:

  • your files are big (10+ MB)

  • you already have a microservices architecture with multiple load balancers -- keep the similarity

  • you have millions of users trying to download -- you just can't serve them without a load balancer and parallel downloads

Reasons:

  • The second approach is definitely more SCALABLE, and so more stable under high load, and therefore more secure. Microservices are more time-consuming to build but scale better under heavy load.

  • Using separate file storage allows you, in the future, to add a dedicated file server, a load balancer, a queue manager, and separate dedicated access control.

  • If the content is important, it usually means that receiving it is very important for the user. But direct output with headers can hang or hit a timeout error. Keeping the file until it has been downloaded is a much more reliable way of delivering it, I think.

Still, I would consider an expiration time instead of, or in addition to, the download event -- the download can fail, or the file can be lost (so ensure 1+ hour of availability), or conversely the user may try to download it only after a year, or never -- so why keep the file for more than N days? A cleanup sketch follows below.
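
A minimal sketch of such a cleanup as a scheduled task (the 'csv' disk is from the question; the 7-day retention is illustrative):

use Illuminate\Support\Facades\Storage;

// In app/Console/Kernel.php -- purge exports older than 7 days, once a day
$schedule->call(function () {
    $disk = Storage::disk('csv');
    foreach ($disk->files() as $file) {
        if ($disk->lastModified($file) < now()->subDays(7)->getTimestamp()) {
            $disk->delete($file);
        }
    }
})->daily();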

Eugene Kaurov answered Nov 17 '22