I have developed a function to download a CSV file that is generated from the database. I have written a test for this and it works fine, but the problem is that the file is not deleted after the test run completes.
Question: will a file created using a storage fake be deleted automatically once the test run completes? If yes, it is not being deleted for me. Please look at my test function.
/* Test file */
public function testAmazonDailyPendingStatusReport()
{
    // creating factories
    Storage::fake('reportslocal');

    $dailyStatus = new DailyStatus(
        new FileWriter(),
        new Filesystem(),
        Storage::disk('reportslocal')
    );

    $fileExported = $dailyStatus->export();

    // continuing assertions
}
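For context, Laravel's faked disks expose assertion helpers, so the "continuing assertions" part could look something like this (a sketch only; the column names are hypothetical, and it assumes export() returns the stored file's relative path on the faked disk):

```php
// The faked disk supports assertExists()/assertMissing().
Storage::disk('reportslocal')->assertExists($fileExported);

// Read the file back and check the header row
// (these column names are made up for illustration).
$contents = Storage::disk('reportslocal')->get($fileExported);
$header   = str_getcsv(strtok($contents, "\n"));
$this->assertSame(['order_id', 'status', 'date'], $header);
```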
/* export function */
public function export()
{
    // fetch data from the database
    // create the file using SplFileObject
    // write the rows into it
    // store it on the 'reportslocal' disk
    // send an email to the client with this file attached
}
If the file is not deleted automatically, what should I do? Can I call Storage::disk('reportslocal')->delete($fileExported) in my test function? Is that the proper way?
What are the best assertions to check here? I have checked the file's existence, the number of columns, the column header sequence, the values, and the contents of the file. Is there anything I missed?
Please help me with this (priority is the Storage::fake() issue). Thanks in advance.
Storage::fake()
Storage::fake() is used to set up a directory on your local disk for your test suite to use. This keeps your tests from modifying your actual defined storage disks.
If, for example, your code uses the s3 disk, where all operations hit your configured AWS S3 bucket, you can call Storage::fake('s3'), and it will swap out your S3 cloud configuration for a simple local disk without having to modify the code you're testing at all.
Now, every time you call Storage::fake('reportslocal'), it clears out the files in the defined directory at the moment the method is called. However, nothing automatically clears out the files again once the test is complete.
If you want to empty the directory after your test is complete, you have a couple of options:
1. You can simply call Storage::fake('reportslocal') again at the end of your test. That will run the code that clears out the fake disk.
2. You can run the code that clears out your fake disk yourself:
(new \Illuminate\Filesystem\Filesystem)->cleanDirectory(Storage::disk('reportslocal')->path(''));
Careful here! If you run the above command but forgot to fake your disk first, you'll empty out your real disk. So, really, you'd be safer just calling Storage::fake('reportslocal') a second time at the end of your test.
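Putting the first option into practice, one pattern is to re-fake the disk in the test class's tearDown() so every test leaves the directory empty. A minimal sketch:

```php
// In your test class: re-faking the disk clears out
// any files the test wrote before the next test runs.
protected function tearDown(): void
{
    Storage::fake('reportslocal');

    parent::tearDown();
}
```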
Later edit:
Please read the following comment before you test S3 upload behavior: https://laracasts.com/discuss/channels/testing/how-do-you-testing-laravel-filesystem-with-aws?page=1&replyId=455104
As @Simon pointed out in a comment and @patricus in response, there are two points to keep in mind:
1. Storage::fake() will actually write files to <ROOT>/storage/framework/testing/disks/local (or, if we pass a name like Storage::fake('public'), it will place the files in <ROOT>/storage/framework/testing/disks/public). Even if we pass 's3' as the parameter, the files will still be written locally to <ROOT>/storage/framework/testing/disks/s3, so this does not help verify that the connection to S3 works or that the files are actually written to S3.
2. If you do want to hit S3, don't use fake(); instead, build the Storage with the necessary S3 configuration. It will write files to S3, and at the end of the test the files can be deleted. Ex:
$storage = Storage::build(config('filesystems.disks.s3'));
$storage->put('path/to/file.ext', 'content');
So, to be sure it will not overwrite any existing file in your real S3 bucket, I recommend having a separate bucket used only for testing. This separate bucket will also help if you have complex flows in the application (the Storage::build call above only helps inside the test function): you can set s3 as the default driver and run the tests with that configuration.
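One way to point the tests at a dedicated bucket is to override the environment variables in phpunit.xml. This is a sketch; the variable names assume Laravel's default filesystems.php configuration, and the bucket name is made up:

```xml
<!-- phpunit.xml: point the s3 disk at a test-only bucket.
     Variable names assume Laravel's default s3 disk config. -->
<php>
    <env name="FILESYSTEM_DISK" value="s3"/>
    <env name="AWS_BUCKET" value="my-app-testing"/>
</php>
```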