
How could I store uploaded images to AWS S3 on PHP

I'm on an EC2 instance and I want to connect my PHP website to my Amazon S3 bucket. I have already looked at the PHP SDK here: http://aws.amazon.com/sdkforphp/ but it's not clear to me.

This is the code line I need to edit in my controller:

$thisFu['original_img']='/uploads/fufu/'.$_POST['cat'].'/original_'.uniqid('fu_').'.jpg';

I need to connect to Amazon S3 and be able to change the code like this:

$thisFu['original_img']='my_s3_bucket/uploads/fufu/'.$_POST['cat'].'/original_'.uniqid('fu_').'.jpg';

I already configured an IAM user for this purpose, but I don't know all the steps needed to accomplish the job.

How could I connect and interact with Amazon S3 to upload and retrieve public images?

UPDATE

I decided to try using s3fs as suggested, so I installed it as described here (my OS is Ubuntu 14.04).

I ran from the console:

sudo apt-get install build-essential git libfuse-dev libcurl4-openssl-dev libxml2-dev mime-support automake libtool
sudo apt-get install pkg-config libssl-dev
git clone https://github.com/s3fs-fuse/s3fs-fuse
cd s3fs-fuse/
./autogen.sh
./configure --prefix=/usr --with-openssl
make
sudo make install

Everything was properly installed, but what's next? Where should I declare my credentials, and how can I use this integration in my project?

2nd UPDATE

I created a file called .passwd-s3fs with a single line containing my IAM credentials in the form accessKeyId:secretAccessKey.

I placed it into my /home/ubuntu directory and gave it 600 permissions with chmod 600 ~/.passwd-s3fs

Next, from the console I ran /usr/bin/s3fs My_S3bucket /uploads/fufu

Inside /uploads/fufu there are now all my bucket folders. However, when I try this command:

s3fs -o nonempty allow_other My_S3bucket /uploads/fufu

I get this error message:

s3fs: unable to access MOUNTPOINT My_S3bucket : No such file or directory

3rd UPDATE

As suggested, I ran fusermount -u /uploads/fufu; after that I checked the fufu folder and it is empty, as expected. Then I tried the command again (with one more -o):

s3fs -o nonempty -o allow_other My_S3bucket /uploads/fufu

and got this error message:

fusermount: failed to open /etc/fuse.conf: Permission denied
fusermount: option allow_other only allowed if 'user_allow_other' is set in /etc/fuse.conf

Any other suggestion?

4th UPDATE 18/04/15

Following a suggestion, from the console I ran sudo usermod -a -G fuse ubuntu and sudo vim /etc/fuse.conf, where I uncommented mount_max = 1000 and user_allow_other

Then I ran s3fs -o nonempty -o allow_other My_S3bucket /uploads/fufu

At first sight there were no errors, so I thought everything was fine, but it's exactly the opposite.

I'm a bit frustrated now, because I don't know what happened, but my folder /uploads/fufu is inaccessible, and using ls -Al I see only this:

d????????? ? ?        ?              ?            ? fufu

I cannot sudo rm -r or rm -rf or mv it; it says that /uploads/fufu is a directory.

I tried to reboot, exit, and mount -a, but nothing changed.

I tried to unmount using fusermount, and the error message is: fusermount: entry for /uploads/fufu not found in /etc/mtab

But with sudo vim /etc/mtab I found this line: s3fs /uploads/fufu fuse.s3fs rw,nosuid,nodev,allow_other 0 0

Could someone tell me how I can unmount and finally remove this folder /uploads/fufu?

asked Sep 28 '22 by NineCattoRules

2 Answers

To give you a little more clarity, since you are a beginner: download the AWS SDK via this Installation Guide

Then set up your AWS S3 client on your PHP webserver with this snippet:

use Aws\S3\S3Client;

$client = S3Client::factory(array(
    'profile' => '<profile in your aws credentials file>',
));

If you would like more information on how to use AWS credentials files, head here.
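If you go the credentials-file route, the profile referenced above lives in ~/.aws/credentials. A minimal sketch (the profile name and both keys here are placeholders, not real credentials):

```ini
[my_profile]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```

The profile name in square brackets is what you pass as the 'profile' option to S3Client::factory().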

Then to upload a file that you have on your own PHP server:

$result = $client->putObject(array(
    'Bucket'     => $bucket,
    'Key'        => 'data_from_file.txt',
    'SourceFile' => $pathToFile,
    'Metadata'   => array(
        'Foo' => 'abc',
        'Baz' => '123',
    ),
));

If you are interested in learning how to handle image uploads in PHP, I would recommend looking at this W3Schools tutorial. It can help you get off the ground with saving the file locally in a temporary directory on your server before it gets uploaded to your S3 bucket.
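To show how those pieces fit together, here is a rough sketch of a PHP upload handler that passes the temporary upload file straight to putObject. The form field name `uploadedfile`, the profile name, and the bucket name are assumptions for illustration, not from the question:

```php
<?php
// Sketch: receive an uploaded image and push it to S3.
// Assumes the AWS SDK for PHP v2 is installed via Composer.
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$client = S3Client::factory(array('profile' => 'my_profile')); // hypothetical profile

// PHP stores the upload in a temporary file first
$tmpPath = $_FILES['uploadedfile']['tmp_name'];

$result = $client->putObject(array(
    'Bucket'     => 'my_s3_bucket', // assumed bucket name
    'Key'        => 'uploads/fufu/'.uniqid('fu_').'.jpg',
    'SourceFile' => $tmpPath,
    'ACL'        => 'public-read',
));
```

In a real handler you would also validate the upload (check $_FILES['uploadedfile']['error'] and the MIME type) before sending it to S3.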

answered Oct 06 '22 by Erik

Despite claims that "s3fs is very reliable in recent builds", I can share my own experience with s3fs: after periodic random system crashes, we moved write operations from direct access to the s3fs-mounted folder to the AWS CLI (the SDK API is also a possible way).

You possibly won't have any problem with small files like images, but it certainly caused problems when we tried to write mp4 files. The last message in the log before a system crash was:

kernel: [ 9180.212990] s3fs[29994]: segfault at 0 ip 000000000042b503 sp 00007f09b4abf530 error 4 in s3fs[400000+52000]

These were rare, random cases, but they made the system unstable.

So we decided to keep s3fs mounted, but to use it only for read access.

Below I show how to mount s3fs with IAM credentials, without a password file:

#!/bin/bash -x
S3_MOUNT_DIR=/media/s3
CACHE_DIR=/var/cache/s3cache

wget http://s3fs.googlecode.com/files/s3fs-1.74.tar.gz
tar xvfz s3fs-1.74.tar.gz
cd s3fs-1.74
./configure
make
make install

mkdir $S3_MOUNT_DIR
mkdir $CACHE_DIR

chmod 0755 $S3_MOUNT_DIR
chmod 0755 $CACHE_DIR

export IAMROLE=`curl http://169.254.169.254/latest/meta-data/iam/security-credentials/`

/usr/local/bin/s3fs $S3_BUCKET $S3_MOUNT_DIR  -o iam_role=$IAMROLE,rw,allow_other,use_cache=$CACHE_DIR,uid=222,gid=500
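If you want the mount from the script above to survive reboots, an /etc/fstab entry along these lines should work (the bucket name, role name, and paths are illustrative, and you should check which options your s3fs version supports):

```
My_S3bucket /media/s3 fuse.s3fs _netdev,allow_other,iam_role=YourInstanceRole,use_cache=/var/cache/s3cache 0 0
```

The `_netdev` option tells the system to wait for the network before attempting the mount.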

Also, you will need to create an IAM role assigned to the instance with this policy attached:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "S3",
            "Effect": "Allow",
            "Action": ["s3:*"],
            "Resource": "*"
        }
    ]
}

In your case, it seems reasonable to use the PHP SDK (the other answer already has a usage example), but you can also write images to S3 with the AWS CLI:

aws s3 cp /path_to_image/image.jpg s3://your_bucket/path

If you have an IAM role created and assigned to your instance, you won't need to provide any additional credentials.

Update - answer to your question:

  • Don't I need to include the factory method to declare my IAM credentials?

Yes, if you have an IAM role assigned to the EC2 instance, then in code you just need to create the client like this:

$s3Client = S3Client::factory();
$bucket = 'my_s3_bucket';
$keyname = $_POST['cat'].'/original_'.uniqid('fu_').'.jpg';
$localFilePath = '/local_path/some_image.jpg';

$result = $s3Client->putObject(array(
    'Bucket'      => $bucket,
    'Key'         => $keyname,
    'SourceFile'  => $localFilePath,
    'ACL'         => 'public-read',
    'ContentType' => 'image/jpeg',
));
unlink($localFilePath);

Option 2: if you do not need a local storage stage, you can put the file directly from the upload form:

$s3Client = S3Client::factory();
$bucket = 'my_s3_bucket';
$keyname = $_POST['cat'].'/original_'.uniqid('fu_').'.jpg';
$dataFromFile = file_get_contents($_FILES['uploadedfile']['tmp_name']);

$result = $s3Client->putObject(array(
    'Bucket' => $bucket,
    'Key'    => $keyname,
    'Body'   => $dataFromFile,
    'ACL'    => 'public-read',
));

And to get the S3 link, if the object has public access:

$publicUrl = $s3Client->getObjectUrl($bucket, $keyname);

Or generate a signed URL for private content:

$validTime = '+10 minutes';
$signedUrl = $s3Client->getObjectUrl($bucket, $keyname, $validTime);
answered Oct 06 '22 by Evgeniy Kuzmin