
Allowing permission using S3FS bucket directory for other users

Tags:

s3fs

I'm having a problem using S3FS. I'm using:

ubuntu@ip-x-x-x-x:~$ /usr/bin/s3fs --version
Amazon Simple Storage Service File System 1.71

And I have the password file installed at /usr/share/myapp/s3fs-password with 600 permissions.
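
For reference, a password file like that can be set up along these lines (a sketch; the ACCESS_KEY_ID:SECRET_ACCESS_KEY line is s3fs's standard credential format, and the values here are placeholders):

echo 'ACCESS_KEY_ID:SECRET_ACCESS_KEY' | sudo tee /usr/share/myapp/s3fs-password
sudo chmod 600 /usr/share/myapp/s3fs-password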

I have successfully mounted the S3 bucket:

sudo /usr/bin/s3fs -o allow_other -opasswd_file=/usr/share/myapp/s3fs-password -ouse_cache=/tmp mybucket.example.com /bucket

And I have user_allow_other enabled in /etc/fuse.conf.
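
That is, /etc/fuse.conf contains this line uncommented:

user_allow_other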

When I tried creating a file in the bucket as root, it worked.

ubuntu@ip-x-x-x-x:~$ sudo su
root@ip-x-x-x-x:/home/ubuntu# cd /bucket
root@ip-x-x-x-x:/bucket# echo 'Hello World!' > test-`date +%s`.txt
root@ip-x-x-x-x:/bucket# ls
test-1373359118.txt

I checked the bucket mybucket.example.com's content and the file was successfully created.

But I was having difficulties writing into the directory /bucket as a different user.

root@ip-x-x-x-x:/bucket# exit
ubuntu@ip-x-x-x-x:~$ cd /bucket
ubuntu@ip-x-x-x-x:/bucket$ echo 'Hello World!' > test-`date +%s`.txt
-bash: test-1373359543.txt: Permission denied

In desperation, I tried chmod-ing test-1373359118.txt to 777, and then I could write to the file:

ubuntu@ip-x-x-x-x:/bucket$ sudo chmod 777 test-1373359118.txt
ubuntu@ip-x-x-x-x:/bucket$ echo 'Test' > test-1373359118.txt
ubuntu@ip-x-x-x-x:/bucket$ cat test-1373359118.txt
Test

Oddly, I could create a directory inside the bucket, chmod it to 777, and write a file there:

ubuntu@ip-x-x-x-x:/bucket$ sudo mkdir -m 1777 test
ubuntu@ip-x-x-x-x:/bucket$ ls
test  test-1373359118.txt
ubuntu@ip-x-x-x-x:/bucket$ cd test
ubuntu@ip-x-x-x-x:/bucket/test$ echo 'Hello World!' > test-`date +%s`.txt
ubuntu@ip-x-x-x-x:/bucket/test$ ls
test-1373360059.txt
ubuntu@ip-x-x-x-x:/bucket/test$ cat test-1373360059.txt
Hello World!

But then I tried

ubuntu@ip-x-x-x-x:~$ sudo chmod 777 /bucket
chmod: changing permissions of '/bucket': Input/output error

It didn't work.

Initially I was planning to use this /bucket directory to store large and rarely accessed files from my LAMP stacks running on several EC2 machines. (I think it's suitable enough for this without writing a special handling library using the AWS PHP SDK, but that's not the point.)

Because of that, I can settle for using a directory inside /bucket to store the files. But I'm just curious: is there a way to make the entire /bucket writable by other users?

Petra Barus asked Jul 09 '13 09:07



4 Answers

Permissions were an issue with older versions of S3FS. Upgrade to the latest version to get it working.

As already stated in the question itself and in other answers, you have to pass the following parameter when mounting: -o allow_other

Example:

s3fs mybucket:/ mymountlocation/ -o allow_other 

Also, before doing this, ensure the following is enabled in /etc/fuse.conf:

user_allow_other

It is disabled by default ;)
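
If you also want the bucket mounted automatically at boot, a rough /etc/fstab sketch with the same option (assuming a recent s3fs that provides the fuse.s3fs mount helper; the bucket name, mount point, and password file path are taken from the question):

mybucket.example.com /bucket fuse.s3fs _netdev,allow_other,passwd_file=/usr/share/myapp/s3fs-password,use_cache=/tmp 0 0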

codersofthedark answered Oct 08 '22 08:10

This works for me:

s3fs ec2downloads:/ /mnt/s3 -o use_rrs -o allow_other -o use_cache=/tmp

It must have been fixed in a recent version; I'm using the latest clone (1.78) from the GitHub project.

Chris answered Oct 08 '22 07:10

This is the only thing that worked for me:

You can pass the uid and gid options so that files in the mount are owned by your user:

    -o umask=0007,uid=1001,gid=1001 # replace 1001 with your ids

from: https://github.com/s3fs-fuse/s3fs-fuse/issues/673
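
Combined with the paths from the question, a full mount command might look roughly like this (a sketch; replace the bucket, paths, and ids with your own):

sudo s3fs mybucket.example.com /bucket -o passwd_file=/usr/share/myapp/s3fs-password -o allow_other -o umask=0007,uid=1001,gid=1001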

To find your uid and gid, look at the first two numbers in the output of:

sudo cat /etc/passwd | grep $USER
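
Alternatively, id prints them directly:

id -u   # your uid
id -g   # your gid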

user48956 answered Oct 08 '22 08:10

I would like to recommend taking a look at the new project RioFS (a userspace S3 filesystem): https://github.com/skoobe/riofs.

This project is an “s3fs” alternative; its main advantages compared to “s3fs” are simplicity, speed of operations, and bug-free code. Currently the project is in the “testing” state, but it has been running on several high-load file servers for quite some time.

We are looking for more people to join the project and help with testing. From our side we offer quick bug fixes and will listen to your requests for new features.

Regarding your issue: in order to run RioFS as the root user and allow other users r/w access to the mounted directory:

  1. make sure /etc/fuse.conf contains the user_allow_other option
  2. launch RioFS with the -o "allow_other" parameter.

The full command line to launch RioFS would look like:

sudo riofs -c /path/to/riofs.conf.xml http://s3.amazonaws.com mybucket.example.com /bucket

(Make sure you export both the AWSACCESSKEYID and AWSSECRETACCESSKEY environment variables or set them in the riofs.conf.xml configuration file.)
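
For example, exporting the credentials before launching could look like this (the key values are placeholders; -E tells sudo to keep the exported variables):

export AWSACCESSKEYID=YOUR_ACCESS_KEY_ID
export AWSSECRETACCESSKEY=YOUR_SECRET_ACCESS_KEY
sudo -E riofs -c /path/to/riofs.conf.xml http://s3.amazonaws.com mybucket.example.com /bucket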

Hope this helps, and we look forward to seeing you join our community!

Paul answered Oct 08 '22 08:10