Paperclip is a great upload plugin for Rails. Storing uploads on the local filesystem or Amazon S3 seems to work well. I'd just as soon store files on the local filesystem, but S3 is required for this app because it will be hosted on Heroku.
How would I go about getting all of my uploads/attachments from S3 in a single zipped download?
Getting a zip of files from the local filesystem seems straightforward. It's getting the files from S3 that has me puzzled. I think it may have something to do with how rubyzip handles files referenced by URL. I've tried various approaches but can't seem to avoid errors.
format.zip {
  registrations_with_attachments = Registration.find_by_sql('SELECT * FROM registrations WHERE abstract_file_name NOT LIKE ""')
  headers['Cache-Control'] = 'no-cache'
  tmp_filename = "#{RAILS_ROOT}/tmp/tmp_zip_" <<
                 Time.now.to_f.to_s <<
                 ".zip"

  # rubyzip gem version 0.9.1
  # rdoc http://rubyzip.sourceforge.net/
  Zip::ZipFile.open(tmp_filename, Zip::ZipFile::CREATE) do |zip|
    # get all of the attachments
    # attempt to get files stored on S3
    # FAIL
    registrations_with_attachments.each { |e| zip.add("abstracts/#{e.abstract.original_filename}", e.abstract.url(:original, false)) }
    # => No such file or directory - http://s3.amazonaws.com/bucket/original/abstract.txt
    # Should note that these files in S3 bucket are publicly accessible. No ACL.

    # works with local storage. Thanks to Henrik Nyh
    # registrations_with_attachments.each { |e| zip.add("abstracts/#{e.abstract.original_filename}", e.abstract.path(:original)) }
  end

  send_data(File.open(tmp_filename, "rb+").read, :type => 'application/zip', :disposition => 'attachment', :filename => tmp_filename.to_s)
  File.delete tmp_filename
}
Zips S3 files: takes an Amazon S3 bucket folder and zips it to a stream, a local file, or local file fragments (multiple zip files broken up by a maximum number of files or size).
Amazon S3 is a service that enables you to store your data (referred to as objects) at massive scale. In this guide, you will create an Amazon S3 bucket (a container for data stored in S3), upload a file, retrieve the file, and delete the file.
To upload folders and files to an S3 bucket: sign in to the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3/. In the Buckets list, choose the name of the bucket that you want to upload your folders or files to, then choose Upload.
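The console steps above are one way to get test files into a bucket; roughly the same thing with the aws-sdk-s3 gem (the bucket name, key, region, and local file here are placeholders, not taken from the question) might look like:

require "aws-sdk-s3"

s3 = Aws::S3::Client.new(region: "us-east-1")

# Upload a local file into the bucket under the given key.
File.open("abstract.txt", "rb") do |file|
  s3.put_object(bucket: "my-bucket", key: "abstracts/abstract.txt", body: file)
end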
Setting up Paperclip: first install the ImageMagick dependency, which Paperclip uses to resize images after upload; download and install instructions are available on the ImageMagick website. Then add the gem and run bundle install to finish it up.
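For reference, a minimal Paperclip-with-S3 setup might look something like the sketch below. The storage options, credentials path, and attachment path are illustrative assumptions, not taken from the question; only the Registration model and its abstract attachment mirror the code above.

# Gemfile
gem "paperclip"
gem "aws-sdk"   # S3 storage backend (older Paperclip versions used the aws-s3 gem instead)

# app/models/registration.rb
class Registration < ActiveRecord::Base
  has_attached_file :abstract,
    :storage => :s3,
    :s3_credentials => "#{Rails.root}/config/s3.yml",
    :path => "abstracts/:id/:style/:filename"
end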
It's not an uncommon requirement to package files stored on S3 into a zip file so a user can download multiple files in a single archive. Maybe it's common enough that AWS will offer this functionality themselves one day.
Files that have been uploaded with Paperclip are stored in S3. However, metadata such as the file's name, location on S3, and last updated timestamp are all stored in the model's table in the database. Access the file's URL through the url method on the model's file attribute (avatar in this example).
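As a small illustration of that point (the User model and avatar attachment follow that example and are not from the question):

user = User.find(1)
user.avatar.url           # full URL of the stored file (an S3 URL here)
user.avatar.url(:thumb)   # URL for a specific style, if one is defined
user.avatar_file_name     # metadata columns live in the model's table
user.avatar_updated_at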
S3zipper is written in Go (Golang), and its main strength is automating the process of compressing files in Amazon S3 and sharing them. It supports both Zip and Tar, the most popular compression formats. All you need to do is make a few API calls, and the rest is taken care of on their end.
Your first idea might be to download the files from S3, zip them up, and upload the result. This works fine until you fill up /tmp with the temporary files. Memory is constrained as well (to 3 GB in that environment): you could hold the temporary files on the heap instead, but you run into the same limit.
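One way around the /tmp limit is to stream each object straight from S3 into the zip instead of writing per-file copies to disk. A rough sketch with the aws-sdk-s3 gem and a modern rubyzip (1.x+, where the class is Zip::OutputStream rather than Zip::ZipOutputStream) follows; the bucket name and keys are placeholders, and note that the finished zip is still buffered in memory here.

require "aws-sdk-s3"
require "zip"

s3   = Aws::S3::Client.new(region: "us-east-1")
keys = ["abstracts/one.txt", "abstracts/two.txt"]   # placeholder object keys

# Build the zip in a StringIO, writing each S3 object body directly into an entry.
zip_io = Zip::OutputStream.write_buffer do |zip|
  keys.each do |key|
    zip.put_next_entry(key)
    zip.write s3.get_object(bucket: "my-bucket", key: key).body.read
  end
end

File.open("bundle.zip", "wb") { |f| f.write(zip_io.string) }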
You almost certainly want to use e.abstract.to_file.path instead of e.abstract.url(...).

See: TempFile

From the Paperclip changelog:

New in 3.0.1:

- API CHANGE: #to_file has been removed. Use the #copy_to_local_file method instead.
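Applied to the question's loop, a sketch for Paperclip 3.0.1+ (keeping the rubyzip 0.9.x API from the question) might look like the following; the tempfile handling is an assumption about keeping the downloaded copies around until the zip is actually written.

require "tempfile"

tempfiles = []
Zip::ZipFile.open(tmp_filename, Zip::ZipFile::CREATE) do |zip|
  registrations_with_attachments.each do |e|
    # Download the S3 object to a local temp file, then add that local path to the zip.
    local = Tempfile.new("abstract")
    tempfiles << local   # keep a reference so the file isn't cleaned up before the zip is written
    e.abstract.copy_to_local_file(:original, local.path)
    zip.add("abstracts/#{e.abstract.original_filename}", local.path)
  end
end
tempfiles.each(&:close!)   # remove the temp copies once the zip exists

# On Paperclip versions before 3.0.1, e.abstract.to_file.path plays the same role
# as copy_to_local_file(:original, ...) above.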