
heroku - Missing required arguments: aws_access_key_id, aws_secret_access_key, following Hartl tutorial

Running heroku run rake db:migrate, I get this error: Missing required arguments: aws_access_key_id, aws_secret_access_key.

I made the correction recommended in SO question 25596504, specifically renaming the file carrier_wave.rb to carrierwave.rb, with no luck. I am following the Hartl tutorial (page 688), which specifies adding the keys with $ heroku config:set S3_ACCESS_KEY=<access key>. I replaced <access key> with both quoted and unquoted versions of the actual key. The keys do show up when I run heroku config, e.g., S3_ACCESS_KEY:

The application was working before I began this section (11.4) of the tutorial, which covers uploading images. Incidentally, I know about the Figaro gem; however, I'd like to try following the tutorial's approach. What am I missing? Any thoughts would be appreciated. Thanks!

asked Jan 14 '15 by user3763682

4 Answers

Go to Heroku, open your application, go to Settings, and hit Reveal Config Vars.

Click on Edit on the right side and enter your secrets there:

S3_BUCKET: name of your bucket goes here
S3_ACCESS_KEY: xxxxx
S3_SECRET_KEY: xxxx
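
Once they're saved, it's worth confirming the app can actually see them. Here's a quick check from heroku run rails console (just a sketch, assuming the variable names above):

# prints which of the expected config vars the dyno can actually see
%w[S3_BUCKET S3_ACCESS_KEY S3_SECRET_KEY].each do |key|
  puts "#{key}: #{ENV[key] ? 'set' : 'MISSING'}"
end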

Then config/initializers/carrierwave.rb (or wherever you're reading your secrets) should have:

CarrierWave.configure do |config|
  config.root = Rails.root.join('tmp') # adding these...
  config.cache_dir = 'carrierwave'     # ...two lines

  config.fog_credentials = {
    :provider               => 'AWS',                        # required
    :aws_access_key_id      => ENV['S3_ACCESS_KEY'],         # required (fog expects the aws_ prefix)
    :aws_secret_access_key  => ENV['S3_SECRET_KEY'],         # required (fog expects the aws_ prefix)
    :region                 => 'eu-west-1',                  # optional, defaults to 'us-east-1'
    :host                   => 's3.example.com',             # optional, defaults to nil
    :endpoint               => 'https://s3.example.com:8080' # optional, defaults to nil
  }
  config.fog_directory  = ENV['S3_BUCKET']                          # required (must match the var name you set)
  config.fog_public     = false                                     # optional, defaults to true
  config.fog_attributes = {'Cache-Control'=>'max-age=315576000'}    # optional, defaults to {}
end
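
For context, the credentials are only half of it; the uploader class itself has to opt into fog storage in production. In the Hartl sample app that's a PictureUploader along these lines (a sketch; adjust the class name and store_dir to whatever your generator produced):

# app/uploaders/picture_uploader.rb
class PictureUploader < CarrierWave::Uploader::Base
  if Rails.env.production?
    storage :fog    # uses the fog_credentials from the initializer above
  else
    storage :file   # plain local storage in development/test
  end

  # standard CarrierWave storage path under the mounted model
  def store_dir
    "uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
  end
end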
answered by neo

Here is a tutorial I made after much work getting AWS to work with Heroku as described in Chapter 11 of the Ruby on Rails Tutorial by Michael Hartl. I hope it helps:

Getting the Rails Tutorial Sample App to work between Heroku and AWS was a huge pain in the ass. But I did it. If you found this tutorial, that means you're probably encountering an error you can't get past. That's fine. I had a few of them.

The first thing you need to do is go back over the code that Hartl provided. Make sure you typed it (or copy/pasted it) in exactly as shown. Out of all the code in this section, there is only one small addition you might need to make: the "region" environment variable. This is needed if you create a bucket that is not in the default US region. More on this later. Here is the code for /config/initializers/carrier_wave.rb:

if Rails.env.production?
  CarrierWave.configure do |config|
    config.fog_credentials = {
      # Configuration for Amazon S3
      :provider              => 'AWS',
      :aws_access_key_id     => ENV['S3_ACCESS_KEY'],
      :aws_secret_access_key => ENV['S3_SECRET_KEY'],
      :region                => ENV['S3_REGION']
    }
    config.fog_directory     =  ENV['S3_BUCKET']
  end
end

That line :region => ENV['S3_REGION'] is a problem for a lot of people. As you continue this tutorial you will learn what it's for.

You should be using that block of code exactly as shown. Do NOT put your actual keys in there. We'll send them to Heroku separately.
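
(One small tweak of my own, not from the tutorial: if your bucket is in the default US East region and you'd rather not rely on S3_REGION always being set, you can give the region an explicit fallback:)

# drop-in replacement for the fog_credentials hash inside CarrierWave.configure,
# with a default region as a safety net (my own variation, not Hartl's)
config.fog_credentials = {
  :provider              => 'AWS',
  :aws_access_key_id     => ENV['S3_ACCESS_KEY'],
  :aws_secret_access_key => ENV['S3_SECRET_KEY'],
  :region                => ENV.fetch('S3_REGION', 'us-east-1')
}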

Now let's move on to your AWS account and security.

  1. First of all, create your AWS account. For the most part, it is like signing up for any web site. Make a nice long password and store it someplace secure, like an encrypted password manager. When you make your account, you will be given your first set of AWS keys. You will not be using those in this tutorial, but you might need them at some point in the future so save those somewhere safe as well.
  2. Go to the S3 section and make a bucket. It has to have a unique name, so I usually just put the date on the end and that does it. For example, you might name it "my-sample-app-bucket-20160126". Once you have created your bucket, click on the name, then click on Properties. It's important for you to know what "Region" your bucket is in. Find it, and make a note of it. You'll use it later.
  3. Your main account probably has full permissions to everything, so let's not use that for transmitting random data between two web services. This could cost you a lot of money if it got out. We'll make a limited user instead. Make a new User in the IAM section. I named it "fog", because that's the cloud service software that handles the sending and receiving. When you create it, you will have the option of displaying and/or downloading the keys associated with the new user. It's important that you keep these in a safe and secure place. They do NOT go into your code, because that will probably end up in a repository where other people can see it. Also, don't give this new user a password, since it will not be logging into the AWS dashboard.
  4. Make a new Group. I called mine "s3railsbucket". This is where the permissions will be assigned. Add "fog" to this group.
  5. Go to the Policies section. Click "Create Policy" then select "Create Your Own Policy". Give it a name that starts with "Allow" so it will show up near the top of the list of policies. It's a huge list. Here's what I did:

Policy Name: AllowFullAccessToMySampleAppBucket20160126
Description: Allows remote write/delete access to S3 bucket named my-sample-app-bucket-20160126.
Policy Document:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": "s3:*",
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::my-sample-app-bucket-20160126",
                "arn:aws:s3:::my-sample-app-bucket-20160126/*"
            ]
        }
    ]
}
  6. Go back to the Group section, select the group you made, then add your new policy to the group.

That's it for AWS configuration. I didn't need to make a policy to allow "fog" to list the contents of the bucket, even though most tutorials I tried said that was necessary. I think it's only necessary when you want a user that can log in through the dashboard.
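
Before moving on to Heroku, you can sanity-check the new user's keys with fog directly (a rough sketch, not part of Hartl's steps; run it in rails console or irb, use 'fog' or 'fog/aws' in the require depending on which gem your Gemfile has, and substitute your own keys, region, and bucket name):

require 'fog/aws'  # or just 'fog' with the older all-in-one fog gem

# wrong keys or a missing policy will typically raise an Excon error here
storage = Fog::Storage.new(
  :provider              => 'AWS',
  :aws_access_key_id     => 'THERANDOMKEYYOUGOT',
  :aws_secret_access_key => 'an0tHeRstRing0frAnDomjUnK',
  :region                => 'us-west-2'
)

bucket = storage.directories.get('my-sample-app-bucket-20160126')
puts bucket ? 'credentials and bucket look good' : 'bucket not found'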

Now for the Heroku configuration. This stuff gets entered at your command prompt, just like 'heroku run rake db:migrate' and such. This is where you enter the actual Access Key and Secret Key you got from the "fog" user you created earlier.

$ heroku config:set S3_ACCESS_KEY=THERANDOMKEYYOUGOT  
$ heroku config:set S3_SECRET_KEY=an0tHeRstRing0frAnDomjUnK  
$ heroku config:set S3_REGION=us-west-2  
$ heroku config:set S3_BUCKET=my-sample-app-bucket-20160126

Look again at that last one. Remember when you looked at the Properties of your S3 bucket? This is where you enter the code associated with your region. If your bucket is not in Oregon, you will have to change us-west-2 to your actual region code. This link worked when this tutorial was written:

http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region

If that doesn't work, Google "AWS S3 region codes".

After doing all this and double-checking for mistakes in the code, I got Heroku to work with AWS for storage of pictures!
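
If you want to confirm it end to end, a quick smoke test from heroku run rails console looks something like this (a sketch; it assumes the tutorial's Micropost model with its mounted picture uploader, that at least one micropost exists, and the image path is just a placeholder for a small file that's actually on the dyno):

micropost = Micropost.first                                   # assumes seed/sample data exists
micropost.picture = File.open('app/assets/images/rails.png')  # hypothetical image path
micropost.save!
puts micropost.picture.url   # should be an S3 URL rather than a local /uploads/... path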

answered by Max Wilder


I think this error occurs because the variable names do not match what fog expects.

In carrierwave.rb, you should replace :s3_access_key_id and :s3_secret_access_key with the aws-prefixed names :aws_access_key_id and :aws_secret_access_key.


    if Rails.env.production?
      CarrierWave.configure do |config|
        config.root = Rails.root.join('tmp')
        config.cache_dir = 'carrierwave'

        config.fog_credentials = {
          # Configuration for Amazon S3
          :provider               => 'AWS',
          :aws_access_key_id      => ENV['S3_ACCESS_KEY'],   # aws_ prefix, not s3_
          :aws_secret_access_key  => ENV['S3_SECRET_KEY']    # aws_ prefix, not s3_
        }
        config.fog_directory     = ENV['S3_BUCKET']
      end
    end

And it can be deployed.

answered by Yuri Chow


For whatever reason, running rake assets:precompile RAILS_ENV=development fixed this for me.
[And the env vars don't need to be named S3_ACCESS_KEY etc.; I used aws.access_key_id.]
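
(Just to illustrate that point: the names only have to match between whatever you pass to heroku config:set and what the initializer reads. These particular names are made up:)

CarrierWave.configure do |config|
  config.fog_credentials = {
    :provider              => 'AWS',
    :aws_access_key_id     => ENV['MY_AWS_ACCESS_KEY'],   # arbitrary names, as long as they match
    :aws_secret_access_key => ENV['MY_AWS_SECRET_KEY']    # the config vars you actually set
  }
end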

answered by lakesare