When I run cap <stage> deploy --trace, I get:
$ bundle exec cap production deploy --trace
** Invoke production (first_time)
** Execute production
** Invoke load:defaults (first_time)
** Execute load:defaults
** Invoke rbenv:validate (first_time)
** Execute rbenv:validate
** Invoke rbenv:map_bins (first_time)
** Execute rbenv:map_bins
** Invoke bundler:map_bins (first_time)
** Execute bundler:map_bins
** Invoke deploy (first_time)
** Execute deploy
** Invoke deploy:starting (first_time)
** Invoke deploy:set_shared_assets (first_time)
** Execute deploy:set_shared_assets
** Execute deploy:starting
** Invoke deploy:check (first_time)
** Execute deploy:check
** Invoke git:check (first_time)
** Invoke git:wrapper (first_time)
** Execute git:wrapper
INFO [d204de77] Running /usr/bin/env mkdir -p /tmp/control-panel/ on 10.0.1.6
INFO [d204de77] Finished in 0.274 seconds with exit status 0 (successful).
INFO Uploading /tmp/prey-control-panel/git-ssh.sh 100.0%
INFO [a9e748c9] Running /usr/bin/env chmod +x /tmp/control-panel/git-ssh.sh on 10.0.1.6
INFO [a9e748c9] Finished in 0.274 seconds with exit status 0 (successful).
** Execute git:check
And it stops right there. I think the problem is related to the other public keys I have. I work in DevOps and have about five different keys that I use frequently.
Any ideas? Should I delete all my keys or something? :)
Thanks.
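For what it's worth, git:check appears to boil down to a git ls-remote run on the deploy server over the forwarded agent, so you can reproduce it by hand to see whether SSH auth is what hangs. The deploy user and repo URL below are placeholders; substitute your own:

ssh -A deploy@10.0.1.6                                           # -A forwards your local ssh-agent, like Capistrano does
git ls-remote git@bitbucket.org:example/control-panel.git HEAD   # hangs or fails here if SSH auth is the problem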
I solved the problem by removing my SSH keys and adding them back again. It looks like I had too many keys loaded in my ssh-agent:
ssh-add -D ; ssh-add ~/.ssh/id_rsa
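For context on why this works: the agent offers every loaded key in turn, and sshd drops the connection once MaxAuthTries (6 by default) is exceeded, so with five-plus keys the right one may never get tried. You can check what the agent holds with ssh-add -l. As an alternative to flushing the agent, you can pin a single key per host in ~/.ssh/config; the key path here is an example, use whichever key is registered with your Git host:

Host bitbucket.org
    IdentityFile ~/.ssh/id_rsa
    IdentitiesOnly yes        # offer only this key, not everything in the agent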
I had a similar problem and it turned out to be that I needed to add the SSH key from my server to Bitbucket. Weirdly, it had been working for a bit without having to do that.
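If you want to confirm the key is actually registered before deploying again, Bitbucket answers a plain SSH test connection; it should greet you with your username rather than hang:

ssh -T git@bitbucket.org          # prints "logged in as <username>" when the key is accepted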