Is it possible to run a script on a virtual machine after Vagrant finishes provisioning all of them?

I am using Vagrant v1.5.1 to create a cluster of virtual machines (VMs). After all the VMs are provisioned, is it possible to run a single script on one of the machines? The script I want to run will set up passwordless SSH from one VM to all the other VMs.

For example, my nodes provisioned in Vagrant (CentOS 6.5) are as follows.

  • node1
  • node2
  • node3
  • node4

My Vagrantfile looks like the following.

Vagrant.configure("2") do |config|
  (1..4).each do |i|
    config.vm.define "node-#{i}" do |node|
      node.vm.box = "centos65"
      # ...omitted..
    end
  end
end

After all this is done, I need to run a script on node1 to enable passwordless SSH to node2, node3, and node4.

I know you can run scripts as each VM is being provisioned, but in this case, I want to run a script after all VMs are provisioned, since I need all VMs to be up and running to run this last script.

Is this possible in Vagrant?

I realized that I can also iterate backwards.

Vagrant.configure("2") do |config|
  4.downto(1).each do |i|
    config.vm.define "node-#{i}" do |node|
      node.vm.box = "centos65"
      # ...omitted..
      if i == 1
        node.vm.provision "shell" do |s|
          s.path = "/path/to/script.sh"
        end
      end
    end
  end
end

This will work great, but in reality I also need to set up passwordless SSH from node2 to node1, node3, and node4. In the approach above, this could only ever work for node1, not node2, since node1 will not yet be provisioned when node2's provisioning script runs.

If there's a Vagrant plugin that enables passwordless SSH between all nodes in my cluster, that would be even better.

asked Aug 01 '14 by Jane Wayne


4 Answers

The question is a year old, but I found it because I had the same problem, so here is the workaround I used to solve it; somebody might find it useful.

We need the "vagrant-triggers" plugin for this to work. The thing with Vagrant triggers is that they fire for every machine you are creating, but we want to act only at the moment ALL machines are up. We can do that by checking, on each up event, whether that event corresponds to the last machine being created:

Vagrant.configure("2") do |config|

  (1..$machine_count).each do |i| 

  config.vm.define vm_name = "w%d" % i do |worker|

   worker.vm.hostname = vm_name
   workerIP = IP
   worker.vm.network :private_network, ip: workerIP

   worker.trigger.after :up do
     if(i == $machine_count) then
       info "last machine is up"
       run_remote  "bash /vagrant/YOUR_SCRIPT.sh"
     end   
   end

  end
 end
end

This works for providers that do not support parallel execution in Vagrant (VirtualBox, VMware).
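If your provider does bring machines up in parallel, one way to keep the "last machine is up" check meaningful is to force a serial bring-up; vagrant up has a standard flag for that:

$ vagrant up --no-parallel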

answered by amarruedo


There is no hook in Vagrant for "run after all VMs are provisioned", so you would need to implement it yourself. A couple of options I can think of:

1: Run the SSH setup script after all VMs are running.

For example, if the script is named ssh_setup.sh and present in the shared folder:

$ for i in {1..4}; do vagrant ssh node-$i -c 'sudo /vagrant/ssh_setup.sh'; done

2: Use the same SSH keys for all hosts and set them up during provisioning

If all nodes share the same passphrase-less SSH key, you could copy the needed files (authorized_keys, id_rsa, etc.) into ~/.ssh during provisioning, for example as sketched below.
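A minimal sketch of that approach as a shell provisioner run on every node, assuming you pre-generated a passphrase-less key pair into a keys/ directory next to the Vagrantfile (the directory and script names are illustrative):

#!/bin/bash
# ssh_keys_provision.sh -- hypothetical provisioner run on each node.
# Assumes the host directory with the shared key pair is synced to /vagrant/keys.
install -d -m 700 -o vagrant -g vagrant /home/vagrant/.ssh
cp /vagrant/keys/id_rsa /home/vagrant/.ssh/id_rsa
cat /vagrant/keys/id_rsa.pub >> /home/vagrant/.ssh/authorized_keys
# sshd rejects key files with wrong ownership or permissions
chown vagrant:vagrant /home/vagrant/.ssh/id_rsa /home/vagrant/.ssh/authorized_keys
chmod 600 /home/vagrant/.ssh/id_rsa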

answered by BrianC


Adding an updated answer.

The vagrant-triggers plugin was merged into Vagrant 2.1.0 in May 2018.

We can simply use the only_on option from the trigger class.

Let's say we have the following configuration:

servers=[
  {:hostname => "net1",:ip => "192.168.11.11"},
  {:hostname => "net2",:ip => "192.168.22.11"},
  {:hostname => "net3",:ip => "192.168.33.11"}
]

We can now easily execute the trigger after the last machine is up:

# Take the hostname of the last machine in the array
last_vm = servers.last[:hostname]

Vagrant.configure(2) do |config|
    servers.each do |machine|
        config.vm.define machine[:hostname] do |node|

            # ----- Common configuration ----- #
            node.vm.box = "debian/jessie64"
            node.vm.hostname = machine[:hostname]
            node.vm.network "private_network", ip: machine[:ip]

            # ----- Adding trigger - only after last VM is UP ------ #
            node.trigger.after :up do |trigger|
                trigger.only_on = last_vm  # <---- Just use it here!
                trigger.info = "Running only after last machine is up!"
            end
        end
    end
end

And we can check the output and see that the trigger really fires only after "net3" is up:

==> net3: Setting hostname...
==> net3: Configuring and enabling network interfaces...
==> net3: Installing rsync to the VM...
==> net3: Rsyncing folder: /home/rotem/workspaces/playground/vagrant/learning-network-modes/testing/ => /vagrant
==> net3: Running action triggers after up ...
==> net3: Running trigger...
==> net3: Running only after last machine is up!
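
In the example above the trigger only prints a message. To actually run the passwordless-SSH setup script from the question, the trigger can also execute a command on the guest via run_remote (the script path here is illustrative):

node.trigger.after :up do |trigger|
    trigger.only_on = last_vm
    trigger.info = "Running only after last machine is up!"
    trigger.run_remote = {inline: "bash /vagrant/setup_passwordless_ssh.sh"}
end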
answered by RtmY


This worked pretty well for me: I used per-VM provision scripts, and in the last script I called a post-provision script via SSH on the first VM.

In Vagrantfile:

require 'fileutils'

Vagrant.require_version ">= 1.6.0"

$max_nodes = 2
$vm_name = "vm_prefix"

#...<skipped some lines that are not relevant to the case >...
Vagrant.configure("2") do |config|
  config.ssh.forward_agent = true
  config.ssh.insert_key    = false
  #ubuntu 16.04
  config.vm.box = "ubuntu/xenial64"

  (1..$max_nodes).each do |i|
    config.vm.define vm_name = "%s-%02d" % [$vm_name, i] do |config|
      config.vm.hostname = vm_name
      config.vm.network "private_network", ip: "10.10.0.%02d" % [i+20], :name => 'vboxnet2'
      config.vm.network :forwarded_port, guest: 22, host: "1%02d22" % [i+20], id: "ssh"
      config.vm.synced_folder "./shared", "/host-shared"
      config.vm.provider :virtualbox do |vb|
        vb.name = vm_name
        vb.gui = false
        vb.memory = 4096
        vb.cpus = 2
        vb.customize ["modifyvm", :id, "--cpuexecutioncap", "100"]
        vb.linked_clone = true
      end

      # Important part:
      config.vm.provision "shell", path: "common_provision.sh"
      config.vm.provision "shell", path: "per_vm_provision#{i}.sh"

    end
  end
end

On disk (ensure that post_provision.sh has at least owner-execute permissions, e.g. rwxr--r--):

vm$ ls /vagrant/
...<skipped some lines that are not relevant to the case >...
config.sh
common_provision.sh
per_vm_provision1.sh
per_vm_provision2.sh
per_vm_provision3.sh
...
per_vm_provisionN.sh
post_provision.sh
Vagrantfile
...<skipped some lines that are not relevant to the case >...

In config.sh:

  num_vm="2" # should equal the $max_nodes in Vagrantfile
  name_vm="vm_prefix" # should equal the $vm_name in Vagrantfile
  username="user1"
  userpass="abc123"
    ...<skipped some lines that are not relevant to the case >...

In common_provision.sh:

  source /vagrant/config.sh
  ...<skipped some lines that are not relevant to the case >...

  sed -r -i 's/\%sudo.*$/%sudo       ALL=(ALL:ALL) NOPASSWD:ALL/' /etc/sudoers
  sed -r -i 's/PasswordAuthentication no/PasswordAuthentication yes/' /etc/ssh/sshd_config
  service ssh reload

  # add user ${username}
  useradd --create-home --home-dir /home/${username}  --shell /bin/bash  ${username}
  usermod -aG admin ${username} 
  usermod -aG sudo ${username}
  /bin/bash -c "echo -e \"${userpass}\n${userpass}\" | passwd ${username}"

  # provision additional ssh keys
  # copy ssh keys from disk
  cp /vagrant/ssh/* /home/vagrant/.ssh
  cat /vagrant/ssh/id_rsa.pub >> /home/vagrant/.ssh/authorized_keys
  mkdir -p /home/${username}/.ssh
  cp /vagrant/ssh/* /home/${username}/.ssh
  cat /vagrant/ssh/id_rsa.pub >> /home/${username}/.ssh/authorized_keys
  # sshd rejects keys with wrong ownership/permissions, so fix them up
  chown -R ${username}:${username} /home/${username}/.ssh
  chmod 700 /home/${username}/.ssh
  chmod 600 /home/${username}/.ssh/id_rsa

# not required, just for convenience
 cat >> /etc/hosts <<EOF
10.10.0.21    ${name_vm}-01
10.10.0.22    ${name_vm}-02
10.10.0.23    ${name_vm}-03
...
10.10.0.2N    ${name_vm}-0N
EOF
  ...<skipped some lines that are not relevant to the case >...
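
The provisioning above assumes a shared key pair already exists in an ssh/ directory next to the Vagrantfile. One way to generate it once on the host beforehand (the key type and empty passphrase are my assumptions):

$ mkdir -p ssh
$ ssh-keygen -t rsa -b 2048 -f ssh/id_rsa -N ""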

In per_vm_provision2.sh:

#!/bin/bash

  # import variables from config
  source /vagrant/config.sh

  ...<skipped some lines that are not relevant to the case >...

 # check if this is the last provisioned vm
 if [ "x${num_vm}" = "x2" ] ; then
   ssh ${username}@10.10.0.21 -o StrictHostKeyChecking=no -- '/vagrant/post_provision.sh'
 fi

In per_vm_provisionN.sh:

#!/bin/bash

  # import variables from config
  source /vagrant/config.sh

  ...<skipped some lines that are not relevant to the case >...

 # check if this is the last provisioned vm. N represents the highest number
 if [ "x${num_vm}" = "xN" ] ; then
   ssh ${username}@10.10.0.21 -o StrictHostKeyChecking=no -- '/vagrant/post_provision.sh'
 fi

I hope I didn't skip anything important, but I think the idea is clear in general.

Note: SSH keys for inter-VM access are provisioned by Vagrant by default. You can add your own SSH keys if needed using common_provision.sh.

answered by VASャ