
Permanently binding static IP to preemptible google cloud VM

For our project we need a static IP bound to our Google Cloud VM instance due to IP whitelisting. Since it's a preemptible instance in a managed instance group, the VM will be terminated every once in a while.

However, when it is preempted I see compute.instances.preempted in the operations log, directly followed by compute.instances.repair.recreateInstance with the note:

Instance Group Manager 'xxx' initiated recreateInstance on instance 'xxx'. Reason: instance's intent is RUNNING but instance's status is STOPPING.

After that, a delete and an insert operation follow in order to restore the instance.

The documentation states:

You can simulate an instance preemption by stopping the instance.

In that case the IP address stays attached when the VM is started again.
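For reference, that simulation is just an ordinary stop via gcloud (the instance name and zone below are placeholders):

# Stopping a preemptible VM behaves like a preemption for testing purposes.
gcloud compute instances stop my-preemptible-vm \
    --zone=us-east4-c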

A) So my question, is it possible to have the instance group manager stop and start the VM in the event of preemption, instead of recreating? Since recreating means that the static IP will be detached and needs to be manually attached each time.

B) If option A is not possible, how can I attach the static IP address automatically so that I don't have to attach it manually when the VM is recreated? I'd rather not have an extra NAT VM instance to take care of this problem.

Thanks in advance!

asked Oct 18 '17 by Jurrian



2 Answers

I figured out a workaround to this (specifically, keeping a static IP address assigned to a preemptible VM instance between recreations), with the caveat that your managed instance group has the following properties:

  1. Not autoscaling.
  2. Max group size of 1 (i.e. there is only ever meant to be one VM in this group).
  3. Autohealing is left at the default (i.e. it only recreates VMs after they are terminated).

The steps you need to follow are:

  1. Reserve a static IP.
  2. Create an instance template, configured as preemptible.
  3. Create your managed group, assigning your template to the group.
  4. Wait for the group to spin up your VM.
  5. After the VM has spun up, assign the static IP that you reserved in step 1 to the VM (a gcloud sketch of steps 1-5 follows this list).
  6. Create a new instance template derived from the VM instance via gcloud (see https://cloud.google.com/compute/docs/instance-templates/create-instance-templates#gcloud_1).
  7. View the newly created instance template in the Console, and note that your external IP is assigned to the template.
  8. Update the MiG (Managed Instance Group) to use the new template, created in step 6.
  9. Perform a proactive rolling update on the MiG using the Replace method.
  10. Confirm that your VM was recreated with the same name, the disks were preserved (or not, depending on how you configured the disks in your original template), and the VM has maintained its IP address.
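
As referenced in step 5, the gcloud equivalents of steps 1 through 5 look roughly like this. This is a sketch only: the resource names, region, zone and machine type are placeholders, and the template flags are trimmed to a minimum. Note that the MiG generates the actual VM name, so look it up before running the step 5 commands.

# Step 1: reserve a regional static external IP.
gcloud compute addresses create my-static-ip \
    --region=us-east4

# Step 2: create a preemptible instance template (minimal flags shown).
gcloud compute instance-templates create vm-template-initial \
    --machine-type=n1-standard-1 \
    --preemptible

# Step 3: create the managed instance group with exactly one instance.
gcloud compute instance-groups managed create my-mig \
    --zone=us-east4-c \
    --size=1 \
    --template=vm-template-initial

# Step 5: swap the VM's ephemeral external IP for the reserved one.
# The access config name may differ; check it with
# 'gcloud compute instances describe <vm-name> --zone=us-east4-c'.
gcloud compute instances delete-access-config my-vm \
    --zone=us-east4-c \
    --access-config-name="External NAT"
gcloud compute instances add-access-config my-vm \
    --zone=us-east4-c \
    --access-config-name="External NAT" \
    --address=$(gcloud compute addresses describe my-static-ip \
        --region=us-east4 --format='value(address)')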

Regarding step 6, my gcloud command looked like this:

gcloud compute instance-templates create vm-template-with-static-ip \
    --source-instance=source-vm-id \
    --source-instance-zone=us-east4-c
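
The rolling replace in steps 8 and 9 can also be done with gcloud instead of the Console. Again a sketch; the group name, zone and update limits are assumptions for a single-VM group:

# Proactively replace the single VM in place with the new template.
# replacement-method=recreate keeps the instance name (see step 10);
# with no surge allowed, max-unavailable must be at least 1.
gcloud compute instance-groups managed rolling-action start-update my-mig \
    --zone=us-east4-c \
    --version=template=vm-template-with-static-ip \
    --type=proactive \
    --replacement-method=recreate \
    --max-surge=0 \
    --max-unavailable=1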

Almost goes without saying, this sort of setup is only useful if you want to:

  1. Minimize your costs by using a single preemptible VM.
  2. Not have to deal with the hassle of turning on a VM again after it's been preempted, ensuring as much uptime as possible.

If you don't mind turning the VM back on manually (and possibly not being aware it's been shut down for who knows how long) after it has been preempted, then do yourself a favor, skip the MiG, and just stand up the single VM.

answered Oct 14 '22 by Lester Peabody


I've found one way to ensure that all VMs in your network have the same outgoing IP address. Using Cloud NAT you can assign a static IP which all VMs will use. There is a downside, though:

GCP forwards traffic using Cloud NAT only when there are no other matching routes or paths for the traffic. Cloud NAT is not used in the following cases, even if it is configured:

  • You configure an external IP on a VM's interface.

    If you configure an external IP on a VM's interface, IP packets with the VM's internal IP as the source IP will use the VM's external IP to reach the Internet. NAT will not be performed on such packets. However, alias IP ranges assigned to the interface can still use NAT because they cannot use the external IP to reach the Internet. With this configuration, you can connect directly to a GKE VM via SSH, and yet have the GKE pods/containers use Cloud NAT to reach the Internet.

    Note that making a VM accessible via a load balancer external IP does not prevent a VM from using NAT, as long as the VM network interface itself does not have an external IP address.

Removing the VM's external IP also prevents direct SSH access to the VM, even SSH access from the gcloud console itself. The quote above shows an alternative with a load balancer; another option is a bastion host, but neither directly solves access from, for example, Kubernetes/kubectl.

If that's no problem for you, this is the way to go.
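
Setting this up with gcloud looks roughly like the following. This is a sketch under assumed names: the region, network and address name are placeholders, and it only applies to outgoing traffic from VMs without an external IP (see the quote above).

# Reserve the static IP that all VMs will use for outgoing traffic.
gcloud compute addresses create nat-static-ip \
    --region=us-east4

# Cloud NAT requires a Cloud Router in the same region and network.
gcloud compute routers create nat-router \
    --region=us-east4 \
    --network=default

# Create the NAT gateway bound to the reserved address.
gcloud compute routers nats create nat-gateway \
    --router=nat-router \
    --router-region=us-east4 \
    --nat-external-ip-pool=nat-static-ip \
    --nat-all-subnet-ip-ranges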

answered Oct 14 '22 by Jurrian