
Setting up AWS EC2 instance with Tensorflow 2.0 -- AMI versus building it yourself?

I need to set up an AWS EC2 GPU instance with TensorFlow 2.0. All of the docs I have seen indicate that the current AWS AMIs only support TensorFlow 1.14 or 1.15, not TensorFlow 2.0. So I am wondering what the best way is to get tensorflow-gpu 2.0 onto an AWS instance.

I could create an EC2 GPU instance, install the NVIDIA drivers, and then run a TensorFlow 2.0 Docker container via nvidia-docker. Or is it easier to launch an AWS AMI with TensorFlow 1.14 and then upgrade to TensorFlow 2.0? It is not clear which approach makes more sense.

Any suggestions would be welcome.

asked Nov 08 '19 04:11 by krishnab


1 Answer

So I went through both routes. Right now I would say that setting up a Docker container with TensorFlow 2.0 is easier than upgrading from the AMI image.

For the Docker route, you can spin up an Ubuntu 18.04 instance with GPUs and then follow the steps below. This is only a high-level outline, but hopefully it is enough guidance to help someone get started.

  1. Start up the instance and install docker-ce. Make sure that network port 8888 is open for incoming connections in the instance's security group.

  2. Install the NVIDIA drivers for the particular GPU instance: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/install-nvidia-driver.html

  3. Install nvidia-docker from the NVIDIA GitHub repository. This lets the Docker container access the GPU drivers on the EC2 instance.

  4. Download and run the TensorFlow 2.0 container:

    docker run -it --gpus all --rm -v $(realpath ~/Downloads):/tf/notebooks -p 8888:8888 tensorflow/tensorflow:2.0.0-gpu-py3-jupyter

This should start a Jupyter notebook that the user can access from their own computer.
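The Docker route above can be condensed into a shell session along these lines. This is a sketch, not a tested script: it assumes Ubuntu 18.04, uses the Ubuntu-packaged `docker.io` rather than Docker's own repository for brevity, and the NVIDIA repository URLs follow NVIDIA's published nvidia-docker install instructions at the time.

```shell
# Step 1: install Docker (the docker.io package is the simplest route on Ubuntu 18.04;
# see docs.docker.com if you prefer the docker-ce repository)
sudo apt-get update
sudo apt-get install -y docker.io
sudo usermod -aG docker ubuntu   # let the default user run docker without sudo

# Step 2: install the NVIDIA drivers per the AWS guide linked above
# (driver package depends on the instance type, e.g. p2/p3/g4)

# Step 3: install the NVIDIA container toolkit so containers can reach the GPU
distribution=$(. /etc/os-release; echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | \
  sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit
sudo systemctl restart docker

# Step 4: run the TensorFlow 2.0 GPU Jupyter image, mounting a notebooks directory
docker run -it --gpus all --rm \
  -v $(realpath ~/Downloads):/tf/notebooks \
  -p 8888:8888 \
  tensorflow/tensorflow:2.0.0-gpu-py3-jupyter
```

The `--gpus all` flag requires the container toolkit from step 3; without it the container starts but TensorFlow falls back to CPU.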

If you want to do this through an AMI image, you basically have to launch the TensorFlow 1.14 image and then upgrade it. This is harder than it looks. Again, this is a high-level outline of the steps, but I tried to include links or code as best I could.

  1. Set up the Ubuntu 18.04 Deep Learning AMI (version 25.2) on the server.

  2. Update and upgrade Ubuntu:

    sudo apt-get update
    sudo apt-get upgrade

  3. Update the Anaconda distribution, since the preinstalled distribution ships a broken version of the conda package manager:

    conda update conda
    conda update --all

  4. Create a TensorFlow 2.0 conda environment:

    conda create -n tf2 python=3.7 tensorflow-gpu==2.0 cudatoolkit cudnn jupyter

  5. Initialize conda in the shell. You have to do this to use conda commands from the shell. You might need to exit the instance and then ssh back into it:

    conda init bash
    bash

  6. Install the environment_kernels package:

    pip install environment_kernels

  7. Install the Jupyter notebook extensions:

    conda install -c conda-forge jupyter_contrib_nbextensions

  8. Install the Jupyter server on the instance, following the instructions at: https://docs.aws.amazon.com/dlami/latest/devguide/setup-jupyter-config.html

  9. ssh into the instance with port forwarding and start the Jupyter server:

    ssh -N -f -L 8888:localhost:8888 ubuntu@aws-public-url

  10. Open a browser on your own computer and go to localhost:8888, which the ssh tunnel forwards to the Jupyter server on the instance.
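Whichever route you take, it is worth confirming that TensorFlow 2.0 actually sees the GPU before doing any real work. A minimal check, noting that in TF 2.0 the device-listing API still lives under `tf.config.experimental` (it moved to `tf.config` proper in later releases):

```python
import tensorflow as tf

print(tf.__version__)  # expect a 2.0.x release

# In TF 2.0 this call is still under the experimental namespace
gpus = tf.config.experimental.list_physical_devices('GPU')
print(gpus)  # empty list means the CUDA driver/toolkit linkage failed

# A small op on the GPU confirms the CUDA libraries load correctly;
# fall back to CPU so the check still runs on a non-GPU box
with tf.device('/GPU:0' if gpus else '/CPU:0'):
    x = tf.random.uniform((1000, 1000))
    y = tf.matmul(x, tf.transpose(x))
print(y.shape)
```

If the GPU list is empty inside the Docker container, the usual culprit is a missing `--gpus all` flag or an nvidia-container-toolkit install problem; in the conda route it usually means the `cudatoolkit`/`cudnn` versions do not match what `tensorflow-gpu==2.0` was built against.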

Hence I would say use the first (Docker) approach rather than the second, until Amazon releases a TensorFlow 2.0 AMI.

answered Oct 16 '22 15:10 by krishnab