I want to deploy multiple ML models in different pods within the same namespace, but whenever I pull a new image from AWS ECR and deploy it with Helm, the currently running pod is terminated and a new one is created in its place. As a result I am unable to deploy multiple models; every deployment kills the previous pod and creates a new one.
helm upgrade --install tf-serving ./charts/tf-serving/ --namespace mlhub
OR
helm upgrade --recreate-pods --install tf-serving ./charts/tf-serving/ --namespace mlhub
tf-serving-8559fb87d-2twwl   1/1   Running       0   37s
tf-serving-8559fb87d-m6hgs   0/1   Terminating   0   45s
It kills the previous pod and creates a new one, even though the two models use different images with different tags.
A Deployment is meant to represent a single group of Pods fulfilling a single purpose together. You can have many Deployments working together in the virtual network of the cluster. To access a Deployment, which may consist of many Pods running on different nodes, you have to create a Service.
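As a rough sketch, a Service in front of a tf-serving Deployment could look like this; the resource name, label selector, and port number are assumptions (8501 is the REST port TensorFlow Serving commonly uses), so adjust them to your chart:

apiVersion: v1
kind: Service
metadata:
  name: tf-serving
spec:
  selector:
    app: tf-serving        # must match the labels on the Deployment's Pods
  ports:
    - port: 8501           # assumed TensorFlow Serving REST port
      targetPort: 8501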
A Helm upgrade or install usually requires the release name, the chart folder, and any other necessary flags. Run your Helm commands with an explicit -f or --values flag pointing to your environment-specific values file, and add any other application-specific Helm flags as required, for example as shown below.
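For instance, assuming you keep a values file per model (the file name values-model-a.yaml here is only an example):

helm upgrade --install tf-serving ./charts/tf-serving/ \
  --namespace mlhub \
  -f ./charts/tf-serving/values-model-a.yaml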
You can use one Helm chart to create multiple Releases. For example to deploy first model:
helm install ./charts/tf-serving/ --name tf-serving --namespace mlhub
And if you later want to add another one:
helm install ./charts/tf-serving/ --name tf-serving2 --namespace mlhub
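If both models are served from the same chart, you can point each release at its own image. This assumes your chart exposes image.repository and image.tag values, so adjust the names to match your values.yaml:

helm install ./charts/tf-serving/ --name tf-serving2 --namespace mlhub \
  --set image.repository=<ECR repository for model 2> \
  --set image.tag=<model 2 tag>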
Now when you run helm list you will see both tf-serving and tf-serving2.
Be aware that you cannot have multiple Kubernetes resources of the same Kind with the same name, so I would recommend using the {{ .Release.Name }} value in your chart as a prefix for all deployed resources.
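For example, a minimal Deployment template using the release name as a prefix might look like the following sketch; the file path, label names, and the image.repository/image.tag values are assumptions based on a typical chart layout:

# charts/tf-serving/templates/deployment.yaml (sketch)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-tf-serving
  labels:
    app: {{ .Release.Name }}-tf-serving
spec:
  replicas: 1
  selector:
    matchLabels:
      app: {{ .Release.Name }}-tf-serving
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}-tf-serving
    spec:
      containers:
        - name: tf-serving
          # image repository and tag come from the release's values file or --set flags
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"

With this, installing tf-serving and tf-serving2 from the same chart produces two independent Deployments (tf-serving-tf-serving and tf-serving2-tf-serving) instead of one overwriting the other.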