The Kubernetes "Run to Completion" documentation says that Jobs can be run in parallel, but is it possible to chain together a series of Jobs that should run in sequential order (parallel and/or non-parallel)?
https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/
Or is it up to the user to keep track of which Jobs have finished and trigger the next Job using a PubSub messaging service?
To execute and manage a batch task on your cluster, you can use a Kubernetes Job. You can specify the maximum number of Pods that should run in parallel, as well as the number of Pods that must complete successfully before the Job is considered finished.
A Job creates one or more Pods and will continue to retry execution of the Pods until a specified number of them terminate successfully. As Pods complete successfully, the Job tracks the successful completions.
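For example, here is a minimal sketch using the official `kubernetes` Python client to create a Job with `parallelism` and `completions` set (the Job name, image, and command are made up for illustration):

```python
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running inside a Pod
batch = client.BatchV1Api()

job = client.V1Job(
    metadata=client.V1ObjectMeta(name="batch-demo"),  # hypothetical name
    spec=client.V1JobSpec(
        parallelism=2,    # at most 2 Pods run at the same time
        completions=4,    # the Job is finished after 4 Pods succeed
        backoff_limit=3,  # retry failed Pods up to 3 times
        template=client.V1PodTemplateSpec(
            spec=client.V1PodSpec(
                restart_policy="Never",
                containers=[
                    client.V1Container(
                        name="worker",
                        image="busybox",
                        command=["sh", "-c", "echo processing item && sleep 5"],
                    )
                ],
            )
        ),
    ),
)
batch.create_namespaced_job(namespace="default", body=job)
```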
A Deployment is intended to be a "service", i.e. it should stay up and running, so it will restart the Pods it manages to match the desired number of replicas, whereas a Job is intended to execute and then terminate successfully.
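Because a Job runs to completion, the simplest way to sequence Jobs yourself without any extra infrastructure is to create one, poll its status until it succeeds, and only then create the next. A rough sketch with the same Python client (`job_a`, `job_b`, `job_c` are hypothetical pre-built `V1Job` objects like the one above):

```python
import time
from kubernetes import client, config

config.load_kube_config()
batch = client.BatchV1Api()

def wait_for_job(name, namespace="default", poll_seconds=5):
    """Block until the Job succeeds; raise if it reports a Failed condition."""
    while True:
        status = batch.read_namespaced_job(name, namespace).status
        if status.succeeded:
            return
        for cond in status.conditions or []:
            if cond.type == "Failed" and cond.status == "True":
                raise RuntimeError(f"Job {name} failed: {cond.message}")
        time.sleep(poll_seconds)

# Run the Jobs strictly in sequence: create, wait for success, create the next.
for manifest in (job_a, job_b, job_c):
    batch.create_namespaced_job(namespace="default", body=manifest)
    wait_for_job(manifest.metadata.name)
```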
In Kubernetes, Pods can communicate with each other in a few different ways: containers in the same Pod can connect to each other via localhost plus the port number exposed by the other container, and a container in one Pod can connect to another Pod using that Pod's IP address.
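As a small illustration of the first case, here is a sketch (names and images are hypothetical) of a two-container Pod in which the second container reaches the first over localhost, built with the same Python client:

```python
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

# Both containers share the Pod's network namespace, so "probe" can reach
# "web" on localhost:8080 without any Service or Pod IP lookup.
pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="two-containers"),  # hypothetical name
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="web",
                image="python:3.11-slim",
                command=["python", "-m", "http.server", "8080"],
            ),
            client.V1Container(
                name="probe",
                image="curlimages/curl",
                command=["sh", "-c", "sleep 2 && curl -s http://localhost:8080/"],
            ),
        ],
    ),
)
core.create_namespaced_pod(namespace="default", body=pod)
```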
Nearly 3 years later, I'll throw another answer into the mix.
Kubeflow Pipelines https://www.kubeflow.org/docs/components/pipelines/overview/pipelines-overview/
It actually uses Argo under the hood.
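With Kubeflow Pipelines, the sequential ordering is expressed directly in the pipeline DSL. A minimal sketch, assuming the v1 `kfp` SDK (`ContainerOp` is deprecated in kfp v2; the step names and images are made up):

```python
import kfp
from kfp import dsl

@dsl.pipeline(name="sequential-jobs", description="Two steps that must run in order")
def sequential_pipeline():
    step_one = dsl.ContainerOp(
        name="step-one",
        image="busybox",
        command=["sh", "-c", "echo first step"],
    )
    step_two = dsl.ContainerOp(
        name="step-two",
        image="busybox",
        command=["sh", "-c", "echo second step"],
    )
    # Explicit ordering: step_two starts only after step_one succeeds,
    # even though no data is passed between the two steps.
    step_two.after(step_one)

kfp.compiler.Compiler().compile(sequential_pipeline, "sequential_pipeline.yaml")
```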
It is not possible to manage job workflows with the core Kubernetes API objects alone.
Other alternatives include using Argo Workflows directly, rather than through Kubeflow Pipelines.
This document might also help: https://www.preprints.org/manuscript/202001.0378/v1/download