I'm running a cron job in Kubernetes. The job completes successfully, and I write output to a log file inside the container (path: storage/logs), but I cannot access that file because the container is in the Completed state. Here is my job YAML:
```yaml
apiVersion: v1
items:
- apiVersion: batch/v1beta1
  kind: CronJob
  metadata:
    labels:
      chart: cronjobs-0.1.0
    name: cron-cronjob1
    namespace: default
  spec:
    concurrencyPolicy: Forbid
    failedJobsHistoryLimit: 1
    jobTemplate:
      spec:
        template:
          metadata:
            labels:
              app: cron
              cron: cronjob1
          spec:
            containers:
            - args:
              - /usr/local/bin/php
              - -c
              - /var/www/html/artisan bulk:import
              env:
              - name: DB_CONNECTION
                value: postgres
              - name: DB_HOST
                value: postgres
              - name: DB_PORT
                value: "5432"
              - name: DB_DATABASE
                value: xxx
              - name: DB_USERNAME
                value: xxx
              - name: DB_PASSWORD
                value: xxxx
              - name: APP_KEY
                value: xxxxx
              image: registry.xxxxx.com/xxxx:2ecb785-e927977
              imagePullPolicy: IfNotPresent
              name: cronjob1
              ports:
              - containerPort: 80
                name: http
                protocol: TCP
            imagePullSecrets:
            - name: xxxxx
            restartPolicy: OnFailure
            terminationGracePeriodSeconds: 30
    schedule: '* * * * *'
    successfulJobsHistoryLimit: 3
```
Is there any way I can get the log file's content to show up in the kubectl logs command, or is there another way to read it?
If your pods have started and are running fine, you can check a pod's logs with kubectl logs <pod_name>.
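For example (CronJob pods get generated names, so list them first):

```sh
# list all pods in the namespace, including Completed ones
kubectl get pods -n default

# print the stdout/stderr captured from the pod's container
kubectl logs <pod_name> -n default
```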
On Ubuntu, Debian, and related distributions, you will find cron job logs in /var/log/syslog. The syslog contains entries from many operating system components, so it helps to grep for cron-specific messages. You will likely need root/sudo privileges to read it.
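For instance, to isolate the cron entries (assuming the standard syslog location):

```sh
# show only cron-related lines; root is usually required to read syslog
sudo grep CRON /var/log/syslog
```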
You can see the logs of a particular container by running kubectl logs <pod_name> -c <container_name>. If you want to access the logs of a crashed instance, add the --previous flag. This approach works well for clusters with a small number of containers and instances.
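A short sketch, with a hypothetical nginx container name:

```sh
# logs from a specific container inside a multi-container pod
kubectl logs <pod_name> -c nginx

# logs from the previous, crashed instance of that container
kubectl logs <pod_name> -c nginx --previous
```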
By default, cron logs on a host are stored in the syslog file, which serves as the default log for most services and system-related events; it is located in /var/log, and the cat and grep commands can be used to filter its entries. Note that this applies to classic cron on a host, not to Kubernetes CronJobs, whose container output is read with kubectl logs.
A CronJob runs a pod according to spec.schedule. After the task completes, the pod's status is set to Completed, but the CronJob controller doesn't delete the pod right away, and the log file content is still there in the pod's container filesystem. So you need to do:
```sh
# here you can get the pod_name from the stdout of the cmd `kubectl get pods`
$ kubectl logs -f -n default <pod_name>
```
I guess you know that the pod is kept around because you have successfulJobsHistoryLimit: 3. Presumably your point is that your logging goes to a file rather than stdout, so you don't see it with kubectl logs. If so, maybe you could also log to stdout, or put something into the job to log the content of the file at the end, for example in a PreStop hook.
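As a sketch of that last idea, assuming the Laravel log lands at /var/www/html/storage/logs/laravel.log (the exact filename depends on your app's logging config), you could wrap the artisan call in a shell so the file is printed to stdout when the import finishes, where kubectl logs can see it:

```yaml
# inside jobTemplate.spec.template.spec — a hypothetical rewrite of the container
containers:
- name: cronjob1
  image: registry.xxxxx.com/xxxx:2ecb785-e927977
  command: ["/bin/sh", "-c"]
  args:
  - |
    /usr/local/bin/php /var/www/html/artisan bulk:import
    # assumed log path; adjust to wherever your app actually writes
    cat /var/www/html/storage/logs/laravel.log
```

Alternatively, symlink the file to the container's stdout in your image (ln -sf /dev/stdout /var/www/html/storage/logs/laravel.log) so everything the app writes there is picked up by kubectl logs directly.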