I am running Airflow on Google's Cloud Composer. I am using the KubernetesPodOperator and would like to mount a Google Cloud Storage bucket to a directory in the pod via gcsfuse. It seems that to do this I need to give Kubernetes a privileged security context, as specified here. It also seems that Airflow recently added the security_context parameter to the KubernetesPodOperator. The security context I am specifying in the operator is:
security_context = {
    'securityContext': {
        'privileged': True,
        'capabilities': {'add': ['SYS_ADMIN']}
    }
}
When I run airflow test dag_id task_id date in the Airflow worker, the pod launches, but when the code tries to mount the bucket via gcsfuse it throws the error "fusermount: fuse device not found, try 'modprobe fuse' first". This makes it seem as if the security_context is not working (ex.). Am I misunderstanding what the security_context parameter in the operator does, and/or is my securityContext dictionary definition invalid?
The security_context kwarg for the KubernetesPodOperator sets the security context for the pod, not for a specific container within the pod, so it only supports the options outlined in PodSecurityContext. Since the parameters you are specifying aren't valid for a pod's security context, they are being discarded.
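If you want to see the split for yourself, the Kubernetes Python client models list which fields each level accepts. This is only an illustrative check (it assumes the kubernetes package is available locally) and isn't required for the fix:

import kubernetes.client.models as k8s

# Pod-level fields (V1PodSecurityContext): no 'privileged' or 'capabilities' here.
print(sorted(k8s.V1PodSecurityContext.attribute_map))

# Container-level fields (V1SecurityContext): this is where those two options live.
print(sorted(k8s.V1SecurityContext.attribute_map))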
The privileged and capabilities properties are only valid as part of a container's SecurityContext, meaning you'll need to somehow set them on the pod's container spec. You can do this by defining the entire pod spec yourself (as opposed to having the operator generate it for you). Using KubernetesPodOperator, you can set full_pod_spec or pod_template_file to specify a Kubernetes API Python object, or a path to an object YAML, which would then be used to generate the pod. Example using the former:
from airflow.contrib.operators.kubernetes_pod_operator import KubernetesPodOperator
import kubernetes.client.models as k8s

pod = k8s.V1Pod(
    spec=k8s.V1PodSpec(
        containers=[
            k8s.V1Container(
                ...,  # name, image, etc.
                # Container-level security context: privileged + SYS_ADMIN for gcsfuse
                security_context={
                    'privileged': True,
                    'capabilities': {'add': ['SYS_ADMIN']}
                }
            )
        ],
        # Equivalent to setting security_context from the operator (pod level)
        security_context={}
    )
)

t1 = KubernetesPodOperator(..., full_pod_spec=pod)
If you want to use pod_template_file with Cloud Composer, you can upload a pod YAML to GCS and set it to one of the mapped storage paths (e.g. /home/airflow/gcs/dags/my-pod.yaml if you put it in the DAGs directory).
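As a rough sketch, the operator would then just point at that path. The task_id and file name below are placeholders of mine, and depending on your Airflow version the operator may need to be imported from the cncf.kubernetes provider instead of contrib:

from airflow.contrib.operators.kubernetes_pod_operator import KubernetesPodOperator

# Assumes my-pod.yaml (containing the privileged container securityContext) was
# uploaded to the DAGs folder of the Composer bucket, which the workers see at
# /home/airflow/gcs/dags/. Attach the task to a DAG as usual.
t2 = KubernetesPodOperator(
    task_id='gcsfuse_pod',
    pod_template_file='/home/airflow/gcs/dags/my-pod.yaml',
)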
Reading through the airflow.providers.google.cloud version of KubernetesPodOperator, it's possible that full_pod_spec is broken in newer versions of the operator. However, it should work with the old contrib version.