I've declared a Kubernetes deployment with two containers. One is built locally, the other needs to be pulled from a private registry.
const appImage = new docker.Image("ledgerImage", {
    imageName: "us.gcr.io/qwil-build/ledger",
    build: "../../",
});
const ledgerDeployment = new k8s.extensions.v1beta1.Deployment("ledger", {
    spec: {
        template: {
            metadata: {
                labels: {name: "ledger"},
                name: "ledger",
            },
            spec: {
                containers: [
                    {
                        name: "api",
                        image: appImage.imageName,
                    },
                    {
                        name: "ssl-proxy",
                        image: "us.gcr.io/qwil-build/monolith-ssl-proxy:latest",
                    },
                ],
            },
        },
    },
});
When I run pulumi up, it hangs. This is happening because of a complaint that "You don't have the needed permissions to perform this operation, and you may have invalid credentials" - I see this complaint when I run kubectl describe <name of pod>. However, when I run docker pull us.gcr.io/qwil-build/monolith-ssl-proxy:latest locally, it executes just fine. I've re-run gcloud auth configure-docker and it hasn't helped.
I found https://github.com/pulumi/pulumi-cloud/issues/112, but it seems that docker.Image requires a build arg, which suggests to me that it's meant for local images, not remote ones. How can I pull an image from a private registry?
EDIT:
Turns out I have a local Dockerfile for building the SSL proxy I need, so I've declared a new Image with:
const sslImage = new docker.Image("sslImage", {
    imageName: "us.gcr.io/qwil-build/ledger-ssl-proxy",
    build: {
        context: "../../",
        dockerfile: "../../Dockerfile.proxy",
    },
});
And I've updated the image reference in the Deployment accordingly. However, I'm still getting authentication problems.
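Would the right fix be to give docker.Image explicit registry credentials so the push to GCR is authenticated? Something like this, perhaps (untested sketch; the gcp-registry:key config value is a placeholder I'd still have to set up):

import * as docker from "@pulumi/docker";
import * as pulumi from "@pulumi/pulumi";

// Placeholder config value holding credentials for us.gcr.io, e.g. a
// service-account key JSON set with: pulumi config set --secret gcp-registry:key "$(cat key.json)"
const registryConfig = new pulumi.Config("gcp-registry");

const sslImage = new docker.Image("sslImage", {
    imageName: "us.gcr.io/qwil-build/ledger-ssl-proxy",
    build: {
        context: "../../",
        dockerfile: "../../Dockerfile.proxy",
    },
    // Explicit registry credentials so pulumi can push the built image.
    // "_json_key" is the username GCR accepts for service-account key logins.
    registry: {
        server: "us.gcr.io",
        username: "_json_key",
        password: registryConfig.requireSecret("key"),
    },
});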
I have a solution that uses only code, which I use to pull images from a private registry on GitLab:
config.ts
import { Config } from "@pulumi/pulumi";

//
// Gitlab specific config.
//
const gitlabConfig = new Config("gitlab");
export const gitlab = {
    registry: "registry.gitlab.com",
    user: gitlabConfig.require("user"),
    email: gitlabConfig.require("email"),
    password: gitlabConfig.requireSecret("password"),
};
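The user, email, and password values come from the stack configuration, e.g. pulumi config set gitlab:user someuser, pulumi config set gitlab:email someuser@example.com, and pulumi config set --secret gitlab:password <registry token>, so the password ends up encrypted in the stack config rather than in source.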
import * as pulumi from "@pulumi/pulumi";
import * as config from "./config";
import { Base64 } from "js-base64";
import * as kubernetes from "@pulumi/kubernetes";

[...]
const provider = new kubernetes.Provider("do-k8s", { kubeconfig });

const imagePullSecret = new kubernetes.core.v1.Secret(
    "gitlab-registry",
    {
        type: "kubernetes.io/dockerconfigjson",
        stringData: {
            ".dockerconfigjson": pulumi
                .all([config.gitlab.registry, config.gitlab.user, config.gitlab.password, config.gitlab.email])
                .apply(([server, username, password, email]) => {
                    return JSON.stringify({
                        auths: {
                            [server]: {
                                auth: Base64.encode(username + ":" + password),
                                username: username,
                                email: email,
                                password: password,
                            },
                        },
                    });
                }),
        },
    },
    {
        provider: provider,
    }
);
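The .dockerconfigjson payload built here is the same document that kubectl create secret docker-registry generates, so the kubelet can use this secret to authenticate image pulls.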
// Then use the imagePullSecret in your deployment like this.
// name, labels, container, and args come from the surrounding (elided) code;
// args.imagePullSecret is the Secret created above.
const deployment = new kubernetes.apps.v1.Deployment(name, {
    spec: {
        selector: { matchLabels: labels },
        template: {
            metadata: { labels: labels },
            spec: {
                imagePullSecrets: [{ name: args.imagePullSecret.metadata.apply(m => m.name) }],
                containers: [container],
            },
        },
    },
});
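For the GCR registry in the question, the same pattern should carry over: GCR accepts docker logins with the username _json_key and a service-account key JSON as the password. A rough sketch (the gcp-registry:key config value is a placeholder, not part of my original setup):

import * as pulumi from "@pulumi/pulumi";
import * as kubernetes from "@pulumi/kubernetes";
import { Base64 } from "js-base64";

// Placeholder: a service-account key JSON stored as a secret config value,
// e.g. pulumi config set --secret gcp-registry:key "$(cat key.json)"
const gcrConfig = new pulumi.Config("gcp-registry");
const gcrKey = gcrConfig.requireSecret("key");

const gcrPullSecret = new kubernetes.core.v1.Secret("gcr-registry", {
    type: "kubernetes.io/dockerconfigjson",
    stringData: {
        ".dockerconfigjson": gcrKey.apply(key =>
            JSON.stringify({
                auths: {
                    "us.gcr.io": {
                        // "_json_key" is the username GCR accepts for
                        // service-account key logins.
                        auth: Base64.encode("_json_key:" + key),
                        username: "_json_key",
                        password: key,
                    },
                },
            })
        ),
    },
});

The resulting secret's name then goes into imagePullSecrets exactly as shown in the deployment above.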
Turns out running pulumi destroy --yes && pulumi up --skip-preview --yes is what I needed. I guess I was in some weird inconsistent state, but this is fixed now.