All the examples that I've come across have been of the following format:
gcloud container builds submit --config cloudbuild.yaml .
The man-page says the following:
[SOURCE]
The source directory on local disk or tarball in Google Cloud Storage
or disk to use for this build. If source is a local directory this
command skips files specified in the .gcloudignore file (see $ gcloud
topic gcloudignore for more information).
Now, the source directory on my local disk is very large, and a lot of time is spent transferring the source code from my local machine to the Google Cloud build servers. Is either of the following possible, and if so, how?
Repository event triggers. Cloud Build enables you to automatically execute builds on repository events such as pushes or pull requests. You can connect external repositories, such as repositories in GitHub or Bitbucket, to Cloud Build or use code in Cloud Source Repositories for your builds.
Manual triggers enable you to manually invoke builds by fetching source code from a hosted repository with a specified branch or tag, and by parameterizing your build with substitutions that don't need to be passed in manually each time you execute a build.
Unfortunately, there isn't great support for this today in gcloud. You can accomplish it a few other ways, though:
Use curl or the client library of your choice to send an API request for a build that specifies a RepoSource. For example:
{
  "source": {
    "repoSource": {
      "repoName": "my-repo",
      "commitSha": "deadbeef"
    }
  },
  "steps": [...]
}
In your local environment, fetch the commit and build it using gcloud:
git checkout deadbeef && gcloud container builds submit . --config=cloudbuild.yaml
Create a trigger that automatically executes your build, then issue an API request to run the trigger manually on the specific commit you want, again using curl or a client library.
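For the trigger route, the REST API exposes a triggers:run method that takes a RepoSource body. A sketch with curl, where the project ID, trigger ID, and repo name are placeholders:

```shell
# Placeholders -- substitute your own project and trigger IDs.
PROJECT_ID="my-project"
TRIGGER_ID="my-trigger-id"

# The trigger can be run against a branch, a tag, or a specific commitSha.
RUN_URL="https://cloudbuild.googleapis.com/v1/projects/${PROJECT_ID}/triggers/${TRIGGER_ID}:run"

# Uncomment to actually invoke the trigger:
# curl -X POST \
#   -H "Authorization: Bearer $(gcloud auth print-access-token)" \
#   -H "Content-Type: application/json" \
#   -d '{"repoName": "my-repo", "commitSha": "deadbeef"}' \
#   "${RUN_URL}"
echo "${RUN_URL}"
```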
If you are building Docker images, you can build on top of a cached image already present in your container registry. If you have only changed the last layers of the build, you can avoid transferring most of the data and rebuild only the changed layers.
As in the linked example, you can add a --cache-from argument to your cloudbuild.yaml, selecting the image in your Google Container Registry to build on:
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['pull', 'gcr.io/$PROJECT_ID/latest-image']
- name: 'gcr.io/cloud-builders/docker'
  args: [
    'build',
    '--cache-from',
    'gcr.io/$PROJECT_ID/latest-image',
    '-t', 'gcr.io/$PROJECT_ID/latest-image',
    '.'
  ]
images: ['gcr.io/$PROJECT_ID/latest-image']
Then, the command to build:
gcloud container builds submit --config cloudbuild.yaml .
This should save you quite a bit of transfer time.
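One caveat worth noting (an addition, not part of the original answer): if gcr.io/$PROJECT_ID/latest-image does not exist yet, the plain docker pull step will fail and abort the whole build. A common workaround is to let that step succeed even when the pull fails, for example:

```yaml
steps:
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args: ['-c', 'docker pull gcr.io/$PROJECT_ID/latest-image || exit 0']
```

On the first run the cache is simply empty and the build proceeds from scratch; subsequent runs get the --cache-from benefit.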