Example: Kubernetes
Learning objectives
- You know some cloud computing providers that offer Kubernetes as a service.
- You know the basics of creating a Kubernetes cluster on a cloud provider.
A wide variety of cloud computing providers offer Kubernetes as a service, including Scaleway, AWS, Google Cloud, Azure, and DigitalOcean. Here, we'll briefly look into deploying a Kubernetes application on a cloud provider.
In this example, we'll use Google Kubernetes Engine (GKE) in Autopilot mode.
Credit card required
If you want hands-on experience by following the example, you'll need a credit card. If you provide credit card details, make sure to dismantle all experiments once you've tried them out to avoid unnecessary billing.
To complete the course, deployment on a cloud provider is not required or expected.
Creating a project on Google Cloud
We start by logging into Google Cloud (registering first if needed) and creating a new project. We'll call the project dab-kubernetes-autopilot-demo (you could name the project anything you want).

Once the project has been created, select it in the dashboard. Next, we click the link "APIs & Services" and choose "Enable APIs and Services". This opens up an API library that allows us to enable APIs for the project. For the present project, we enable the "Kubernetes Engine API" and the "Artifact Registry API".
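The APIs could also be enabled from the command line with Google's gcloud CLI (installed and set up further below). A sketch, assuming the project id dab-kubernetes-autopilot-demo:
# Enable the Kubernetes Engine and Artifact Registry APIs for the project
# (a sketch; assumes the gcloud CLI is installed and authenticated, see below)
gcloud services enable container.googleapis.com artifactregistry.googleapis.com \
  --project dab-kubernetes-autopilot-demo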
Creating a cluster
Next, with the Kubernetes Engine API and the Artifact Registry API enabled, we can create a cluster. The cluster is created in the Kubernetes Engine dashboard at https://console.cloud.google.com/kubernetes/, shown in Figure 2.

In the dashboard, we click the "Create" button and choose the "Autopilot" option, in which the cluster is managed by Google. This opens up a wizard that guides us through the steps of creating the cluster.

When setting up the cluster, we use the default name from the wizard (here, autopilot-cluster-1) and select europe-north1 as the region for the cluster. These settings are shown in Figure 3 above. We use the default options for networking and advanced settings, and in the "Review and create" step, we click the "Create cluster" button to create the cluster. It takes a while for the cluster to be created.
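Note that the cluster could also be created from the command line instead of the wizard; a sketch with the settings used above:
# Create an Autopilot cluster in the europe-north1 region
# (a sketch of the wizard settings above; uses the gcloud CLI introduced below)
gcloud container clusters create-auto autopilot-cluster-1 \
  --region europe-north1 \
  --project dab-kubernetes-autopilot-demo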
Once the cluster has been created, it appears in the list of clusters. At the present stage, as shown in Figure 4, there are no nodes, no vCPUs, and the cluster does not use any memory.

Creating a repository for Docker images
Next, we open up the Artifact Registry at https://console.cloud.google.com/artifacts and create a new repository for Docker images by clicking "Create repository". When creating the repository, we call the repository "docker-images", select "Docker" as the format, and choose "Standard" mode and "Region" as the location type. As the region, we'll choose europe-north1. We use the default encryption (Google-managed encryption key). Once the options have been provided, we click "Create" to create the repository.
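The repository could likewise be created with gcloud; a sketch mirroring the options chosen above:
# Create a Docker-format Artifact Registry repository in europe-north1
# (a sketch; mirrors the options chosen in the console above)
gcloud artifacts repositories create docker-images \
  --repository-format=docker \
  --location=europe-north1 \
  --project=dab-kubernetes-autopilot-demo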
Once the repository has been created, it appears in the list of repositories, shown in Figure 5.

Creating an application
For the purposes of the example, we'll create a simple application that responds to requests with a joke. The application is written in vanilla Deno. The app.js for the application is as follows:
const jokes = [
  "What did baby corn say to mama corn? -- Where's pop corn?",
  "Why are eggs bad at handling stress? -- They crack under pressure.",
];

const server = `Server ${Math.floor(10000 * Math.random())}`;

const handleRequest = async (request) => {
  const joke = jokes[Math.floor(Math.random() * jokes.length)];
  return new Response(`${server}: ${joke}`);
};

Deno.serve({ hostname: "0.0.0.0", port: 7777 }, handleRequest);
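If Deno is installed locally, the application can be tried out before containerizing it; a quick sketch:
# Run the application locally (requires Deno) and try it out
deno run --allow-net app.js
# In another terminal:
curl localhost:7777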
The Dockerfile for the application is as follows:
FROM denoland/deno:alpine-1.42.2
EXPOSE 7777
WORKDIR /app
COPY . .
CMD [ "run", "--unstable", "--allow-net", "app.js" ]
Creating the image and pushing it to the repository
In the folder that contains the above files, we run the command docker build -t jokes-app ., creating an image jokes-app of the application.
docker build -t jokes-app .
...
=> => naming to docker.io/library/jokes-app
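Optionally, we can check that the containerized application works before pushing it anywhere; a sketch:
# Start the container locally, mapping port 7777, and test it with curl
docker run --rm -p 7777:7777 jokes-app
# In another terminal:
curl localhost:7777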
Next, we need to install Google's Cloud CLI, gcloud, which provides command-line functionality for maintaining projects on Google Cloud. With gcloud installed, we authenticate to the project by running the command gcloud auth login. This opens up a browser window, where we log in to the Google account that is associated with the project. Once logged in, we can close the browser window.
Next, we authenticate to the specific artifact registry by running the command gcloud auth configure-docker europe-north1-docker.pkg.dev, where europe-north1-docker.pkg.dev refers to the location of the artifact registry (selected when creating the repository). Running the command updates our Docker configuration to use gcloud as a credential helper when pushing images.
gcloud auth configure-docker europe-north1-docker.pkg.dev
Adding credentials for: europe-north1-docker.pkg.dev
After update, the following will be written to your Docker config file located at [/home/username/.docker/config.json]:
{
  "credHelpers": {
    "europe-north1-docker.pkg.dev": "gcloud"
  }
}
Do you want to continue (Y/n)?
Docker configuration file updated.
Next, we tag the image by running the command docker tag jokes-app europe-north1-docker.pkg.dev/dab-kubernetes-autopilot-demo/docker-images/jokes-app:latest. The tag is in the format europe-north1-docker.pkg.dev/<project-id>/<repository-name>/<image-name>:<tag>.
docker tag jokes-app europe-north1-docker.pkg.dev/dab-kubernetes-autopilot-demo/docker-images/jokes-app:latest
And finally, we push the image to the registry using the command docker push europe-north1-docker.pkg.dev/dab-kubernetes-autopilot-demo/docker-images/jokes-app:latest.
docker push europe-north1-docker.pkg.dev/dab-kubernetes-autopilot-demo/docker-images/jokes-app:latest
The push refers to repository [europe-north1-docker.pkg.dev/dab-kubernetes-autopilot-demo/docker-images/jokes-app]
...
latest: ...
Now, when we visit the Artifact Registry and open up the docker-images repository, we see the image jokes-app in the list of images, as shown in Figure 6.
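The images in the repository can also be listed from the command line; a sketch:
# List the images in the docker-images repository
gcloud artifacts docker images list europe-north1-docker.pkg.dev/dab-kubernetes-autopilot-demo/docker-images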

Deploying an image
Next, it's time to deploy the image. Let's create a deployment configuration file jokes-app-deployment.yaml with the following content.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jokes-app-deployment
  labels:
    app: jokes-app
spec:
  selector:
    matchLabels:
      app: jokes-app
  template:
    metadata:
      labels:
        app: jokes-app
    spec:
      containers:
        - name: jokes-app
          image: europe-north1-docker.pkg.dev/dab-kubernetes-autopilot-demo/docker-images/jokes-app:latest
          ports:
            - containerPort: 7777
          resources:
            requests:
              cpu: "250m"
              memory: "500Mi"
            limits:
              cpu: "250m"
              memory: "500Mi"
The key differences to our earlier deployment configurations are that we now use the image from the artifact registry and that the requested resources differ somewhat from our prior efforts. The requests and limits above are set to match the resource requests that Autopilot expects from its workloads.
Next, we need to adjust kubectl to deploy to our Google Cloud project. First, we install the kubectl command-line tool by running the command gcloud components install kubectl. Then, we provide cluster access for kubectl by running the command gcloud container clusters get-credentials autopilot-cluster-1 --zone europe-north1 --project dab-kubernetes-autopilot-demo, which updates the kubectl configuration to point to the cluster in our project.
gcloud container clusters get-credentials autopilot-cluster-1 --zone europe-north1 --project dab-kubernetes-autopilot-demo
Fetching cluster endpoint and auth data.
kubeconfig entry generated for autopilot-cluster-1.
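To verify that kubectl now points to the new cluster, we can, for example, check the current context and list the nodes (for a fresh Autopilot cluster, the node list may still be empty):
# Check which cluster kubectl is currently configured to use
kubectl config current-context
# List the nodes of the cluster (Autopilot adds nodes as workloads are deployed)
kubectl get nodes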
Finally, we can apply our configuration to deploy the image to the cluster. This is done by running the command kubectl apply -f jokes-app-deployment.yaml in the folder where the configuration file is located.
kubectl apply -f jokes-app-deployment.yaml
deployment.apps/jokes-app-deployment configured
Now, when we check out the pods, we see that a pod is running.
kubectl get pods
NAME                                    READY   STATUS    RESTARTS   AGE
jokes-app-deployment-7cb766f44f-vwgk2   1/1     Running   0          4m21s
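The deployment currently runs a single pod. If we wanted more capacity, we could scale the deployment; a sketch (Autopilot provisions the needed nodes automatically):
# Scale the deployment to three replicas and check the pods
kubectl scale deployment jokes-app-deployment --replicas=3
kubectl get pods
With more than one replica, the server number at the start of the responses would vary between requests, as each pod picks its own random number on startup.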
Exposing a service
The next step is to expose the deployment. Let's create a service configuration file jokes-app-service.yaml with the following content.
apiVersion: v1
kind: Service
metadata:
  name: jokes-app-service
spec:
  type: LoadBalancer
  ports:
    - port: 7777
      targetPort: 7777
      protocol: TCP
  selector:
    app: jokes-app
And apply the configuration.
kubectl apply -f jokes-app-service.yaml
service/jokes-app-service created
Now, when we list the services by running the command kubectl get services, we see that the service is running.
kubectl get services
NAME                TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
jokes-app-service   LoadBalancer   10.42.0.3    34.88.159.3   7777:30574/TCP   2m53s
..
The external IP address is the IP address of the load balancer. We can use this IP address to access the service. Let's try it out by running the command curl 34.88.159.3:7777 in the terminal.
curl 34.88.159.3:7777
Server 6252: What did baby corn say to mama corn? -- Where's pop corn?
curl 34.88.159.3:7777
Server 6252: What did baby corn say to mama corn? -- Where's pop corn?
curl 34.88.159.3:7777
Server 6252: Why are eggs bad at handling stress? -- They crack under pressure.

Continuation steps
The next steps would be to create a domain for the application, configure the domain to point to the load balancer, set up certificates to allow using HTTPS, and set up a GitOps flow to enable continuous deployment of the application (e.g. from GitHub). The Google Cloud documentation at https://cloud.google.com/docs/get-started offers plenty of resources for this.
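As a very rough sketch of what the HTTPS part could look like on GKE, a Google-managed certificate and an Ingress could be configured along the following lines. Note that the domain jokes.example.com is a placeholder, and the service configuration may need adjustments to work as an Ingress backend; consult the documentation linked above for the details.
# A rough sketch only: a Google-managed certificate and an Ingress for HTTPS.
# The domain jokes.example.com is a placeholder.
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: jokes-app-certificate
spec:
  domains:
    - jokes.example.com
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jokes-app-ingress
  annotations:
    networking.gke.io/managed-certificates: jokes-app-certificate
    kubernetes.io/ingress.class: "gce"
spec:
  defaultBackend:
    service:
      name: jokes-app-service
      port:
        number: 7777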
Cleaning up
If you followed the above steps and created your own Kubernetes deployment, it makes sense to also dismantle it to avoid unnecessary costs (note the free tier, though). We can delete the cluster by running the command gcloud container clusters delete autopilot-cluster-1 --zone europe-north1 --project dab-kubernetes-autopilot-demo, which deletes the cluster (and the deployments and services in it).
gcloud container clusters delete autopilot-cluster-1 --zone europe-north1 --project dab-kubernetes-autopilot-demo
The following clusters will be deleted.
- [autopilot-cluster-1] in [europe-north1]
Do you want to continue (Y/n)? Y
Deleting cluster autopilot-cluster-1...done.
Deleted [https://container.googleapis.com/v1/projects/dab-kubernetes-autopilot-demo/zones/europe-north1/clusters/autopilot-cluster-1].
Next, we also clean up the artifact repository, which is done with the command gcloud artifacts repositories delete docker-images --location europe-north1.
gcloud artifacts repositories delete docker-images --location europe-north1
You are about to delete repository [docker-images]
Do you want to continue (Y/n)? Y
Delete request issued for: [docker-images]
Waiting for operation [projects/dab-kubernetes-autopilot-demo/locations/europe-north1/operations/...] to complete...done.
Deleted repository [docker-images].
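If the Google Cloud project was created solely for this experiment, the whole project can also be shut down, which removes any remaining resources; a sketch:
# Shut down the whole project (resources are removed after a grace period)
gcloud projects delete dab-kubernetes-autopilot-demo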
Other providers
Although we used Google Cloud above, the underlying principles are the same across cloud providers. We create a cluster, add an image to a registry, create a deployment with the image, and expose the deployment with a service. Subsequent steps such as adding a domain, creating an ingress with HTTPS support, and so on, also function similarly across the cloud providers.