CloudNativePG Operator
Learning Objectives
- You know how to install the CloudNativePG operator.
- You know how to deploy a PostgreSQL cluster using the CloudNativePG operator.
- You know how to run database migrations in a Kubernetes cluster.
- You know how to connect to a PostgreSQL database from a Deno application running in a Kubernetes cluster.
Here, we’ll briefly look into using the CloudNativePG operator for setting up a PostgreSQL cluster. We continue working directly in the Minikube project that we’ve already built.
These examples have been written with version 1.25 of the CloudNativePG operator. The operator is under active development, and the examples may not work with future versions of the operator.
Installing the operator
To install the operator, we apply the operator configuration from the CloudNativePG repository.
kubectl apply --server-side -f https://raw.githubusercontent.com/cloudnative-pg/cloudnative-pg/release-1.25/releases/cnpg-1.25.1.yaml
// ... lots of resources being created
This installs the operator and starts a controller that manages the databases. The --server-side flag uses server-side apply, where the full configuration is processed by the API server; this is needed here because the operator’s resource definitions are too large for client-side apply (which would fail with an error about a too-long annotation).
The operator is installed into the cnpg-system namespace, which we can take a peek at using kubectl get all -n cnpg-system.
$ kubectl get all -n cnpg-system
NAME                                           READY   STATUS    RESTARTS   AGE
pod/cnpg-controller-manager-554dbf98dd-mnnmb   1/1     Running   0          97s

NAME                           TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
service/cnpg-webhook-service   ClusterIP   10.101.160.121   <none>        443/TCP   97s

NAME                                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/cnpg-controller-manager   1/1     1            1           97s

NAME                                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/cnpg-controller-manager-554dbf98dd   1         1         1       97s
Deploying a cluster
With the operator installed, we can deploy a CloudNativePG cluster. The cluster is a custom resource defined in a YAML file, which we then apply to the Kubernetes cluster. The cluster configuration defines the number of instances in the cluster, the storage size, and so on. The operator then takes care of setting up the cluster.
In our case, we’ll define a cluster with two instances and a storage size of 100Mi. Create a file minikube-demo-database-cluster.yaml in the k8s folder, and place the following contents in it.
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: minikube-demo-database-cluster
spec:
  instances: 2
  storage:
    size: 100Mi
Then, apply the configuration file.
$ kubectl apply -f k8s/minikube-demo-database-cluster.yaml
cluster.postgresql.cnpg.io/minikube-demo-database-cluster created
Once the configuration has been applied, the cluster is deployed. The status of the cluster can be checked with the command kubectl get cluster. Note that setting up the cluster can take a few minutes.
$ kubectl get cluster
NAME                             AGE     INSTANCES   READY   STATUS                     PRIMARY
minikube-demo-database-cluster   3m24s   2           2       Cluster in healthy state   minikube-demo-database-cluster-1
Now, we have a database cluster that is up and running.
Usernames and passwords
When we created the database cluster, a username and password were automatically generated for it. They are stored as Secrets that can be accessed from the cluster. The secrets for the user app — which is the one that we will be using — are available in the Secret minikube-demo-database-cluster-app (the name of the cluster followed by the suffix -app).
$ kubectl describe secret minikube-demo-database-cluster-app
...
Data
====
password: 64 bytes
pgpass: 112 bytes
uri: 132 bytes
username: 3 bytes
dbname: 3 bytes
host: 33 bytes
jdbc-uri: 151 bytes
port: 4 bytes
user: 3 bytes
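The describe output only shows the byte sizes of the values. Secret values are stored base64-encoded, and an individual value can be extracted with kubectl get secret and decoded. A small sketch: the kubectl line (shown as a comment) requires the running cluster, while the decoding itself works anywhere.

```shell
# Read one field of the secret and decode it (requires the running cluster):
#   kubectl get secret minikube-demo-database-cluster-app \
#     -o jsonpath='{.data.username}' | base64 -d
# The base64 encoding is reversible and adds no protection:
encoded=$(printf '%s' 'app' | base64)
echo "$encoded"                      # the stored form
printf '%s' "$encoded" | base64 -d   # back to plain text
```

Note that base64 is an encoding, not encryption — anyone who can read the Secret can read the credentials.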
Peeking into services
The CloudNativePG operator also creates a handful of services that direct traffic to the database instances. The services are named with the minikube-demo-database-cluster prefix, which is the name of our database cluster, and they offer different levels of access ranging from read-only (-ro) to read-write (-rw). The services are created with the type ClusterIP, which means that they are only accessible from within the cluster.
We can see all the running services with the command kubectl get services.
$ kubectl get services
NAME                                   TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
kubernetes                             ClusterIP      10.96.0.1       <none>        443/TCP          2d3h
minikube-demo-database-cluster-r       ClusterIP      10.104.31.105   <none>        5432/TCP         7m32s
minikube-demo-database-cluster-ro      ClusterIP      10.102.47.217   <none>        5432/TCP         7m32s
minikube-demo-database-cluster-rw      ClusterIP      10.104.28.221   <none>        5432/TCP         7m32s
minikube-demo-server-fetcher-service   LoadBalancer   10.107.73.165   <pending>     8000:32563/TCP   63m
minikube-demo-server-service           LoadBalancer   10.108.34.79    <pending>     8000:31130/TCP   63m
The database cluster has multiple instances, and each service routes traffic differently: minikube-demo-database-cluster-rw always points to the primary instance and is used for reads and writes, minikube-demo-database-cluster-ro points only to the replicas and is used for read-only traffic, and minikube-demo-database-cluster-r can route read traffic to any instance. The services are accessible from within the cluster, and applications can connect to the database using the service names.
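Since the services are of type ClusterIP, the database is not directly reachable from the host machine. For local debugging, one option is kubectl port-forward — a sketch, assuming the cluster from above is running and psql is installed locally:

```shell
# Forward local port 5432 to the read-write service (runs until interrupted):
kubectl port-forward svc/minikube-demo-database-cluster-rw 5432:5432
# In another terminal, connect with psql using the app user's password
# from the minikube-demo-database-cluster-app secret:
#   psql -h 127.0.0.1 -U app app
```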
Database migrations
Previously, in our applications, we’ve used Flyway for database migrations. Why break the good habit? In the case of our present Kubernetes application, we want to run the database migrations within the cluster. A straightforward approach to achieve this is to create a container for running the migrations, and run the container as a Job.
Flyway container
Let’s first create our Flyway container. Create a folder called database-migrations in the project. Then, create a folder called sql inside the database-migrations folder, and create a file called V1__initial_schema.sql in the sql folder. Add the following content to the file and save it.
CREATE TABLE items (
  id SERIAL PRIMARY KEY,
  name TEXT NOT NULL
);
Next, create a Dockerfile in the database-migrations folder and place the following contents in it.
FROM flyway/flyway:11.4-alpine
# Assuming we're building the image inside the `database-migrations` folder
COPY sql/ /flyway/sql/
# Use shell form for entrypoint to get access to env variables
ENTRYPOINT ./flyway migrate -user=$FLYWAY_USER -password=$FLYWAY_PASSWORD -url="jdbc:postgresql://${MINIKUBE_DEMO_DATABASE_CLUSTER_RW_SERVICE_HOST}:${MINIKUBE_DEMO_DATABASE_CLUSTER_RW_SERVICE_PORT}/${FLYWAY_USER}"
The above Dockerfile copies the SQL files into the container and defines an entrypoint for running the Flyway migrations. The variables FLYWAY_USER, FLYWAY_PASSWORD, MINIKUBE_DEMO_DATABASE_CLUSTER_RW_SERVICE_HOST, and MINIKUBE_DEMO_DATABASE_CLUSTER_RW_SERVICE_PORT will be injected into the container by Kubernetes.
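The host and port variables come from Kubernetes’ automatic service environment variables: for every service that exists when a pod starts, Kubernetes injects <NAME>_SERVICE_HOST and <NAME>_SERVICE_PORT variables, where the service name is upper-cased and dashes are replaced with underscores. The naming can be derived as follows:

```shell
# Derive the environment variable prefix Kubernetes uses for a service:
# upper-case the service name and replace dashes with underscores.
service="minikube-demo-database-cluster-rw"
prefix=$(printf '%s' "$service" | tr 'a-z-' 'A-Z_')
echo "${prefix}_SERVICE_HOST"   # MINIKUBE_DEMO_DATABASE_CLUSTER_RW_SERVICE_HOST
```

These variables are only injected for services that already exist when the pod starts, which holds here since the database cluster is created before the migration job. FLYWAY_USER and FLYWAY_PASSWORD, in turn, we will set ourselves in the Job configuration.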
Next, build an image from the Dockerfile. The image will be built into Minikube’s container registry. Let’s call the image minikube-demo-database-migrations:1.0.
minikube image build -t minikube-demo-database-migrations:1.0 .
Once the image has been built, we can check that it is visible in Minikube’s container registry using the minikube image list command.
minikube image list
..
minikube-demo-database-migrations:1.0
..
Now that the image is in the container registry, we can use it in our Kubernetes cluster. Let’s create the needed configuration for it next.
Flyway Job configuration
Database migrations are a natural fit for Kubernetes Jobs, which are workloads meant to run to completion once. When a new migration is needed, we simply run a new Job.
Create a file called minikube-demo-database-migration-job.yaml in the k8s folder, and place the following contents in it.
apiVersion: batch/v1
kind: Job
metadata:
  name: minikube-demo-database-migration-job
spec:
  template:
    metadata:
      name: minikube-demo-database-migration-job
    spec:
      containers:
        - name: minikube-demo-database-migrations
          image: minikube-demo-database-migrations:1.0
          imagePullPolicy: Never
          env:
            - name: FLYWAY_USER
              valueFrom:
                secretKeyRef:
                  name: minikube-demo-database-cluster-app
                  key: username
                  optional: false
            - name: FLYWAY_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: minikube-demo-database-cluster-app
                  key: password
                  optional: false
      restartPolicy: Never
  backoffLimit: 2
The above configuration creates a Job that runs the minikube-demo-database-migrations container. The container is configured to use the username and password from the Secret minikube-demo-database-cluster-app, which are injected into the pod as environment variables called FLYWAY_USER and FLYWAY_PASSWORD.
Applying the migration
When we apply the above configuration, the Job is created in the cluster. Let’s apply the configuration using the kubectl apply command.
$ kubectl apply -f k8s/minikube-demo-database-migration-job.yaml
job.batch/minikube-demo-database-migration-job created
Now, when we list the pods, we can see that the job has been completed.
$ kubectl get pods
NAME                                                       READY   STATUS      RESTARTS   AGE
minikube-demo-database-cluster-1                           1/1     Running     0          20m
minikube-demo-database-cluster-2                           1/1     Running     0          20m
minikube-demo-database-migration-job-896r7                 0/1     Completed   0          21s
minikube-demo-server-deployment-554f9fcf65-vmxtp           1/1     Running     0          72m
minikube-demo-server-fetcher-deployment-6548f75dd4-pcjsb   1/1     Running     0          76m
The above output also shows the two database cluster instances and our earlier deployments. The database migration job has completed, as indicated by the Completed status and the 0/1 in the READY column: the job’s container has finished and is no longer running. We can also check the logs of the job to see what happened during the migration.
$ kubectl logs minikube-demo-database-migration-job-896r7
WARNING: Storing migrations in 'sql' is not recommended and default scanning of this location may be deprecated in a future release
Flyway OSS Edition 11.4.0 by Redgate
See release notes here: https://rd.gt/416ObMi
Database: jdbc:postgresql://10.104.28.221:5432/app (PostgreSQL 17.4)
Schema history table "public"."flyway_schema_history" does not exist yet
Successfully validated 1 migration (execution time 00:00.019s)
Creating Schema History table "public"."flyway_schema_history" ...
Current version of schema "public": << Empty Schema >>
Migrating schema "public" to version "1 - initial schema"
Successfully applied 1 migration to schema "public", now at version v1 (execution time 00:00.008s)
From the logs, we can see that the migration was successful. The schema history table was created, and the migration was applied to the database. The database now has a table called items
.
To run a new migration, we build a new image, remove the old job, and run the kubectl apply command again.
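As a concrete sketch of such a re-run — the file name V2__add_items_created_at.sql and the tag 1.1 are hypothetical, and the image reference in the Job configuration must be updated to match the new tag:

```shell
# Add the new migration, e.g. sql/V2__add_items_created_at.sql, then rebuild
# the image inside the database-migrations folder:
minikube image build -t minikube-demo-database-migrations:1.1 .

# Completed Jobs cannot be re-run in place, so delete the old Job first:
kubectl delete job minikube-demo-database-migration-job

# Apply the (updated) Job configuration to run the new migration:
kubectl apply -f k8s/minikube-demo-database-migration-job.yaml
```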
Checking the database
Now that we’ve run the migration, we should be able to see the table items in the database. We can connect to the database, for example, by opening a shell in one of the cluster pods and running the psql command.
$ kubectl exec -it minikube-demo-database-cluster-1 -- psql
Defaulted container "postgres" out of: postgres, bootstrap-controller (init)
psql (17.4 (Debian 17.4-1.pgdg110+2))
Type "help" for help.
postgres=# \dt
Did not find any relations.
When we connect to the database, we do not see any tables. This is because by default we connect as the postgres user to the postgres database, and hence see only that database’s tables. In our case, the changes have been made by the user app in the database app. Let’s switch the database with the \c shorthand and check again.
postgres=# \c app
You are now connected to database "app" as user "postgres".
app=# \dt
        List of relations
 Schema |         Name          | Type  | Owner
--------+-----------------------+-------+-------
 public | flyway_schema_history | table | app
 public | items                 | table | app
(2 rows)
app=# \q
Now, we see the tables — let’s next adjust our application to connect to the database programmatically.
Connecting to the database
Let’s next modify our application to connect to the database.
Adding dependencies and credentials
First, modify the deno.json file in the server folder to include the PostgreSQL driver. After the modification, the deno.json file should look as follows.
{
  "imports": {
    "@hono/hono": "jsr:@hono/hono@4.6.5",
    "postgres": "npm:postgres@3.4.5"
  }
}
Then, we need to inject the credentials from the Secret minikube-demo-database-cluster-app into our application. As Postgres.js expects specific environment variable names, we read the values from the Secret and set them as environment variables in the pod. Modify the minikube-demo-server-deployment.yaml file as follows.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: minikube-demo-server-deployment
  labels:
    app: minikube-demo-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: minikube-demo-server
  template:
    metadata:
      labels:
        app: minikube-demo-server
    spec:
      containers:
        - name: minikube-demo-server
          image: minikube-demo-server:1.1
          imagePullPolicy: Never
          ports:
            - containerPort: 8000
          envFrom:
            - configMapRef:
                name: minikube-demo-configmap
            - secretRef:
                name: minikube-demo-secret
          env:
            - name: PGHOST
              valueFrom:
                secretKeyRef:
                  name: minikube-demo-database-cluster-app
                  key: host
                  optional: false
            - name: PGPORT
              valueFrom:
                secretKeyRef:
                  name: minikube-demo-database-cluster-app
                  key: port
                  optional: false
            - name: PGDATABASE
              valueFrom:
                secretKeyRef:
                  name: minikube-demo-database-cluster-app
                  key: dbname
                  optional: false
            - name: PGUSERNAME
              valueFrom:
                secretKeyRef:
                  name: minikube-demo-database-cluster-app
                  key: user
                  optional: false
            - name: PGPASSWORD
              valueFrom:
                secretKeyRef:
                  name: minikube-demo-database-cluster-app
                  key: password
                  optional: false
          volumeMounts:
            - name: data-storage
              mountPath: "/app/data"
      volumes:
        - name: data-storage
          persistentVolumeClaim:
            claimName: minikube-demo-local-persistentvolume-claim
Now, after applying the file, the application will have the necessary environment variables set for connecting to the database.
Modifying the application
Next, modify the application so that it uses the database. As the environment variables for connecting to the database are already configured, we can call the postgres function from the postgres module without any arguments; Postgres.js reads the connection details from the environment variables PGHOST, PGPORT, PGDATABASE, PGUSERNAME, and PGPASSWORD.
Modify the file app.js in the server folder to match the following.
import { Hono } from "@hono/hono";
import { cors } from "@hono/hono/cors";
import { logger } from "@hono/hono/logger";
import postgres from "postgres";

const app = new Hono();
const sql = postgres();

const message = Deno.env.get("WELCOME_MESSAGE") || "Hello world!";

app.use("/*", cors());
app.use("/*", logger());

app.get("/", (c) => c.json({ message }));

app.get("/items", async (c) => {
  const items = await sql`SELECT * FROM items`;
  return c.json(items);
});

app.post("/items", async (c) => {
  const { name } = await c.req.json();
  const item = await sql`INSERT INTO items (name) VALUES (${name}) RETURNING *`;
  return c.json(item);
});

export default app;
Building and deploying the image
With all the changes in place, it’s time to build the image. Go to the server folder and build the image minikube-demo-server with the tag 1.2.
minikube image build -t minikube-demo-server:1.2 .
Then, modify the deployment configuration minikube-demo-server-deployment.yaml to use the new image.
# ...
      containers:
        - name: minikube-demo-server
          image: minikube-demo-server:1.2
# ...
And, finally, apply the changes.
kubectl apply -f k8s/minikube-demo-server-deployment.yaml
Querying the application
Finally, with the application deployed, it’s time to query the server. We again run the command minikube service minikube-demo-server-service --url to determine the URL for the deployment.
$ minikube service minikube-demo-server-service --url
http://192.168.49.2:31130
And then query the server.
$ curl http://192.168.49.2:31130/items
[]%
$ curl -X POST -H "Content-Type: application/json" -d '{"name": "Kubernetes"}' http://192.168.49.2:31130/items
[{"id":1,"name":"Kubernetes"}]%
$ curl http://192.168.49.2:31130/items
[{"id":1,"name":"Kubernetes"}]%
Now, our application is connected to the database, and we can see the items that have been added to the database.