Combining multiple PostgreSQL databases into a single CloudNativePG instance



With the release of v1.25, CloudNativePG introduces declarative database management, which empowers users to define and control the lifecycle of PostgreSQL databases using the Database Custom Resource Definition (CRD). This method seamlessly integrates with Kubernetes, providing a scalable, automated, and consistent approach to managing PostgreSQL databases.

I am currently running half a dozen CloudNativePG clusters in different namespaces supporting various services. This works well, but most of the databases are lightweight, and a single PostgreSQL instance serving multiple databases would require fewer resources. This is the approach I’m already using for my MariaDB Galera Cluster.

Time to experiment.

Upgrade the CloudNativePG Operator

As a first step, we need to upgrade the operator to v1.25. Checking the Helm chart repo, we need chart version 0.23.0.

> helm search repo cnpg/cloudnative-pg --versions
NAME                    CHART VERSION   APP VERSION     DESCRIPTION
cnpg/cloudnative-pg     0.23.0          1.25.0          CloudNativePG Operator Helm Chart
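
If the new version doesn’t show up in the search, refreshing the local chart index first should fix that:

helm repo update cnpg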

Upgrade the operator. This will take a few minutes and restart all the clusters.

helm upgrade cloudpg -n cloudpg cnpg/cloudnative-pg --version 0.23.0 --values cloudnative-pg-values.yml

Create a new CloudNativePG instance

Once all the clusters are back in a healthy state, we can create a new CloudNativePG instance to hold our databases. I could reuse one of the existing instances, but I want to take the opportunity to move the deployment into the cloudpg namespace alongside the operator.

There is nothing special about this deployment; I’m using Helm, so I just need to run this command with my standard values:

helm install cloudpg-single -n cloudpg cnpg/cluster --version 0.1.0 --values cloudnative-cluster-single-values.yml
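
For reference, the values file is nothing exotic. A minimal sketch might look something like this (the key names assume the cnpg/cluster chart layout, so double-check them against the chart’s own values.yaml):

cluster:
  # number of PostgreSQL instances (one primary plus replicas)
  instances: 3
  storage:
    # size of the data volume for each instance
    size: 10Gi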

Create a database with the new Database CRD

The Database CRD is straightforward. Create a file called cloudpg-database-atuin.yml:

apiVersion: postgresql.cnpg.io/v1
kind: Database
metadata:
  name: atuin
  namespace: cloudpg
spec:
  name: atuin
  owner: app
  cluster:
    name: cloudpg-single-cluster

And apply it:

k apply -f cloudpg-database-atuin.yml -n cloudpg
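
The operator reconciles the object and records the result in its status, so a quick way to confirm the database was created is (the exact output columns may vary by version):

k get database atuin -n cloudpg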

That’s it: we have a new database in the cloudpg-single-cluster cluster. Next, grab the superuser password from the cloudpg-single-cluster-superuser secret so we can take a look.
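
Assuming the secret follows the standard CloudNativePG layout with a password key, it can be pulled out like this:

k get secret cloudpg-single-cluster-superuser -n cloudpg -o jsonpath='{.data.password}' | base64 -d

With the password in hand, spin up a throwaway psql client pod and connect: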

> k run postgresql-postgresql-client --rm -it --restart='Never' --namespace cloudpg --image bitnami/postgresql --env="PGPASSWORD=<postgres-password>" --command -- psql --host cloudpg-single-cluster-rw -U postgres
If you don't see a command prompt, try pressing enter.

postgres=# \l
                                                List of databases
   Name    |  Owner   | Encoding | Locale Provider | Collate | Ctype | Locale | ICU Rules |   Access privileges
-----------+----------+----------+-----------------+---------+-------+--------+-----------+-----------------------
 app       | app      | UTF8     | libc            | C       | C     |        |           |
 atuin     | app      | UTF8     | libc            | C       | C     |        |           |
 postgres  | postgres | UTF8     | libc            | C       | C     |        |           |
 template0 | postgres | UTF8     | libc            | C       | C     |        |           | =c/postgres          +
           |          |          |                 |         |       |        |           | postgres=CTc/postgres
 template1 | postgres | UTF8     | libc            | C       | C     |        |           | =c/postgres          +
           |          |          |                 |         |       |        |           | postgres=CTc/postgres
(5 rows)

All good: we have a new database. Note that the owner is app, the default application user created by CloudNativePG. Let’s create a role just for our new database instead.

First, some cleanup:

Delete the Database object we created:

k delete -f cloudpg-database-atuin.yml -n cloudpg

By default, the underlying database is not dropped. The databaseReclaimPolicy field on the Database spec determines what happens to the database when the Database object is deleted, and it defaults to retain.
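
If you do want the database dropped along with the object, the policy can be set explicitly. A sketch of the same manifest with the non-default delete policy:

apiVersion: postgresql.cnpg.io/v1
kind: Database
metadata:
  name: atuin
  namespace: cloudpg
spec:
  name: atuin
  owner: app
  # delete drops the database when the object is removed; retain is the default
  databaseReclaimPolicy: delete
  cluster:
    name: cloudpg-single-cluster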

The retain default is what we want here; we can remove the database from the cluster manually:

> drop database atuin;
DROP DATABASE
postgres=# \l
                                                List of databases
   Name    |  Owner   | Encoding | Locale Provider | Collate | Ctype | Locale | ICU Rules |   Access privileges
-----------+----------+----------+-----------------+---------+-------+--------+-----------+-----------------------
 app       | app      | UTF8     | libc            | C       | C     |        |           |
 postgres  | postgres | UTF8     | libc            | C       | C     |        |           |
 template0 | postgres | UTF8     | libc            | C       | C     |        |           | =c/postgres          +
           |          |          |                 |         |       |        |           | postgres=CTc/postgres
 template1 | postgres | UTF8     | libc            | C       | C     |        |           | =c/postgres          +
           |          |          |                 |         |       |        |           | postgres=CTc/postgres
(4 rows)

Create a role to own the database

Define a role in the cluster Helm chart values:

  # -- This feature enables declarative management of existing roles, as well as the creation of new roles if they are not
  # already present in the database.
  # See: https://cloudnative-pg.io/documentation/current/declarative_role_management/
  roles:
    - name: atuin
      login: true
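
If I’m reading the chart templates correctly, this list ends up under managed.roles in the rendered Cluster resource, so the equivalent raw manifest fragment would be roughly:

apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cloudpg-single-cluster
  namespace: cloudpg
spec:
  # ...rest of the cluster spec...
  managed:
    roles:
      - name: atuin
        # ensure the role exists; set to absent to have it removed
        ensure: present
        login: true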

Upgrade the Helm release:

helm upgrade cloudpg-single -n cloudpg cnpg/cluster --version 0.1.0 --values cloudnative-cluster-single-values.yml

Update the Database object to make the new role the owner:

apiVersion: postgresql.cnpg.io/v1
kind: Database
metadata:
  name: atuin
  namespace: cloudpg
spec:
  name: atuin
  owner: atuin
  cluster:
    name: cloudpg-single-cluster

And reapply:

k apply -f cloudpg-database-atuin.yml -n cloudpg

The database will now be owned by our new role atuin:

postgres=# \l
                                                List of databases
   Name    |  Owner   | Encoding | Locale Provider | Collate | Ctype | Locale | ICU Rules |   Access privileges
-----------+----------+----------+-----------------+---------+-------+--------+-----------+-----------------------
 app       | app      | UTF8     | libc            | C       | C     |        |           |
 atuin     | atuin    | UTF8     | libc            | C       | C     |        |           |
 postgres  | postgres | UTF8     | libc            | C       | C     |        |           |
 template0 | postgres | UTF8     | libc            | C       | C     |        |           | =c/postgres          +
           |          |          |                 |         |       |        |           | postgres=CTc/postgres
 template1 | postgres | UTF8     | libc            | C       | C     |        |           | =c/postgres          +
           |          |          |                 |         |       |        |           | postgres=CTc/postgres
(5 rows)

What about the password for the atuin role? Currently, CloudNativePG will not automatically create a secret for it. A secret can be referenced manually in the Helm chart values, but adding this does not appear to create it either:

  roles:
    - name: atuin
      login: true
      passwordSecret:
        name: cluster-single-atuin
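
A likely workaround (untested here) is to create the secret by hand and let the role definition reference it. The documentation suggests it should be a kubernetes.io/basic-auth secret carrying the role’s username and password, so something along these lines:

apiVersion: v1
kind: Secret
metadata:
  # must match the passwordSecret name referenced above
  name: cluster-single-atuin
  namespace: cloudpg
type: kubernetes.io/basic-auth
stringData:
  username: atuin
  password: <generate-a-strong-password>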

What’s next?

I’m pausing here for now. Further work on declarative role management looks to be planned for v1.26 [ref: https://github.com/cloudnative-pg/cloudnative-pg/issues/5341], so I will take another look once that has been released.

Thanks for reading!