mariadb crashes inside kubernetes pod with hostpath volume

I’m trying to move a number of Docker containers from a Linux server to a test Kubernetes-based deployment running on a different Linux machine, where I’ve installed Kubernetes as a k3s instance inside a Vagrant virtual machine.

One of these containers is a mariadb container instance, with a bind volume mapped to a local folder on the host.

This is the relevant portion of the docker-compose I’m using:

    image: ''
    container_name: academy-db
    environment:
      - MARIADB_USER=bn_moodle
      - MARIADB_DATABASE=bitnami_moodle
    volumes:
      - type: bind
        source: ./volumes/moodle/mariadb
        target: /bitnami/mariadb
    ports:
      - '3306:3306'

Note that this works correctly (the container is used by another application container, which connects to it and reads data from the db without problems).

I then tried to convert this to a Kubernetes configuration, copying the volume folder to the destination machine and using the following Kubernetes .yaml files: a Deployment, a PersistentVolumeClaim and a PersistentVolume, as well as a NodePort Service to make the container accessible. For the data volume, I’m using a simple hostPath volume pointing to the contents copied from the docker-compose’s bind mounts.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: academy-db
    spec:
      replicas: 1
      selector:
        matchLabels:
          name: academy-db
      strategy:
        type: Recreate
      template:
        metadata:
          labels:
            name: academy-db
        spec:
          containers:
            - env:
                - name: ALLOW_EMPTY_PASSWORD
                  value: "yes"
                - name: MARIADB_DATABASE
                  value: bitnami_moodle
                - name: MARIADB_USER
                  value: bn_moodle
              image: ''
              name: academy-db
              ports:
                - containerPort: 3306
              resources: {}
              volumeMounts:
                - mountPath: /bitnami/mariadb
                  name: academy-db-claim
          restartPolicy: Always
          volumes:
            - name: academy-db-claim
              persistentVolumeClaim:
                claimName: academy-db-claim
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: academy-db-pv
      labels:
        type: local
    spec:
      capacity:
        storage: 1Gi
      accessModes:
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Retain
      hostPath:
        path: "<...full path to deployment folder on the server...>/volumes/moodle/mariadb"
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: academy-db-claim
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
      storageClassName: ""
      volumeName: academy-db-pv
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: academy-db-service
    spec:
      type: NodePort
      ports:
        - name: "3306"
          port: 3306
          targetPort: 3306
      selector:
        name: academy-db

After applying these manifests, everything seems to work fine, in the sense that with kubectl get ... the pod and the volumes appear to be running correctly:

kubectl get pods

NAME                             READY   STATUS    RESTARTS   AGE
academy-db-5547cdbc5-65k79       1/1     Running   9          15d

kubectl get pv
NAME                    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                              STORAGECLASS   REASON   AGE
academy-db-pv           1Gi        RWO            Retain           Bound    default/academy-db-claim                                   15d

kubectl get pvc
NAME                       STATUS   VOLUME                  CAPACITY   ACCESS MODES   STORAGECLASS   AGE
academy-db-claim           Bound    academy-db-pv           1Gi        RWO                           15d

This is the pod’s log:

kubectl logs pod/academy-db-5547cdbc5-65k79

mariadb 10:32:05.66
mariadb 10:32:05.66 Welcome to the Bitnami mariadb container
mariadb 10:32:05.66 Subscribe to project updates by watching
mariadb 10:32:05.66 Submit issues and feature requests at
mariadb 10:32:05.66
mariadb 10:32:05.67 INFO  ==> ** Starting MariaDB setup **
mariadb 10:32:05.68 INFO  ==> Validating settings in MYSQL_*/MARIADB_* env vars
mariadb 10:32:05.68 WARN  ==> You set the environment variable ALLOW_EMPTY_PASSWORD=yes. For safety reasons, do not use this flag in a production environment.
mariadb 10:32:05.69 INFO  ==> Initializing mariadb database
mariadb 10:32:05.69 WARN  ==> The mariadb configuration file '/opt/bitnami/mariadb/conf/my.cnf' is not writable. Configurations based on environment variables will not be applied for this file.
mariadb 10:32:05.70 INFO  ==> Using persisted data
mariadb 10:32:05.71 INFO  ==> Running mysql_upgrade
mariadb 10:32:05.71 INFO  ==> Starting mariadb in background

and the describe pod command:

Name:         academy-db-5547cdbc5-65k79
Namespace:    default
Priority:     0
Node:         zdmp-kube/
Start Time:   Tue, 22 Dec 2020 13:33:43 +0000
Labels:       name=academy-db
Annotations:  <none>
Status:       Running
Controlled By:  ReplicaSet/academy-db-5547cdbc5
    Container ID:   containerd://68af105f15a1f503bbae8a83f1b0a38546a84d5e3188029f539b9c50257d2f9a
    Image ID:[email protected]:1d8ca1757baf64758e7f13becc947b9479494128969af5c0abb0ef544bc08815
    Port:           3306/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Thu, 07 Jan 2021 10:32:05 +0000
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Thu, 07 Jan 2021 10:22:03 +0000
      Finished:     Thu, 07 Jan 2021 10:32:05 +0000
    Ready:          True
    Restart Count:  9
      MARIADB_DATABASE:      bitnami_moodle
      MARIADB_USER:          bn_moodle
      MARIADB_PASSWORD:      bitnami
      /bitnami/mariadb from academy-db-claim (rw)
      /var/run/secrets/ from default-token-x28jh (ro)
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  academy-db-claim
    ReadOnly:   false
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-x28jh
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
  Type    Reason          Age                  From     Message
  ----    ------          ----                 ----     -------
  Normal  Pulled          15d (x8 over 15d)    kubelet  Container image "" already present on machine
  Normal  Created         15d (x8 over 15d)    kubelet  Created container academy-db
  Normal  Started         15d (x8 over 15d)    kubelet  Started container academy-db
  Normal  SandboxChanged  18m                  kubelet  Pod sandbox changed, it will be killed and re-created.
  Normal  Pulled          8m14s (x2 over 18m)  kubelet  Container image "" already present on machine
  Normal  Created         8m14s (x2 over 18m)  kubelet  Created container academy-db
  Normal  Started         8m14s (x2 over 18m)  kubelet  Started container academy-db

Later, though, I noticed that the client application has problems connecting. After some investigation I’ve concluded that, though the pod is running, the mariadb process running inside it may have crashed just after startup. If I enter the container with kubectl exec and try to run, for instance, the mysql client, I get:

kubectl  exec -it pod/academy-db-5547cdbc5-65k79 -- /bin/bash

I have no name!@academy-db-5547cdbc5-65k79:/$ mysql
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/opt/bitnami/mariadb/tmp/mysql.sock' (2)

Any idea of what could cause the problem, or how I could investigate the issue further? (Note: I’m not an expert in Kubernetes; I only started learning it recently.)

Edit: Following @Novo’s comment, I tried deleting the volume folder and letting mariadb recreate the database from scratch. Now my pod doesn’t even start, terminating in CrashLoopBackOff!

By comparing the pod logs, I noticed that in the previous mariadb log there was a message:

mariadb 10:32:05.69 WARN  ==> The mariadb configuration file '/opt/bitnami/mariadb/conf/my.cnf' is not writable. Configurations based on environment variables will not be applied for this file.
mariadb 10:32:05.70 INFO  ==> Using persisted data
mariadb 10:32:05.71 INFO  ==> Running mysql_upgrade
mariadb 10:32:05.71 INFO  ==> Starting mariadb in background

Now replaced with

mariadb 14:15:57.32 INFO  ==> Updating 'my.cnf' with custom configuration
mariadb 14:15:57.32 INFO  ==> Setting user option
mariadb 14:15:57.35 INFO  ==> Installing database

Could it be that the issue is related with some access right problem to the volume folders in the host vagrant machine?
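One way to test that hypothesis is to compare the uid/gid the containerized process runs as with the numeric ownership of the mounted data directory (the pod name here is the one from the output above):

```shell
# Which uid/gid does the containerized mariadb process actually run as?
kubectl exec -it pod/academy-db-5547cdbc5-65k79 -- id

# Who owns the persisted data, numerically? A mismatch with the uid/gid
# above would explain why mariadb cannot write to its own data directory.
kubectl exec -it pod/academy-db-5547cdbc5-65k79 -- ls -ln /bitnami/mariadb
```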


By default, hostPath directories are created with permission 755, owned by the user and group of the kubelet. To use the directory, you can try adding the following to your deployment:

    spec:
      securityContext:
        fsGroup: <gid>

Where gid is the group used by the process in your container.
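For example, assuming the Bitnami mariadb image’s usual non-root uid/gid of 1001 (an assumption; verify with id inside the container), the relevant part of the pod template would look like:

```yaml
    spec:                    # the pod template spec inside the Deployment
      securityContext:
        fsGroup: 1001        # assumed gid; check with `kubectl exec ... -- id`
      containers:
        - name: academy-db
          # ... rest of the container spec unchanged ...
```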

Also, you could fix the issue on the host itself by changing the ownership of the folder you want to mount into the container:

    chown -R <uid>:<gid> /path/to/volume

where uid and gid are the user ID and group ID of the user your app runs as. As a quick but insecure workaround, you could instead open up the permissions completely:

    chmod -R 777 /path/to/volume

This should solve your issue.

But overall, a Deployment is not what you want to create in this case, because Deployments should not have state. For stateful apps there are StatefulSets in Kubernetes. Use one of those together with a volumeClaimTemplates section plus spec.securityContext.fsGroup, and k3s will create the persistent volume and the persistent volume claim for you, using its default storage class, which is local storage (on your node).
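A minimal sketch of that StatefulSet approach, reusing the names from above (the fsGroup value, storage size and image are assumptions to adapt), might look like:

```yaml
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: academy-db
    spec:
      serviceName: academy-db-service
      replicas: 1
      selector:
        matchLabels:
          name: academy-db
      template:
        metadata:
          labels:
            name: academy-db
        spec:
          securityContext:
            fsGroup: 1001            # assumption: gid of the mariadb process
          containers:
            - name: academy-db
              image: ''              # same image as in the Deployment above
              ports:
                - containerPort: 3306
              volumeMounts:
                - mountPath: /bitnami/mariadb
                  name: data
      volumeClaimTemplates:          # k3s provisions the PV/PVC from this
        - metadata:
            name: data
          spec:
            accessModes: [ "ReadWriteOnce" ]
            resources:
              requests:
                storage: 1Gi
```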
