I’m trying to move a number of Docker containers from a Linux server to a test Kubernetes-based deployment running on a different Linux machine, where I’ve installed Kubernetes as a k3s instance inside a Vagrant virtual machine.
One of these containers is a MariaDB instance with a bind-mounted volume. This is the relevant portion of the docker-compose file I’m using:
```yaml
academy-db:
  image: 'docker.io/bitnami/mariadb:10.3-debian-10'
  container_name: academy-db
  environment:
    - ALLOW_EMPTY_PASSWORD=yes
    - MARIADB_USER=bn_moodle
    - MARIADB_DATABASE=bitnami_moodle
  volumes:
    - type: bind
      source: ./volumes/moodle/mariadb
      target: /bitnami/mariadb
  ports:
    - '3306:3306'
```
Note that this works correctly (the container is used by another application container, which connects to it and reads data from the database without problems).
I then tried to convert this to a Kubernetes configuration, copying the volume folder to the destination machine and using the following Kubernetes .yaml files: a Deployment, a PersistentVolumeClaim and a PersistentVolume, as well as a NodePort Service to make the container accessible. For the data volume, I’m using a simple hostPath volume pointing to the contents copied from the docker-compose’s bind mounts.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: academy-db
spec:
  replicas: 1
  selector:
    matchLabels:
      name: academy-db
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        name: academy-db
    spec:
      containers:
        - env:
            - name: ALLOW_EMPTY_PASSWORD
              value: "yes"
            - name: MARIADB_DATABASE
              value: bitnami_moodle
            - name: MARIADB_USER
              value: bn_moodle
          image: docker.io/bitnami/mariadb:10.3-debian-10
          name: academy-db
          ports:
            - containerPort: 3306
          resources: {}
          volumeMounts:
            - mountPath: /bitnami/mariadb
              name: academy-db-claim
      restartPolicy: Always
      volumes:
        - name: academy-db-claim
          persistentVolumeClaim:
            claimName: academy-db-claim
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: academy-db-pv
  labels:
    type: local
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: "<...full path to deployment folder on the server...>/volumes/moodle/mariadb"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: academy-db-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: ""
  volumeName: academy-db-pv
---
apiVersion: v1
kind: Service
metadata:
  name: academy-db-service
spec:
  type: NodePort
  ports:
    - name: "3306"
      port: 3306
      targetPort: 3306
  selector:
    name: academy-db
```
After applying the deployment, everything seems to work fine, in the sense that with kubectl get ... the pod and the volumes appear to be running correctly:
```
kubectl get pods
NAME                         READY   STATUS    RESTARTS   AGE
academy-db-5547cdbc5-65k79   1/1     Running   9          15d
. . .

kubectl get pv
NAME            CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                      STORAGECLASS   REASON   AGE
academy-db-pv   1Gi        RWO            Retain           Bound    default/academy-db-claim                           15d
. . .

kubectl get pvc
NAME               STATUS   VOLUME          CAPACITY   ACCESS MODES   STORAGECLASS   AGE
academy-db-claim   Bound    academy-db-pv   1Gi        RWO                           15d
. . .
```
This is the pod’s log:
```
kubectl logs pod/academy-db-5547cdbc5-65k79

mariadb 10:32:05.66
mariadb 10:32:05.66 Welcome to the Bitnami mariadb container
mariadb 10:32:05.66 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-mariadb
mariadb 10:32:05.66 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-mariadb/issues
mariadb 10:32:05.66
mariadb 10:32:05.67 INFO  ==> ** Starting MariaDB setup **
mariadb 10:32:05.68 INFO  ==> Validating settings in MYSQL_*/MARIADB_* env vars
mariadb 10:32:05.68 WARN  ==> You set the environment variable ALLOW_EMPTY_PASSWORD=yes. For safety reasons, do not use this flag in a production environment.
mariadb 10:32:05.69 INFO  ==> Initializing mariadb database
mariadb 10:32:05.69 WARN  ==> The mariadb configuration file '/opt/bitnami/mariadb/conf/my.cnf' is not writable. Configurations based on environment variables will not be applied for this file.
mariadb 10:32:05.70 INFO  ==> Using persisted data
mariadb 10:32:05.71 INFO  ==> Running mysql_upgrade
mariadb 10:32:05.71 INFO  ==> Starting mariadb in background
```
and this is the output of the describe pod command:
```
Name:         academy-db-5547cdbc5-65k79
Namespace:    default
Priority:     0
Node:         zdmp-kube/192.168.33.99
Start Time:   Tue, 22 Dec 2020 13:33:43 +0000
Labels:       name=academy-db
              pod-template-hash=5547cdbc5
Annotations:  <none>
Status:       Running
IP:           10.42.0.237
IPs:
  IP:           10.42.0.237
Controlled By:  ReplicaSet/academy-db-5547cdbc5
Containers:
  academy-db:
    Container ID:   containerd://68af105f15a1f503bbae8a83f1b0a38546a84d5e3188029f539b9c50257d2f9a
    Image:          docker.io/bitnami/mariadb:10.3-debian-10
    Image ID:       docker.io/bitnami/mariadb@sha256:1d8ca1757baf64758e7f13becc947b9479494128969af5c0abb0ef544bc08815
    Port:           3306/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Thu, 07 Jan 2021 10:32:05 +0000
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Thu, 07 Jan 2021 10:22:03 +0000
      Finished:     Thu, 07 Jan 2021 10:32:05 +0000
    Ready:          True
    Restart Count:  9
    Environment:
      ALLOW_EMPTY_PASSWORD:  yes
      MARIADB_DATABASE:      bitnami_moodle
      MARIADB_USER:          bn_moodle
      MARIADB_PASSWORD:      bitnami
    Mounts:
      /bitnami/mariadb from academy-db-claim (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-x28jh (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  academy-db-claim:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  academy-db-claim
    ReadOnly:   false
  default-token-x28jh:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-x28jh
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason          Age                   From     Message
  ----    ------          ----                  ----     -------
  Normal  Pulled          15d (x8 over 15d)     kubelet  Container image "docker.io/bitnami/mariadb:10.3-debian-10" already present on machine
  Normal  Created         15d (x8 over 15d)     kubelet  Created container academy-db
  Normal  Started         15d (x8 over 15d)     kubelet  Started container academy-db
  Normal  SandboxChanged  18m                   kubelet  Pod sandbox changed, it will be killed and re-created.
  Normal  Pulled          8m14s (x2 over 18m)   kubelet  Container image "docker.io/bitnami/mariadb:10.3-debian-10" already present on machine
  Normal  Created         8m14s (x2 over 18m)   kubelet  Created container academy-db
  Normal  Started         8m14s (x2 over 18m)   kubelet  Started container academy-db
```
Later, though, I noticed that the client application has problems connecting. After some investigation I concluded that, although the pod is running, the MariaDB process running inside it could have crashed just after startup. If I enter the container with kubectl exec and try to run, for instance, the mysql client, I get:
```
kubectl exec -it pod/academy-db-5547cdbc5-65k79 -- /bin/bash

I have no name!@academy-db-5547cdbc5-65k79:/$ mysql
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/opt/bitnami/mariadb/tmp/mysql.sock' (2)
```
Any idea what could be causing the problem, or how I could investigate the issue further? (Note: I’m not an expert in Kubernetes; I only started learning it recently.)
Edit: following @Novo’s comment, I tried deleting the volume folder and letting MariaDB recreate the database from scratch.
Now my pod doesn’t even start, and ends up in CrashLoopBackOff.
By comparing the pod logs, I noticed that the previous MariaDB log contained these messages:
```
...
mariadb 10:32:05.69 WARN  ==> The mariadb configuration file '/opt/bitnami/mariadb/conf/my.cnf' is not writable. Configurations based on environment variables will not be applied for this file.
mariadb 10:32:05.70 INFO  ==> Using persisted data
mariadb 10:32:05.71 INFO  ==> Running mysql_upgrade
mariadb 10:32:05.71 INFO  ==> Starting mariadb in background
```
which are now replaced with:
```
...
mariadb 14:15:57.32 INFO  ==> Updating 'my.cnf' with custom configuration
mariadb 14:15:57.32 INFO  ==> Setting user option
mariadb 14:15:57.35 INFO  ==> Installing database
```
Could the issue be related to some access-rights problem on the volume folders in the host Vagrant machine?
Answer
By default, hostPath directories are created with permission 755, owned by the user and group of the kubelet. To use the directory, you can try adding the following to your deployment:
```yaml
spec:
  securityContext:
    fsGroup: <gid>
```
Where gid is the group used by the process in your container.
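In this deployment, the securityContext belongs in the pod template’s spec (it is a sibling of containers, not part of the container itself). A minimal sketch, assuming the Bitnami MariaDB image runs as UID/GID 1001 (verify this, e.g. by running id inside the pod with kubectl exec):

```yaml
# Sketch only: fsGroup 1001 is an assumption based on Bitnami's non-root images.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: academy-db
spec:
  replicas: 1
  selector:
    matchLabels:
      name: academy-db
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        name: academy-db
    spec:
      securityContext:
        fsGroup: 1001        # group of the mariadb process inside the container
      containers:
        - name: academy-db
          image: docker.io/bitnami/mariadb:10.3-debian-10
          # ...rest of the container spec unchanged...
```

Note that fsGroup may not be applied to plain hostPath volumes, so if this alone doesn’t help, fixing ownership on the host (below) is the more reliable route.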
Alternatively, you can fix the issue on the host itself by changing the ownership of the folder you want to mount into the container:

```
chown -R <uid>:<gid> /path/to/volume
```

where uid and gid are the user ID and group ID of the process in your app’s container.
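For instance, assuming again that the Bitnami MariaDB process runs as UID/GID 1001, this would be (on the Vagrant host):

```
sudo chown -R 1001:1001 <...full path to deployment folder on the server...>/volumes/moodle/mariadb
```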
A blunter, less secure option is to open the permissions up entirely:

```
chmod -R 777 /path/to/volume
```

Either approach should solve your issue.
Overall, though, a Deployment is not what you want to create in this case, because Deployments should not have state. For stateful apps, Kubernetes has StatefulSets. Use one of those together with a volumeClaimTemplate plus spec.securityContext.fsGroup, and k3s will create the persistent volume and the persistent volume claim for you, using its default storage class, which is local storage (on your node).
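A minimal sketch of that approach, again assuming fsGroup 1001 and k3s’s default local-path storage class (leave storageClassName unset to use the cluster default):

```yaml
# Sketch only: fsGroup 1001 and the reliance on k3s's default storage class are assumptions.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: academy-db
spec:
  serviceName: academy-db-service
  replicas: 1
  selector:
    matchLabels:
      name: academy-db
  template:
    metadata:
      labels:
        name: academy-db
    spec:
      securityContext:
        fsGroup: 1001
      containers:
        - name: academy-db
          image: docker.io/bitnami/mariadb:10.3-debian-10
          env:
            - name: ALLOW_EMPTY_PASSWORD
              value: "yes"
            - name: MARIADB_DATABASE
              value: bitnami_moodle
            - name: MARIADB_USER
              value: bn_moodle
          ports:
            - containerPort: 3306
          volumeMounts:
            - name: academy-db-data
              mountPath: /bitnami/mariadb
  volumeClaimTemplates:
    - metadata:
        name: academy-db-data
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
        # no storageClassName: let k3s use its default (local-path)
```

A StatefulSet is normally paired with a headless Service for stable network identities; the existing NodePort service can still be used for external access. Also note that this provisions a fresh volume, so you would have to import your existing data into it.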