
Kyma Lost Databases

LuizGomes
Participant
0 Kudos
With each Kyma update, the databases cease to exist, which causes great inconvenience.

Has anyone been through this? Is the default storage class the best alternative?

Are there other possible configurations?
    
{"_p":"F","cluster_identifier":"api.c-71b475b.kyma.internal.live.k8s.ondemand.com","log":"2023-03-14 14:23:52.813 UTC [4690] FATAL: database \"sales_force_db_teste\" does not exist","stream":"stderr","time":"2023-03-14T14:23:52.813242278Z"}
{"_p":"F","cluster_identifier":"api.c-71b475b.kyma.internal.live.k8s.ondemand.com","log":"2023-03-14 14:23:46.078 UTC [4689] FATAL: database \"sales_force_db_teste\" does not exist","stream":"stderr","time":"2023-03-14T14:23:46.079042305Z"}
{"_p":"F","cluster_identifier":"api.c-71b475b.kyma.internal.live.k8s.ondemand.com","log":"2023-03-14 14:23:40.594 UTC [4688] FATAL: database \"schedules\" does not exist","stream":"stderr","time":"2023-03-14T14:23:40.594712373Z"}
{"_p":"F","cluster_identifier":"api.c-71b475b.kyma.internal.live.k8s.ondemand.com","log":"2023-03-14 14:23:37.586 UTC [4687] FATAL: database \"schedules\" does not exist","stream":"stderr","time":"2023-03-14T14:23:37.58789053Z"}
{"_p":"F","cluster_identifier":"api.c-71b475b.kyma.internal.live.k8s.ondemand.com","log":"2023-03-14 14:23:34.575 UTC [4686] FATAL: database \"schedules\" does not exist","stream":"stderr","time":"2023-03-14T14:23:34.575120791Z"}
{"_p":"F","cluster_identifier":"api.c-71b475b.kyma.internal.live.k8s.ondemand.com","log":"2023-03-14 14:23:31.567 UTC [4685] FATAL: database \"schedules\" does not exist","stream":"stderr","time":"2023-03-14T14:23:31.567486737Z"}
{"_p":"F","cluster_identifier":"api.c-71b475b.kyma.internal.live.k8s.ondemand.com","log":"2023-03-14 14:23:28.558 UTC [4684] FATAL: database \"schedules\" does not exist","stream":"stderr","time":"2023-03-14T14:23:28.558880422Z"}
{"_p":"F","cluster_identifier":"api.c-71b475b.kyma.internal.live.k8s.ondemand.com","log":"2023-03-14 14:23:25.549 UTC [4683] FATAL: database \"schedules\" does not exist","stream":"stderr","time":"2023-03-14T14:23:25.549097789Z"}
{"_p":"F","cluster_identifier":"api.c-71b475b.kyma.internal.live.k8s.ondemand.com","log":"2023-03-14 14:23:22.540 UTC [4682] FATAL: database \"schedules\" does not exist","stream":"stderr","time":"2023-03-14T14:23:22.541105437Z"}
{"_p":"F","cluster_identifier":"api.c-71b475b.kyma.internal.live.k8s.ondemand.com","log":"2023-03-14 14:23:19.531 UTC [4681] FATAL: database \"schedules\" does not exist","stream":"stderr","time":"2023-03-14T14:23:19.531693001Z"}
{"_p":"F","cluster_identifier":"api.c-71b475b.kyma.internal.live.k8s.ondemand.com","log":"2023-03-14 14:23:16.521 UTC [4680] FATAL: database \"schedules\" does not exist","stream":"stderr","time":"2023-03-14T14:23:16.521311094Z"}
{"_p":"F","cluster_identifier":"api.c-71b475b.kyma.internal.live.k8s.ondemand.com","log":"2023-03-14 14:18:41.915 UTC [4674] FATAL: database \"sales_force_db_teste\" does not exist","stream":"stderr","time":"2023-03-14T14:18:41.915709622Z"}
{"_p":"F","cluster_identifier":"api.c-71b475b.kyma.internal.live.k8s.ondemand.com","log":"2023-03-14 14:18:34.923 UTC [4673] FATAL: database \"sales_force_db_teste\" does not exist","stream":"stderr","time":"2023-03-14T14:18:34.923583994Z"}
{"_p":"F","cluster_identifier":"api.c-71b475b.kyma.internal.live.k8s.ondemand.com","log":"2023-03-14 14:18:05.634 UTC [4672] FATAL: database \"schedules\" does not exist","stream":"stderr","time":"2023-03-14T14:18:05.634250332Z"}
{"_p":"F","cluster_identifier":"api.c-71b475b.kyma.internal.live.k8s.ondemand.com","log":"2023-03-14 14:18:02.626 UTC [4671] FATAL: database \"schedules\" does not exist","stream":"stderr","time":"2023-03-14T14:18:02.626367159Z"}
{"_p":"F","cluster_identifier":"api.c-71b475b.kyma.internal.live.k8s.ondemand.com","log":"2023-03-14 14:17:59.618 UTC [4670] FATAL: database \"schedules\" does not exist","stream":"stderr","time":"2023-03-14T14:17:59.618177769Z"}
{"_p":"F","cluster_identifier":"api.c-71b475b.kyma.internal.live.k8s.ondemand.com","log":"2023-03-14 14:17:56.610 UTC [4668] FATAL: database \"schedules\" does not exist","stream":"stderr","time":"2023-03-14T14:17:56.610892022Z"}
{"_p":"F","cluster_identifier":"api.c-71b475b.kyma.internal.live.k8s.ondemand.com","log":"2023-03-14 14:17:53.603 UTC [4667] FATAL: database \"schedules\" does not exist","stream":"stderr","time":"2023-03-14T14:17:53.603236246Z"}
{"_p":"F","cluster_identifier":"api.c-71b475b.kyma.internal.live.k8s.ondemand.com","log":"2023-03-14 14:17:50.596 UTC [4666] FATAL: database \"schedules\" does not exist","stream":"stderr","time":"2023-03-14T14:17:50.596286447Z"}
{"_p":"F","cluster_identifier":"api.c-71b475b.kyma.internal.live.k8s.ondemand.com","log":"2023-03-14 14:17:47.589 UTC [4665] FATAL: database \"schedules\" does not exist","stream":"stderr","time":"2023-03-14T14:17:47.590095299Z"}
{"_p":"F","cluster_identifier":"api.c-71b475b.kyma.internal.live.k8s.ondemand.com","log":"2023-03-14 14:17:44.584 UTC [4664] FATAL: database \"schedules\" does not exist","stream":"stderr","time":"2023-03-14T14:17:44.584283313Z"}
{"_p":"F","cluster_identifier":"api.c-71b475b.kyma.internal.live.k8s.ondemand.com","log":"2023-03-14 14:17:41.573 UTC [4663] FATAL: database \"schedules\" does not exist","stream":"stderr","time":"2023-03-14T14:17:41.57361979Z"}
kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgres-pvc-storage-default
  labels:
    type: local
spec:
  storageClassName: default
  capacity:
    storage: 15Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/mnt/data"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  labels:
    app: postgres
  name: postgres-pvc-storage-default
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: default
  resources:
    requests:
      storage: 15Gi

Postgres deployment and service:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  selector:
    matchLabels:
      app: postgres
  replicas: 1
  revisionHistoryLimit: 2 ### default is 10
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:12.10
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_USER
              valueFrom:
                secretKeyRef:
                  name: postgres-secret
                  key: postgres-user
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-secret
                  key: postgres-password
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgresdb
      volumes:
        - name: postgresdb
          persistentVolumeClaim:
            claimName: postgres-pvc-storage-default
---
kind: Service
apiVersion: v1
metadata:
  name: postgres-service
spec:
  selector:
    app: postgres
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 5432
      targetPort: 5432



Accepted Solutions (0)

Answers (3)


gabbi
Advisor

Hi Luiz,

The issue here is that you are using the wrong volume type (hostPath).

As per K8s documentation, https://kubernetes.io/docs/concepts/storage/volumes/#hostpath

"A hostPath volume mounts a file or directory from the host node's filesystem into your Pod."

New nodes are provisioned during K8s updates, so anything stored on a node's filesystem is lost. You should not use hostPath; use storage backed by hyperscaler disks instead.

You do not need to explicitly define a PersistentVolume; a PersistentVolumeClaim should suffice.

Please check these examples:

https://github.com/SAP-samples/kyma-runtime-extension-samples/blob/main/database-mssql/k8s/pvc.yaml

https://github.com/SAP-samples/kyma-runtime-extension-samples/blob/main/database-mssql/k8s/deploymen...
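
Along the lines of those samples, a minimal sketch of a claim that relies on the cluster's default, hyperscaler-backed storage class might look like this (the name and size below are placeholder assumptions, not taken from the samples):

# Sketch: no PersistentVolume and no hostPath. With storageClassName
# omitted, the cluster's default storage class provisions a
# hyperscaler-backed disk for this claim.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgres-data
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 15Gi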

BR

Gaurav

LuizGomes
Participant
0 Kudos

Thank you so much for your answer.

I followed the settings. Even after deleting the Postgres service, it recognized the data in the folder, and I needed to add a new path for the service. So it seems to me that in these cases the previous settings worked, but let's test your proposed scenario.

BR,

Luiz Gomes

gabbi
Advisor

Hi Luiz,

In either case, do not use hostPath, as its data will always get deleted.

BR

Gaurav

gabbi
Advisor
0 Kudos

Hi Luiz,

Data loss happened due to using "hostPath". If you have other issues, please consider raising a ticket.

BR

Gaurav

LuizGomes
Participant
0 Kudos

Hi gabbi,

I was testing removal of the Postgres deployment, and when I re-applied the files, the databases were lost again.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  selector:
    matchLabels:
      app: postgres
  replicas: 1
  revisionHistoryLimit: 2 ### default is 10
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:12.10
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_USER
              valueFrom:
                secretKeyRef:
                  name: postgres-secret
                  key: postgres-user
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-secret
                  key: postgres-password
          volumeMounts:
            - mountPath: /var/lib/postgresql/dbdata
              name: postgresdb
      volumes:
        - name: postgresdb
          persistentVolumeClaim:
            claimName: postgres-data-pvc
---
kind: Service
apiVersion: v1
metadata:
  name: postgres-service
spec:
  selector:
    app: postgres
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 5432
      targetPort: 5432


kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgres-data-pvc
  namespace: default
  labels:
    app: postgres
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 15Gi
2023-03-17 12:28:51.720 UTC [127] FATAL: database "schedules" does not exist
2023-03-17 12:28:54.730 UTC [128] FATAL: database "schedules" does not exist
2023-03-17 12:30:00.263 UTC [130] FATAL: database "sales_force_db_teste" does not exist
2023-03-17 12:30:00.271 UTC [131] FATAL: database "sales_force_db_teste" does not exist
2023-03-17 12:30:23.573 UTC [132] FATAL: database "schedules" does not exist
2023-03-17 12:30:26.584 UTC [133] FATAL: database "schedules" does not exist
2023-03-17 12:30:29.595 UTC [134] FATAL: database "schedules" does not exist
2023-03-17 12:30:32.602 UTC [136] FATAL: database "schedules" does not exist
2023-03-17 12:30:35.612 UTC [137] FATAL: database "schedules" does not exist
2023-03-17 12:30:38.627 UTC [138] FATAL: database "schedules" does not exist
2023-03-17 12:30:41.637 UTC [139] FATAL: database "schedules" does not exist
2023-03-17 12:30:44.646 UTC [140] FATAL: database "schedules" does not exist
LuizGomes
Participant
0 Kudos

SAP recommendation:

"

My colleagues analyzed the configuration you provided.

You have two possibilities: either change accessModes: to ReadWriteMany, or use statefulset. in this configuration it should work with both ReadWriteOnce or ReadWriteMany

Both ways should retain the data in deployment.

Let us know if you have further questions or issues.




"

gabbi
Advisor
Advisor
0 Kudos

Hi Luiz,

Deleting a deployment is different from what happens when Kyma gets updated; in that case, new nodes are created and your pods are deleted and recreated.

When you delete a deployment or a StatefulSet, whether the data is deleted or retained depends on the PersistentVolume's reclaim policy.

https://kubernetes.io/docs/tasks/administer-cluster/change-pv-reclaim-policy/#why-change-reclaim-pol...
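
For example, as shown in the K8s docs linked above, you can switch a bound volume's reclaim policy to Retain so the volume survives deletion of its claim (the PV name below is a placeholder; look it up with kubectl get pv):

kubectl patch pv <your-pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'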

Best regards,

Gaurav

quovadis
Product and Topic Expert

Hello,

Kyma supports RWO (ReadWriteOnce) PVCs regardless of the underlying storage class.

An RWO persistent volume can be attached to only one node, so it must reside on the same node as the worker pod using it for the volume binding to work.

And with Deployments, there is not much guarantee that your PVC's volume and its worker pod do not become separated during the cluster's lifetime...

Thus you might be better off using StatefulSets rather than Deployments.
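
A minimal sketch of that approach, reusing the names from the manifests posted earlier (the volumeClaimTemplates block makes the StatefulSet create and manage its own PVC, keeping the pod and its volume bound together; serviceName normally points at a headless Service, so reusing postgres-service here is an assumption):

# Sketch: Postgres as a StatefulSet instead of a Deployment.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres-service # usually a headless Service; assumption here
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:12.10
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_USER
              valueFrom:
                secretKeyRef:
                  name: postgres-secret
                  key: postgres-user
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-secret
                  key: postgres-password
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgresdb
  # Each replica gets its own PVC, provisioned from the default storage class.
  volumeClaimTemplates:
    - metadata:
        name: postgresdb
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 15Gi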

What is a solution?

Considering you are on managed Kyma (SAP BTP, Kyma runtime), the best option is to raise a support ticket to mitigate the risk of any data loss.

I hope that helps. Best regards, Piotr

PS.

You may want to consider the following advice; it has always worked for me. However, if you do it, it is at your own risk, and do not do it in production!

LuizGomes
Participant
0 Kudos

Thank you so much for your answer.

To open a ticket with SAP, I need to make sure that the settings are correct; the bureaucracy of opening a ticket and the slow responses prevent me from doing anything without being sure.

But from what I understood of your answer, you consider Gaurav Abbi's suggestion correct but insufficient; that is, we will still have this big problem of data loss. So I'm going to open a ticket with SAP for the scenario I have.

The Kyma runtime has been very unstable: services disappear, services reappear, data disappears, and all of this has an impact. It has been a bad experience working with Kyma + SAP. I hope they resolve this soon.