Running scripts via Helm hooks

I have written pre- and post-upgrade hooks for my Helm chart, which get invoked when I run helm upgrade. My pre-upgrade hook is supposed to write some information to a file on the shared persistent storage volume. Somehow, I don't see this file getting created, though I can see the hook being invoked.
This is what my pre-upgrade hook looks like:
apiVersion: batch/v1
kind: Job
metadata:
  name: "{{ .Release.Name }}-preupgrade"
  labels:
    heritage: {{ .Release.Service | quote }}
    release: {{ .Release.Name | quote }}
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
  annotations:
    "helm.sh/hook": pre-upgrade
    "helm.sh/hook-weight": "0"
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    metadata:
      name: "{{ .Release.Name }}"
      labels:
        heritage: {{ .Release.Service | quote }}
        release: {{ .Release.Name | quote }}
        chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    spec:
      restartPolicy: Never
      containers:
        - name: pre-upgrade-job
          image: {{ .Values.registry }}/{{ .Values.imageRepo }}:{{ .Values.imageTag }}
          imagePullPolicy: {{ .Values.imagePullPolicy }}
          volumeMounts:
            - mountPath: {{ .Values.pvc.shared_storage_path }}/{{ template "fullname" . }}
              name: shared-pvc
          command: ['/bin/sh -c scripts/ {{ .Values.pvc.shared_storage_path }}/{{ template "fullname" . }}']
      volumes:
        - name: shared-pvc
          persistentVolumeClaim:
            claimName: {{ template "fullname" . }}-shared-pv-claim

My expectation is that the hook should be able to write information to the PVC volume, which was created before the upgrade. When I ran kubectl describe on the hook pod, I saw the following error:
Error: failed to start container "pre-upgrade-job": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused "exec: \"/bin/sh -c scripts/ /opt/flink/share/myfl-flink\": stat /bin/sh -c scripts/ /opt/flink/share/myfl-flink: no such file or directory"

Doesn't the hook mount the volume before running the command? Also, I'm packaging the script into the Docker image, so it should be present there.
I am unable to exec into the hook pod, as it goes into the Failed state.
Can anyone help me with this?
[Update] I added a sleep command so I could enter the pod and check whether the script is available and the mount path exists. All looks fine. I don't understand why this error comes up.
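For anyone trying the same debugging trick, the sleep mentioned in the update can be done by temporarily overriding the hook container's command (a sketch; the duration is arbitrary):

```yaml
# Temporary debug override for the hook container: keep it alive so you can
# kubectl exec into it and inspect the mounted volume and the script path.
command: ["/bin/sh", "-c", "sleep 3600"]
```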

Solution 1:

Looks like I needed to give the command differently. Each element of command: is passed to the container runtime as a single argv entry, with no shell to split the string on spaces, so the original one-string form was treated as the name of a single executable (hence the "no such file or directory" stat error):

command: ["/bin/sh", "-c", "scripts/", '{{ .Values.pvc.shared_storage_path }}/{{ template "fullname" . }}']
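To see why the one-string form failed, here is a small shell sketch (the echo payload is illustrative) of the difference between a single argv entry and a properly split argv:

```shell
# Kubernetes passes each element of `command:` as one argv entry; there is
# no shell to split a single string on spaces. Emulate the broken form by
# asking the shell to execute a literal file whose name contains spaces:
"/bin/sh -c echo hi" 2>/dev/null || echo "fails: no such file"

# The split form hands `-c` and the script to /bin/sh as separate argv
# entries, which is what the corrected command: list does:
/bin/sh -c 'echo hi'
```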