Add node selector for the check-new-samples job and a livenessProbe. #35
Conversation
@@ -53,6 +53,12 @@ spec:
          resources:
            {{- toYaml .Values.resources | nindent 10 }}
          {{- include "seqr.environment" $ | nindent 8 }}
          livenessProbe:
We've seen some strange behavior recently with the seqr service failing to come up correctly during a re-spin.
My best guess at what's been happening here:
- our container keeps running regardless of whether the backgrounded server is actually up (`sleep 100000`)
- gunicorn never comes up successfully if `gcloud` or something else `init`-related is failing. Testing this locally showed that a single failing worker will bring down all N worker processes.
- the readiness probe never succeeds on that pod
- we can end up in a correctly-serving state if we have multiple pods running (1 running correctly and 1 not)
- we can also end up in a fully down state if both pods happen to have an init issue. I'm not sure this can happen during a deployment (the internet points to the rolling deploy mechanism using readiness rather than liveness probes), but it has happened during standard pod rotations.
Adding the livenessProbe will help k8s know the container is DOA and should be kicked.
This got mentioned in slack many months ago: https://the-tgg.slack.com/archives/C01FRFB36LR/p1682518495987779
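For reference, a minimal sketch of what the added probe might look like. The path, port, and timing values here are illustrative assumptions, not the values from this diff:

```yaml
livenessProbe:
  httpGet:
    path: /status        # hypothetical health endpoint served by gunicorn
    port: 8000           # hypothetical container port
  initialDelaySeconds: 30  # give init (gcloud etc.) time to finish before probing
  periodSeconds: 30
  failureThreshold: 3      # ~90s of consecutive failures before the kubelet restarts the container
```

Unlike the readiness probe, a failing liveness probe causes the kubelet to restart the container, so a pod whose init failed gets kicked instead of sitting behind a `sleep` forever.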
@@ -55,5 +55,9 @@ spec:
        {{- with $.Values.deployment_sidecars }}
          {{- tpl . $ | nindent 8 }}
        {{- end }}
        {{- with $.Values.nodeSelector }}
        nodeSelector:
I just noticed this was missing and added it. I'm not sure we've ever seen this workload scheduled off default-pool, but it's certainly a possibility.
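With the `with` guard above, the field is only rendered when a value is set. A hypothetical `values.yaml` entry to pin the workload to a pool (the label value is an illustrative GKE example, not from this PR):

```yaml
# values.yaml (hypothetical)
nodeSelector:
  cloud.google.com/gke-nodepool: default-pool
```

If `nodeSelector` is left empty, the template block renders nothing and the scheduler places the pod on any eligible node, which is the current behavior.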
This is a result of the investigation into the downtime incident.