What steps did you take and what happened:
On our clusters we use Gatekeeper with an AssignImage mutation to rewrite Pod
images to use our private mirror. For roughly 2% of Pods this rewrite does not
happen while we roll over our node pools.
We run Gatekeeper in a fail-open configuration with three
gatekeeper-controller-manager replicas.
We investigated this by installing Gatekeeper in a controlled environment
(minikube), querying the webhook endpoint with curl in a tight loop, and
recording failures. Our test setup is outlined below. During scaling events we
observed failing requests. We root-caused this to the following two problems in
Gatekeeper:
On Gatekeeper pod startup, the readiness probe reports the pod as ready while
it is still unable to serve webhook traffic.
On Gatekeeper pod termination, the Service still points to the terminating pod
for a brief moment because Services update asynchronously. Gatekeeper does not
apply a grace period on server shutdown, which leads to refused connections.
What did you expect to happen:
Gatekeeper pods can serve requests for as long as they are registered as
endpoints of the Service.
Mitigations:
We found that the health and readiness probes are misconfigured. They report a
ready state as soon as the manager has started, even though the webhook is not
yet responding to requests.
While we can reconfigure the health probe to validate that the webhook server
is able to serve requests by passing --enable-tls-healthcheck=true to the
gatekeeper-controller-manager, this is not yet possible for the readiness
probe. If webhooks are enabled, the readiness probe should check actual service
health. We therefore propose to implement the behavior of
--enable-tls-healthcheck=true for the readiness probe as well and to enable it
by default for the readiness probe only.
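For illustration, a readiness check along these lines could be registered
through controller-runtime; the check name, the localhost:8443 address, and the
timeout are assumptions for this sketch, not Gatekeeper's actual wiring.

```go
// Sketch only: register a readiness check that performs a TLS dial against the
// local webhook listener, so readiness reports OK only once the webhook can
// actually accept connections.
package readiness

import (
	"crypto/tls"
	"net"
	"net/http"
	"time"

	"sigs.k8s.io/controller-runtime/pkg/manager"
)

func addWebhookReadyzCheck(mgr manager.Manager) error {
	return mgr.AddReadyzCheck("webhook-tls", func(_ *http.Request) error {
		dialer := &net.Dialer{Timeout: 2 * time.Second}
		// We only care that the TLS listener is up, not whether its serving
		// certificate verifies, hence InsecureSkipVerify.
		conn, err := tls.DialWithDialer(dialer, "tcp", "localhost:8443",
			&tls.Config{InsecureSkipVerify: true})
		if err != nil {
			return err
		}
		return conn.Close()
	})
}
```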
We also found that adding a preStop hook to the gatekeeper-controller-manager
further prevents failing requests caused by the webhook server terminating
before its endpoint is removed from the Kubernetes Service.
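For illustration, the kind of patch we mean looks roughly like this; the
15-second value is an assumption, should stay below the pod's
terminationGracePeriodSeconds, and the sleep action requires Kubernetes 1.30+
(or the PodLifecycleSleepAction feature gate), which we come back to below.

```yaml
# Sketch of a strategic merge patch, not the attached hotfix.yaml: delay
# container shutdown so the endpoint can be dropped from the Service before
# the webhook stops accepting connections. Adjust names to your installation.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gatekeeper-controller-manager
  namespace: gatekeeper-system
spec:
  template:
    spec:
      containers:
        - name: manager
          lifecycle:
            preStop:
              sleep:
                seconds: 15
```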
Both mitigations together yield zero failed requests over a 30-minute test
window with the test setup outlined below. Without the mitigations we saw
requests failing after less than a minute.
Anything else you would like to add:
Our test setup:
deployment.yaml
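Purely as an illustration (this is not the attached deployment.yaml), a probing
client along these lines reproduces what we describe above; the image, Service
address, and timings are assumptions.

```yaml
# Sketch of a test client: hammer the webhook Service as fast as possible and
# log every request that fails at the connection level (refused/reset).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webhook-prober
  namespace: gatekeeper-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webhook-prober
  template:
    metadata:
      labels:
        app: webhook-prober
    spec:
      containers:
        - name: curl
          image: curlimages/curl # pin a specific tag in real use
          command: ["/bin/sh", "-c"]
          args:
            - |
              while true; do
                curl -ks -o /dev/null --max-time 2 \
                  https://gatekeeper-webhook-service.gatekeeper-system.svc:443/ \
                  || echo "request failed at $(date)"
              done
```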
We have attached the kustomization we use to work around this problem until the
final fix lands upstream, here. It is a hack: it enables TLS health checks,
points the readiness probe at the /healthz endpoint, and adds the preStop hook
to buy the pod time to be removed from the Service and to finish in-flight
requests.
Kustomization
kustomization.yaml
hotfix.yaml
Environment:
Kubernetes version (kubectl version): 1.31.0
The first point is that the health check is not what matters to us; it is the readiness probe that causes our problems, since it controls whether traffic is routed to a pod, and with the current implementation it does not indicate whether the webhook is able to serve traffic. I have already created a pull request for this issue.
The second issue is that a preStop hook of the sleep type is only supported on Kubernetes 1.30+ without modifying feature gates, which means Gatekeeper would have to raise its oldest supported Kubernetes version to 1.30. Kubernetes 1.29 goes EOL at the end of February 2025, so under Gatekeeper's version skew policy that is only about 1.5 months away.
Alternatively, we can wait after receiving a SIGTERM before cancelling all contexts by inserting a waiting period into the signal handler.
This is one of the few cases where a "sleep for x" is the proper solution, I think. Unless someone has an idea for how to wait until an endpoint has been removed from a Service and the change has propagated through the routing tables, I do not see any alternatives.
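To make that concrete, a delayed signal handler could look roughly like the following sketch; the 10-second delay is an assumption, and this is modeled on controller-runtime's SetupSignalHandler rather than being Gatekeeper's actual code.

```go
// Sketch: wait for a grace period after the first SIGTERM/SIGINT before
// cancelling the root context, so the pod keeps serving webhook requests
// while the Service drops its endpoint.
package main

import (
	"context"
	"os"
	"os/signal"
	"syscall"
	"time"
)

func setupDelayedSignalHandler(shutdownDelay time.Duration) context.Context {
	ctx, cancel := context.WithCancel(context.Background())

	c := make(chan os.Signal, 2)
	signal.Notify(c, os.Interrupt, syscall.SIGTERM)
	go func() {
		<-c                       // first signal: start the grace period
		time.Sleep(shutdownDelay) // keep serving while the endpoint is removed
		cancel()                  // now let the manager shut down the webhook
		<-c                       // second signal: exit immediately
		os.Exit(1)
	}()

	return ctx
}

func main() {
	// In Gatekeeper this context would be handed to mgr.Start instead.
	ctx := setupDelayedSignalHandler(10 * time.Second)
	<-ctx.Done()
}
```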
Co-Authored by @nilsfed