grpcproxy: use metadata instead of context.WithValue in withClientAuthToken #19033
base: main
Conversation
…uth token Signed-off-by: Kristoffer Johansson <kristoffer.johansson@gcore.com>
Added a test case which reproduces the issue; with the included change it passes, but without it, it fails.
krijohs force-pushed from 3d3b143 to 8d9e89f
Hi @krijohs, thanks for your pull request. Ideally, we would want to discuss the issue and possible solutions before a pull request. Could you please open an issue so other members with more expertise in this area can jump in? Thanks again.
Hello @ivanvc, ok, got it. Will open an issue so possible solutions can be discussed, thanks.
Change to use metadata instead of `context.WithValue` to ensure each proxy watcher client has a new stream created with its token.

Previously, `context.WithValue` resulted in `streamKeyFromCtx` returning an empty string in the clientv3 watcher, causing stream reuse. When new clients connected to the proxy after the token expired (the token for the initial client that connected), the reused stream's context would still contain the expired token. This caused auth failures when `isWatchPermitted` on the cluster checked the stream's context, resulting in hanging proxy watcher clients.

The issue can be reproduced by setting a low `--auth-token-ttl` on the cluster, connecting one client to the proxy first, and then connecting a second one after the token has expired.

Also adds an increment of the `watchersCoalescing` metric when watchers have been coalesced.