Is this a BUG REPORT or FEATURE REQUEST?:
FEATURE REQUEST
What happened:
Since we are using the Kubernetes-native way to connect to the target clusters, which is nothing but the bearer token of a service account, it may be a good idea to keep refreshing those credentials. We can add a token to a service account as described in https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/#to-create-additional-api-tokens.
We can keep two tokens at any time, and every time before the controller validates connectivity, it can pick the latest token for the validation and delete the old token if its lastUsedTimeStamp is more than 30 minutes old. As soon as it deletes the oldest token, it should create a new one.
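For reference, creating an additional API token per that doc means creating an empty service-account-token Secret and letting the token controller populate it. A minimal client-go sketch, assuming client-go v0.18+ (the helper name createAdditionalToken and its parameters are illustrative, not part of the controller today):

```go
package tokens

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createAdditionalToken asks the token controller for a fresh token by
// creating an empty service-account-token Secret annotated with the
// service account's name; Kubernetes fills in the token data afterwards.
func createAdditionalToken(ctx context.Context, cs kubernetes.Interface, ns, saName, secretName string) error {
	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{
			Name: secretName,
			Annotations: map[string]string{
				// "kubernetes.io/service-account.name"
				corev1.ServiceAccountNameKey: saName,
			},
		},
		Type: corev1.SecretTypeServiceAccountToken,
	}
	_, err := cs.CoreV1().Secrets(ns).Create(ctx, secret, metav1.CreateOptions{})
	return err
}
```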
What you expected to happen:
For every reconciliation at the cluster-controller (see the sketch after this list):
- Get the best (most recently created) token out of the available tokens.
- Validate the oldest token and delete it if its lastUsedTimeStamp is more than 30 minutes old.
- Delete the old token only if connectivity is successful with the other token.
- Create a new token if there is only one token in the list.
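A minimal sketch of that per-reconcile policy, assuming a 30-minute staleness window; Token, checkConnectivity, deleteToken, and createToken are hypothetical names standing in for whatever the cluster-controller actually uses:

```go
package tokens

import (
	"context"
	"sort"
	"time"
)

// Token is an illustrative view of one stored credential; the real
// controller would derive these from service-account-token Secrets.
type Token struct {
	Name              string
	CreatedAt         time.Time
	LastUsedTimeStamp time.Time
}

const staleAfter = 30 * time.Minute

// rotateTokens applies the policy above on each reconciliation: validate
// with the newest token, drop a stale oldest token only once the newest
// one is known to work, and keep two tokens available at all times.
func rotateTokens(
	ctx context.Context,
	tokens []Token,
	checkConnectivity func(context.Context, Token) error, // hypothetical hook
	deleteToken func(context.Context, Token) error, // hypothetical hook
	createToken func(context.Context) error, // hypothetical hook
) error {
	if len(tokens) == 0 {
		return createToken(ctx)
	}

	// Pick the best (most recently created) token for the connectivity check.
	sort.Slice(tokens, func(i, j int) bool {
		return tokens[i].CreatedAt.After(tokens[j].CreatedAt)
	})
	newest := tokens[0]
	if err := checkConnectivity(ctx, newest); err != nil {
		// Never delete anything while the replacement token is unproven.
		return err
	}

	// Delete the oldest token only if it is stale and the newest one works.
	if len(tokens) > 1 {
		oldest := tokens[len(tokens)-1]
		if time.Since(oldest.LastUsedTimeStamp) > staleAfter {
			if err := deleteToken(ctx, oldest); err != nil {
				return err
			}
			tokens = tokens[:len(tokens)-1]
		}
	}

	// Create a new token as soon as only one is left, so two exist again.
	if len(tokens) == 1 {
		return createToken(ctx)
	}
	return nil
}
```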
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Environment:
- manager version:
- Kubernetes version:
$ kubectl version -o yaml
Other debugging information (if applicable):
- controller logs:
$ kubectl logs