Is this a BUG REPORT or FEATURE REQUEST?:
FEATURE REQUEST
What happened:
To communicate with target clusters, the control plane cluster must have credentials to access the managed clusters. Since we have made a conscious decision not to get into cluster creation as part of this project, we need to establish a pattern for providing managed cluster access information to the control plane cluster.
As part of the Bring Your Own Cluster (BYOC) approach, we should provide multiple options:
- Kubernetes-native way
- AWS IAM Auth (Heptio authenticator?) for AWS clusters
- A GCP-specific way, if one exists
We should support the Kubernetes-native way in the first release and can add cloud-native options in future releases.
What you expected to happen:
It should be as simple as running a CLI command on the managed cluster that creates a ServiceAccount, Role, and RoleBinding and extracts the secret information, which can then be presented to the "manager" as part of a custom resource.
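A minimal sketch of the manifests such a CLI command could apply on the managed cluster. All names (`manager-access`, the namespace, the rule set) are illustrative assumptions, not a decided API:

```yaml
# Hypothetical RBAC objects a bootstrap command could create on the
# managed cluster; names and granted verbs are placeholders.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: manager-access
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: manager-access
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["pods", "services"]
    verbs: ["get", "list", "watch", "create", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: manager-access
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: manager-access
subjects:
  - kind: ServiceAccount
    name: manager-access
    namespace: default
```

The command could then read the ServiceAccount's token secret (e.g. via `kubectl get secret ... -o jsonpath='{.data.token}'`) and emit it in a form the "manager" can store in the custom resource. Note that a namespaced Role cannot grant cluster-scoped actions; if the manager needs those (e.g. creating namespaces), a ClusterRole/ClusterRoleBinding would be required instead.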
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Thinking about it now, we might not need a cluster controller, since it is not creating clusters; a CRD alone should be sufficient. Maybe we just use that secret to create a dummy namespace as part of the controller reconciliation and verify that the access works?
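The reconciliation probe described above could be sketched as follows, assuming the extracted credentials have been written to a kubeconfig; the file path and namespace name are hypothetical:

```shell
# Connectivity probe: create and delete a throwaway namespace using the
# stored managed-cluster credentials. Both steps succeeding implies the
# credentials are usable. Requires cluster-scoped (ClusterRole) permissions.
kubectl --kubeconfig=/path/to/managed-cluster.kubeconfig \
  create namespace manager-access-probe
kubectl --kubeconfig=/path/to/managed-cluster.kubeconfig \
  delete namespace manager-access-probe
```

If the probe succeeds, the reconciler could mark the cluster's custom resource as ready; if it fails, the status would surface the access error.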
Environment:
manager version:
Kubernetes version:
$ kubectl version -o yaml
Other debugging information (if applicable):
- controller logs:
$ kubectl logs