Hub-spoke example extended to cross-account access between accounts #58
Notes so far: this seems to work mostly out of the box. ArgoCD assumes a role to get access, so it does the chaining for you rather than relying on the role provided by the Pod Identity controller for the argocd-server pod. Things I needed to change in the hub:
Things I had to change in the spokes:
As far as I can tell, this avoids needing an "uber" Terraform user which can jump into each account; the hub and spoke clusters are deployed with different accounts and different credentials, and the state sharing is what glues it together (it relies on the state being in the same bucket and available to the planning/applying user).
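For illustration only, a minimal sketch of what the role chaining on the spoke side might look like, assuming the hub exposes an ArgoCD role whose ARN the spoke knows; the role name `argocd_spoke` and the `argocd_hub_role_arn` input are assumptions, not taken from the actual example:

```hcl
# Hypothetical spoke-account role that the hub's ArgoCD role chains into.
data "aws_iam_policy_document" "argocd_spoke_trust" {
  statement {
    # sts:TagSession is needed if transitive session tags (e.g. from Pod Identity)
    # are passed along the chain.
    actions = ["sts:AssumeRole", "sts:TagSession"]
    principals {
      type        = "AWS"
      identifiers = [var.argocd_hub_role_arn] # assumed input: the hub's ArgoCD role ARN
    }
  }
}

resource "aws_iam_role" "argocd_spoke" {
  name               = "argocd-spoke"
  assume_role_policy = data.aws_iam_policy_document.argocd_spoke_trust.json
}
```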
Okay, so I've taken a stab at removing the "shared state" between Terraform stacks, and the best native AWS solution I can come up with is using Parameter Store. Thanks @agjmills for the idea.

Important: You need to enable resource sharing within an organisation before doing any of this (see the hedged note inside the hub stack below).

In the hub stack:

```hcl
resource "aws_ram_resource_share" "hub" {
  name                      = "hub"
  allow_external_principals = false
}
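
# Note (assumption, not from the original example): RAM sharing with AWS
# Organizations has to be enabled once, typically from the organisation's
# management account, before the share above will resolve for member accounts.
# With a recent AWS provider this can be expressed in Terraform; otherwise run
# `aws ram enable-sharing-with-aws-organization` there instead.
resource "aws_ram_sharing_with_organization" "this" {}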
resource "aws_ram_principal_association" "hub" {
principal = data.aws_organizations_organization.current.arn
resource_share_arn = aws_ram_resource_share.hub.arn
}
resource "aws_ram_resource_association" "hub" {
resource_arn = aws_ssm_parameter.hub.arn
resource_share_arn = aws_ram_resource_share.hub.arn
}
resource "aws_ssm_parameter" "hub" {
name = "hub"
type = "String"
tier = "Advanced"
value = jsonencode(
{
"cluster_name" : module.eks.cluster_name,
"cluster_endpoint" : module.eks.cluster_endpoint
"cluster_certificate_authority_data" : module.eks.cluster_certificate_authority_data,
"cluster_region" : local.region,
"spoke_cluster_secrets_arn" : aws_iam_role.spoke_cluster_secrets.arn,
"argocd_iam_role_arn" : aws_iam_role.argocd_hub.arn
}
)
}
```

In the spoke stacks:

```hcl
################################################################################
# Kubernetes Access for Hub Cluster
################################################################################
provider "aws" {
alias = "hub"
region = data.aws_arn.hub_parameter.region
}
data "aws_arn" "hub_parameter" {
arn = var.hub_parameter_arn
}
data "aws_ssm_parameter" "hub" {
name = var.hub_parameter_arn
provider = aws.hub
}
locals {
hub = jsondecode(data.aws_ssm_parameter.hub.value)
}
provider "kubernetes" {
host = local.hub.cluster_endpoint
cluster_ca_certificate = base64decode(local.hub.cluster_certificate_authority_data)
exec {
api_version = "client.authentication.k8s.io/v1beta1"
command = "aws"
# This requires the awscli to be installed locally where Terraform is executed
args = ["eks", "get-token", "--cluster-name", local.hub.cluster_name, "--region", local.hub.cluster_region, "--role-arn", local.hub.spoke_cluster_secrets_arn]
}
alias = "hub"
}
```

It's not perfect: there's still the last-mile issue of outputting the parameter ARN itself and supplying it as a variable. In theory, though, it's quite a static string and therefore somewhat predictable; you just need to know the account ID of the hub cluster. That could be stored in a CI pipeline variable, for example, so the spoke clusters can just "know" where it is on successive runs, with any number of future spoke clusters.
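As an illustration of the "quite a static string" point, the ARN could plausibly be assembled in each spoke from a couple of plain inputs; this is a sketch assuming the parameter keeps the name `hub` from the hub stack above, with `hub_account_id` and `hub_region` as hypothetical CI-supplied variables:

```hcl
variable "hub_account_id" {
  description = "Account ID of the hub cluster (e.g. a CI pipeline variable)"
  type        = string
}

variable "hub_region" {
  description = "Region the hub SSM parameter lives in"
  type        = string
}

locals {
  # Mirrors the naming used by aws_ssm_parameter.hub in the hub stack.
  hub_parameter_arn = "arn:aws:ssm:${var.hub_region}:${var.hub_account_id}:parameter/hub"
}
```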
This is great @danielloader 🎉 Do you require the user to use Organizations? What if the user doesn't have Organizations? Regardless, I think adding your example using Organizations and an SSM parameter to share the values with the spokes, plus the role to write secrets, is great!
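On the no-Organizations question, RAM can in principle share to individual account IDs by allowing external principals, with the recipient account accepting the share invitation; I haven't verified that Parameter Store permits shares outside an organisation, so treat this as an untested sketch (the `spoke_account_ids` variable is made up for illustration):

```hcl
resource "aws_ram_resource_share" "hub_no_org" {
  name                      = "hub"
  allow_external_principals = true
}

resource "aws_ram_principal_association" "spokes" {
  for_each = toset(var.spoke_account_ids) # hypothetical list of spoke account IDs

  principal          = each.value
  resource_share_arn = aws_ram_resource_share.hub_no_org.arn
}
```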
Context: https://github.com/gitops-bridge-dev/gitops-bridge/tree/main/argocd/iac/terraform/examples/eks/multi-cluster/hub-spoke
Thanks for updating the multi-cluster examples to include EKS Pod Identity associations; it's been a great simplification and improvement.
I'm currently working my way through bending this example into a cross-account setup internally for a tech demo, whereas OIDC was a little more forgiving across accounts because it didn't require role chaining.
This is less a request and more an issue to track whether anyone else is doing this, and to open up some discussion on implementation details, perhaps with a hope of contributing an example back to this repository.
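For context on the Pod Identity piece, a minimal sketch of the kind of association the hub would use for the argocd-server service account; the namespace, service account name, and role reference are assumptions for illustration rather than copied from the example:

```hcl
resource "aws_eks_pod_identity_association" "argocd" {
  cluster_name    = module.eks.cluster_name
  namespace       = "argocd"
  service_account = "argocd-server"
  role_arn        = aws_iam_role.argocd_hub.arn # this role then chains into the spoke-account roles
}
```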