Is your feature request related to a problem? Please describe.
We currently use AVI with vSphere and utilize the VRF features. We have all of our networks in one of two non-default VRFs. We don't use the global VRF.
We're experimenting with using AKO in a generic on-prem K8s cluster and have discovered that there is no way to specify the VRF in which objects are created (we're using NodePort mode). AKO always creates the objects (VIPs, virtual services, pools, etc.) in the global VRF. This means we can't use our existing VIP network, which already has IP addresses allocated and IPAM configured.
I've looked through the values.yaml for the latest version of the chart and I can't find any way to set this. It seems like the VRF is just hard-coded into AKO. I also don't see a way to configure this in the AviInfraSetting CRD.
Describe the solution you'd like
The ability to specify the VRF where objects should be created. This could be exposed either at install time via the values.yaml file, or after install with the AviInfraSetting CRD; I'm not sure which makes more sense.
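To make the request concrete, a hypothetical sketch of what the AviInfraSetting route might look like. Note that `vrfContext` below is **not** a real AKO field today (that's the whole point of this RFE) — it's only an illustration of where such a setting could live; the network name is also made up:

```yaml
# Hypothetical sketch only: AviInfraSetting does not currently expose a VRF
# setting. The "vrfContext" key is the requested capability, not a real
# AKO option; names are illustrative.
apiVersion: ako.vmware.com/v1alpha1
kind: AviInfraSetting
metadata:
  name: non-default-vrf
spec:
  network:
    vrfContext: tenant-vrf-1          # hypothetical: VRF for AKO-created VSs/pools/VIPs
    vipNetworks:
      - networkName: existing-vip-network
```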
Describe alternatives you've considered
To use AVI we would need to manually expose our services as NodePorts and do the AVI configuration outside of Kubernetes.
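For reference, the manual workaround above amounts to exposing each app as a NodePort Service and then building the Avi virtual service and pool (against the node IPs and that port) by hand, outside of Kubernetes. A minimal example Service, with illustrative names and ports:

```yaml
# Minimal NodePort Service for the manual workaround: pin a node port,
# then configure the Avi VS/pool against <node-ip>:30080 outside of
# Kubernetes. Names and ports are illustrative.
apiVersion: v1
kind: Service
metadata:
  name: demo-app
spec:
  type: NodePort
  selector:
    app: demo-app
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080
```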
Additional context
I submitted this as a ticket through VMware support and was told this is not currently possible, but that RFE-2860 has been opened regarding this issue. I wanted to also open it here for increased visibility.
I'm running into a similar problem.
I want to use "Tenant VRF" on a tenant, but when I activate it, deploy the operator, and set up a demo application, I get the following error:
```
2023-07-03T06:21:25.138Z WARN rest/rest_operation.go:304 key: Test/dev--vmug-avisvc-lb, msg: RestOp method POST path /api/vsvip tenant Test Obj {"cloud_ref":"/api/cloud?name=Test vCenter","dns_info":[{"fqdn":"avisvc-lb.vmug.test.local"}],"east_west_placement":false,"markers":[{"key":"clustername","values":["dev"]}],"name":"dev--vmug-avisvc-lb","tenant_ref":"/api/tenant/?name=Test","vip":[{"auto_allocate_ip":true,"ipam_network_subnet":{"subnet":{"ip_addr":{"addr":"100.127.2.0","type":"V4"},"mask":24}},"vip_id":"0"}],"vrf_context_ref":"/api/vrfcontext?name=global","vsvip_cloud_config_cksum":"1485437106"} returned err {"code":0,"message":"map[error:Illegal cross-tenant references vrfcontext-6aa350c3-8ce9-4b38-ae86-04b12fdfece2]","Verb":"POST","Url":"https://nlb.test.local//api/vsvip","HttpStatusCode":400} with response null
```

Note the hard-coded `"vrf_context_ref":"/api/vrfcontext?name=global"` in the request while `tenant_ref` points at the Test tenant, which is what triggers the cross-tenant reference rejection.