User provided router is not reconciled #2129

Open
MaysaMacedo opened this issue Jun 18, 2024 · 7 comments
Labels
kind/bug Categorizes issue or PR as related to a bug.
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@MaysaMacedo
Contributor

MaysaMacedo commented Jun 18, 2024

/kind bug

What steps did you take and what happened:
When an existing router is provided in the OpenStack cluster spec, it is not reconciled here. The router is only reconciled when it is created by CAPO here.

What did you expect to happen:
The provided router would be reconciled and the ID of the router would be present in the status of the OpenStackCluster object.

@k8s-ci-robot added the kind/bug label Jun 18, 2024
@EmilienM
Contributor

Bleurk, this is a valid issue. Let me know if you need help on this one.

@huxcrux
Contributor

huxcrux commented Jun 19, 2024

Isn't the point of a user-provided router that it shouldn't be reconciled? The only thing that should be needed is a router interface in all subnets?

@mdbooth
Contributor

mdbooth commented Jun 19, 2024

What's the impact of this? Can you describe a problem that this causes?

@MaysaMacedo
Contributor Author

@huxcrux @mdbooth I noticed the user-provided network and subnet IDs are set in the status, but the ID of the user-provided router is not. The only place where this is really a problem seems to be here. But in general, as a user I expected that all the resources would end up having their IDs resolved in the status.

Here is the example:

  spec:
    controlPlaneEndpoint:
      host: x.x.x.x
      port: 6443
    externalNetwork:
      filter:
        name: test
    identityRef:
      cloudName: openstack
      name: openstack-credentials
    managedSecurityGroups:
      allowAllInClusterTraffic: false
    network:
      filter:
        tags:
        - test
    router:
      filter:
        tags:
        - test
    subnets:
    - filter:
        tags:
        - test
  status:
    controlPlaneSecurityGroup:
      id: 823b7ed4-76f3-4087-89c3-ab109cefda08
      name: k8s-cluster-clusters-openstack-openstack-secgroup-controlplane
    externalNetwork:
      id: 316eeb47-1498-46b4-b39e-00ddf73bd2a5
      name: provider_net_shared_3
    failureDomains:
      nova:
        controlPlane: true
    network:
      id: d57c6e63-4060-4f83-904d-1075da54aa18
      name: test
      subnets:
      - cidr: 10.200.0.0/24
        id: b58f80ad-e6b0-40d9-946a-23a4be3e8905
        name: test
        tags:
        - test
      tags:
      - test
    ready: true
    workerSecurityGroup:
      id: 389e230d-2cc4-4fc8-b542-b32dc8dd7264
      name: k8s-cluster-clusters-openstack-openstack-secgroup-worker
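
If the provided router were also resolved, the status would presumably include a router entry as well. A rough sketch of what that might look like, assuming the router status follows the same shape CAPO records for routers it creates itself (the ID below is a placeholder, not a real value):

    router:            # hypothetical; not currently populated for user-provided routers
      id: <router-id>  # placeholder
      name: test
      tags:
      - test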

@mdbooth
Contributor

mdbooth commented Jun 19, 2024

This is a weird situation. The purpose of specifying a router is to re-use an existing router when creating a network. If we're not creating a network we don't need a router specification. I'd almost be inclined to go the other way and emit a validation warning if the user specifies a router which isn't going to be used. The API remains a bit of a mess here, tbh.

If I'm honest I don't understand why we're adding router IPs to API allowed CIDRs. Still, resolving an unused router just for that use case seems a bit odd. I'd be happy to leave it ignored.
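
For reference, the API allowed CIDRs feature being referred to is configured roughly like this (field names as I understand the OpenStackCluster spec; the CIDR value is a placeholder):

  spec:
    apiServerLoadBalancer:
      enabled: true
      allowedCIDRs:
      - 10.0.0.0/8   # user-supplied CIDRs allowed to reach the API server
      # CAPO also folds the router's IPs into the CIDRs it applies (presumably so
      # traffic NATed through the router can still reach the API server), which is
      # the code path that would want the user-provided router resolved.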

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Sep 17, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Oct 17, 2024