
5.x: Default Capacity Reservation is ignored when scaling a node pool #786

Open
OguzPastirmaci opened this issue Jul 25, 2023 · 2 comments
Labels: bug (Something isn't working)

Comments

@OguzPastirmaci


Expected Behavior

New nodes added by scaling the node pool should use the default capacity reservation.

Actual Behavior

The initial nodes use the default capacity reservation, but when you scale the node pool, the new nodes are deployed as on-demand capacity instead of coming from the reservation. When you perform the same operations in the OCI console, the new nodes come from the default capacity reservation as expected.

If you explicitly add capacity_reservation_id to the pool config, it works. But this should not be needed with default capacity reservations.
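
A minimal sketch of that workaround, assuming the 5.x module's worker_pools map accepts a per-pool capacity_reservation_id (the attribute named in this issue); the shape, pool name, and OCID are placeholders:

```hcl
module "oke" {
  source  = "oracle-terraform-modules/oke/oci"
  version = "5.0.0-beta.6"

  worker_pools = {
    reserved-pool = {
      shape = "VM.Standard.E4.Flex"
      size  = 3
      # Explicitly pin the pool to the (default) capacity reservation:
      capacity_reservation_id = "ocid1.capacityreservation.oc1..example"
    }
  }

  # (other required module inputs omitted)
}
```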

Steps to Reproduce

  • Create a default capacity reservation for a shape in the console.
  • Deploy a node pool using the shape you created the default reservation for. Don't add the capacity reservation ID to the pool block; because it's a default capacity reservation, all instances of that shape should be deployed from the reservation.
  • The initial nodes deployed in the pool will come from the default capacity reservation.
  • Add X more nodes by scaling the pool. The new nodes are deployed as on-demand capacity instead of using the reservation.
module "oke" {
  source  = "oracle-terraform-modules/oke/oci"
  version = "5.0.0-beta.6"

Terraform Version

```
Terraform v1.5.3
on darwin_arm64
+ provider registry.terraform.io/hashicorp/cloudinit v2.3.2
+ provider registry.terraform.io/hashicorp/helm v2.10.1
+ provider registry.terraform.io/hashicorp/http v3.4.0
+ provider registry.terraform.io/hashicorp/null v3.2.1
+ provider registry.terraform.io/hashicorp/random v3.5.1
+ provider registry.terraform.io/hashicorp/time v0.9.1
+ provider registry.terraform.io/oracle/oci v5.5.0
```
@OguzPastirmaci added the bug label Jul 25, 2023
@syedthameem85 (Member) commented Feb 4, 2024

@OguzPastirmaci
The module behaviour is in line with the OCI console and Terraform provider behaviour. The default capacity reservation is created in the root compartment. Even for a default reservation, the OCI Console requires you to select the capacity reservation ID by choosing its compartment and then the reservation itself.

Per the oci_containerengine_node_pool Terraform documentation, capacity_reservation_id must be specified even if it is the default reservation. The Terraform provider doesn't provide any way to automatically fetch the default capacity reservation ID.

capacity_reservation_id - (Optional) (Updatable) The OCID of the compute capacity reservation in which to place the compute instance.
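
For context, a sketch of where that attribute sits on the raw provider resource, under the placement_configs block described in the provider docs; all OCIDs, the availability domain, and the shape values are placeholders:

```hcl
resource "oci_containerengine_node_pool" "example" {
  cluster_id         = "ocid1.cluster.oc1..example"
  compartment_id     = "ocid1.compartment.oc1..example"
  kubernetes_version = "v1.26.2"
  name               = "reserved-pool"
  node_shape         = "VM.Standard.E4.Flex"

  # Flex shapes also need an explicit shape configuration.
  node_shape_config {
    ocpus         = 2
    memory_in_gbs = 32
  }

  node_config_details {
    size = 3
    placement_configs {
      availability_domain = "Uocm:PHX-AD-1"
      subnet_id           = "ocid1.subnet.oc1..example"
      # Must be set explicitly; the provider does not resolve default
      # capacity reservations on its own.
      capacity_reservation_id = "ocid1.capacityreservation.oc1..example"
    }
  }
}
```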

@OguzPastirmaci @hyder @devoncrouse - Any feedback?

@robo-cap (Member) commented Apr 3, 2024

We could probably use a data source to match the shape required for the pool against the shapes available in the default reservations within the region. A rough sketch of the idea follows.
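
This sketch assumes the oci_core_compute_capacity_reservations data source and its is_default_reservation and instance_reservation_configs attributes; the shape and variable names are illustrative:

```hcl
variable "tenancy_ocid" {
  type = string
}

# Default capacity reservations live in the root compartment (see the
# earlier comment), so list the reservations there.
data "oci_core_compute_capacity_reservations" "all" {
  compartment_id = var.tenancy_ocid
}

locals {
  pool_shape = "VM.Standard.E4.Flex" # shape requested for the node pool

  # Keep only default reservations whose reservation configs include
  # the pool's shape.
  matching_default_reservation_ids = [
    for r in data.oci_core_compute_capacity_reservations.all.compute_capacity_reservations :
    r.id if r.is_default_reservation &&
    contains([for c in r.instance_reservation_configs : c.instance_shape], local.pool_shape)
  ]

  # Fall back to on-demand capacity (null) when nothing matches.
  capacity_reservation_id = try(local.matching_default_reservation_ids[0], null)
}
```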
