Control plane node is up, but worker node is stuck in Pending state in OpenStack. #2126
Hi @andresache. For the bug itself, you should file it against microk8s, not here. I don't see any errors in CAPO; however, I can see failures on the microk8s side. Thanks.
Hi @EmilienM, Thank you for your response! Below is the information for cluster 'microk8s-openstack':
microk8s-openstack-control-plane:
microk8s-openstack-md-0
And below is the manifest I'm applying in my local management cluster to create the cluster in OpenStack:
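(The manifest itself is not reproduced here. As a rough illustration only, a MicroK8s-on-OpenStack Cluster API manifest generally follows the shape below; every name, version and CIDR is a placeholder, and the API versions depend on the installed provider releases, so this is not the manifest actually applied in this issue.)

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: microk8s-openstack          # placeholder name
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["10.1.0.0/16"]   # placeholder CIDR
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: MicroK8sControlPlane
    name: microk8s-openstack-control-plane
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha7   # depends on the CAPO release
    kind: OpenStackCluster
    name: microk8s-openstack
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: microk8s-openstack-md-0
spec:
  clusterName: microk8s-openstack
  replicas: 1
  template:
    spec:
      clusterName: microk8s-openstack
      version: v1.28.0              # placeholder Kubernetes version
      bootstrap:
        configRef:
          apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
          kind: MicroK8sConfigTemplate
          name: microk8s-openstack-md-0
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1alpha7
        kind: OpenStackMachineTemplate
        name: microk8s-openstack-md-0
# OpenStackCluster, MicroK8sControlPlane, MicroK8sConfigTemplate and
# OpenStackMachineTemplate objects omitted for brevity.
```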
Let me know if you need any other information. Thanks!
Please share the `kubectl describe` output of the OpenStackCluster and the OpenStackMachines.
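(Assuming the objects live in the default namespace, something along these lines collects that output:)

```sh
kubectl describe openstackcluster microk8s-openstack
kubectl describe openstackmachines
```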
I think this is what you were expecting. OpenStackCluster describe:
Control plane node describe (OpenStackMachine):
Worker node describe (OpenStackMachine):
Let me know if this is right.
I don't see much Status in the OpenStackMachine. Please share the CAPO manager logs.
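(With a default installation, the logs can be pulled with something like the following; the deployment name and namespace may differ:)

```sh
kubectl logs -n capo-system deploy/capo-controller-manager
```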
Here are the logs of the CAPO manager: capo-system/capo-controller-manager-7c468b6c46-txxhb
And one more thing I've noticed in the capi microk8s manager logs:
It is stating that the control plane is not yet initialized even though the control plane is up.
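(One quick way to see what CAPI itself thinks about the control plane, assuming the Cluster object is in the default namespace, is to read the readiness flag on its status:)

```sh
kubectl get cluster microk8s-openstack -o jsonpath='{.status.controlPlaneReady}{"\n"}'
```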
What I'm actually trying to do is create an autoscaler for my cluster in OpenStack. There is no support for OpenStack Horizon in the autoscaler project: https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler/cloudprovider, so I was thinking of using the Cluster API to achieve this: https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler/cloudprovider/clusterapi. I'm not sure if I'm going in the right direction or if there is a simpler way to achieve this.
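(For context: the cluster-autoscaler's clusterapi provider only scales MachineDeployments/MachineSets that carry min/max node-group size annotations. A minimal sketch of enabling that on the worker MachineDeployment, with placeholder name and sizes, looks like this:)

```sh
kubectl annotate machinedeployment microk8s-openstack-md-0 \
  "cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size=1" \
  "cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size=5"
```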
I have also tried increasing the number of control plane nodes and worker nodes to 2, and it looks like octopus is able to provision only one instance, so the issue might not be with the worker node. These are the logs of the CAPO manager:
Could it be an issue on the OpenStack side?
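(One way to cross-check the OpenStack side, assuming the OpenStack CLI is configured against the same project, is to compare what was actually provisioned:)

```sh
openstack server list
openstack port list
```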
I tried this before (around a year ago), and at least at that time the autoscaler on CAPI + CAPO worked fine for me. For this issue, it looks to me like the control plane has something wrong, so it's not ready yet. Are you able to SSH to the
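(If SSH access to the control-plane instance is possible, checks along these lines, assuming a standard MicroK8s snap installation, usually show whether the node itself is healthy:)

```sh
microk8s status --wait-ready
sudo journalctl -u snap.microk8s.daemon-kubelite --no-pager | tail -n 100
```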
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/kind bug
Hello,
Has anyone faced a similar issue before, or can anyone point me in the right direction here?
I am trying to use the cluster-api-bootstrap-provider-microk8s solution to create my k8s cluster in OpenStack: https://github.com/canonical/cluster-api-bootstrap-provider-microk8s?tab=readme-ov-file
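(For context, those bootstrap/control-plane/infrastructure providers are typically installed into the management cluster with something like the command below; exact provider names and flags depend on the clusterctl and provider versions.)

```sh
clusterctl init --bootstrap microk8s --control-plane microk8s --infrastructure openstack
```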
The cluster and control plane node are created successfully, but the worker node gets stuck in a Pending state.
Cluster being created:
Control plane and worker nodes:
These are the CAPI pods running in my local management microk8s cluster:
And here are some logs of the capo-controller-manager-7c468b6c46-j8xrp pod:
And here are some logs of the capi-controller-manager-5d79cb94cf-qz8nr pod:
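(For reference, output like the above is typically gathered with commands along these lines; pod names and namespaces will differ per installation:)

```sh
clusterctl describe cluster microk8s-openstack
kubectl get machines -A
kubectl get pods -A | grep -E 'capi|capo'
kubectl logs -n capo-system capo-controller-manager-7c468b6c46-j8xrp
kubectl logs -n capi-system capi-controller-manager-5d79cb94cf-qz8nr
```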
Let me know if any additional information is needed.
Any help would be much appreciated.
Environment:
- Cluster API Provider OpenStack version (use `git rev-parse HEAD` if manually built):
- Kubernetes version (use `kubectl version`):
- OS (e.g. from `/etc/os-release`): Ubuntu 20.04.6 LTS