create_namespaced_binding() method shows target.name: Required value #547
Any luck with this?
Also having this problem. Strange thing is the pod runs on the node at the same time I get the error message 🤷♂️
Check out kubernetes-client/gen#52. As in all the situations where the status None is reported, the call is actually performed.
Any workarounds? My pods remain in Pending state and the scheduler prints "target.name: Required value" when I run it in the terminal. Can anyone provide me a link to a working custom Python scheduler example? I was following this article.
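Not a fix, but for anyone looking for a working example: the core loop of a custom scheduler along the lines of the article mentioned above can be sketched as below. All names here (`run_scheduler`, `my-scheduler`, `node-1`) are my own, and the API client and watch stream are injected as plain objects, so nothing in the sketch is tied to a specific client version:

```python
def run_scheduler(api, events, scheduler_name="my-scheduler", node="node-1"):
    """Watch for pending pods assigned to this scheduler and bind each one.

    `api` is a CoreV1Api-like object and `events` yields watch events as
    dicts; both are injected so the loop can be exercised without a cluster.
    """
    bound = []
    for event in events:
        pod = event["object"]
        if (pod["status"]["phase"] == "Pending"
                and pod["spec"].get("schedulerName") == scheduler_name):
            body = {
                "apiVersion": "v1",
                "kind": "Binding",
                "metadata": {"name": pod["metadata"]["name"]},
                "target": {"apiVersion": "v1", "kind": "Node", "name": node},
            }
            try:
                api.create_namespaced_binding(
                    namespace=pod["metadata"]["namespace"], body=body)
            except ValueError:
                # Known client issue from this thread: the bind usually
                # succeeded on the server despite this exception.
                pass
            bound.append(pod["metadata"]["name"])
    return bound
```

In a real deployment you would feed this from `kubernetes.watch.Watch().stream(...)` and a `CoreV1Api` instance; injecting them keeps the binding logic separable from the cluster connection.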
I'm also having the same issue 😞
I was able to make it work by staying with client version 2.0. According to the documentation, they have changed the function names and signatures.
Hope it helps. Thanks to jibin parayil Thomas for reaching out to me.
Thanks @abushoeb
I got the same exception as the original poster, but I noticed that the binding actually was successful. The pod gets the node assigned, but the exception is thrown anyway. I noticed that all the examples of custom schedulers call an empty constructor V1Binding(), but in the API it shows target is mandatory now. However, even after adding target in, it still throws the exception, yet continues to bind properly. Here is the code I'm using:
@cliffburdick which versions of Kubernetes and the Python client are you using?
k8s 1.12.1 and client 8.0
I'm running something similar to @cliffburdick on v1.12.2 using client v8.0, and it seems to be working apart from the ValueError getting thrown. It looks to me like the ValueError gets raised before the value is set.
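To illustrate why the error can appear even though the bind succeeds, here is a pure-Python sketch (a hypothetical class, not the actual generated client code) of how generated models validate the required `target` field in a property setter: building the request body works, but rebuilding a model from an API response that lacks `target` trips the validation after the server has already performed the bind.

```python
class V1BindingSketch:
    """Stand-in for the generated V1Binding model (illustrative only)."""

    def __init__(self, target=None, metadata=None):
        self._target = None
        self.metadata = metadata
        if target is not None:
            self.target = target  # routed through the validating setter

    @property
    def target(self):
        return self._target

    @target.setter
    def target(self, value):
        # Generated clients validate required fields in the setter, so
        # deserializing a response without `target` raises here.
        if value is None:
            raise ValueError("Invalid value for `target`, must not be `None`")
        self._target = value


# Building the request body with a target is fine...
body = V1BindingSketch(target={"kind": "Node", "name": "node-1"})

# ...but a deserializer that assigns response fields through the setters
# fails as soon as it assigns target=None:
try:
    V1BindingSketch().target = None
except ValueError as exc:
    print(exc)  # Invalid value for `target`, must not be `None`
```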
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale
Any update on this? I am still facing this issue with version 9.0.0 of the client.
I too get an error thrown using 8.0.x, but my pods still get scheduled. I use something similar to what @abushoeb suggested:

```python
from kubernetes import client

def schedule(name, node, namespace='default'):
    target = client.V1ObjectReference(kind='Node', api_version='v1', name=node)
    meta = client.V1ObjectMeta(name=name)
    body = client.V1Binding(target=target, metadata=meta)
    try:
        client.CoreV1Api().create_namespaced_binding(namespace=namespace, body=body)
    except ValueError:
        # The bind has already succeeded server-side by the time this is
        # raised; the error comes from deserializing the response.
        pass
```
@torgeirl, thanks for confirming that the issue is still observed. The above looked a bit hackish, hence I wanted to be sure there is no better alternative before moving ahead with what is being suggested.
Any update on this? I am also facing this issue with v9.0.0 of the client.
Any update? This issue has been open for over a year.
To confirm: I'm still seeing this in v10.0.1, though the pod does go on and gets scheduled.
It's been a year and a half and the issue is still here, across many client versions :(
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle rotten
I've noticed that another side effect of this issue is that you can't tell whether a pod was actually bound to a node. We've seen a couple of times where we bind the pod, this error happens, and the pod won't actually start up: it's stuck in Pending. Re-running the exact same bind command makes it work.
The same behavior has long been documented, and the fix is on the server side. I think it's coming in Kubernetes v1.17.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
I also still have this problem with the Kubernetes Python Client v25.3.0 stable release.
This is still an issue on Kubernetes v1.26.2 using Kubernetes Python Client v26.1.0.
The workaround that skips the deserialization of the returned data still works:
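For reference, one way to apply that workaround is to pass `_preload_content=False`, a kwarg the generated client methods accept to return the raw HTTP response instead of deserializing it into a V1Binding. A sketch (the function name and dict-style body are my own; the thread's original snippet was not captured here):

```python
def bind_pod(api, name, node, namespace="default"):
    """Bind a pod to a node while skipping response deserialization.

    `api` is expected to behave like kubernetes.client.CoreV1Api.
    `_preload_content=False` makes the generated client return the raw
    urllib3 response instead of building a V1Binding from it, which is
    the step that raises the spurious ValueError.
    """
    body = {
        "apiVersion": "v1",
        "kind": "Binding",
        "metadata": {"name": name},
        "target": {"apiVersion": "v1", "kind": "Node", "name": node},
    }
    return api.create_namespaced_binding(
        namespace=namespace, body=body, _preload_content=False
    )
```

The generated client serializes plain dicts as well as model objects, so the dict body avoids the client-side V1Binding validation entirely.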
Original issue description:
When calling the API create_namespaced_binding() method, the following error is thrown:
Also, when I use body = client.V1Binding(), the following error is thrown.
Environment is:
Full code for the custom scheduler: