
create_namespaced_binding() method shows target.name: Required value #547

abushoeb opened this issue Jun 8, 2018 · 71 comments

abushoeb commented Jun 8, 2018

When calling the API create_namespaced_binding() method like so:

config.load_kube_config()
v1 = client.CoreV1Api()
v1.create_namespaced_binding(namespace, body)

The following error is thrown:

Exception when calling CoreV1Api->create_namespaced_binding: (500)
Reason: Internal Server Error
HTTP response headers: HTTPHeaderDict({'Content-Type': 'application/json', 'Date': 'Wed, 06 Jun 2018 20:55:04 GMT', 'Content-Length': '120'})
HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"target.name: Required value","code":500} 

Also, when I use body = client.V1Binding(), the following error is thrown:

 File "/usr/lib/python2.7/site-packages/kubernetes/client/models/v1_binding.py", line 64, in __init__
    self.target = target
  File "/usr/lib/python2.7/site-packages/kubernetes/client/models/v1_binding.py", line 156, in target
    raise ValueError("Invalid value for `target`, must not be `None`")
ValueError: Invalid value for `target`, must not be `None`

Environment is:

Both Python 2.7 and Python 3.6  
Metadata-Version: 2.1
Name: kubernetes
Version: 6.0.0

Full code for the custom scheduler

import random

from kubernetes import client, config, watch
from kubernetes.client.rest import ApiException
config.load_kube_config()
v1 = client.CoreV1Api()

scheduler_name = 'custom-scheduler-test'

def nodes_available():
    ready_nodes = []
    for n in v1.list_node().items:
        for status in n.status.conditions:
            if status.status == 'True' and status.type == 'Ready':
                ready_nodes.append(n.metadata.name)
    return ready_nodes

def scheduler(name, node, namespace='default'):
    body = client.V1ConfigMap()
    # or: body = client.V1Binding()
    target = client.V1ObjectReference()
    target.kind = 'Node'
    target.api_version = 'v1'
    target.name = node
    meta = client.V1ObjectMeta()
    meta.name = name
    body.target = target
    body.metadata = meta
    return v1.create_namespaced_binding(namespace, body)

def main():
    w = watch.Watch()
    for event in w.stream(v1.list_namespaced_pod, 'default'):
        if (event['object'].status.phase == 'Pending'
                and event['object'].spec.scheduler_name == scheduler_name):
            try:
                res = scheduler(event['object'].metadata.name, random.choice(nodes_available()))
            except ApiException as e:
                print ("Exception when calling CoreV1Api->create_namespaced_binding: %s\n" % e)

if __name__ == '__main__':
    main()
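
For comparison, a minimal sketch of the binding body the API expects (node, pod, and namespace names below are placeholders), passing target into the V1Binding constructor so the second ValueError above is avoided:

from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# Build the ObjectReference first and hand it to the constructor; V1Binding
# rejects a missing/None target, so V1Binding() with no arguments cannot work.
target = client.V1ObjectReference(kind='Node', api_version='v1', name='node-1')
meta = client.V1ObjectMeta(name='pod-1')
body = client.V1Binding(metadata=meta, target=target)

# NOTE: the ValueError raised while deserializing the response (discussed in
# the comments below) may still occur even though the binding is created.
v1.create_namespaced_binding(namespace='default', body=body)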

jibinpt commented Nov 20, 2018

Any luck with this?


tomyan commented Nov 20, 2018

Also having this problem. The strange thing is the pod runs on the node at the same time as I get the error message 🤷‍♂️

micw523 (Contributor) commented Nov 20, 2018

Check out kubernetes-client/gen#52. As in all the situations where a None status is reported, the call is actually performed.
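
A minimal sketch of the workaround this implies (reusing v1, namespace, and body from the code above): treat the ValueError from response deserialization as a likely success.

try:
    v1.create_namespaced_binding(namespace=namespace, body=body)
except ValueError:
    # The POST itself went through; only deserializing the response failed.
    pass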


jibinpt commented Nov 20, 2018

Any workarounds? My pods remain in the Pending state, and the scheduler prints "target.name: Required value" when I run it in the terminal. Can anyone provide a link to a working custom Python scheduler example? I was following this article

@agassner

I'm also having the same issue 😞

@abushoeb (Author)

I was able to make it work by staying with client version 2.0. According to the documentation, the function name and signature changed to create_namespaced_binding(body, namespace). However, the parameter order in the generated function is not the same as in the documentation, which produces the above error and confuses everyone. So I decided to use the old Python client 2.0 with Kubernetes 1.7. Note that the function name and signature are different in 2.0: create_namespaced_binding_binding(name, namespace, body). Here is my modified scheduler function:

def scheduler(name, node, namespace=NAMESPACE):
    body = client.V1Binding()

    target = client.V1ObjectReference()
    target.kind = 'Node'
    target.api_version = 'v1'
    target.name = node

    meta = client.V1ObjectMeta()
    meta.name = name

    body.target = target
    body.metadata = meta

    try:
        # Method changed in client v6.0:
        # return v1.create_namespaced_binding(body, namespace)
        # For v2.0:
        res = v1.create_namespaced_binding_binding(name, namespace, body)
        if res:
            # print('POD ' + name + ' scheduled and placed on ' + node)
            return True

    except Exception as a:
        print("Exception when calling CoreV1Api->create_namespaced_binding: %s\n" % a)
        return False

Hope it helps. Thanks to Jibin Parayil Thomas for reaching out to me.
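
On client v6+, calling with keyword arguments (as later comments in this thread do) sidesteps the positional-order confusion between the docs and the generated signature. A one-line sketch, assuming the same v1, namespace, and body as above:

v1.create_namespaced_binding(namespace=namespace, body=body)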


jibinpt commented Nov 21, 2018

Thanks @abushoeb

@cliffburdick

I got the same exception as the original poster, but I noticed that the binding was actually successful: the pod gets the node assigned, yet the exception is thrown anyway. I noticed that all the examples of custom schedulers call an empty V1Binding() constructor, but the API now shows that target is mandatory. However, even after adding target, it still throws the exception but continues to bind properly. Here is the code I'm using:

        target = client.V1ObjectReference()
        target.kind = "Node"
        target.api_version = "v1"
        target.name = node

        meta = client.V1ObjectMeta()
        meta.name = podname

        body = client.V1Binding(target=target, metadata=meta)

        return self.v1.create_namespaced_binding(namespace=ns, body=body)

@abushoeb (Author)

@cliffburdick which version of K8s and the Python client are you using?

@cliffburdick

@cliffburdick which version of K8s and the Python client are you using?

k8s 1.12.1 and client 8.0

@torgeirl

I'm running something similar to @cliffburdick on v1.12.2 using client v8.0, and it seems to be working apart from the ValueError getting thrown.

It looks to me like the ValueError gets raised before the value is set.
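
Roughly what the generated setter does (a paraphrased sketch reconstructed from the tracebacks in this thread, not the actual source): deserializing the server's response into V1Binding calls the target setter with None, which raises before anything is assigned.

class V1BindingSketch(object):
    def __init__(self, target=None, metadata=None):
        self.metadata = metadata
        self.target = target  # goes through the property setter below

    @property
    def target(self):
        return self._target

    @target.setter
    def target(self, target):
        if target is None:  # the deserialized response carries no target...
            raise ValueError("Invalid value for `target`, must not be `None`")
        self._target = target  # ...so this assignment is never reached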

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 29, 2019
@torgeirl

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 30, 2019
@spurti-chopra

Any update on this? I am still facing this issue with version 9.0.0 of the client.

@torgeirl

Any update on this? I am still facing this issue with version 9.0.0 of the client.

I too get an error thrown using 8.0.x, but my pods still get scheduled. I use something similar to what @abushoeb suggested:

def schedule(name, node, namespace='default'):
    target = client.V1ObjectReference(kind='Node', api_version='v1', name=node)
    meta = client.V1ObjectMeta(name=name)
    body = client.V1Binding(target=target, metadata=meta)
    try:
        client.CoreV1Api().create_namespaced_binding(namespace=namespace, body=body)
    except ValueError:
        pass  # print something or pass; the binding itself succeeds

@spurti-chopra

@torgeirl, thanks for confirming that the issue is still observed. The above looked a bit hackish, so I wanted to be sure there is no better alternative before moving ahead with what is suggested.

@hirenvadalia

Any update on this? I am also facing this issue with v9.0.0 of the client.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 3, 2019

cofiiwu commented Sep 15, 2019

Any update? This issue has been open for over a year now.


Urvik08 commented Oct 9, 2019

To confirm, I'm still seeing this in v10.0.1. The pod does go on and gets scheduled, though.

return v1.create_namespaced_binding(namespace, body)
  File "/usr/local/lib/python2.7/dist-packages/kubernetes/client/apis/core_v1_api.py", line 5425, in create_namespaced_binding
    (data) = self.create_namespaced_binding_with_http_info(namespace, body, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/kubernetes/client/apis/core_v1_api.py", line 5516, in create_namespaced_binding_with_http_info
    collection_formats=collection_formats)
  File "/usr/local/lib/python2.7/dist-packages/kubernetes/client/api_client.py", line 334, in call_api
    _return_http_data_only, collection_formats, _preload_content, _request_timeout)
  File "/usr/local/lib/python2.7/dist-packages/kubernetes/client/api_client.py", line 176, in __call_api
    return_data = self.deserialize(response_data, response_type)
  File "/usr/local/lib/python2.7/dist-packages/kubernetes/client/api_client.py", line 249, in deserialize
    return self.__deserialize(data, response_type)
  File "/usr/local/lib/python2.7/dist-packages/kubernetes/client/api_client.py", line 289, in __deserialize
    return self.__deserialize_model(data, klass)
  File "/usr/local/lib/python2.7/dist-packages/kubernetes/client/api_client.py", line 635, in __deserialize_model
    instance = klass(**kwargs)
  File "/usr/local/lib/python2.7/dist-packages/kubernetes/client/models/v1_binding.py", line 64, in __init__
    self.target = target
  File "/usr/local/lib/python2.7/dist-packages/kubernetes/client/models/v1_binding.py", line 156, in target
    raise ValueError("Invalid value for `target`, must not be `None`")
ValueError: Invalid value for `target`, must not be `None`


damaca commented Oct 20, 2019

It's been a year and a half, spanning many client versions, and the issue is still here :(

@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Nov 19, 2019

cliffburdick commented Nov 22, 2019

/remove-lifecycle rotten

I've noticed that another side effect of this issue is that you can't tell whether a pod was actually bound to a node, because of this error. We've seen a couple of times where we bind the pod, this error happens, and the pod won't actually start up; it's stuck in Pending. Re-running the exact same bind command makes it work.
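
One way to double-check whether a bind actually took effect despite the error (a sketch; v1, namespace, body, and pod_name are assumed from the surrounding scheduler code):

try:
    v1.create_namespaced_binding(namespace=namespace, body=body)
except ValueError:
    pass  # response deserialization failed; verify the result below instead

pod = v1.read_namespaced_pod(name=pod_name, namespace=namespace)
if pod.spec.node_name:
    print('pod %s bound to %s' % (pod_name, pod.spec.node_name))
else:
    print('pod %s still unscheduled; consider retrying the bind' % pod_name)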

micw523 (Contributor) commented Nov 22, 2019

The same behavior has long been documented, and the fix is on the server side. I think it's coming in Kubernetes v1.17.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 14, 2022
@torgeirl

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 19, 2022

icdef commented Feb 9, 2023

I also still have this problem with the Kubernetes Python client v25.3.0 stable release.

@torgeirl

This is still an issue on Kubernetes platform v1.26.2 using Kubernetes Python Client v26.1.0:

  File "/usr/local/lib/python3.10/site-packages/kubernetes/client/models/v1_binding.py", line 155, in target
    raise ValueError("Invalid value for `target`, must not be `None`")  # noqa: E501
ValueError: Invalid value for `target`, must not be `None`

The workaround that skips deserializing the returned data still works:

api.create_namespaced_binding(namespace=namespace, body=body, _preload_content=False)
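
With _preload_content=False the client returns the raw response object instead of trying (and failing) to deserialize it into V1Binding, so the outcome can be checked directly. A sketch, reusing api, namespace, and body from above:

import json

resp = api.create_namespaced_binding(namespace=namespace, body=body,
                                     _preload_content=False)
print(resp.status)            # e.g. 201 when the binding was created
print(json.loads(resp.data))  # raw payload returned by the API server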

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 12, 2023
@torgeirl

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 12, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 22, 2024
@torgeirl

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 22, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 21, 2024
@torgeirl

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 22, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 21, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Aug 20, 2024
@torgeirl

/remove-lifecycle stale

@torgeirl

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Aug 20, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 18, 2024

torgeirl commented Dec 3, 2024

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 3, 2024