Nginx Controller does not update configuration while recreating service. #11963
Comments
This issue is currently awaiting triage. If Ingress contributors determine this is a relevant issue, they will accept it by applying the appropriate triage label.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
I think it is fair to expect a 200 response for all requests. You executed a command to recreate the application-related Kubernetes objects while the pre-existing Ingress object for that application was not deleted, but not all expectations can be met by the controller. Only after the application pod has passed its probes, and the controller is able to get an endpoint for your application, can the controller route traffic to it. So between deleting the application and the new endpoint becoming ready, you will get failed requests. Seeing the old IP address in tcpdump is not proof that the new application pod has passed its probes and is ready to receive traffic.
Secondly, your controller Service is of type NodePort. We don't test that type of Service extensively for all use-cases; the CI only tests it in a kind cluster. So any networking-related reason for the failed requests you see is not necessarily related to the ingress-nginx controller code.
You may well have a real problem to discuss here, but it has to be clear that the data we have is not proof of a problem with the controller. Users have the option to tune graceful shutdown, or use the RollingUpdateStrategy field and experiment with the timers and other related specs, but that does not mean the controller should change its default configuration to extreme values. That would affect users who currently do not face this problem.
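Purely as an illustration of the knobs mentioned above (rolling-update strategy, probes, graceful shutdown), here is a minimal sketch of a backend Deployment; the names, image, and timer values are assumptions for illustration, not recommendations from this thread:

# Sketch: tune rollout and shutdown behaviour on the backend Deployment so the
# controller always has a ready endpoint before old pods disappear.
kubectl -n test apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
  template:
    metadata:
      labels:
        app: web
    spec:
      terminationGracePeriodSeconds: 30
      containers:
      - name: web
        image: gcr.io/google-samples/hello-app:1.0   # placeholder image serving HTTP on 8080
        ports:
        - containerPort: 8080
        readinessProbe:
          httpGet:
            path: /
            port: 8080
          periodSeconds: 5
        lifecycle:
          preStop:
            exec:
              command: ["sleep", "5"]   # assumes the image ships a sleep binary; gives the controller time to drop the old endpoint
EOF

Note that such settings only help rolling updates; they do not cover the delete-and-recreate flow discussed below.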
/remove-kind bug
/kind support
@longwuyuan: Those labels are not set on the issue.
In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
/kind support
This is a real problem that we are facing in a prod environment. One of our teams uses such an exotic method in their pipelines. The Ingress won't work until something forces the ingress controller to reload its configuration (for example, patching or creating Ingress objects). The problem only occurs with an Ingress in ssl-passthrough mode. Unlike HTTP backends, traffic to passthrough backends is sent to the ClusterIP of the backing Service instead of individual Endpoints. If I create a second Ingress without the annotation (see the sketch after this comment):
After running the script, one Ingress will work (without ssl-passthrough) and the second one will not (with the ssl-passthrough annotation). I changed the Service for ingress-nginx-controller to ClusterIP and created a new Ingress without ssl-passthrough.
Second Ingress, without ssl-passthrough:
Without ssl-passthrough it works, because it uses Endpoints.
The Ingress with ssl-passthrough is not working. It uses the ClusterIP.
Endpoints of the Service:
I understand that this is a rather degenerate case, but there is a non-zero probability that other users may encounter this. We cannot control the order in which objects are deleted.
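A sketch of the two Ingress objects being compared; the hostnames, namespace, ingress class, and backend Service/port are assumptions based on the reproduction steps described in this issue:

# One Ingress with ssl-passthrough, one without, both pointing at the same Service.
kubectl -n test apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-passthrough
  annotations:
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  ingressClassName: nginx
  rules:
  - host: hello-world-https.example
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web
            port:
              number: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-plain
spec:
  ingressClassName: nginx
  rules:
  - host: hello-world-http.example
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web
            port:
              number: 8080
EOF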
We have the same problem with this.
Hi, I 100% agree with you that you need to find a solution for your use-case.
So the problem definition you have provided is forced termination of backend pods.
So there is no data here on what the controller can change in its code to address this problem. It is not going to be possible for the controller to just check the IP addresses of pods, especially in an ssl-passthrough use-case.
This issue is not clarifying some simple facts. Those facts are that the controller does not terminate the TLS connection, so if the connection is not terminated in the controller pod, there is no impact from configuring the annotation.
The other important factor here is that there is not enough clarity in this issue on the use-case for a NodePort or ClusterIP type of service created by the ingress-nginx controller installation. Specifically, NodePort is not really the intended design for the service created by the Ingress-NGINX controller installation. The reason is that the design of ingress involves a lot of work like protecting from DDoS and managing the scale of connections; that work is offloaded to a load balancer such as the AWS LoadBalancer, the GCP LoadBalancer, or even MetalLB in on-premise clusters. More relevant here is that TLS connections get processed differently with each type of service created by the ingress-nginx controller installation. So if you provide tests with an AWS LB, GCP LB, another cloud LB, or MetalLB, then there is a history of available information.
If you test in production with a NodePort or ClusterIP type of service for the Ingress-NGINX controller, and then force-terminate backend pods (instead of gracefully closing connections), then other than playing with the healthcheck or other pod lifecycle timeouts, the controller cannot do anything (including changing over from using EndpointSlice to something else).
I suggest you share your screen and show the problem, so any information that is not present can be posted here in the issue. Unless a problem can be proven in a kind cluster (or a minikube cluster) that affects all users of ssl-passthrough, it is not going to be possible to allocate resources just to do triaging and debugging of the root cause or suspected bug, simply because there is no developer time available to work on this kind of feature, which is working for all other users of ssl-passthrough. Here is a link https://argo-cd.readthedocs.io/en/stable/operator-manual/ingress/#kubernetesingress-nginx that shows that even a GitOps tool like ArgoCD documents successful use of ssl-passthrough in production. So there are no bugs or problems as such in the feature. All the information so far hints that the forced termination of backend pods is making the EndpointSlice empty and outdated.
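One way to observe that live state (a sketch; the namespace test and Service name web follow the reproduction steps later in this issue) is to watch the EndpointSlices and the ClusterIP while the backend objects are deleted and recreated:

# Watch the EndpointSlices behind the Service; an empty or missing slice means
# there is nothing for non-passthrough backends to route to.
kubectl -n test get endpointslices -l kubernetes.io/service-name=web -o wide -w

# The ClusterIP changes when the Service is recreated, which is what matters
# for ssl-passthrough backends, since they are proxied to the ClusterIP.
kubectl -n test get svc web -o jsonpath='{.spec.clusterIP}{"\n"}'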
I can reproduce it in minikube, though it is not 100% reproducible; there is a certain probability that traffic is forwarded to the new svc. I think this uncertainty should not happen. nginx-controller logs:
web svc:
Perhaps this is due to kubernetes/kubernetes#115658?
no, because the nginx-controller log shows that the
In addition, nginx-controller only handles service delete events of type
Hi @longwuyuan, @anvpetrov and I are not on the same team; I don't even know him. I just saw this issue and tried to reproduce it using minikube. I think this issue is quite interesting. I found that the fundamental reason has nothing to do with whether the EndpointSlice is available, but that the ingress is forwarded to the old svc. OK, we can communicate on Slack.
@chengjoey thank you for your comments. I want to close it because, practically, the project will not be able to allocate any resources to work on it. But I am not closing it, because the data has to be posted here in complete detail as to how the expectation/problem was processed. I look forward to chatting on Slack. I agree 100% that you and @anvpetrov have observed the correct live state (requests going to the old svc, or however else it is described). But open discussions here need to be complete and aimed at resolution, based on practical circumstances. So while you see that the routing destination was the old svc, what is the alternative state that you think should happen, given that the endpoints in the EndpointSlice for those pods/svc got terminated and removed from etcd?
Hello @chengjoey, have you been able to identify the root cause of this behavior?
I compared the logs from a run where traffic is forwarded to the new svc and one where it goes to the old svc. This seems related to the time when the endpoints are enqueued and the order in which they are updated, just as you mentioned (as fast as possible). I will upload the two detailed logs later and analyze them carefully when I have time. I think this is a potential hidden danger, so I hope the issue can remain open.
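As a rough sketch of how the two runs could be compared against the controller's own timeline (the namespace, deployment name, and service name are assumptions matching this issue's reproduction), the controller log can be filtered for events that mention the recreated Service:

# Keep only timestamped controller log lines that mention the test/web Service,
# so the order in which its delete and re-add events were processed can be compared between runs.
kubectl -n ingress-nginx logs deploy/ingress-nginx-controller --timestamps | grep 'test/web'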
Hi @chengjoey, thanks for implementing the fix.
It seems to be caused by |
This is stale, but we won't close it automatically; just bear in mind the maintainers may be busy with other tasks and will reach your issue ASAP. If you have any question or request to prioritize this, please reach out.
What happened:
Nginx Controller does not update configuration while recreating service.
The problem reproduces under certain conditions:
What you expected to happen:
Application accessible via the ingress URL
NGINX Ingress controller version (exec into the pod and run nginx-ingress-controller --version.):
v1.10.1
v1.11.2
and any other version
Kubernetes version (use kubectl version):
root@vm1:~# kubectl version
Client Version: v1.30.0
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.30.0
It is also reproduced on 1.27 and other versions.
Environment:
Dev, test, prod
OS
root@vm1:~# cat /etc/os-release
PRETTY_NAME="Ubuntu 22.04.1 LTS"
Kernel (e.g. uname -a):
root@vm1:~# uname -a
Linux vm1 5.15.0-119-generic #129-Ubuntu SMP Fri Aug 2 19:25:20 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
How was the ingress-nginx-controller installed:
You can reproduce the problem on minikube.
Current State of the controller:
How to reproduce this issue:
# Install minikube
minikube start --force
# Install the ingress addon
minikube addons enable ingress
# Edit the ingress deployment to enable passthrough
kubectl edit deployments.apps -n ingress-nginx ingress-nginx-controller
Add the arg
--enable-ssl-passthrough=true
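A non-interactive alternative to kubectl edit (a sketch; the deployment and namespace names follow the minikube ingress addon defaults):

# Append the flag to the controller container's argument list.
kubectl -n ingress-nginx patch deployment ingress-nginx-controller --type=json \
  -p='[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--enable-ssl-passthrough=true"}]'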
Install an application
Create an ingress (please add any additional annotation required)
Make a request
Check that it's working
curl -k --resolve "hello-world-https.example:443:$( minikube ip )" -i https://hello-world-https.example
curl: (35) error:0A00010B:SSL routines::wrong version number
The SSL error occurs because our application serves plain HTTP; this is normal.
We will prepare the manifests:
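The original manifests are not reproduced here; a minimal sketch consistent with the outputs below (namespace test, Service web on port 8080) could look like this, where the image and labels are illustrative assumptions:

kubectl create namespace test --dry-run=client -o yaml | kubectl apply -f -
kubectl -n test apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: gcr.io/google-samples/hello-app:1.0   # placeholder HTTP app listening on 8080
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: ClusterIP
  selector:
    app: web
  ports:
  - port: 8080
    targetPort: 8080
EOF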
Now we can start the test.
kubectl get svc -n test
NAME   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
web    ClusterIP   10.101.12.157   <none>        8080/TCP   3m15s
IP 10.101.12.157
Run our bash script:
bash run.sh
deployment.apps "web" deleted
service "web" deleted
deployment.apps/web created
service/web created
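The contents of run.sh are not shown in the issue; a minimal sketch consistent with the output above might be (the manifest filename app.yaml is an assumption):

#!/usr/bin/env bash
# Force-recreate the application objects while the Ingress stays in place.
kubectl -n test delete -f app.yaml
kubectl -n test apply -f app.yaml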
Check again:
curl -k --resolve "hello-world-https.example:443:$( minikube ip )" -i https://hello-world-https.example
curl: (35) error:0A000126:SSL routines::unexpected eof while reading
If we run tcpdump, we can see that packets are sent to the old service IP:
tcpdump -nnvvS -i any host 10.101.12.157
tcpdump: data link type LINUX_SLL2
tcpdump: listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 262144 bytes
18:56:27.059554 vethda6e844 P IP (tos 0x0, ttl 63, id 50339, offset 0, flags [DF], proto TCP (6), length 60)
192.168.49.2.57792 > 10.101.12.157.8080: Flags [S], cksum 0x08db (incorrect -> 0x20bc), seq 3739768812, win 64240, options [mss 1460,sackOK,TS val 4031445813 ecr 0,nop,wscale 7], length 0
18:56:27.059554 br-8f45197f04bc In IP (tos 0x0, ttl 63, id 50339, offset 0, flags [DF], proto TCP (6), length 60)
192.168.49.2.57792 > 10.101.12.157.8080: Flags [S], cksum 0x08db (incorrect -> 0x20bc), seq 3739768812, win 64240, options [mss 1460,sackOK,TS val 4031445813 ecr 0,nop,wscale 7], length 0
18:56:27.059571 enp0s3 Out IP (tos 0x0, ttl 62, id 50339, offset 0, flags [DF], proto TCP (6), length 60)
10.0.2.15.57792 > 10.101.12.157.8080: Flags [S], cksum 0x233f (incorrect -> 0x0658), seq 3739768812, win 64240, options [mss 1460,sackOK,TS val 4031445813 ecr 0,nop,wscale 7], length 0
18:56:28.066013 vethda6e844 P IP (tos 0x0, ttl 63, id 50340, offset 0, flags [DF], proto TCP (6), length 60)
192.168.49.2.57792 > 10.101.12.157.8080: Flags [S], cksum 0x08db (incorrect -> 0x1ccd), seq 3739768812, win 64240, options [mss 1460,sackOK,TS val 4031446820 ecr 0,nop,wscale 7], length 0
18:56:28.066013 br-8f45197f04bc In IP (tos 0x0, ttl 63, id 50340, offset 0, flags [DF], proto TCP (6), length 60)
192.168.49.2.57792 > 10.101.12.157.8080: Flags [S], cksum 0x08db (incorrect -> 0x1ccd), seq 3739768812, win 64240, options [mss 1460,sackOK,TS val 4031446820 ecr 0,nop,wscale 7], length 0
If you change the Ingress after running that script, it will work.
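For example (a sketch; the Ingress name web-passthrough is an assumption), any update event on the Ingress object, such as adding or changing an annotation, is one way to nudge the controller into rebuilding its configuration:

# Touch the Ingress so the controller processes an update event and re-syncs.
kubectl -n test annotate ingress web-passthrough last-forced-sync="$(date +%s)" --overwrite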