NGINX Ingress controller does not update configuration when recreating a Service #11963

Open
anvpetrov opened this issue Sep 10, 2024 · 19 comments · May be fixed by #12034
Labels
kind/support Categorizes issue or PR as a support question. lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. needs-priority needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one.

Comments

@anvpetrov

anvpetrov commented Sep 10, 2024

What happened:
The NGINX Ingress controller does not update its configuration when a Service is deleted and recreated.
The problem reproduces under the following conditions:

  1. An Ingress object with ssl-passthrough enabled
  2. Delete the Deployment and the Service
  3. Recreate the Deployment and the Service (as fast as possible)

What you expected to happen:
The application remains accessible via the Ingress URL.

NGINX Ingress controller version (exec into the pod and run nginx-ingress-controller --version.):
v1.10.1
v1.11.2
and other versions as well

Kubernetes version (use kubectl version):
root@vm1:~# kubectl version
Client Version: v1.30.0
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.30.0
It also reproduces on 1.27 and other versions.
Environment:
Dev, test, prod

OS
root@vm1:~# cat /etc/os-release
PRETTY_NAME="Ubuntu 22.04.1 LTS"

  • Kernel (e.g. uname -a):
    root@vm1:~# uname -a
    Linux vm1 5.15.0-119-generic #129-Ubuntu SMP Fri Aug 2 19:25:20 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux

  • How was the ingress-nginx-controller installed:
    The problem can be reproduced on minikube.

  • Current State of the controller:

root@vm1:~# kubectl describe ingressclasses
Name:         nginx
Labels:       app.kubernetes.io/component=controller
              app.kubernetes.io/instance=ingress-nginx
              app.kubernetes.io/name=ingress-nginx
Annotations:  ingressclass.kubernetes.io/is-default-class: true
Controller:   k8s.io/ingress-nginx
Events:       <none>

 root@vm1:~# kubectl -n ingress-nginx get all -A -o wide
NAMESPACE       NAME                                            READY   STATUS      RESTARTS      AGE     IP             NODE       NOMINATED NODE   READINESS GATES
ingress-nginx   pod/ingress-nginx-admission-create-slvdf        0/1     Completed   0             29m     10.244.0.5     minikube   <none>           <none>
ingress-nginx   pod/ingress-nginx-admission-patch-vv6gt         0/1     Completed   1             29m     10.244.0.4     minikube   <none>           <none>
ingress-nginx   pod/ingress-nginx-controller-5b787686df-gwzj6   1/1     Running     0             9m45s   10.244.0.7     minikube   <none>           <none>
kube-system     pod/coredns-7db6d8ff4d-d5q7q                    1/1     Running     0             30m     10.244.0.2     minikube   <none>           <none>
kube-system     pod/coredns-7db6d8ff4d-mbw2q                    1/1     Running     0             30m     10.244.0.3     minikube   <none>           <none>
kube-system     pod/etcd-minikube                               1/1     Running     0             30m     192.168.49.2   minikube   <none>           <none>
kube-system     pod/kube-apiserver-minikube                     1/1     Running     0             30m     192.168.49.2   minikube   <none>           <none>
kube-system     pod/kube-controller-manager-minikube            1/1     Running     0             31m     192.168.49.2   minikube   <none>           <none>
kube-system     pod/kube-proxy-fdgdt                            1/1     Running     0             30m     192.168.49.2   minikube   <none>           <none>
kube-system     pod/kube-scheduler-minikube                     1/1     Running     0             30m     192.168.49.2   minikube   <none>           <none>
kube-system     pod/storage-provisioner                         1/1     Running     1 (30m ago)   30m     192.168.49.2   minikube   <none>           <none>

NAMESPACE       NAME                                         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE   SELECTOR
default         service/kubernetes                           ClusterIP   10.96.0.1        <none>        443/TCP                      31m   <none>
ingress-nginx   service/ingress-nginx-controller             NodePort    10.100.205.88    <none>        80:31454/TCP,443:30567/TCP   29m   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
ingress-nginx   service/ingress-nginx-controller-admission   ClusterIP   10.111.234.227   <none>        443/TCP                      29m   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
kube-system     service/kube-dns                             ClusterIP   10.96.0.10       <none>        53/UDP,53/TCP,9153/TCP       31m   k8s-app=kube-dns

NAMESPACE     NAME                        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE   CONTAINERS   IMAGES                               SELECTOR
kube-system   daemonset.apps/kube-proxy   1         1         1       1            1           kubernetes.io/os=linux   31m   kube-proxy   registry.k8s.io/kube-proxy:v1.30.0   k8s-app=kube-proxy

NAMESPACE       NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                                                                                                                     SELECTOR
ingress-nginx   deployment.apps/ingress-nginx-controller   1/1     1            1           29m   controller   registry.k8s.io/ingress-nginx/controller:v1.10.1@sha256:e24f39d3eed6bcc239a56f20098878845f62baa34b9f2be2fd2c38ce9fb0f29e   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
kube-system     deployment.apps/coredns                    2/2     2            2           31m   coredns      registry.k8s.io/coredns/coredns:v1.11.1                                                                                    k8s-app=kube-dns

NAMESPACE       NAME                                                  DESIRED   CURRENT   READY   AGE     CONTAINERS   IMAGES                                                                                                                     SELECTOR
ingress-nginx   replicaset.apps/ingress-nginx-controller-5b787686df   1         1         1       9m46s   controller   registry.k8s.io/ingress-nginx/controller:v1.10.1@sha256:e24f39d3eed6bcc239a56f20098878845f62baa34b9f2be2fd2c38ce9fb0f29e   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=5b787686df
ingress-nginx   replicaset.apps/ingress-nginx-controller-768f948f8f   0         0         0       29m     controller   registry.k8s.io/ingress-nginx/controller:v1.10.1@sha256:e24f39d3eed6bcc239a56f20098878845f62baa34b9f2be2fd2c38ce9fb0f29e   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=768f948f8f
kube-system     replicaset.apps/coredns-7db6d8ff4d                    2         2         2       30m     coredns      registry.k8s.io/coredns/coredns:v1.11.1                                                                                    k8s-app=kube-dns,pod-template-hash=7db6d8ff4d

NAMESPACE       NAME                                       STATUS     COMPLETIONS   DURATION   AGE   CONTAINERS   IMAGES                                                                                                                              SELECTOR
ingress-nginx   job.batch/ingress-nginx-admission-create   Complete   1/1           36s        29m   create       registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1@sha256:36d05b4077fb8e3d13663702fa337f124675ba8667cbd949c03a8e8ea6fa4366   batch.kubernetes.io/controller-uid=e80da85a-d5b7-41cb-a1d2-310539779154
ingress-nginx   job.batch/ingress-nginx-admission-patch    Complete   1/1           35s        29m   patch        registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1@sha256:36d05b4077fb8e3d13663702fa337f124675ba8667cbd949c03a8e8ea6fa4366   batch.kubernetes.io/controller-uid=3463fa08-6ccc-42ac-9d65-f9908d84a943
root@vm1:~# kubectl -n ingress-nginx describe po ingress-nginx-controller-5b787686df-gwzj6
Name:             ingress-nginx-controller-5b787686df-gwzj6
Namespace:        ingress-nginx
Priority:         0
Service Account:  ingress-nginx
Node:             minikube/192.168.49.2
Start Time:       Tue, 10 Sep 2024 17:54:37 +0000
Labels:           app.kubernetes.io/component=controller
                  app.kubernetes.io/instance=ingress-nginx
                  app.kubernetes.io/name=ingress-nginx
                  gcp-auth-skip-secret=true
                  pod-template-hash=5b787686df
Annotations:      <none>
Status:           Running
IP:               10.244.0.7
IPs:
  IP:           10.244.0.7
Controlled By:  ReplicaSet/ingress-nginx-controller-5b787686df
Containers:
  controller:
    Container ID:  docker://154dd2e5c09ae5440a3bf4d9f5047a5e9794aaef41013d93511f614952fbea04
    Image:         registry.k8s.io/ingress-nginx/controller:v1.10.1@sha256:e24f39d3eed6bcc239a56f20098878845f62baa34b9f2be2fd2c38ce9fb0f29e
    Image ID:      docker-pullable://registry.k8s.io/ingress-nginx/controller@sha256:e24f39d3eed6bcc239a56f20098878845f62baa34b9f2be2fd2c38ce9fb0f29e
    Ports:         80/TCP, 443/TCP, 8443/TCP
    Host Ports:    80/TCP, 443/TCP, 0/TCP
    Args:
      /nginx-ingress-controller
      --election-id=ingress-nginx-leader
      --controller-class=k8s.io/ingress-nginx
      --watch-ingress-without-class=true
      --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
      --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
      --udp-services-configmap=$(POD_NAMESPACE)/udp-services
      --validating-webhook=:8443
      --validating-webhook-certificate=/usr/local/certificates/cert
      --validating-webhook-key=/usr/local/certificates/key
      --enable-ssl-passthrough=true
    State:          Running
      Started:      Tue, 10 Sep 2024 17:54:41 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:      100m
      memory:   90Mi
    Liveness:   http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=5
    Readiness:  http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
    Environment:
      POD_NAME:       ingress-nginx-controller-5b787686df-gwzj6 (v1:metadata.name)
      POD_NAMESPACE:  ingress-nginx (v1:metadata.namespace)
      LD_PRELOAD:     /usr/local/lib/libmimalloc.so
    Mounts:
      /usr/local/certificates/ from webhook-cert (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sr6jw (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       True
  ContainersReady             True
  PodScheduled                True
Volumes:
  webhook-cert:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  ingress-nginx-admission
    Optional:    false
  kube-api-access-sr6jw:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
                             minikube.k8s.io/primary=true
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age   From                      Message
  ----     ------            ----  ----                      -------
  Warning  FailedScheduling  10m   default-scheduler         0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
  Normal   Scheduled         10m   default-scheduler         Successfully assigned ingress-nginx/ingress-nginx-controller-5b787686df-gwzj6 to minikube
  Normal   Pulled            10m   kubelet                   Container image "registry.k8s.io/ingress-nginx/controller:v1.10.1@sha256:e24f39d3eed6bcc239a56f20098878845f62baa34b9f2be2fd2c38ce9fb0f29e" already present on machine
  Normal   Created           10m   kubelet                   Created container controller
  Normal   Started           10m   kubelet                   Started container controller
  Normal   RELOAD            10m   nginx-ingress-controller  NGINX reload triggered due to a change in configuration
root@vm1:~# kubectl -n ingress-nginx describe svc ingress-nginx-controller
Name:                     ingress-nginx-controller
Namespace:                ingress-nginx
Labels:                   app.kubernetes.io/component=controller
                          app.kubernetes.io/instance=ingress-nginx
                          app.kubernetes.io/name=ingress-nginx
Annotations:              <none>
Selector:                 app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.100.205.88
IPs:                      10.100.205.88
Port:                     http  80/TCP
TargetPort:               http/TCP
NodePort:                 http  31454/TCP
Endpoints:                10.244.0.7:80
Port:                     https  443/TCP
TargetPort:               https/TCP
NodePort:                 https  30567/TCP
Endpoints:                10.244.0.7:443
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

How to reproduce this issue:

#Install minikube
minikube start --force
#Install Ingress
minikube addons enable ingress
#Edit the ingress deployment to enable SSL passthrough
kubectl edit deployments.apps -n ingress-nginx ingress-nginx-controller
Add the arg:
--enable-ssl-passthrough=true
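
The same change can be applied non-interactively. This is only a sketch, assuming the controller is the first container in the addon's Deployment:

#Append the flag with a JSON patch (the Deployment then rolls out a new controller pod)
kubectl patch deployment ingress-nginx-controller -n ingress-nginx --type=json \
  -p '[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--enable-ssl-passthrough=true"}]'

Wait for the new controller pod to become Ready before continuing.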

Install an application

kubectl create ns test
kubectl create deployment web --image=gcr.io/google-samples/hello-app:1.0 -n test
kubectl expose deployment web --port=8080 -n test

Create an ingress (please add any additional annotation required)

echo "
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: 'HTTPS'
    nginx.ingress.kubernetes.io/enable-access-log: 'true'
    nginx.ingress.kubernetes.io/ssl-passthrough: 'true'
  name: example-ingress-https
  namespace: test
spec:
  ingressClassName: nginx
  rules:
  - host: hello-world-https.example
    http:
      paths:
      - backend:
          service:
            name: web
            port:
              number: 8080
        path: /
        pathType: Prefix
" | kubectl apply -f -

Make a request and check that it's working:
curl -k --resolve "hello-world-https.example:443:$( minikube ip )" -i https://hello-world-https.example
curl: (35) error:0A00010B:SSL routines::wrong version number

The SSL error occurs because our application is running over plain HTTP; this is normal.

We will prepare the manifests:

cat <<EOF > manifests.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: web
  name: web
  namespace: test
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: web
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: web
    spec:
      containers:
      - image: gcr.io/google-samples/hello-app:1.0
        imagePullPolicy: IfNotPresent
        name: hello-app
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: web
  name: web
  namespace: test
spec:
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: web
  sessionAffinity: None
  type: ClusterIP
EOF
cat <<EOF > run.sh
kubectl delete deployment web -n test
kubectl delete service web -n test
kubectl create -f manifests.yaml
EOF
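
For comparison, inserting even a short delay between the delete and the create makes the problem much harder to reproduce (as discussed further down in this thread). A variant of the script, purely as a sketch:

cat <<EOF > run-slow.sh
kubectl delete deployment web -n test
kubectl delete service web -n test
sleep 2
kubectl create -f manifests.yaml
EOF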

Now we can start the test.

  1. Check the service:
    kubectl get svc -n test
    NAME   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
    web    ClusterIP   10.101.12.157   <none>        8080/TCP   3m15s

The service IP is 10.101.12.157.

Run our bash script:
bash run.sh
deployment.apps "web" deleted
service "web" deleted
deployment.apps/web created
service/web created

Check again:
curl -k --resolve "hello-world-https.example:443:$( minikube ip )" -i https://hello-world-https.example
curl: (35) error:0A000126:SSL routines::unexpected eof while reading

If we run tcpdump, we can see that packets are sent to the old service IP:
tcpdump -nnvvS -i any host 10.101.12.157
tcpdump: data link type LINUX_SLL2
tcpdump: listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 262144 bytes
18:56:27.059554 vethda6e844 P IP (tos 0x0, ttl 63, id 50339, offset 0, flags [DF], proto TCP (6), length 60)
192.168.49.2.57792 > 10.101.12.157.8080: Flags [S], cksum 0x08db (incorrect -> 0x20bc), seq 3739768812, win 64240, options [mss 1460,sackOK,TS val 4031445813 ecr 0,nop,wscale 7], length 0
18:56:27.059554 br-8f45197f04bc In IP (tos 0x0, ttl 63, id 50339, offset 0, flags [DF], proto TCP (6), length 60)
192.168.49.2.57792 > 10.101.12.157.8080: Flags [S], cksum 0x08db (incorrect -> 0x20bc), seq 3739768812, win 64240, options [mss 1460,sackOK,TS val 4031445813 ecr 0,nop,wscale 7], length 0
18:56:27.059571 enp0s3 Out IP (tos 0x0, ttl 62, id 50339, offset 0, flags [DF], proto TCP (6), length 60)
10.0.2.15.57792 > 10.101.12.157.8080: Flags [S], cksum 0x233f (incorrect -> 0x0658), seq 3739768812, win 64240, options [mss 1460,sackOK,TS val 4031445813 ecr 0,nop,wscale 7], length 0
18:56:28.066013 vethda6e844 P IP (tos 0x0, ttl 63, id 50340, offset 0, flags [DF], proto TCP (6), length 60)
192.168.49.2.57792 > 10.101.12.157.8080: Flags [S], cksum 0x08db (incorrect -> 0x1ccd), seq 3739768812, win 64240, options [mss 1460,sackOK,TS val 4031446820 ecr 0,nop,wscale 7], length 0
18:56:28.066013 br-8f45197f04bc In IP (tos 0x0, ttl 63, id 50340, offset 0, flags [DF], proto TCP (6), length 60)
192.168.49.2.57792 > 10.101.12.157.8080: Flags [S], cksum 0x08db (incorrect -> 0x1ccd), seq 3739768812, win 64240, options [mss 1460,sackOK,TS val 4031446820 ecr 0,nop,wscale 7], length 0

If you change the Ingress after running that script, it starts working again (see the sketch below).
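
As a workaround sketch: the annotation key below is arbitrary and only exists to modify the Ingress object, which forces the controller to re-run a sync and pick up the recreated Service:

kubectl annotate ingress example-ingress-https -n test \
  reload-marker="$(date +%s)" --overwrite

After the resulting reload, the passthrough backend points at the new ClusterIP and the curl above succeeds again.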

@anvpetrov anvpetrov added the kind/bug Categorizes issue or PR as related to a bug. label Sep 10, 2024
@k8s-ci-robot k8s-ci-robot added the needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. label Sep 10, 2024
@k8s-ci-robot
Contributor

This issue is currently awaiting triage.

If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@longwuyuan
Contributor

I think it is fair to expect that you will get a 200 response for all requests. You typed/executed a command to recreate the application-related K8s objects while the pre-existing Ingress object for that application was not deleted.

But not all expectations can be met by the controller.
And since the controller is not the owner of your application Deployment, Pods or Services, the ingress-nginx controller cannot meet this expectation of yours.

Only after the application pod has passed its probes and the controller is able to get an endpoint for your application is it possible for the controller to route traffic to it. So after you delete the application and before the endpoint is ready, you will get failed requests. Seeing the old IP address in tcpdump is not proof that the new pod of the application has passed its probes and is ready to receive traffic.

Secondly, your controller Service is of --type NodePort. We don't test that type of Service extensively for all use-cases; the CI only tests it in a kind cluster. So any networking-related reasons for you to see failed requests are not related to the ingress-nginx controller code.

You must be having a real-use problem to discuss this here, but it has to be clear that the data we have is not proof of a problem with the controller. Users have the option to play with graceful shutdown, or to use the RollingUpdate strategy fields and experiment with the timers and other related specs, but that does not mean the controller should change its default configuration values to extreme values. That would affect users who currently do not face this problem.

@longwuyuan
Contributor

/remove-kind bug kind support

@k8s-ci-robot k8s-ci-robot added needs-kind Indicates a PR lacks a `kind/foo` label and requires one. and removed kind/bug Categorizes issue or PR as related to a bug. labels Sep 11, 2024
@k8s-ci-robot
Contributor

@longwuyuan: Those labels are not set on the issue: kind/kind, kind/support

In response to this:

/remove-kind bug kind support

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@longwuyuan
Contributor

/kind support

@k8s-ci-robot k8s-ci-robot added kind/support Categorizes issue or PR as a support question. and removed needs-kind Indicates a PR lacks a `kind/foo` label and requires one. labels Sep 11, 2024
@anvpetrov
Author

This is a real problem that we are facing in a prod environment. One of our teams uses such an exotic method in their pipelines.
Here I have described a simplified example.

The Ingress won't work until something forces the ingress controller to reload its configuration (patching or creating Ingress objects, for example).

The problem only occurs with an Ingress in ssl-passthrough mode.
In the documentation https://kubernetes.github.io/ingress-nginx/user-guide/tls/#ssl-passthrough we see:

Unlike HTTP backends, traffic to Passthrough backends is sent to the clusterIP of the backing Service instead of individual Endpoints.
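
A quick way to observe this is to compare the ClusterIP currently assigned to the backing Service with the address the controller reports when it hands off a passthrough connection (the "passing to" log line, shown later in this thread). A sketch using the objects from the reproduction above:

#ClusterIP currently assigned to the backing Service
kubectl get svc web -n test -o jsonpath='{.spec.clusterIP}{"\n"}'

#Address the controller is actually proxying passthrough connections to
kubectl -n ingress-nginx logs deploy/ingress-nginx-controller | grep 'passing to' | tail -n 5

After the Service is recreated, the two values diverge until something triggers a configuration reload.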

If I create a second Ingress without the annotations:

    nginx.ingress.kubernetes.io/backend-protocol: 'HTTPS'
    nginx.ingress.kubernetes.io/ssl-passthrough: 'true'

After running the script, one Ingress will work (the one without ssl-passthrough), and the second one will not (the one with the ssl-passthrough annotation).

I changed the ingress-nginx-controller Service to ClusterIP and created a new Ingress without ssl-passthrough:

echo "
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - appProtocol: http
    name: http
    port: 80
    protocol: TCP
    targetPort: http
  - appProtocol: https
    name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  sessionAffinity: None
  type: ClusterIP
" | kubectl replace -f -

Second Ingress, without ssl-passthrough:

echo "
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  namespace: test
spec:
  ingressClassName: nginx
  rules:
  - host: hello-world.example
    http:
      paths:
      - backend:
          service:
            name: web
            port:
              number: 8080
        path: /
        pathType: Prefix
" | kubectl apply -f -

The Ingress without ssl-passthrough is working because it uses endpoints:

root@vm1:~/test# curl -k --connect-timeout 5 --resolve "hello-world.example:80:$( minikube ip )" -i http://hello-world.example
HTTP/1.1 200 OK
Date: Wed, 11 Sep 2024 07:02:57 GMT
Content-Type: text/plain; charset=utf-8
Content-Length: 60
Connection: keep-alive

Hello, world!
Version: 1.0.0
Hostname: web-56bb54ff6d-fhsqf

The Ingress with ssl-passthrough is not working; it uses the ClusterIP.

root@vm1:~/test# curl -k --connect-timeout 5 --resolve "hello-world-https.example:443:$( minikube ip )" -i https://hello-world-https.example
curl: (28) SSL connection timeout

Endpoints of service:

root@vm1:~/test# kubectl describe  endpoints -n test web
Name:         web
Namespace:    test
Labels:       app=web
Annotations:  endpoints.kubernetes.io/last-change-trigger-time: 2024-09-11T07:04:32Z
Subsets:
  Addresses:          10.244.0.20
  NotReadyAddresses:  <none>
  Ports:
    Name     Port  Protocol
    ----     ----  --------
    <unset>  8080  TCP

Events:  <none>

I understand that this is a rather degenerate case, but there is a non-zero probability that other users may encounter this. We cannot control the order in which entities are deleted.

@Vovanys

Vovanys commented Sep 11, 2024

We have the same problem.
Apparently, due to the ordering, nginx does not re-read the services.

@longwuyuan
Contributor

Hi,

I 100% agree with you that you need to find a solution for your use-case.
But the practical steps that can be taken are scoped by factors like:

  • define the problem in terms of Ingress-API specs and fields by triaging
  • debug the problem only from an Ingress-API perspective, because the project does not have the resources to work on features that are not close to, and implied by, the Ingress-API

So the problem definition you have provided is forced termination of backend pods.

  • For forced termination of backend pods, or a normal rollout of backend pods, within the Ingress-API there is no alternative but to wait for the endpoints to get updated in the EndpointSlice.
  • For that to happen, the probes must pass
  • For the probes to pass, the healthcheck needs to succeed
  • For the healthcheck to succeed, the controller code does nothing
  • For the healthcheck to pass, the controller waits on Kubernetes to put the backend pod in the Ready state

So there is no data here on what the controller can change in its code to address this problem.

It's not going to be possible for the controller to just check the IP addresses of pods.

Especially in an ssl-passthrough use-case, this issue is not clarifying some simple facts: the controller does not terminate the TLS connection. So if the connection is not terminated in the controller pod, configuring the annotation "backend-protocol: HTTPS" has no effect.

The other important factor is that this issue is not clear enough about the use-case for a NodePort --type or ClusterIP --type Service created by the ingress-nginx controller installation. Specifically, NodePort is not really the intended design for the Service created by the Ingress-NGINX controller installation. The reason is that the ingress design involves a lot of work, like protecting from DDoS and managing the scale of connections, and that work is offloaded to a load balancer such as an AWS LoadBalancer or a GCP LoadBalancer (or even MetalLB in on-premise clusters). More relevant here is that TLS-related connections get processed differently for each --type of the Service created by the ingress-nginx controller installation. So if you provide tests with an AWS LB, GCP LB, other cloud LB, or MetalLB, there is a history of available information. If you test in production with a NodePort or ClusterIP --type Service for the Ingress-NGINX controller, and then force-terminate backend pods (instead of gracefully closing connections), then, other than playing with the healthcheck or other pod-lifecycle timeouts, the controller cannot do anything (including changing over from using EndpointSlices to something else).

I suggest you share your screen and show the problem, so any information that is missing can be posted here in the issue.

Unless a problem that affects all users of ssl-passthrough can be proven with a kind cluster (or a minikube cluster), it is not going to be possible to allocate resources just to triage and debug the root cause or suspected bug, simply because there is no developer time available to work on this kind of feature, which is working for all other users of ssl-passthrough. Here is a link https://argo-cd.readthedocs.io/en/stable/operator-manual/ingress/#kubernetesingress-nginx that shows that even a GitOps tool like ArgoCD documents successful use of ssl-passthrough in production. So there are no bugs or problems as such in the feature. All the information so far hints that the forced termination of backend pods is making the EndpointSlice empty and outdated.

@chengjoey
Contributor

chengjoey commented Sep 12, 2024

I can reproduce it in minikube, and it seems that it is not 100% reproducible; there is a certain probability that traffic is forwarded to the new svc. I think this non-determinism should not exist.

nginx-controller logs:

I0912 08:00:11.319503       7 nginx.go:851] "Handling TCP connection" remote="10.244.0.1:36506" local="10.244.0.15:443"
I0912 08:00:11.330163       7 tcp.go:74] "TLS Client Hello" host="hello-world-https.example"
I0912 08:00:11.330307       7 tcp.go:84] "passing to" hostport="10.106.80.37:8080"
I0912 08:00:26.485653       7 reflector.go:808] k8s.io/[email protected]/tools/cache/reflector.go:232: Watch close - *v1.Service total 11 items received

web svc:

old svc:
kubectl get svc -n test
NAME   TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
web    ClusterIP   10.106.80.37   <none>        8080/TCP   19m

new svc:
kubectl get svc -n test
NAME   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
web    ClusterIP   10.96.120.211   <none>        8080/TCP   6m45s

@anvpetrov
Author

k8s.io/[email protected]/tools/cache/reflector.go:232

Perhaps this is due to kubernetes/kubernetes#115658 ?

@chengjoey
Contributor

chengjoey commented Sep 12, 2024

k8s.io/[email protected]/tools/cache/reflector.go:232

Perhaps this is due to kubernetes/kubernetes#115658 ?

No, because the nginx-controller log shows that the delete event is received. It may be caused by the different order in which the EndpointSlice and Service events are processed.

W0912 08:07:40.346999       7 controller.go:1216] Service "test/web" does not have any active Endpoint.
I0912 08:07:40.347064       7 controller.go:1424] Ingress "test/example-ingress-https" does not contains a TLS section.
I0912 08:07:40.347134       7 controller.go:811] Replacing location "/" for server "hello-world-https.example" with upstream "upstream-default-backend" to use upstream "test-web-8080" (Ingress "test/example-ingress-https")
I0912 08:07:40.348966       7 controller.go:239] Dynamic reconfiguration succeeded.
I0912 08:07:40.448834       7 nginx.go:363] "Event received" type="DELETE" object="&EndpointSlice{ObjectMeta:{web-dcl7m web- test  054e538b-afa5-48ae-833d-63fee544f4c6 6157 4 2024-09-12 08:05:17 +0000 UTC <nil> <nil> map[app:web endpointslice.kubernetes.io/managed-by:endpointslice-controller.k8s.io kubernetes.io/service-name:web] map[] [{v1 Service web 3108e051-a28e-4fe3-ade2-f2bb8e09b642 0x4000c0d04c 0x4000c0d04d}] [] [{kube-controller-manager Update discovery.k8s.io/v1 2024-09-12 08:07:40 +0000 UTC FieldsV1 {\"f:addressType\":{},\"f:endpoints\":{},\"f:metadata\":{\"f:generateName\":{},\"f:labels\":{\".\":{},\"f:app\":{},\"f:endpointslice.kubernetes.io/managed-by\":{},\"f:kubernetes.io/service-name\":{}},\"f:ownerReferences\":{\".\":{},\"k:{\\\"uid\\\":\\\"3108e051-a28e-4fe3-ade2-f2bb8e09b642\\\"}\":{}}},\"f:ports\":{}} }]},Endpoints:[]Endpoint{Endpoint{Addresses:[10.244.0.17],Conditions:EndpointConditions{Ready:*false,Serving:*true,Terminating:*true,},Hostname:nil,TargetRef:&v1.ObjectReference{Kind:Pod,Namespace:test,Name:web-68fdb9885b-6rhhg,UID:5c1ad3b6-bc54-4e82-94fb-c2febf690b8c,APIVersion:,ResourceVersion:,FieldPath:,},DeprecatedTopology:map[string]string{},NodeName:*minikube,Zone:nil,Hints:nil,},},Ports:[]EndpointPort{EndpointPort{Name:*,Protocol:*TCP,Port:*8080,AppProtocol:nil,},},AddressType:IPv4,}"
I0912 08:07:40.448988       7 queue.go:85] "queuing" item="&EndpointSlice{ObjectMeta:{web-dcl7m web- test  054e538b-afa5-48ae-833d-63fee544f4c6 6157 4 2024-09-12 08:05:17 +0000 UTC <nil> <nil> map[app:web endpointslice.kubernetes.io/managed-by:endpointslice-controller.k8s.io kubernetes.io/service-name:web] map[] [{v1 Service web 3108e051-a28e-4fe3-ade2-f2bb8e09b642 0x4000c0d04c 0x4000c0d04d}] [] [{kube-controller-manager Update discovery.k8s.io/v1 2024-09-12 08:07:40 +0000 UTC FieldsV1 {\"f:addressType\":{},\"f:endpoints\":{},\"f:metadata\":{\"f:generateName\":{},\"f:labels\":{\".\":{},\"f:app\":{},\"f:endpointslice.kubernetes.io/managed-by\":{},\"f:kubernetes.io/service-name\":{}},\"f:ownerReferences\":{\".\":{},\"k:{\\\"uid\\\":\\\"3108e051-a28e-4fe3-ade2-f2bb8e09b642\\\"}\":{}}},\"f:ports\":{}} }]},Endpoints:[]Endpoint{Endpoint{Addresses:[10.244.0.17],Conditions:EndpointConditions{Ready:*false,Serving:*true,Terminating:*true,},Hostname:nil,TargetRef:&v1.ObjectReference{Kind:Pod,Namespace:test,Name:web-68fdb9885b-6rhhg,UID:5c1ad3b6-bc54-4e82-94fb-c2febf690b8c,APIVersion:,ResourceVersion:,FieldPath:,},DeprecatedTopology:map[string]string{},NodeName:*minikube,Zone:nil,Hints:nil,},},Ports:[]EndpointPort{EndpointPort{Name:*,Protocol:*TCP,Port:*8080,AppProtocol:nil,},},AddressType:IPv4,}"
I0912 08:07:40.449013       7 queue.go:129] "syncing" key="test/web-dcl7m"
I0912 08:07:40.604506       7 nginx.go:363] "Event received" type="CREATE" object="&EndpointSlice{ObjectMeta:{web-2kc4q web- test  1240f5c6-5c8a-4ab0-989a-193210609390 6173 1 2024-09-12 08:07:40 +0000 UTC <nil> <nil> map[app:web endpointslice.kubernetes.io/managed-by:endpointslice-controller.k8s.io kubernetes.io/service-name:web] map[endpoints.kubernetes.io/last-change-trigger-time:2024-09-12T08:07:40Z] [{v1 Service web aa9bd2ba-6715-4947-80dd-99fb812d0b20 0x4000c0d3dc 0x4000c0d3dd}] [] [{kube-controller-manager Update discovery.k8s.io/v1 2024-09-12 08:07:40 +0000 UTC FieldsV1 {\"f:addressType\":{},\"f:endpoints\":{},\"f:metadata\":{\"f:annotations\":{\".\":{},\"f:endpoints.kubernetes.io/last-change-trigger-time\":{}},\"f:generateName\":{},\"f:labels\":{\".\":{},\"f:app\":{},\"f:endpointslice.kubernetes.io/managed-by\":{},\"f:kubernetes.io/service-name\":{}},\"f:ownerReferences\":{\".\":{},\"k:{\\\"uid\\\":\\\"aa9bd2ba-6715-4947-80dd-99fb812d0b20\\\"}\":{}}},\"f:ports\":{}} }]},Endpoints:[]Endpoint{Endpoint{Addresses:[10.244.0.17],Conditions:EndpointConditions{Ready:*false,Serving:*true,Terminating:*true,},Hostname:nil,TargetRef:&v1.ObjectReference{Kind:Pod,Namespace:test,Name:web-68fdb9885b-6rhhg,UID:5c1ad3b6-bc54-4e82-94fb-c2febf690b8c,APIVersion:,ResourceVersion:,FieldPath:,},DeprecatedTopology:map[string]string{},NodeName:*minikube,Zone:nil,Hints:nil,},},Ports:[]EndpointPort{EndpointPort{Name:*,Protocol:*TCP,Port:*8080,AppProtocol:nil,},},AddressType:IPv4,}"
I0912 08:07:40.604676       7 queue.go:85] "queuing" item="&EndpointSlice{ObjectMeta:{web-2kc4q web- test  1240f5c6-5c8a-4ab0-989a-193210609390 6173 1 2024-09-12 08:07:40 +0000 UTC <nil> <nil> map[app:web endpointslice.kubernetes.io/managed-by:endpointslice-controller.k8s.io kubernetes.io/service-name:web] map[endpoints.kubernetes.io/last-change-trigger-time:2024-09-12T08:07:40Z] [{v1 Service web aa9bd2ba-6715-4947-80dd-99fb812d0b20 0x4000c0d3dc 0x4000c0d3dd}] [] [{kube-controller-manager Update discovery.k8s.io/v1 2024-09-12 08:07:40 +0000 UTC FieldsV1 {\"f:addressType\":{},\"f:endpoints\":{},\"f:metadata\":{\"f:annotations\":{\".\":{},\"f:endpoints.kubernetes.io/last-change-trigger-time\":{}},\"f:generateName\":{},\"f:labels\":{\".\":{},\"f:app\":{},\"f:endpointslice.kubernetes.io/managed-by\":{},\"f:kubernetes.io/service-name\":{}},\"f:ownerReferences\":{\".\":{},\"k:{\\\"uid\\\":\\\"aa9bd2ba-6715-4947-80dd-99fb812d0b20\\\"}\":{}}},\"f:ports\":{}} }]},Endpoints:[]Endpoint{Endpoint{Addresses:[10.244.0.17],Conditions:EndpointConditions{Ready:*false,Serving:*true,Terminating:*true,},Hostname:nil,TargetRef:&v1.ObjectReference{Kind:Pod,Namespace:test,Name:web-68fdb9885b-6rhhg,UID:5c1ad3b6-bc54-4e82-94fb-c2febf690b8c,APIVersion:,ResourceVersion:,FieldPath:,},DeprecatedTopology:map[string]string{},NodeName:*minikube,Zone:nil,Hints:nil,},},Ports:[]EndpointPort{EndpointPort{Name:*,Protocol:*TCP,Port:*8080,AppProtocol:nil,},},AddressType:IPv4,}"

In addition, the nginx-controller only handles Service delete events for Services of type ServiceTypeExternalName.
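
To see the identity change the controller has to cope with, one can compare the UID of the freshly recreated Service with the owner reference on its new EndpointSlice. A small check along these lines (the old slice is deleted together with the old Service, and the new slice points at the new Service UID, while the passthrough target still holds the old ClusterIP until a sync happens):

#UID and ClusterIP of the Service that exists right now
kubectl get svc web -n test -o jsonpath='uid={.metadata.uid} clusterIP={.spec.clusterIP}{"\n"}'

#Owner reference of the EndpointSlice(s) created for it
kubectl get endpointslices -n test -l kubernetes.io/service-name=web \
  -o jsonpath='{range .items[*]}{.metadata.name} owner-uid={.metadata.ownerReferences[0].uid}{"\n"}{end}'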

@longwuyuan
Contributor

longwuyuan commented Sep 12, 2024

@chengjoey @anvpetrov

  • are you from the same team, and in that context, did you do the same test on a kind cluster?

  • We can discuss this in the community meeting. I request that you please join the ingress-nginx-users channel on Kubernetes Slack (register at slack.k8s.io if required)

  • Expecting non-stop 200 responses after you delete the Deployment as well as the Service cannot be supported, whether or not you deleted the Ingress. The project would solve your problem if it could, but it is impossible to get a non-stop stream of 200s after you delete both the Deployment and the Service. And you already know that the root cause is the endpoints not being available, so there is no destination backend to which the request can be routed (until the new pods reach a "Ready" state)

  • If you use GitOps tools like ArgoCD and maintain YAML manifests using kubectl kustomize, then ArgoCD can sync the changed values of the resources at the speed of compute+network. Then you have the best chance of reducing the non-200 responses when you upgrade your workload applications

@chengjoey
Contributor

Hi @longwuyuan, @anvpetrov and I are not on the same team; I don't even know him. I just saw this issue and tried to reproduce it using minikube. I think this issue is quite interesting.

I found that the fundamental cause has nothing to do with whether the EndpointSlice is available; rather, the ingress traffic is forwarded to the old svc.

OK, we can communicate on Slack.

@longwuyuan
Contributor

@chengjoey thank you for your comments.

I want to close it because, practically, the project will not be able to allocate any resources to work on it. But I am not closing it, because the data has to be posted here in complete detail as to how the expectation/problem was processed. I look forward to chatting on Slack.

I agree 100% that you and @anvpetrov have observed the correct live state (requests going to the old svc, or however it is described). But open discussions here need to be complete and aimed at resolution, based on practical circumstances. So while you see that the routing destination was the old svc, what is the alternative state that you think should be happening, given that the endpoints in the EndpointSlice, and the pods/svc, were terminated and removed from etcd?

@anvpetrov
Author

anvpetrov commented Sep 18, 2024

Hello @chengjoey, have you been able to identify the root cause of this behavior?

@chengjoey
Contributor

Hello @chengjoey, have you been able to identify the root cause of this behavior?

I compared the logs for requests forwarded to the new svc and to the old svc. This seems related to when the endpoints are enqueued and the order in which they are updated, just as you mentioned ("as fast as possible").

I will upload two detailed logs later, and analyze them carefully when I have time. I think this is a potential hidden danger, so I hope the issue can remain open.

@chengjoey chengjoey linked a pull request Sep 30, 2024 that will close this issue
@anvpetrov
Author

Hi @chengjoey, thanks for implementing the fix.
But it remains a mystery to me why the problem does not reproduce if you perform the same actions with a delay of a couple of seconds.
What causes the config reload?

@chengjoey
Contributor

Hi @chengjoey, thanks for implementing the fix. But it remains a mystery to me why the problem does not reproduce if you perform the same actions with a delay of a couple of seconds. What causes the config reload?

I1007 07:20:50.029249       7 nginx.go:732] "NGINX configuration change" diff=<
	--- /etc/nginx/nginx.conf	2024-10-07 07:20:46.878494295 +0000
	+++ /tmp/new-nginx-cfg1629542782	2024-10-07 07:20:50.026388629 +0000
	@@ -1,5 +1,5 @@
	
	-# Configuration checksum: 15768608418046704164
	+# Configuration checksum: 13413488848625732848
	
	 # setup custom paths that do not require root access
	 pid /tmp/nginx/nginx.pid;
 >

It seems to be gated by the configuration checksum. I will upload a more complete log later when I come back from vacation.
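
For reference, the checksum is embedded as a comment in the running config, so it can be read straight from the controller pod (a small check, assuming the default deployment name):

kubectl -n ingress-nginx exec deploy/ingress-nginx-controller -- \
  grep 'Configuration checksum' /etc/nginx/nginx.conf

The controller only reloads NGINX when the newly rendered config differs from the running one, which seems consistent with the observation that nothing changes until some other event alters the configuration.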


This is stale, but we won't close it automatically; just bear in mind the maintainers may be busy with other tasks and will reach your issue ASAP. If you have any question or request to prioritize this, please reach out on #ingress-nginx-dev on Kubernetes Slack.

@github-actions github-actions bot added the lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. label Dec 19, 2024