
fix: upstreamName(ns, backend) outputs same name for different backends #11942

Open
wants to merge 5 commits into main

Conversation


@Revolution1 Revolution1 commented Sep 7, 2024

What this PR does / why we need it:

The original upstreamName function can output the same name for different ingress backends.

// fmt.Sprintf("%s-%s-%s", namespace, service.Name, service.Port.Name)
fmt.Sprintf("%s-%s-%s", "a", "b", "c-d") == fmt.Sprintf("%s-%s-%s", "a", "b-c", "d") == fmt.Sprintf("%s-%s-%s", "a-b", "c", "d")

Names in Kubernetes must follow RFC 1123, which means they cannot contain the underscore character (_).

So my fix joins the name parts with underscores instead, which avoids these conflicts. A short illustration follows.
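
To make the collision concrete, here is a minimal, runnable sketch of both join strategies; `upstreamNameDash` and `upstreamNameUnderscore` are simplified stand-ins for illustration, not the controller's actual code:

```go
package main

import "fmt"

// upstreamNameDash mimics the original behavior: parts joined with "-",
// a character that is legal inside Kubernetes names, so the result is ambiguous.
func upstreamNameDash(namespace, service, portName string) string {
	return fmt.Sprintf("%s-%s-%s", namespace, service, portName)
}

// upstreamNameUnderscore joins with "_", which RFC 1123 names cannot contain,
// so the separator can never be confused with characters inside the parts.
func upstreamNameUnderscore(namespace, service, portName string) string {
	return fmt.Sprintf("%s_%s_%s", namespace, service, portName)
}

func main() {
	fmt.Println(upstreamNameDash("a", "b", "c-d"))       // a-b-c-d
	fmt.Println(upstreamNameDash("a", "b-c", "d"))       // a-b-c-d  -> collision
	fmt.Println(upstreamNameUnderscore("a", "b", "c-d")) // a_b_c-d
	fmt.Println(upstreamNameUnderscore("a", "b-c", "d")) // a_b-c_d  -> distinct
}
```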

Types of changes

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • CVE Report (Scanner found CVE and adding report)
  • Breaking change (fix or feature that would cause existing functionality to change)
  • Documentation only

Which issue/s this PR fixes

fixes #11938 #11937

How Has This Been Tested?

I built an image myself and tested it in my local kind cluster using the manifests from #11938.

This function did not have a unit test before; I'm still thinking about how to write one. A rough sketch of one possible approach is below.
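
One possibility might be a small table-driven test asserting that distinct (namespace, service name, port name) triples never map to the same upstream name. A rough sketch only: the package name is a placeholder, and it assumes a string-based upstreamName helper in the package under test (the real function takes the namespace and an Ingress backend, per the PR title, so the call would need adapting):

```go
package controller

import "testing"

// Sketch only: assumes a helper with the signature
// upstreamName(namespace, svcName, portName string) string exists in this package.
func TestUpstreamNameIsCollisionFree(t *testing.T) {
	// Distinct backends that collide when their parts are joined with "-".
	backends := [][3]string{
		{"a", "b", "c-d"},
		{"a", "b-c", "d"},
		{"a-b", "c", "d"},
	}
	seen := map[string][3]string{}
	for _, b := range backends {
		name := upstreamName(b[0], b[1], b[2])
		if prev, ok := seen[name]; ok {
			t.Errorf("backends %v and %v both map to upstream name %q", prev, b, name)
		}
		seen[name] = b
	}
}
```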

Checklist:

  • My change requires a change to the documentation.
  • I have updated the documentation accordingly.
  • I've read the CONTRIBUTION guide
  • I have added unit and/or e2e tests to cover my changes.
  • All new and existing tests passed.

@k8s-ci-robot added the cncf-cla: yes label Sep 7, 2024
@k8s-ci-robot requested review from Gacko and puerco September 7, 2024 03:13
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: Revolution1
Once this PR has been reviewed and has the lgtm label, please assign puerco for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot
Contributor

This issue is currently awaiting triage.

If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot added the needs-triage, needs-kind, needs-ok-to-test and needs-priority labels Sep 7, 2024
@k8s-ci-robot
Contributor

Hi @Revolution1. Thanks for your PR.

I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot added the size/XS label Sep 7, 2024

netlify bot commented Sep 7, 2024

Deploy Preview for kubernetes-ingress-nginx canceled.

🔨 Latest commit: 1c6d3db
🔍 Latest deploy log: https://app.netlify.com/sites/kubernetes-ingress-nginx/deploys/66dc3c2731786900085eb587

@k8s-ci-robot added the size/S label and removed the size/XS label Sep 7, 2024
@longwuyuan
Contributor

  • This reason:

    "The original upstreamName function can output the same name for different ingress backends."

    is not really true in the practical use-case context.

  • My test here shows there are no duplicates:
[~/Downloads] 
% k get all,ing
NAME       READY   STATUS    RESTARTS   AGE
pod/pod1   1/1     Running   0          4m22s
pod/pod2   1/1     Running   0          4m22s

NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP   6d16h
service/svc1         ClusterIP   10.105.218.126   <none>        80/TCP    4m22s
service/svc2         ClusterIP   10.98.59.191     <none>        80/TCP    4m22s

NAME                                 CLASS   HOSTS           ADDRESS        PORTS   AGE
ingress.networking.k8s.io/ingress1   nginx   example.local   192.168.49.2   80      4m22s
[~/Downloads] 
% k describe ingress ingress1 
Name:             ingress1
Labels:           <none>
Namespace:        default
Address:          192.168.49.2
Ingress Class:    nginx
Default backend:  <default>
Rules:
  Host           Path  Backends
  ----           ----  --------
  example.local  
                 /service1   svc1:http (10.244.0.40:8000)
                 /service2   svc2:http (10.244.0.39:8000)
Annotations:     <none>
Events:
  Type    Reason  Age                    From                      Message
  ----    ------  ----                   ----                      -------
  Normal  Sync    4m30s (x2 over 4m38s)  nginx-ingress-controller  Scheduled for sync
[~/Downloads] 
% curl example.local/service1 --resolve example.local:80:`minikube ip`
 Responsing From: pod1
[~/Downloads] 
% curl example.local/service2 --resolve example.local:80:`minikube ip`
 Responsing From: pod2
[~/Downloads] 

% k -n ingress-nginx exec -ti ingress-nginx-controller-7b7b559f8b-pdx9c -- sh
/etc/nginx $ grep svc1 /etc/nginx/nginx.conf -n
693:                    set $service_name   "svc1";
737:                    set $proxy_upstream_name "default-svc1-http";
813:                    set $service_name   "svc1";
857:                    set $proxy_upstream_name "default-svc1-http";
/etc/nginx $ grep svc2 /etc/nginx/nginx.conf -n
453:                    set $service_name   "svc2";
497:                    set $proxy_upstream_name "default-svc2-http";
573:                    set $service_name   "svc2";
617:                    set $proxy_upstream_name "default-svc2-http";
/etc/nginx $ 
  • The manifest for this test is
# pod1
apiVersion: v1
kind: Pod
metadata:
  name: "pod1"
  labels:
    app: "pod1"
spec:
  containers:
    - name: pod1
      image: "busybox:latest"
      command:
        - "sh"
        - "-c"
        - |
          while true; do
          echo -e "HTTP/1.1 200 OK\n\n Responsing From: $HOSTNAME" | nc -l -p 8000;
          done;
      resources:
        limits:
          cpu: 200m
          memory: 500Mi
        requests:
          cpu: 100m
          memory: 200Mi
      ports:
        - containerPort: 8000
---
# pod2
apiVersion: v1
kind: Pod
metadata:
  name: "pod2"
  labels:
    app: "pod2"
spec:
  containers:
    - name: pod1
      image: "busybox:latest"
      command:
        - "sh"
        - "-c"
        - |
          while true; do
          echo -e "HTTP/1.1 200 OK\n\n Responsing From: $HOSTNAME" | nc -l -p 8000;
          done;
      resources:
        limits:
          cpu: 200m
          memory: 500Mi
        requests:
          cpu: 100m
          memory: 200Mi
      ports:
        - containerPort: 8000
---
# service1
apiVersion: v1
kind: Service
metadata:
  name: "svc1"
spec:
  selector:
    app: "pod1"
  ports:
    - protocol: "TCP"
      port: 80
      targetPort: 8000
      name: "http"
---
# service2
apiVersion: v1
kind: Service
metadata:
  name: "svc2"
spec:
  selector:
    app: "pod2"
  ports:
    - protocol: "TCP"
      port: 80
      targetPort: 8000
      name: "http"
---
# ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress1
spec:
  ingressClassName: nginx
  rules:
    - host: "example.local"
      http:
        paths:
          - path: "/service1"
            pathType: Prefix
            backend:
              service:
                name: "svc1"
                port:
                  name: "http"
          - path: "/service2"
            pathType: Prefix
            backend:
              service:
                name: "svc2"
                port:
                  name: "http"

@longwuyuan
Contributor

longwuyuan commented Sep 7, 2024

@Revolution1 It seems that your claim of a bug is based on a manifest in which the same port number, 80, is given two different names: in one Service the port is named "http", and in the other Service the same port number 80 is named "pod-http". The manifest you provided to reproduce this is below:

# pod1
apiVersion: v1
kind: Pod
metadata:
  name: "pod1"
  labels:
    app: "pod1"
spec:
  containers:
    - name: pod1
      image: "busybox:latest"
      command:
        - "sh"
        - "-c"
        - |
          while true; do
          echo -e "HTTP/1.1 200 OK\n\n Responsing From: $HOSTNAME" | nc -l -p 8000;
          done;
      resources:
        limits:
          cpu: 200m
          memory: 500Mi
        requests:
          cpu: 100m
          memory: 200Mi
      ports:
        - containerPort: 8000
---
# pod2
apiVersion: v1
kind: Pod
metadata:
  name: "pod2"
  labels:
    app: "pod2"
spec:
  containers:
    - name: pod1
      image: "busybox:latest"
      command:
        - "sh"
        - "-c"
        - |
          while true; do
          echo -e "HTTP/1.1 200 OK\n\n Responsing From: $HOSTNAME" | nc -l -p 8000;
          done;
      resources:
        limits:
          cpu: 200m
          memory: 500Mi
        requests:
          cpu: 100m
          memory: 200Mi
      ports:
        - containerPort: 8000
---
# service1
apiVersion: v1
kind: Service
metadata:
  name: "service-pod"
spec:
  selector:
    app: "pod1"
  ports:
    - protocol: "TCP"
      port: 80
      targetPort: 8000
      name: http      # -------------------------------------->  First Name "http"
---
# service2
apiVersion: v1
kind: Service
metadata:
  name: "service"
spec:
  selector:
    app: "pod2"
  ports:
    - protocol: "TCP"
      port: 80
      targetPort: 8000
      name: pod-http    # -------------------------------------->  Second Name "pod-http" for same port number 80
---
# ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
spec:
  rules:
    - host: "example.local"
      http:
        paths:
          - path: "/service1"
            pathType: Prefix
            backend:
              service:
                name: "service-pod"
                port:
                  name: "http"
          - path: "/service2"
            pathType: Prefix
            backend:
              service:
                name: "service"
                port:
                  name: "pod-http"

@longwuyuan
Contributor

It's not an improvement to change to using underscores, but let's wait for comments from others on that.

@Gacko
Member

Gacko commented Sep 7, 2024

/hold

@k8s-ci-robot added the do-not-merge/hold label Sep 7, 2024
@k8s-ci-robot added the size/L label and removed the size/S label Sep 7, 2024
@Revolution1
Author

@Gacko Hi there. All existing unit and e2e tests passed. Does this change need a new unit or e2e test? If so, I'm not sure how to write one for this case. Is there a guide or something similar?

@longwuyuan
Copy link
Contributor

@Revolution1 your statement below

"The original upstreamName function can output the same name for different ingress backends."

is false.

I have copy/pasted the test details here and in the related issue, which clearly show that the function "upstreamName" returns two distinct "serviceName+servicePortName" combinations. See below:

% k -n ingress-nginx exec -ti ingress-nginx-controller-7b7b559f8b-pdx9c -- sh
/etc/nginx $ grep svc1 /etc/nginx/nginx.conf -n
693:                    set $service_name   "svc1";
737:                    set $proxy_upstream_name "default-svc1-http";
813:                    set $service_name   "svc1";
857:                    set $proxy_upstream_name "default-svc1-http";
/etc/nginx $ grep svc2 /etc/nginx/nginx.conf -n
453:                    set $service_name   "svc2";
497:                    set $proxy_upstream_name "default-svc2-http";
573:                    set $service_name   "svc2";
617:                    set $proxy_upstream_name "default-svc2-http";
/etc/nginx $ 

You have defined two names for the same port number in your test, like this:

# service1
apiVersion: v1
kind: Service
metadata:
  name: "service-pod"
spec:
  selector:
    app: "pod1"
  ports:
    - protocol: "TCP"
      port: 80
      targetPort: 8000
      name: http      # -------------------------------------->  First Name "http"
---
# service2
apiVersion: v1
kind: Service
metadata:
  name: "service"
spec:
  selector:
    app: "pod2"
  ports:
    - protocol: "TCP"
      port: 80
      targetPort: 8000
      name: pod-http    # -------------------------------------->  Second Name "pod-http" for same port number 80

but you are not explaining why.

@Gacko
Member

Gacko commented Sep 7, 2024

I'm holding this as I agree with Long. I didn't have the time to dig into the root cause, but what I read while scrolling through leaves the impression we first need more evidence before we can actually implement any changes.

Would you therefore please, if possible, stick to the issue and not discuss this in the PR?

@Revolution1
Author

Revolution1 commented Sep 7, 2024

@Gacko
I don't want to discuss these things in the PR either, and I'm not; I've already clarified everything in both issues #11938 and #11937.

However, I'm struggling to explain this to Long, who edited my test case and then claimed he couldn't reproduce the issue. It's frustrating, because whether this counts as a bug seems like a matter of basic common sense.

If you don't have time, I understand; I'm short on time too. I've been patient enough with these unproductive conversations.

@longwuyuan, please do not reply to me anymore.

If Long is the only one I can explain this bug to, I'm done here.

@Revolution1
Author

Since I was being emotional in my last comment, I just want to clarify that I'm not raising a PR in order to argue with anyone. I've benefited from this project, and I want to make it better. But I'm so tired of dealing with these conversations. This is not the first time I've opened a PR to this project, but it will probably be the last.


This is stale, but we won't close it automatically; just bear in mind that the maintainers may be busy with other tasks and will get to your issue as soon as possible. If you have any questions or want to request prioritization, please reach out on #ingress-nginx-dev on Kubernetes Slack.

@github-actions bot added the lifecycle/frozen label Oct 23, 2024
@k8s-triage-robot

The lifecycle/frozen label cannot be applied to PRs.

This bot removes lifecycle/frozen from PRs because:

  • Commenting /lifecycle frozen on a PR has not worked since March 2021
  • PRs that remain open for >150 days are unlikely to be easily rebased

You can:

  • Rebase this PR and attempt to get it merged
  • Close this PR with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/remove-lifecycle frozen

@k8s-ci-robot removed the lifecycle/frozen label Oct 23, 2024
Successfully merging this pull request may close these issues.

Corner Case: upstream name duplication causing ingress pointing to wrong service [following issue template]