
scale-server-slots on Ingress resource is ignored #670

Open
mike-code opened this issue Aug 26, 2024 · 8 comments

@mike-code

appVersion: 3.0.1

Using the following Ingress:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-thanos
  namespace: monitoring
  annotations:
    haproxy.org/scale-server-slots: "4"
spec:
  ingressClassName: haproxy-thanos
  rules:
  - host: "..."
    http:
      ...

the backend still gets 42 server slots.

@hdurand0710
Contributor

Hi @mike-code ,

Could you check if you have either of the legacy annotations servers-increment or server-slots in your Ingress Controller ConfigMap?
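
(For reference, a minimal sketch of what those legacy keys would look like if they were present; the ConfigMap name and namespace below are placeholders based on typical helm-chart defaults, not taken from this issue:)

apiVersion: v1
kind: ConfigMap
metadata:
  name: haproxy-kubernetes-ingress   # placeholder, assumed helm-chart default
  namespace: haproxy-controller      # placeholder namespace
data:
  servers-increment: "42"            # legacy key to check for
  server-slots: "42"                 # legacy key to check for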

@mike-code
Author

> Hi @mike-code ,
>
> Could you check if you have either of the legacy annotations servers-increment or server-slots in your Ingress Controller ConfigMap?

The ConfigMap (created by the helm chart) is empty.

hdurand0710 removed their assignment Aug 28, 2024
@hdurand0710
Contributor

Hi @mike-code
Can you also check the same thing on the Service?
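
(For reference, a minimal sketch of a Service carrying the same annotation; the name, selector, and port are placeholders, not taken from this issue:)

apiVersion: v1
kind: Service
metadata:
  name: thanos-query                 # placeholder name
  namespace: monitoring
  annotations:
    haproxy.org/scale-server-slots: "4"
spec:
  selector:
    app: thanos-query                # placeholder selector
  ports:
  - port: 10902                      # placeholder port
    targetPort: 10902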

stale bot commented Sep 29, 2024

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

stale bot added the stale label Sep 29, 2024
@mike-code
Author

@hdurand0710
Both the Service (k8s Service) and the Ingress have the haproxy.org/scale-server-slots: "4" annotation, yet the HAProxy stats page still shows 42 server entries.

I don't have any "legacy" configuration, i.e. this is a fresh haproxy ingress instance from the latest helm chart.

btw. Is it possible to reduce the default number of server slots per HTTP backend from 42 to 1? The excessive number of "virtual" servers pollutes the metrics with entries that will never be running.
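
(Side note: the scale-server-slots key can apparently also be set globally in the controller ConfigMap, which should lower that default; a minimal, untested sketch with placeholder name/namespace:)

apiVersion: v1
kind: ConfigMap
metadata:
  name: haproxy-kubernetes-ingress   # placeholder, assumed helm-chart default
  namespace: haproxy-controller      # placeholder namespace
data:
  scale-server-slots: "1"            # lower the global default (42) for all backends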

@hdurand0710
Contributor

@mike-code
I tried to reproduce with the latest helm chart, with haproxy.org/scale-server-slots: "4" set both on the Ingress and on the Service (deployment/ingress/service http-echo), and nothing in the ConfigMap.

I scaled the http-echo deployment to 9.
Here is what I have in the stats for http-echo:
[Screenshot from 2024-09-30 08-39-08: HAProxy stats page for the http-echo backend]

Info:

  • app.kubernetes.io/version: 3.0.1
  • helm.sh/chart: kubernetes-ingress-1.41.0

I was not able to reproduce.

So, in order to try to reproduce and solve your issue, could you send me:

  • a screenshot of your stats?
  • the sequence you performed on the faulty deployment (scale from 1 to x? then back to y...). What is the current number of pods? Did you at some point scale to 40 pods?
  • the content of the haproxy.cfg file (/etc/haproxy/haproxy.cfg on the ingress controller pod)

Thanks for your help.

@mike-code
Author

mike-code commented Oct 2, 2024

@hdurand0710
hm, it works now, but there are two things that are not right:

  1. scale-server-slots on the Ingress has no effect.
  2. On the Service resource, however, if I set the haproxy.org/scale-server-slots: "2" annotation and restart the HAProxy pod, I see 2 SRV_x entries (as expected). If I then change the Service and scale from "2" to "5", the stats page shows 5 SRV_x (as expected). But if I now downscale to "3" ("5" -> "3"), the stats page shows 6(!) SRV_x. And if I downscale further ("3" -> "2"), the stats page doesn't change at all (it doesn't reload the config, because I can see the status is not being reset).

Is this expected?

@hdurand0710
Contributor

hdurand0710 commented Oct 2, 2024

@mike-code ,

> 1. scale-server-slots on Ingress has no effect

I could not reproduce this.
When I set scale-server-slots on the Ingress (and not on the Service, nor in the ConfigMap), it works.
I assume you removed scale-server-slots from the Service; if you did not, the annotation on the Service takes precedence over the one on the Ingress.

> 2. On Service resource however, ...

When you scale from "2" to "5", it's expected that you get a number of SRV_X >= the number of pods, with the number of SRV_X being a multiple of scale-server-slots. So you should get at least 6 SRV_X, which is what I see when reproducing the same scaling.
If you downscale, the number of SRV_X stays the same: it only increases, never decreases. The number of SRV_X in UP state should match the number of pods; the remaining ones should be in MAINT.
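
As a worked illustration of the arithmetic described above (an editorial sketch, assuming scale-server-slots: "2"):

  • 2 pods -> 2 SRV_X
  • scale up to 5 pods -> 6 SRV_X (the next multiple of 2 that is >= 5)
  • scale down to 3 pods -> still 6 SRV_X (3 UP, 3 in MAINT; slots never shrink)
  • scale down to 2 pods -> still 6 SRV_X (2 UP, 4 in MAINT), so the stats page does not change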
