
global cluster http_proxy support / configuration #2089

Open
xinity opened this issue May 17, 2024 · 4 comments
Labels
lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale.

Comments

@xinity

xinity commented May 17, 2024

Hello there,

I was wondering: what would be the best way to support http_proxy globally?
I am required to use an internal http_proxy, and I want to make sure that all deployed clusters will be able to leverage it.

Is this something we can achieve using CAPO?

Not sure if it's a CAPO or a CAPI topic, to be honest :)

@mdbooth
Contributor

mdbooth commented May 17, 2024

I don't think it's either, tbh; I think it's a Kubernetes question. I know OpenShift can do this, but I'm not 100% clear on the mechanism. It wouldn't surprise me if it's something like injecting proxy environment variables into all pods on admission.
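For what it's worth, "injecting proxy environment variables into all pods" would boil down to every container ending up with something like the following (a minimal sketch with placeholder proxy values; the injection mechanism itself, e.g. a mutating admission webhook, is not shown):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example                             # placeholder name
spec:
  containers:
    - name: app
      image: registry.internal/app:latest   # placeholder image
      env:
        # The variables a cluster-wide proxy mechanism would have to inject.
        - name: HTTP_PROXY
          value: http://proxy.internal:3128
        - name: HTTPS_PROXY
          value: http://proxy.internal:3128
        - name: NO_PROXY
          value: localhost,127.0.0.1,.svc,.cluster.local
```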

@jichenjc
Contributor

I think it's a k8s question, not something in CAPO, so I'll close this; feel free to reopen.

Not sure whether OpenShift does something internally through a setting on the ingress or something else; there's no clear detail on the mechanism, only how to use it here:
https://docs.openshift.com/container-platform/4.15/networking/enable-cluster-wide-proxy.html
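For reference, the linked OpenShift docs configure this through the cluster-scoped Proxy object, roughly like the following (the proxy values and the trusted CA ConfigMap name are placeholders):

```yaml
apiVersion: config.openshift.io/v1
kind: Proxy
metadata:
  name: cluster
spec:
  httpProxy: http://proxy.internal:3128
  httpsProxy: http://proxy.internal:3128
  noProxy: .example.internal
  trustedCA:
    name: user-ca-bundle   # ConfigMap holding the proxy's CA certificate
```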

@pawcykca
Contributor

pawcykca commented Jul 17, 2024

You can achieve this at the Node level by:

  • image-builder -> preparing images with the proxy configuration baked in
  • CAPI -> setting the proxy configuration in KubeadmControlPlane and KubeadmConfigTemplate (see the sketch below)
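A minimal sketch of the CAPI side, assuming containerd as the container runtime; only the proxy-related fields of a KubeadmControlPlane are shown, and the proxy URL and noProxy list are placeholders. The same files/preKubeadmCommands blocks can go in a KubeadmConfigTemplate for worker nodes:

```yaml
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
metadata:
  name: example-control-plane   # placeholder name
spec:
  kubeadmConfigSpec:
    files:
      # Proxy for the container runtime so image pulls go through the proxy.
      - path: /etc/systemd/system/containerd.service.d/http-proxy.conf
        owner: root:root
        permissions: "0644"
        content: |
          [Service]
          Environment="HTTP_PROXY=http://proxy.internal:3128"
          Environment="HTTPS_PROXY=http://proxy.internal:3128"
          Environment="NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,192.168.0.0/16,.svc,.cluster.local"
      # Proxy for other host-level processes (package managers, curl, etc.).
      - path: /etc/environment
        owner: root:root
        permissions: "0644"
        content: |
          HTTP_PROXY=http://proxy.internal:3128
          HTTPS_PROXY=http://proxy.internal:3128
          NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,192.168.0.0/16,.svc,.cluster.local
    preKubeadmCommands:
      # Pick up the containerd drop-in before kubeadm starts pulling images.
      - systemctl daemon-reload
      - systemctl restart containerd
```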

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 15, 2024