
HTTP_PROXY proxy settings #252

Open
aleksasiriski opened this issue Sep 29, 2024 · 8 comments

@aleksasiriski

Is it possible to configure proxy settings? I am using k3s (and possibly rke2 in the future). I don't want to set HTTP_PROXY; instead I set CONTAINERD_HTTP_PROXY, which works for pulling images, but that setting doesn't propagate to the helm install jobs, and I can't find how to set this up.

@brandond
Member

Any HTTP_PROXY-related env vars set in the helm controller's environment will be passed into the helm job pods. For k3s and rke2, this means you must set those env vars on the server process if you want the helm controller's job pods to use the proxy.
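For a systemd-managed k3s install, this could look roughly like the following sketch. It assumes the standard `/etc/systemd/system/k3s.service.env` file that the k3s install script wires into the unit; `proxy.example.com:3128` is a placeholder proxy address, and the file is written to a local path here for illustration.

```shell
# Sketch: env vars k3s would pass through to helm job pods.
# Real path on a node: /etc/systemd/system/k3s.service.env
# (then: systemctl restart k3s). Using a local copy here.
ENV_FILE=./k3s.service.env

cat > "$ENV_FILE" <<'EOF'
HTTP_PROXY=http://proxy.example.com:3128
HTTPS_PROXY=http://proxy.example.com:3128
NO_PROXY=127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,.svc,.cluster.local
EOF

cat "$ENV_FILE"
```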

Why don't you want to set HTTP_PROXY?

@aleksasiriski
Author

> Why don't you want to set HTTP_PROXY?

It's not well documented what HTTP_PROXY affects. Since I only need it for pulling images and for the helm install pods, I managed to work around this by using a CiliumEgressGatewayPolicy with label selectors matching the helm install pods.
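That workaround could look roughly like the sketch below: a CiliumEgressGatewayPolicy routing only pods with a given label through a dedicated egress node. The label key/value, destination CIDR, and node selector are all placeholders (the labels on helm-controller job pods vary by deployment, so check yours with `kubectl get pods --show-labels`). The manifest is written to a local file here; on a cluster you would `kubectl apply -f` it.

```shell
# Hedged sketch of a Cilium egress gateway policy for helm install
# pods; all selector values below are placeholders.
cat > egress-helm.yaml <<'EOF'
apiVersion: cilium.io/v2
kind: CiliumEgressGatewayPolicy
metadata:
  name: helm-install-egress
spec:
  selectors:
    - podSelector:
        matchLabels:
          helmcharts.helm.cattle.io/chart: traefik   # placeholder label
  destinationCIDRs:
    - 0.0.0.0/0
  egressGateway:
    nodeSelector:
      matchLabels:
        egress-gateway: "true"   # placeholder node label
EOF

cat egress-helm.yaml
```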

Could you send me a link describing what is affected by setting HTTP_PROXY instead of CONTAINERD_HTTP_PROXY, if such docs exist?

@brandond
Member

> what is affected by setting HTTP_PROXY instead of CONTAINERD_HTTP_PROXY

Everything, just as you'd expect: kubelet, apiserver, scheduler, controller-manager, etcd, all the things that run in the main K3s process. Pods are not, of course, because they have their own environments.

@aleksasiriski
Author

> what is affected by setting HTTP_PROXY instead of CONTAINERD_HTTP_PROXY
>
> Everything, just as you'd expect: kubelet, apiserver, scheduler, controller-manager, etcd, all the things that run in the main K3s process. Pods are not, of course, because they have their own environments.

Yep, that's what I thought. I don't want my kube-apiserver and everything else to be at the mercy of the proxy's uptime.

So, in the end, I would need to deploy my own helm controller in order to give it the proper HTTP_PROXY env vars? There's no way around it?

@brandond
Member

The apiserver, controller-manager, and such don't actually go out to the internet for anything, and the cluster CIDRs and cluster domain are all automatically added to the NO_PROXY list (as your proxy is unlikely to have access to things running inside the cluster), so I doubt you'd actually run into any problems simply setting the HTTP_PROXY env var. You might want to add your node LAN CIDR and internal DNS zone to the NO_PROXY list as well, just to be sure.
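To illustrate why the internal DNS zone needs to be listed: most HTTP clients bypass the proxy when the target host ends with one of the NO_PROXY domain suffixes (CIDR entries in NO_PROXY are honored only by some clients, which is why listing the LAN CIDR is a "just to be sure" measure). A minimal sketch of that suffix matching, with `.corp.internal` as a placeholder internal zone:

```shell
# Illustration only (not k3s code): NO_PROXY domain-suffix matching
# as performed by most HTTP clients.
NO_PROXY=".svc,.cluster.local,.corp.internal"   # .corp.internal is a placeholder zone

bypasses_proxy() {
  host=$1
  for suffix in $(echo "$NO_PROXY" | tr ',' ' '); do
    case "$host" in *"$suffix") return 0 ;; esac
  done
  return 1
}

bypasses_proxy registry.corp.internal && echo "direct"     # matches .corp.internal
bypasses_proxy registry.example.com   || echo "via proxy"  # no suffix matches
```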

Have you actually tried it or are you just expecting there would be problems?

@aleksasiriski
Author

> The apiserver, controller-manager, and such don't actually go out to the internet for anything, and the cluster CIDRs and cluster domain are all automatically added to the NO_PROXY list (as your proxy is unlikely to have access to things running inside the cluster), so I doubt you'd actually run into any problems simply setting the HTTP_PROXY env var. You might want to add your node LAN CIDR and internal DNS zone to the NO_PROXY list as well, just to be sure.

Oh that's good then.

> Have you actually tried it or are you just expecting there would be problems?

The problem I had is that my proxy is deployed inside the cluster and exposed via a NodePort service, so it is reached at localhost:<port>. That address works for containerd, which runs on the node itself, but from inside the helm install container localhost refers to the pod, not the node, so the proxy is unreachable there.

@brandond
Member

I'm confused what the value proposition would be there - why make your nodes go through something running in the cluster, to get out to the internet?

@aleksasiriski
Author

> I'm confused what the value proposition would be there - why make your nodes go through something running in the cluster, to get out to the internet?

Because I have to apply some filtering to limit what can be downloaded by commands like curl or helm install. I agree it's an unconventional setup, but the idea was to avoid running yet another set of servers just for the proxy, if possible.
