
multi tenant support #131

Closed
Hendrik-H opened this issue Jan 24, 2023 · 6 comments
@Hendrik-H
Member

Hi,

I have a multi-tenant Kubernetes setup where each tenant has a separate namespace. Is it possible to make the core dumps available to each tenant separately, i.e., with each tenant having its own S3 bucket?
From looking at the config I had the impression that I can select the pods, by label, for which core dump upload should be enabled. If so, would I have to install the core dump handler for every namespace, configure it separately, and make sure that all pods in a namespace have a specific label?

thanks,
Hendrik

@No9
Collaborator

No9 commented Jan 24, 2023

Hi @Hendrik-H
The challenge with multi-tenancy is going to be name clashes where pods are given the same name in different namespaces.
This is already an issue with statefulsets where the recommendation is to name the sets differently for each instance.
See #115.

You will either need to guarantee the names are always unique across the cluster or you will need to provide a unique identifier in the container that can be parsed on crash so the correct pod can be identified.
Unfortunately the namespace doesn't come as part of the crash dump.
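For context, the kernel's core_pattern template (see core(5)) is the only metadata source at crash time, and it has no namespace specifier. A minimal sketch of recovering the pod name from a dump filename; the filename layout here is hypothetical, not the handler's actual naming scheme:

```python
def pod_name_from_dump(filename):
    """Recover the pod name from a dump filename.

    Assumes a core_pattern-style template of %h-%e-%p-%t
    (hostname-executable-PID-epoch). core(5) defines specifiers such as
    %e, %p, %s, %t and %h, but none for the Kubernetes namespace, so the
    hostname (which equals the pod name inside a pod) is the only pod
    identity available. Assumes the executable name contains no '-';
    pod names routinely do, hence the right-to-left split.
    """
    stem = filename.rsplit(".", 1)[0]          # drop a trailing ".core" etc.
    host, _exe, _pid, _epoch = stem.rsplit("-", 3)
    return host

print(pod_name_from_dump("jenkins-build-abc123-crasher-4242-1674500000.core"))
# → jenkins-build-abc123
```

This is also why pod names must be unique cluster-wide: two pods with the same name in different namespaces would be indistinguishable here.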

Some work on the parse crash option has started over here but I don't know how it's progressed.
Ninja-Kiwi@48b0078

Once that option is solved I would suggest using the event feature in the last release to do some post-processing of the files from a single bucket into the tenant bucket. Although if it isn't too intrusive and we had an integration test, I would land a multi-bucket support PR.

Sorry this isn't available at the moment but as you can see it's a tricky feature to implement.

@Hendrik-H
Member Author

Thanks for the answer. The restriction is unfortunate but makes sense. I assume kernel changes would be required to improve the situation. I think in our case the pod names, and thus the hostnames, should actually be unique, so there is a chance that it works. Is there a way to make sure that the core dump handler only uploads the core dump if the found container belongs to a specific namespace?

@No9
Collaborator

No9 commented Jan 26, 2023

Yes - There was a proposed patch a while ago but it didn't land in the kernel. https://lore.kernel.org/lkml/ce9a21e42a846150a2a482278f5b5a5e0ea27839.1454588184.git.zhaolei@cn.fujitsu.com/

Would you mind being a little more specific about the requirement:
The first note suggested you wanted a different bucket config per namespace.
The last note suggested you just want to disable it for certain namespaces?
The latter is relatively straightforward, but the former is a lot more involved.

@Hendrik-H
Member Author

We have a multi-tenant Jenkins setup with each tenant having its own namespace. When I hit a core dump I want to be able to access the dump, and each tenant should only be able to see the dumps from its namespace. So I want a bucket per namespace and also want to make sure that a dump is never uploaded to the wrong bucket. The Jenkins build pods should have unique names, so it should be possible to find the correct pod from the hostname of the dump.
My understanding was that the core dump handler has a config which allows selecting pods by label. As tenants have control over the labels of their pods, this alone would not be safe. So if I configured the handler to upload dumps to bucket X, I would want to be able to restrict that to pods from namespace X with label Y.
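The guard described above could be sketched roughly as follows. The `pods` list stands in for a cluster-wide pod listing (e.g. a Kubernetes API call); the function name and shape are illustrative, not part of the handler:

```python
def namespace_for_hostname(hostname, pods, expected_namespace):
    """Map a dump's hostname (= pod name) to its namespace, and refuse
    the upload when the pod does not belong to the expected namespace.

    `pods` is a list of (pod_name, namespace) tuples, standing in for a
    cluster-wide pod listing from the Kubernetes API.
    Raises LookupError if the pod name is not unique cluster-wide,
    since the dump could then belong to either tenant.
    """
    matches = [ns for name, ns in pods if name == hostname]
    if len(matches) != 1:
        raise LookupError(f"pod name {hostname!r} matches {len(matches)} pods")
    return matches[0] if matches[0] == expected_namespace else None

pods = [("jenkins-build-abc123", "tenant-x"), ("web-0", "tenant-y")]
print(namespace_for_hostname("jenkins-build-abc123", pods, "tenant-x"))  # → tenant-x
print(namespace_for_hostname("jenkins-build-abc123", pods, "tenant-y"))  # → None (skip upload)
```

Returning `None` rather than raising for the wrong namespace lets an uploader silently skip dumps that belong to other tenants.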

@No9
Collaborator

No9 commented Jan 27, 2023

OK, I think this scenario is complex enough to use the eventing feature that enables custom uploaders, as it's quite specific and I don't want to make this project any harder to maintain.

The eventing feature landed in v8.9.0 and had updates in v8.10.0.
The idea is that a JSON file containing some metadata is generated alongside the core dump.
This JSON file can then be used as a signal to a 3rd-party container that can perform arbitrary actions.
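A rough sketch of such a 3rd-party event consumer. The `pod_name` field and the lookup tables are assumptions for illustration; the actual JSON schema the handler emits may differ:

```python
import json
import pathlib
import tempfile

def tenant_bucket_for_event(event_path, namespace_of_pod, bucket_of_namespace):
    """Read an event JSON file and decide which tenant bucket the
    accompanying core dump should be copied to.

    `namespace_of_pod` stands in for a pod-name -> namespace lookup
    (e.g. a Kubernetes API call); `bucket_of_namespace` is the tenant
    bucket configuration. The "pod_name" field is a hypothetical schema.
    """
    event = json.loads(pathlib.Path(event_path).read_text())
    pod = event["pod_name"]
    return bucket_of_namespace[namespace_of_pod[pod]]

# Minimal demo with a stubbed event file and lookups
with tempfile.TemporaryDirectory() as d:
    p = pathlib.Path(d, "dump-1.json")
    p.write_text(json.dumps({"pod_name": "jenkins-build-abc123"}))
    bucket = tenant_bucket_for_event(
        p,
        namespace_of_pod={"jenkins-build-abc123": "tenant-x"},
        bucket_of_namespace={"tenant-x": "core-dumps-tenant-x"},
    )
    print(bucket)  # → core-dumps-tenant-x
```

A real consumer would watch the shared bucket (or a local directory) for new `.json` files, then copy the dump to the resolved tenant bucket and delete it from the shared one.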

The documentation for it is non-existent right now as I was intending to build out this scenario but I will use your requirements to build out a demo in a separate repo and you can take it from there.

The end game is to see which uploaders gain traction and then bundle them along with this project in an operator.
[Edit] As an interim step we could look at updating the chart so the install is smoother.

I'm a bit short for time so I'll update this when I have the baseline in place and if you need it more urgently you can pick it up.

Tagging the Operator repo IBM/core-dump-operator#1

@Hendrik-H
Member Author

Thanks. I'm actually only in the role of a tenant in the described system, and thus was so far trying to investigate what we could do to get our core dumps. Thanks for all the info; I'll try to get that into our setup.

@No9 No9 closed this as completed Apr 11, 2024