Megalos: Allow devices to be reachable within the cluster #296

Open
GioBar00 opened this issue May 28, 2024 · 3 comments

@GioBar00

Related Bug

No response

Feature Description

Currently, in KubernetesMachine the DNS configuration and the default route are removed to completely isolate the scenario. The problem is that there is no way to actively monitor the scenario from within the cluster with external tools like Prometheus.

Solution

A possible solution could be to create a setting that allows external access to the devices, or to use the bridged option to give a device access to the cluster network through the already-present eth0 interface.

Alternative Solutions

No response

Additional Context

No response

@GioBar00
Author

Currently, these are the lines of code that completely isolate the devices:

# Removes /etc/bind already existing configuration from k8s internal DNS
"rm -Rf /etc/bind/*",
# Unmount the /etc/resolv.conf and /etc/hosts files, automatically mounted by Kubernetes inside the container.
# In this way, they can be overwritten by custom user files.
"umount /etc/resolv.conf",
"umount /etc/hosts",

# Remove the Kubernetes' default gateway which points to the eth0 interface and causes problems sometimes.
"ip route del default dev eth0 || true",

dns_policy="None",
dns_config=dns_config,

By removing these lines, the virtual devices would be able to access the Kubernetes cluster and the Internet, allowing external tools to communicate with them through the eth0 interface.
Could this be a valid solution to support the bridged option in Megalos, or does it break/limit something in some way?
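
For illustration, a minimal sketch of what gating these lines behind a setting could look like (machine.is_bridged() and startup_commands are assumptions about the final API, not the current code):

# Hypothetical sketch: run the isolation commands only for non-bridged devices.
if not machine.is_bridged():
    startup_commands += [
        "rm -Rf /etc/bind/*",
        "umount /etc/resolv.conf",
        "umount /etc/hosts",
        "ip route del default dev eth0 || true",
    ]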

@Skazza94
Member

Hi @GioBar00,

I think there are a few caveats that we should take into account if we want to adapt the bridged option to k8s:

Here, there could be conflicts between the DNS of the scenario and the DNS of the Pod. For example, if a device in your scenario has an IP that is also included in the Pod's DNS resolver configuration, queries could potentially reach the real device, possibly causing some "damage". That's why the original files were deleted/unmounted.

# Removes /etc/bind already existing configuration from k8s internal DNS
"rm -Rf /etc/bind/*",
# Unmount the /etc/resolv.conf and /etc/hosts files, automatically mounted by Kubernetes inside the container.
# In this way, they can be overwritten by custom user files.
"umount /etc/resolv.conf",
"umount /etc/hosts",

In some scenarios, there were devices that set the default route and nothing else. Having the k8s default route as well caused conflicts, i.e., it was selected as the first default instead of the scenario one.

# Remove the Kubernetes' default gateway which points to the eth0 interface and causes problems sometimes.
"ip route del default dev eth0 || true",

Maybe this is the only thing that can safely be changed. What I used to do was set a "dummy" DNS configuration, so that I was sure it wasn't causing problems in the Pod when the original one was deleted in KubernetesMachine.py#L49-L55.

dns_policy="None",
dns_config=dns_config,
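
For reference, a minimal sketch of such a "dummy" configuration using the Kubernetes Python client (the nameserver address is an arbitrary placeholder, since the scenario overwrites /etc/resolv.conf anyway):

from kubernetes import client

# dns_policy="None" makes Kubernetes ignore the cluster DNS and use this
# config verbatim; the placeholder nameserver keeps the Pod spec valid.
dns_config = client.V1PodDNSConfig(nameservers=["127.0.0.1"])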

I think that we should design how to have connectivity with external tools like Prometheus without changing most of these chunks of code. What are the requirements of such tools? Do you need to expose only the cluster subnet (the one associated with eth0)?

Thanks,
Mariano.

@GioBar00
Author

Hi @Skazza94,
let's start with Prometheus.

Let's say we have a Prometheus Server deployed in Kubernetes in the monitoring namespace, and we want to monitor some Kathara virtual devices in the namespace of the lab.
The Prometheus Server will periodically poll the metrics from the Kathara devices (the targets), so each target needs a route to the cluster network.
Given this, I would say that we definitely need a route to the cluster network on eth0.
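
For context, a minimal sketch of such a target, assuming the devices run a Python exporter with the prometheus_client library (the metric name and port are arbitrary choices, not part of the proposal):

import time
from prometheus_client import Counter, start_http_server

# Hypothetical metric exposed by the device.
packets_seen = Counter("device_packets_seen_total", "Packets seen by the device")

if __name__ == "__main__":
    start_http_server(8000)  # Prometheus polls http://<device-ip>:8000/metrics
    while True:
        time.sleep(60)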

In some scenarios, there were devices setting the default route and nothing else. Having the k8s one was causing conflicts, i.e., it was selected as the first default instead of the scenario one.

# Remove the Kubernetes' default gateway which points to the eth0 interface and causes problems sometimes.
"ip route del default dev eth0 || true",

I don't think that scenarios with devices that use custom default routes also have the bridged option enabled, so I think that leaving the default Kubernetes route on eth0 when the bridged option is enabled is a valid solution. This way the device will also have access to the Internet, like when bridging in Docker.

If instead we want to expose the devices to the cluster network without Internet access, then we would need to add a route to the cluster network on eth0. Unfortunately, some conflicts could appear with scenarios that coincidentally use the same subnet inside the scenario (this could be a disclaimer when using Megalos). A sketch of both options follows.
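
To make the two options concrete, here is a minimal sketch (all names are assumptions: machine.is_bridged() stands for the per-device option, cluster_only for a hypothetical setting, and 10.96.0.0/12 is the kubeadm default Service CIDR, which may differ per cluster):

if not machine.is_bridged():
    # Fully isolated device: drop the Kubernetes default route, as today.
    commands.append("ip route del default dev eth0 || true")
elif cluster_only:
    # Cluster reachability without Internet access: replace the default
    # route with a route towards the Service CIDR via the original gateway.
    commands.append(
        "GW=$(ip route show default dev eth0 | awk '{print $3}'); "
        "ip route add 10.96.0.0/12 via $GW dev eth0 || true; "
        "ip route del default dev eth0 || true"
    )
# else: bridged with Internet access -> keep the default route untouched.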


Regarding the default Kubernetes DNS: there exist other tools, like Jaeger, where instead of the server polling data from the devices, the devices need to push data to the server.
Given how Kubernetes is designed, a Kubernetes Service is required to access an application in a different namespace.
As written in the documentation, there are two possible ways to access these services (see the sketch after this list):

  • Environment variables: this method would not require the Kubernetes DNS, but it requires the Service to be created before the devices.
  • DNS: this requires not removing the Kubernetes DNS configuration from the devices.
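
A minimal sketch of both methods from inside a device, assuming a Service named jaeger-collector in the monitoring namespace (hypothetical names):

import os
import socket

# 1) Environment variables: injected by Kubernetes only if the Service
#    exists before the Pod starts; no DNS needed.
host = os.environ.get("JAEGER_COLLECTOR_SERVICE_HOST")

# 2) Cluster DNS: requires keeping the Kubernetes resolver inside the
#    device (i.e., not unmounting /etc/resolv.conf).
addr = socket.gethostbyname("jaeger-collector.monitoring.svc.cluster.local")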

Let me know what you think could be the best solution.

Thanks,
Giovanni
