
Ingress


Ingress controller

The Ingress controller manages external access to pods in the Kubernetes cluster. It is downloaded and installed via Helm and runs as a pod on the Kubernetes cluster.

helm install stable/nginx-ingress --set controller.service.nodePorts.http=31856 --set controller.priorityClassName=piezo-essential --set defaultBackend.priorityClassName=piezo-essential

Note that this requires the piezo-essential priority class to be present on the cluster (see here).

See here for the official NGINX Ingress documentation.
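
If the piezo-essential priority class is not already present, a manifest along the following lines would create it. This is a minimal sketch: the priority value and description are illustrative assumptions, not taken from the Piezo configuration.

```yaml
# Minimal sketch of the piezo-essential priority class (value is an assumption)
apiVersion: scheduling.k8s.io/v1   # older clusters may need scheduling.k8s.io/v1beta1
kind: PriorityClass
metadata:
  name: piezo-essential
value: 1000000                     # illustrative priority value
globalDefault: false               # only pods that request this class receive it
description: "Priority class for essential Piezo components"
```

A manifest like this can be applied with kubectl apply -f before installing the Ingress controller.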

Ingress rules are passed to the Ingress controller as a YAML file. Example files and a script for passing them can be found in piezo/examples/ingress/.
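
As an illustration of the shape of such a file, the sketch below defines a single rule for the NGINX controller. The service name, port, and path are hypothetical placeholders; refer to the files in piezo/examples/ingress/ for the real rules, and note that older clusters may need the extensions/v1beta1 or networking.k8s.io/v1beta1 Ingress API instead of networking.k8s.io/v1.

```yaml
# Hypothetical ingress rule routing /piezo to the web app service
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: piezo-web-app-ingress
  annotations:
    kubernetes.io/ingress.class: nginx   # handled by the NGINX Ingress controller
spec:
  rules:
    - http:
        paths:
          - path: /piezo
            pathType: Prefix
            backend:
              service:
                name: piezo-web-app      # assumed service name
                port:
                  number: 8888           # assumed service port
```

A file like this is applied to the cluster with kubectl apply -f <file>.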

Ingress provides a method of controlled access to a Kubernetes cluster. By default, some of the services associated with the Spark operator (such as the Spark UI) are exposed via a node port. These can be accessed directly on the port number on which they are exposed, but that port is assigned randomly and provides less control than ingress, so we do not give users the information required to access them.

Setting up ingress rules

By using the Piezo web app deployment script as outlined here, ingress rules are automatically set up to handle communication with the web app. In addition, rules are set up to expose Prometheus on /prometheus, where users can find metrics relating to their jobs and the Kubernetes cluster.
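
As a sketch, exposing Prometheus in this way amounts to an extra path entry in the ingress rules along the lines below; the service name is an assumption and 9090 is the Prometheus default port.

```yaml
# Additional path entry under spec.rules[].http.paths (service name assumed)
- path: /prometheus
  pathType: Prefix
  backend:
    service:
      name: prometheus-server   # assumed Prometheus service name
      port:
        number: 9090            # default Prometheus port
```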

Finally, when jobs are run through the Piezo web app, ingress rules are created dynamically to give users access to the Spark UI for their job. This gives them a way to monitor the progress of the job and observe what is happening within the cluster. The URL for accessing the Spark UI is returned to the user when they submit a job.
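
For illustration only, a dynamically created rule could take a form like the one below, routing a job-specific path to the driver's Spark UI service. The path format and service name are assumptions; 4040 is the default Spark UI port.

```yaml
# Hypothetical per-job path entry created when a job is submitted
- path: /my-spark-job           # assumed job-specific path
  pathType: Prefix
  backend:
    service:
      name: my-spark-job-ui-svc # assumed Spark UI service created for the driver
      port:
        number: 4040            # default Spark UI port
```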