
Deploying clustered Vert.x apps on Kubernetes with Infinispan

This document will show you how to deploy clustered Vert.x apps on Kubernetes with Infinispan.

What you will build

You will build a clustered Vert.x application which:

  • listens to HTTP requests for the /hello URI

  • extracts the HTTP query param name

  • replies with a greeting such as "Hello <name> from <pod>", where

    • <name> is the query param value

    • <pod> is the name of the Kubernetes pod that generated the greeting

It consists of two parts (or microservices) communicating over the Vert.x event bus.

The frontend handles HTTP requests. It extracts the name param, sends a request on the bus to the greetings address and forwards the reply to the client.

The backend consumes messages sent to the greetings address, generates a greeting and replies to the frontend.

What you need

  • A text editor or IDE

  • Java 8 or higher

  • Maven or Gradle

  • Minikube or any Kubernetes cluster

  • kubectl command-line tool

Create the projects

The frontend and backend projects contain Maven and Gradle build files that are functionally equivalent.

Dependencies

Both projects depend on:

Vert.x Infinispan is a cluster manager for Vert.x based on the Infinispan in-memory key/value data store. In Vert.x, a cluster manager serves several purposes; in particular, it provides discovery and membership of cluster nodes and stores event bus subscription data.

Vert.x Web is a set of building blocks which make it easy to create HTTP applications.

Vert.x Health Check is a component that standardizes the process of checking the different parts of your system, deducing a status and exposing it.

Containerization

To create containers we will use Jib because:

  • it creates images with distinct layers for dependencies, resources and classes, thus saving build time and deployment time

  • it supports both Maven and Gradle

  • it requires neither Docker nor Podman

Using Maven

Here is the content of the pom.xml file you should be using for the frontend:
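A full pom.xml also contains the usual project coordinates; here is a minimal sketch of the sections that matter for this how-to, with placeholder versions, main class and image name:

<dependencies>
  <dependency>
    <groupId>io.vertx</groupId>
    <artifactId>vertx-web</artifactId>
    <version>${vertx.version}</version>
  </dependency>
  <dependency>
    <groupId>io.vertx</groupId>
    <artifactId>vertx-infinispan</artifactId>
    <version>${vertx.version}</version>
  </dependency>
  <dependency>
    <groupId>io.vertx</groupId>
    <artifactId>vertx-health-check</artifactId>
    <version>${vertx.version}</version>
  </dependency>
</dependencies>

<build>
  <plugins>
    <!-- mvn compile exec:java runs the main class -->
    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>exec-maven-plugin</artifactId>
      <configuration>
        <mainClass>io.vertx.howtos.cluster.FrontendVerticle</mainClass>
      </configuration>
    </plugin>
    <!-- mvn compile jib:dockerBuild builds the container image -->
    <plugin>
      <groupId>com.google.cloud.tools</groupId>
      <artifactId>jib-maven-plugin</artifactId>
      <configuration>
        <to>
          <image>clustering-kubernetes/frontend</image>
        </to>
      </configuration>
    </plugin>
  </plugins>
</build>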

For the backend, the content is similar:

Using Gradle

Assuming you use Gradle with the Kotlin DSL, here is what your build.gradle.kts file should look like for the frontend:
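Again, only a minimal sketch, with placeholder versions, main class and image name (older Gradle versions use mainClassName instead of the mainClass property):

plugins {
  java
  application
  id("com.google.cloud.tools.jib") version "3.4.0"
}

repositories {
  mavenCentral()
}

dependencies {
  val vertxVersion = "4.5.1" // placeholder
  implementation("io.vertx:vertx-web:$vertxVersion")
  implementation("io.vertx:vertx-infinispan:$vertxVersion")
  implementation("io.vertx:vertx-health-check:$vertxVersion")
}

application {
  // ./gradlew run starts this class
  mainClass.set("io.vertx.howtos.cluster.FrontendVerticle")
}

jib {
  to {
    image = "clustering-kubernetes/frontend"
  }
}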

For the backend, the content is similar:

Implement the services

Let’s start with the backend service. We will continue with the frontend and then test them on the development machine.

Backend service

The backend service is encapsulated in a BackendVerticle class.

It is configured with environment variables:
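A minimal sketch of that configuration, as fields of BackendVerticle; POD_NAME (and its unknown default) is confirmed later in this document, while the HTTP_PORT name and its default of 0 (random port) are assumptions for illustration:

  private static final String POD_NAME =
      System.getenv().getOrDefault("POD_NAME", "unknown");
  private static final int HTTP_PORT =
      Integer.parseInt(System.getenv().getOrDefault("HTTP_PORT", "0"));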

When the verticle starts, it registers an event bus consumer, sets up a Vert.x Web Router and binds an HTTP server:
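A sketch of what the start-up sequence could look like; it relies on the fields above and on the handleGreetingRequest and setupRouter methods shown next (method names are assumed for illustration):

package io.vertx.howtos.cluster;

import io.vertx.core.AbstractVerticle;
import io.vertx.core.Promise;
import io.vertx.ext.web.Router;

public class BackendVerticle extends AbstractVerticle {

  // POD_NAME and HTTP_PORT fields as shown above

  @Override
  public void start(Promise<Void> startPromise) {
    // 1. Consume event bus messages sent to the "greetings" address
    vertx.eventBus().<String>consumer("greetings", this::handleGreetingRequest);

    // 2. Routes for the health and readiness checks
    Router router = setupRouter();

    // 3. Bind the HTTP server; port 0 means "pick a random port"
    vertx.createHttpServer()
      .requestHandler(router)
      .listen(HTTP_PORT, ar -> {
        if (ar.succeeded()) {
          startPromise.complete();
        } else {
          startPromise.fail(ar.cause());
        }
      });
  }

  // handleGreetingRequest and setupRouter are shown below
}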

The event bus consumer takes messages sent to the greetings address and formats a reply:
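A sketch of the consumer; it matches the Hello <name> from <pod> format seen in the responses later in this document:

  // Requires io.vertx.core.eventbus.Message
  private void handleGreetingRequest(Message<String> message) {
    String name = message.body();
    message.reply(String.format("Hello %s from %s", name, POD_NAME));
  }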

The Router exposes health and readiness checks over HTTP:

Tip
Vert.x Infinispan provides a cluster health check out of the box. io.vertx.ext.cluster.infinispan.ClusterHealthCheck verifies the underlying Infinispan cluster status.
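A sketch of this router setup; the liveness procedure name is illustrative, while the readiness check registers the cluster health procedure mentioned in the tip (the check id matches the readiness output shown later):

  // Requires io.vertx.ext.web.Router, io.vertx.ext.healthchecks.HealthCheckHandler,
  // io.vertx.ext.healthchecks.Status and io.vertx.ext.cluster.infinispan.ClusterHealthCheck
  private Router setupRouter() {
    Router router = Router.router(vertx);

    // Liveness: up as long as the verticle responds
    HealthCheckHandler healthHandler = HealthCheckHandler.create(vertx);
    healthHandler.register("ok", promise -> promise.complete(Status.OK()));
    router.get("/health").handler(healthHandler);

    // Readiness: backed by the Infinispan cluster status
    HealthCheckHandler readinessHandler = HealthCheckHandler.create(vertx);
    readinessHandler.register("cluster-health", ClusterHealthCheck.createProcedure(vertx));
    router.get("/readiness").handler(readinessHandler);

    return router;
  }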

For local testing, a main method is an easy way to start the verticle from the IDE:
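A sketch of such a main method; with vertx-infinispan on the classpath, the Infinispan cluster manager is picked up automatically when a clustered Vert.x instance is created:

  // Requires io.vertx.core.Vertx and io.vertx.core.VertxOptions
  public static void main(String[] args) {
    Vertx.clusteredVertx(new VertxOptions(), ar -> {
      if (ar.succeeded()) {
        ar.result().deployVerticle(new BackendVerticle());
      } else {
        ar.cause().printStackTrace();
      }
    });
  }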

On startup, Vert.x Infinispan uses the default networking stack, which combines IP multicast for discovery and TCP connections for group messaging. This stack is fine for testing on our development machine; we will see later how to switch to a stack suitable for deployment to Kubernetes.

Frontend service

The frontend service is encapsulated in a FrontendVerticle class.

It is configured with an environment variable:

When the verticle starts, it sets up a Vert.x Web Router and binds an HTTP server:

The Router defines a GET handler for the /hello URI; it also exposes health and readiness checks over HTTP:
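A sketch of the router setup; the health and readiness endpoints follow the same pattern as the backend, and the /hello route delegates to a handler shown next (handleHelloRequest is an assumed name):

  // Requires io.vertx.ext.web.Router, io.vertx.ext.healthchecks.HealthCheckHandler,
  // io.vertx.ext.healthchecks.Status and io.vertx.ext.cluster.infinispan.ClusterHealthCheck
  private Router setupRouter() {
    Router router = Router.router(vertx);

    router.get("/hello").handler(this::handleHelloRequest);

    HealthCheckHandler healthHandler = HealthCheckHandler.create(vertx);
    healthHandler.register("ok", promise -> promise.complete(Status.OK()));
    router.get("/health").handler(healthHandler);

    HealthCheckHandler readinessHandler = HealthCheckHandler.create(vertx);
    readinessHandler.register("cluster-health", ClusterHealthCheck.createProcedure(vertx));
    router.get("/readiness").handler(readinessHandler);

    return router;
  }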

The HTTP request handler for the /hello URI extracts the name parameter, sends a request over the event bus and forwards the reply to the client:
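A sketch of that handler:

  // Requires io.vertx.ext.web.RoutingContext
  private void handleHelloRequest(RoutingContext rc) {
    String name = rc.request().getParam("name");
    // Request/reply over the event bus; the backend replies with the greeting text
    vertx.eventBus().<String>request("greetings", name, ar -> {
      if (ar.succeeded()) {
        rc.response().end(ar.result().body());
      } else {
        rc.fail(ar.cause());
      }
    });
  }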

For local testing, a main method is an easy way to start the verticle from the IDE:

Test locally

You can start each service:

  • straight from your IDE or,

  • with Maven: mvn compile exec:java, or

  • with Gradle: ./gradlew run (Linux, macOS) or gradlew run (Windows).

The frontend service output should print a message similar to the following:

2020-07-16 16:29:39,478 [vert.x-eventloop-thread-2] INFO  i.v.howtos.cluster.FrontendVerticle - Server started and listening on port 8080

The backend:

2020-07-16 16:29:40,770 [vert.x-eventloop-thread-2] INFO  i.v.howtos.cluster.BackendVerticle - Server started and listening on port 38621
Tip
Take note of the backend HTTP server port. By default it uses a random port to avoid conflict with the frontend HTTP server.
Note
The following examples use the HTTPie command line HTTP client. Please refer to the installation documentation if you don’t have it installed on your system yet.

First, let’s send a request to the frontend for the /hello URI with the name query param set to Vert.x Clustering:

http :8080/hello name=="Vert.x Clustering"

You should see something like:

HTTP/1.1 200 OK
content-length: 36

Hello Vert.x Clustering from unknown
Note
unknown is the default pod name used by the backend when the POD_NAME environment variable is not defined.

We can also verify the readiness of the frontend:

http :8080/readiness
HTTP/1.1 200 OK
content-length: 65
content-type: application/json;charset=UTF-8

{
    "checks": [
        {
            "id": "cluster-health",
            "status": "UP"
        }
    ],
    "outcome": "UP"
}

And the backend:

http :38621/readiness
HTTP/1.1 200 OK
content-length: 65
content-type: application/json;charset=UTF-8

{
    "checks": [
        {
            "id": "cluster-health",
            "status": "UP"
        }
    ],
    "outcome": "UP"

Deploy to Kubernetes

First, make sure Minikube has started with minikube status.

Note
If you don’t use Minikube, verify that kubectl is connected to your cluster.

Push container images

There are different ways to push container images to Minikube.

In this document, we will push directly to the in-cluster Docker daemon. To do so, we must point our shell to Minikube’s Docker daemon:

eval $(minikube -p minikube docker-env)

Then, within the same shell, we can build the images with Jib:

  • with Maven: mvn compile jib:dockerBuild, or

  • with Gradle: ./gradlew jibDockerBuild (Linux, macOS) or gradlew jibDockerBuild (Windows).

Note
Jib will not use the Docker daemon to build the image, only to load it.
Note
If you don’t use Minikube, please refer to the Jib Maven or Jib Gradle plugin documentation for details about how to configure them when pushing to a registry.

Clustered app headless service

On Kubernetes, Infinispan shouldn’t use the default networking stack because most often IP multicast is not available.

Instead, we will configure it to use a stack that relies on headless service lookup for discovery and TCP connections for group messaging.

Let’s create a clustered-app headless service which selects member pods having the label cluster:clustered-app:
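A minimal sketch of headless-service.yml; 7800 is the JGroups transport port that also appears in the kubectl output below:

apiVersion: v1
kind: Service
metadata:
  name: clustered-app
spec:
  clusterIP: None                  # headless service
  publishNotReadyAddresses: true   # account for pods even when not ready
  selector:
    cluster: clustered-app
  ports:
    - name: jgroups
      port: 7800
      protocol: TCP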

Important
The headless service must account for pods even when not ready (publishNotReadyAddresses set to true).

Apply this configuration:

kubectl apply -f headless-service.yml

Then verify it was successful:

kubectl get services clustered-app

You should see something like:

NAME            TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
clustered-app   ClusterIP   None         <none>        7800/TCP   63m

Frontend deployment and service

Let’s deploy the frontend service now.

We want at least two replicas for high availability.

To configure Vert.x Infinispan, we need to start the JVM with a few system properties:

  • java.net.preferIPv4Stack: set to true so that the JVM prefers the IPv4 networking stack

  • vertx.jgroups.config: networking stack configuration file, set to default-configs/default-jgroups-kubernetes.xml

  • jgroups.dns.query: the DNS name of the headless service we just created

Kubernetes also needs to know the URIs of the liveness, readiness and startup probes (see the deployment sketch below).

Tip
The startup probe can point to the readiness URI with different timeout settings.
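A minimal sketch of frontend/deployment.yml; the image name, the default namespace in the DNS query and the use of the JAVA_TOOL_OPTIONS environment variable to pass the system properties are assumptions for illustration:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
        cluster: clustered-app   # selected by the clustered-app headless service
    spec:
      containers:
        - name: frontend
          image: clustering-kubernetes/frontend   # hypothetical image name
          env:
            - name: JAVA_TOOL_OPTIONS
              value: >-
                -Djava.net.preferIPv4Stack=true
                -Dvertx.jgroups.config=default-configs/default-jgroups-kubernetes.xml
                -Djgroups.dns.query=clustered-app.default.svc.cluster.local
          ports:
            - containerPort: 8080
          livenessProbe:
            httpGet:
              path: /health
              port: 8080
          readinessProbe:
            httpGet:
              path: /readiness
              port: 8080
          startupProbe:
            httpGet:
              path: /readiness   # same URI as readiness, different timeout settings
              port: 8080
            failureThreshold: 30
            periodSeconds: 3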

Apply this configuration:

kubectl apply -f frontend/deployment.yml

Verify the pods have started successfully:

kubectl get pods

You should see something like:

NAME                                  READY   STATUS    RESTARTS   AGE
frontend-deployment-8cfd4d966-lpvsb   1/1     Running   0          4m58s
frontend-deployment-8cfd4d966-tctgv   1/1     Running   0          4m58s

We also need a service to load-balance the HTTP traffic. Pods will be selected by the label app:frontend that was defined in the deployment:
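A minimal sketch of frontend/service.yml:

apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: LoadBalancer
  selector:
    app: frontend
  ports:
    - port: 80         # service port
      targetPort: 8080 # container port
      protocol: TCP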

Apply this configuration:

kubectl apply -f frontend/service.yml

Verify the service has been created successfully:

kubectl get services frontend

You should see something like:

NAME       TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
frontend   LoadBalancer   10.106.16.88   <pending>     80:30729/TCP   62s

If you use Minikube, open another terminal window and run:

minikube tunnel

Minikube tunnel runs as a separate process and exposes the service to the host operating system.

If you run kubectl get services frontend again, then the external IP should be set:

NAME       TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
frontend   LoadBalancer   10.100.254.64   10.100.254.64   80:30660/TCP   30m
Tip
Take note of the external IP.
Important

Minikube tunnel requires privilege escalation. If you cannot escalate privileges, you can still access the service via its NodePort with minikube service --url frontend.

Note
If you don’t use Minikube and no external IP has been assigned to your service, please refer to your cluster documentation.

Backend deployment

The backend service deployment is similar to the frontend one (a sketch follows the list below).

Notice that in this case:

  • 3 replicas should be created

  • the POD_NAME environment variable will be set in the container
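A minimal sketch of backend/deployment.yml; the image name and the app label are assumptions, the probes (omitted here) follow the frontend pattern, and POD_NAME is injected with the Kubernetes downward API:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
        cluster: clustered-app   # selected by the clustered-app headless service
    spec:
      containers:
        - name: backend
          image: clustering-kubernetes/backend   # hypothetical image name
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: JAVA_TOOL_OPTIONS
              value: >-
                -Djava.net.preferIPv4Stack=true
                -Dvertx.jgroups.config=default-configs/default-jgroups-kubernetes.xml
                -Djgroups.dns.query=clustered-app.default.svc.cluster.local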

Apply this configuration:

kubectl apply -f backend/deployment.yml

Verify the pods have started successfully:

kubectl get pods

You should see something like:

NAME                                  READY   STATUS    RESTARTS   AGE
backend-deployment-74d7f45c67-h7h9c   1/1     Running   0          63s
backend-deployment-74d7f45c67-r45bc   1/1     Running   0          63s
backend-deployment-74d7f45c67-r75ht   1/1     Running   0          63s
frontend-deployment-8cfd4d966-lpvsb   1/1     Running   0          15m
frontend-deployment-8cfd4d966-tctgv   1/1     Running   0          15m

Testing remotely

We can now send a request to the frontend for the /hello URI with the name query param set to Vert.x Clustering:

http 10.100.254.64/hello name=="Vert.x Clustering"

You should see something like:

HTTP/1.1 200 OK
content-length: 64

Hello Vert.x Clustering from backend-deployment-74d7f45c67-6r2g2

Notice that we now see the name of the pod instead of the default value (unknown).

Also, if you send requests repeatedly, you will see that the backend pods receive event bus requests in a round-robin fashion.

Summary

This document covered:

  • dependencies required to deploy clustered Vert.x apps on Kubernetes with Infinispan

  • containerization of Vert.x services with Jib

  • configuration of the Vert.x Infinispan cluster manager for local testing and deployment on Kubernetes
