Konstellation AI is a platform to manage the lifecycle of AI solutions.
| Component | Coverage | Bugs | Maintainability Rating | Go report |
| --- | --- | --- | --- | --- |
| Admin API | | | | |
| K8s Manager | | | | |
| NATS Manager | | | | |
Refer to the chart's README.
The KAI design is based on a microservice pattern, running on top of a Kubernetes cluster.
The following diagram shows the main components and how they relate to each other.
The main concepts of KAI are described below.
Before installing KAI, an existing Kubernetes namespace is required. By convention it is named `kai`,
but feel free to use whatever name you like. The installation process will deploy some components that are responsible
for managing the full lifecycle of this AI solution.
The Engine is composed of the following components:
- Admin API
- K8s Manager
- MongoDB
- NATS-Streaming
The Konstellation Runtime Transport (KRT) file is a YAML file describing the desired workflow definitions. The generic structure of a `krt.yaml` is as follows:
```yaml
version: 'v1.0.0'
description: 'Training workflow github event based'
workflows:
  - name: 'training-workflow'
    type: training
    processes:
      - name: 'github-trigger'
        image: 'registry.kai.local/demo_github-trigger-mock:v1'
        type: 'trigger'
        resourceLimits:
          CPU:
            request: 100m
            limit: 200m
          memory:
            request: 100M
            limit: 200M
        config:
          webhook_events: push
          github_secret: secret
        networking:
          targetPort: 3000
          destinationPort: 3000
      - name: 'splitter'
        image: 'registry.kai.local/demo_splitter:v1'
        type: 'task'
        resourceLimits:
          CPU:
            request: 100m
            limit: 200m
          memory:
            request: 100M
            limit: 200M
        subscriptions:
          - github-trigger
      - name: 'training-go'
        image: 'registry.kai.local/demo_training-go:v1'
        type: 'task'
        resourceLimits:
          CPU:
            request: 100m
            limit: 200m
          memory:
            request: 100M
            limit: 200M
        subscriptions:
          - splitter.go
      - name: 'training-py'
        image: 'registry.kai.local/demo_training-py:v1'
        type: 'task'
        resourceLimits:
          CPU:
            request: 100m
            limit: 200m
          memory:
            request: 100M
            limit: 200M
        subscriptions:
          - splitter.py
      - name: 'validation'
        image: 'registry.kai.local/demo_validation:v1'
        type: 'task'
        resourceLimits:
          CPU:
            request: 100m
            limit: 200m
          memory:
            request: 100M
            limit: 200M
        subscriptions:
          - 'training-go'
          - 'training-py'
      - name: 'exit'
        image: 'registry.kai.local/demo_exit:v1'
        type: 'exit'
        resourceLimits:
          CPU:
            request: 100m
            limit: 200m
          memory:
            request: 100M
            limit: 200M
        subscriptions:
          - 'validation'
```
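The `subscriptions` field is what wires processes into a workflow graph: each process consumes the output of every process it subscribes to. As a rough illustration (with made-up process names, not the ones from the example above), a minimal trigger → task → exit chain can be written and inspected like this:

```shell
# Minimal, illustrative krt.yaml (hypothetical names) showing how
# `subscriptions` chains processes: entry-trigger -> worker -> last-node.
cat > /tmp/krt-minimal.yaml <<'EOF'
version: 'v1.0.0'
description: 'Minimal example'
workflows:
  - name: 'example-workflow'
    type: training
    processes:
      - name: 'entry-trigger'
        image: 'registry.kai.local/entry-trigger:v1'
        type: 'trigger'
      - name: 'worker'
        image: 'registry.kai.local/worker:v1'
        type: 'task'
        subscriptions:
          - entry-trigger
      - name: 'last-node'
        image: 'registry.kai.local/last-node:v1'
        type: 'exit'
        subscriptions:
          - worker
EOF

# Print each subscription target, i.e. the upstream process each node reads from:
grep -A1 'subscriptions:' /tmp/krt-minimal.yaml | grep -- '- ' | sed 's/.*- //'
```

The last command prints `entry-trigger` and then `worker`, making the chain explicit.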
To start development on this project you will need these tools:

- gettext: OS package to fill templates during deployment
- minikube: Local version of Kubernetes to deploy KAI
- kubectl: Kubernetes command-line tool for communicating with a Kubernetes cluster's control plane, using the Kubernetes API
- helm: K8s package manager. Make sure you have v3+
- helm-docs: Helm doc auto-generation tool
- helmfile: Declarative spec for deploying Helm charts
- yq: YAML processor. Make sure you have v4+
- gitlint: Checks your commit messages for style
- pre-commit: Pre-commit hook execution tool that ensures best practices are followed before committing any change
From the repository root, execute the following commands:

```sh
pre-commit install
pre-commit install-hooks
pre-commit install --hook-type commit-msg
```
Note: Commits that have not passed the required hooks will be rejected.
- Minikube >= 1.26
- Docker (for Linux) >= 18.9, the default driver for Minikube
- HyperKit (for macOS), the default driver for Minikube

NOTE: You can use a different driver by updating `.kaictl.conf`; check this for a complete list of drivers for Minikube.
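Switching the driver could look like the sketch below. The key name `MINIKUBE_DRIVER` is an assumption; check `.kaictl.conf` for the variable the scripts actually use. A throwaway copy of the file is used here for illustration:

```shell
# Sample config standing in for .kaictl.conf (key names are assumptions):
cat > /tmp/kaictl.conf <<'EOF'
MINIKUBE_DRIVER="hyperkit"
MINIKUBE_MEMORY="8192"
EOF

# Switch the Minikube driver to virtualbox (GNU sed in-place edit):
sed -i 's/^MINIKUBE_DRIVER=.*/MINIKUBE_DRIVER="virtualbox"/' /tmp/kaictl.conf

# Confirm the change:
grep '^MINIKUBE_DRIVER' /tmp/kaictl.conf
```

On macOS, use `sed -i ''` instead of `sed -i` for the in-place edit.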
This repo contains a tool called `./kaictl.sh` to handle common actions you will need during development.
All the configuration needed to run KAI locally can be found in the `.kaictl.conf` file. Usually you'll be fine with the
default values. Check Minikube's parameters if you need to tweak the resources assigned to it.
Run help to get info for each command:

```sh
$> kaictl.sh [command] --help
// Outputs:
kaictl.sh -- a tool to manage KAI environment during development.

syntax: kaictl.sh <command> [options]

commands:
  dev     creates a complete local environment.
  start   starts minikube kai profile.
  stop    stops minikube kai profile.
  build   calls docker to build all images inside minikube.
  deploy  calls helm to install/upgrade a kai release on minikube.
  delete  calls kubectl to remove runtimes or versions.

global options:
  h       prints this help.
  v       verbose mode.
```
To install KAI in your local environment:

```sh
./kaictl.sh dev
```

It will install everything in the namespace specified in your `.kaictl.conf` file.
As part of KAI server we deploy a Docker registry that is published via ingress using HTTP, which is considered insecure.
Since Kubernetes does not trust insecure registries, if you want to perform local development or run this in other insecure environments, you need to configure your cluster to accept this registry hostname (check the `.Values.registry.host`
value in the chart's `values.yaml` file).
To configure this for local development, just update the value of the `MINIKUBE_INSECURE_REGISTRY_CIDR`
environment variable inside the `.kaictl.conf`
file to fit your local CIDR. If you created a previous KAI development environment, you will need to destroy it and recreate it.
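One way to work out a CIDR that covers the Minikube IP is sketched below. The IP shown is only an example (get the real one with `minikube ip`), and the `/24` mask is an assumption that fits Minikube's usual single-subnet setup:

```shell
# Example Minikube IP -- substitute the output of `minikube ip`:
MINIKUBE_IP="192.168.49.2"

# Replace the last octet with 0 and append a /24 mask to get the subnet CIDR:
CIDR="${MINIKUBE_IP%.*}.0/24"
echo "$CIDR"   # 192.168.49.0/24
```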
Remember to edit your `/etc/hosts`; see the `./kaictl.sh dev` output for more details.

NOTE: If you have the hostctl tool installed, `/etc/hosts` will be updated automatically too.
There are three main stages in the development lifecycle of KAI, depending on whether we are adding a new feature, releasing a new version with some features, or applying a fix to a current release.
To add new features, just create a feature branch from main; after the Pull Request is merged, a workflow will run the
tests. If all tests pass, a new alpha tag will be created (e.g. v0.0-alpha.0), and a new release will be generated
from this tag.
After releasing a number of alpha versions, you may want to create a release version. This must be triggered manually with the Release workflow. The workflow will create a new release branch and a new tag following the pattern v0.0.0. Along with this tag, a new release will be created.
If you find a bug in a release, you can apply a bugfix by creating a fix branch from the specific release
branch and opening a Pull Request towards the same release branch. When merged, the tests will run against it, and
after all tests pass, a new fix tag will be created, increasing the patch portion of the version, and a new
release will be built and published.
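The fix flow above can be sketched with plain git commands on a throwaway repo. Branch and tag names here are illustrative, not the project's actual ones:

```shell
# Throwaway repo demonstrating the fix flow (names are illustrative):
cd "$(mktemp -d)"
git init -q .
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial"

# The release branch the bug was found in:
git branch release/v1.0.0

# Create the fix branch from that release branch:
git checkout -q -b fix/some-bug release/v1.0.0

# After committing the fix, open a PR back to release/v1.0.0; once merged and
# the tests pass, CI bumps the patch version and tags a new release (e.g. v1.0.1).
git branch --show-current   # fix/some-bug
```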