Infrastructure environment for deploying Consul with federation and Vault integration across different environments with the same deploy.
There are some dependencies depending on which environment you are using.
libvirtd needs to work in your environment.
Add the following options to your configuration.nix:
{
users.users.myname.extraGroups = [
"qemu-libvirtd" "libvirtd"
"wheel" "video" "audio" "disk" "networkmanager"
];
virtualisation.libvirtd.enable = true;
# optional
boot.kernelModules = [ "kvm-amd" "kvm-intel" ];
# optional
services.qemuGuest.enable = true;
}
gcloud needs to be installed, or you can add it to the devShell (a sketch is shown below, after the Azure CLI setup).
{
environment.systemPackages = [
pkgs.google-cloud-sdk
];
}
Authenticate to GCP with:
gcloud auth application-default login
The az command is also required; you can put it in the devShell too.
{
environment.systemPackages = [
pkgs.azure-cli
];
}
Authenticate to Azure with:
az login
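As an alternative to systemPackages, both CLIs can be exposed through a flake devShell. A minimal sketch, assuming a flake that already provides pkgs from nixpkgs (attribute names are illustrative, adjust to this repo's flake.nix):
{
  devShells.default = pkgs.mkShell {
    # gcloud and az are then available inside `nix develop`
    packages = [ pkgs.google-cloud-sdk pkgs.azure-cli ];
  };
}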
If you use an external Vault, you must set the environment variables to point to it.
To set them, use:
IC_VAULT_ADDR=https://vault:443
IC_VAULT_TOKEN=root-token
By default it builds the libvirt image.
To build the libvirt image to be imported, use:
nix build .#qcow # or `build-qcow`
To build the GCP image, use:
nix build .#gce # or `build-gce`
To build the Azure image to be imported, use:
nix build .#azure # or `build-azure`
The image must be built before running apply. Access to ./result is also required; add it with git add -Nf result so the flake can see the path.
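For example, for the default libvirt flow the full sequence looks roughly like this (a sketch combining the commands from this README):
nix build .#qcow      # build the libvirt image
git add -Nf result    # let the flake see ./result
nix run ".#apply"     # provision the infrastructure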
In the end it calls terraform with the configuration generated by terranix, which converts the config.nix files into config.tf.json.
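For reference, a terranix config.nix module is just a Nix attrset that mirrors terraform resources. The snippet below is a generic illustration, not this repo's actual configuration:
{ ... }:
{
  # terranix renders attrsets like this into config.tf.json for terraform;
  # the resource name and paths here are illustrative only
  resource.libvirt_volume.example = {
    name   = "example.qcow2";
    source = "./result/nixos.qcow2";
  };
}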
At the end of provisioning, it outputs a JSON file with the values of each machine created.
It defaults to libvirt.
# Apply infra
nix run ".#apply" # or `nix run` or `apply`
# Destroy infra
nix run ".#destroy" # or `nix run` or `destroy`
Provision infrastructure in the libvirtd environment.
# Apply
nix run ".#apply-libvirt" # or `apply-libvirt`
# Destroy
nix run ".#destroy-libvirt" # or `destroy-libvirt`
Provision infrastructure in the GCP environment.
# Apply
nix run ".#apply-gcp" # or `apply-gcp`
# Destroy
nix run ".#destroy-gcp" # or `destroy-gcp`
Provision infrastructure in the Azure environment.
# Apply
nix run ".#apply-azure" # or `apply-azure`
# Destroy
nix run ".#destroy-azure" # or `destroy-azure`
Access to output.json, generated by provisioning, is required. Add it with git add -Nf output.json.
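For example, after provisioning with libvirt (a sketch of one possible flow):
nix run ".#apply-libvirt"   # provisioning writes output.json
git add -Nf output.json     # track it so deploy can read it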
Vault access is also required; you can pass it via environment variables or use a local Vault.
Deployment uses colmena as the backend.
For deployment you can follow all the provisioning patterns, replacing apply with deploy.
It defaults to libvirt
nix run ".#deploy" # or `deploy`
To deploy to libvirt explicitly, use:
nix run ".#deploy-libvirt" # or `colmena deploy --on @libvirt` # or `deploy-libvirt`
To deploy to GCP, use:
nix run ".#deploy-gcp" # or `colmena deploy --on @gcp` # or `deploy-gcp`
To deploy to Azure, use:
nix run ".#deploy-azure" # or `colmena deploy --on @azure` # or `deploy-azure`
Vault is used as the PKI and secrets manager.
To configure it, set the environment variables with the token and address, and then run:
./scripts/vault-init.sh
Set the environment variables with the address and token. If you are using local-vault, you don't need to set them.
IC_VAULT_ADDR=https://vault:443
IC_VAULT_TOKEN=root-token
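For example, with an external Vault (a sketch, assuming a POSIX shell and that the init script picks up these variables):
export IC_VAULT_ADDR=https://vault:443
export IC_VAULT_TOKEN=root-token
./scripts/vault-init.sh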
Use docker-compose via arion-compose to run a local dev Vault.
For the libvirt environment, you can use the local Vault.
Start the Vault dev server at http://127.0.0.1:8200 with:
local-vault # or arion up
Federation in Kubernetes can be configured using the scripts and devShell aliases.
If you are using the libvirt environment, you can use a local-k8s. Configure federation on it with:
./scripts/k8s/configure.sh # configure k8s
Start a local Kubernetes cluster with k3d for local libvirt tests:
local-k8s # start k3d local
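Putting it together for local tests, one possible sequence (a sketch using the aliases and scripts above):
local-vault                   # dev Vault at http://127.0.0.1:8200
local-k8s                     # local k3d cluster
./scripts/k8s/configure.sh    # configure federation on the cluster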