# Project 4: Custos Deployment
- Spawn four medium-size instances on Jetstream 1.
- Install Rancher on one of the instances.
To install Rancher, please refer to our peer team Terra's writeup: https://github.com/airavata-courses/terra/wiki/Installing-Rancher---Step--1
The only difference is that this is Jetstream 1, so you need to set the ssh password yourself using
- sudo passwd "username"
Replace "username" with your own username.
- Make a k8s cluster: again, refer to https://github.com/airavata-courses/terra/wiki/Step-2:-Setting-up-Kubernetes-cluster-using-Rancher
While adding the nodes to the cluster, choose the Calico network provider.
Now that Rancher and the cluster are set up, log in to the master node.
- Create the namespaces custos, keycloak, and vault, as shown below.
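The namespace names must be lowercase, since later commands reference them as -n custos, -n keycloak, and -n vault:

```
kubectl create namespace custos
kubectl create namespace keycloak
kubectl create namespace vault
```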
- Install Helm: https://helm.sh/docs/intro/install/
Check out the deployment branch (in your clone of this repository) and change into the deployment files:
git checkout project_4-dev
cd custos/deployment_files/
On all the nodes, create the mount directories:
sudo mkdir -p /bitnami/mysql
sudo mkdir -p /bitnami/postgresql
sudo mkdir -p /hashicorp/consul/data
sudo chmod -R 777 /hashicorp
Make sure the permissions are changed for all directories under /hashicorp/consul/data. A few more directories will be created in later steps, but all of those go on the master node only.
## cert-manager
These steps are also in the README at custos/deployment_files/cert-manager.
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.8.0/cert-manager.yaml
Check that it came up:
kubectl get all -n cert-manager
All the pods should be in the Running phase. If not, there is an error; check the pod logs (kubectl logs) to debug the issue.
Then apply the issuer:
kubectl apply -f issuer.yaml
## Keycloak
These steps are also in the README at custos/deployment_files/keycloak.
helm repo add bitnami https://charts.bitnami.com/bitnami
- Create PVs: three PersistentVolumes for the /bitnami/postgresql mount point
- kubectl apply -f pv.yaml,pv1.yaml,pv2.yaml
Verify that the PVs were created:
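A quick way to verify, since PersistentVolumes are cluster-scoped:

```
kubectl get pv
```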
Then deploy PostgreSQL:
- helm install keycloak-db-postgresql bitnami/postgresql -f values.yaml -n keycloak --version 10.12.3
Verify the deployment:
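For example, watch for the PostgreSQL pod to reach the Running state:

```
kubectl get pods -n keycloak
```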
- cd ..
- kubectl create -f https://raw.githubusercontent.com/operator-framework/operator-lifecycle-manager/master/deploy/upstream/quickstart/crds.yaml
- kubectl create -f https://raw.githubusercontent.com/operator-framework/operator-lifecycle-manager/master/deploy/upstream/quickstart/olm.yaml
- cp operator.yaml keycloak-operator/deploy/
- cd keycloak-operator
- make cluster/prepare
- kubectl apply -f deploy/operator.yaml -n keycloak
- cd ..
- kubectl apply -f keycloak-db-secret.yaml -n keycloak
- kubectl apply -f custos-keycloak.yaml -n keycloak
- Replace the hostname in ingress.yaml with your own, then apply it:
- kubectl apply -f ingress.yaml -n keycloak
Verify the Keycloak deployment:
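For example, confirm that the Keycloak pods and the ingress exist:

```
kubectl get pods -n keycloak
kubectl get ingress -n keycloak
```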
user: admin
Get the admin password:
- kubectl get secret credential-custos-keycloak -o yaml -n keycloak
- echo "passwordhere" | base64 --decode
(replace "passwordhere" with the base64-encoded password from the secret's output)

**Store this password; it will be used in later steps.**
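As a convenience, the lookup and decode can be combined into one command; this assumes the operator stores the password under the ADMIN_PASSWORD key, which is the Keycloak operator's usual convention:

```
# Assumption: the secret keeps the password in the ADMIN_PASSWORD field.
kubectl get secret credential-custos-keycloak -n keycloak \
  -o jsonpath='{.data.ADMIN_PASSWORD}' | base64 --decode
```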
## Consul
These steps are also in the README at custos/deployment_files/consul.
- helm repo add hashicorp https://helm.releases.hashicorp.com
- The directory /hashicorp/consul/data should already exist on each of your nodes (created earlier)
- sudo chmod -R 777 /hashicorp
- kubectl apply -f pv.yaml,pv1.yaml
kubectl apply -f storage.yaml
helm install consul hashicorp/consul --version 0.31.1 -n vault --values config.yaml
Verify that the Consul pods come up:
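For example:

```
kubectl get pods -n vault
```

The Consul server and client pods should reach the Running state.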
## Vault
These steps are also in the README at custos/deployment_files/vault.
helm install vault hashicorp/vault --namespace vault -f values.yaml --version 0.10.0
Change the hostname in ingress.yaml, then deploy the ingress:
kubectl apply -f ingress.yaml -n vault
At this point, the vault-0 pod will show 0/1 Ready, since Vault is not yet unsealed.
- Follow the instructions in the Vault UI, which is hosted on port 443, to generate the vault token.
- Enter 5 key shares and a key threshold of 3 to initialize the keys. This generates 5 unseal keys; download them to a file.
- In the next step, enter the keys in the UI one by one to unseal the vault.
After this step, the UI should show the vault as unsealed.
The root_token to be used later is found at the end of the file you downloaded.
- Alternatively, if you don't want to follow the UI instructions:
Follow step 4 and step 5 from this guide: https://dwops.com/blog/deploy-hashicorp-vault-on-kubernetes-using-helm/ (a sketch of those commands appears after the next bullet).
- You should see the vault-0 pod in the vault namespace change from 0/1 to 1/1 Ready.
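If you prefer the CLI route, here is a minimal sketch, assuming the Vault pod is named vault-0 in the vault namespace:

```
# Initialize with 5 key shares and a threshold of 3; save the output,
# it contains the unseal keys and the root token.
kubectl exec -n vault vault-0 -- vault operator init -key-shares=5 -key-threshold=3

# Run this three times, each time with a different unseal key from the output.
kubectl exec -n vault vault-0 -- vault operator unseal <UNSEAL_KEY>
```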
Check the output for the unsealed vault (vault-0 should now show 1/1 Running).
## MySQL
These steps are also in the README at custos/deployment_files/mysql.
- kubectl apply -f pv.yaml,pv1.yaml
Verify that the PVs were created with kubectl get pv.
- helm install mysql bitnami/mysql -f values.yaml -n custos --version 8.8.8
Verify the deployment:
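For example, wait for the MySQL pod to reach the Running state:

```
kubectl get pods -n custos
```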
## Custos
These steps are also in the README at custos/deployment_files/custos. Follow them on a new instance VM (recommended) or on your local machine.
On your cluster master, run this for every node (replace node_name with each node's actual name):
kubectl label nodes node_name custosServiceWorker="enabled"
Do the remaining steps on your local machine or another VM; if something fails locally, use a VM.
- On the new instance, install Maven and Java using
sudo apt install maven
- Log in to Docker using
sudo docker login
- Create a new ssh key in RSA format:
ssh-keygen -t rsa -b 4096 -m pem
- Copy the contents of the public key (.pub file) into the "authorized_keys" file on the MASTER node.
- Do the following steps on the new instance:
git clone https://github.com/apache/airavata-custos.git
cd airavata-custos
git checkout develop
A. Make the following changes in the pom.xml file located at the repository root:
<spring.profiles.active>dev</spring.profiles.active>
<vault.token>YOUR_ROOT_TOKEN_FROM_VAULT_JSON_FILE</vault.token>
<vault.scheme>http</vault.scheme>
<vault.host>vault.vault.svc.cluster.local</vault.host>
<vault.port>8200</vault.port>
<vault.uri>http://vault.vault.svc.cluster.local:8200</vault.uri>
<!-- The Keycloak password is the one found while deploying Keycloak; BASE64 format only -->
<iam.dev.username>KEYCLOAK_USERNAME</iam.dev.username>
<iam.dev.password>KEYCLOAK_PASSWORD_BASE64</iam.dev.password>
<iam.staging.username>KEYCLOAK_USERNAME</iam.staging.username>
<iam.staging.password>KEYCLOAK_PASSWORD_BASE64</iam.staging.password>
<!-- Found while deploying MySQL -->
<spring.datasource.username>MYSQL_USERNAME</spring.datasource.username>
<spring.datasource.password>MYSQL_PASSWORD</spring.datasource.password>
<!-- The same Docker username you logged in with above -->
<docker.image.prefix>DOCKER_USERNAME</docker.image.prefix>
<docker.image.repo>DOCKER_USERNAME</docker.image.repo>
Set the host property to MASTER_NODE_HOST (your master node's hostname).
<!-- The key generated with ssh-keygen -t rsa -b 4096 -m pem; path on this instance -->
<ssh.privatekey>/home/rishijain15/.ssh/id_rsa</ssh.privatekey>
<ssh.passphrase></ssh.passphrase>
<ssh.username>rishijain15</ssh.username>
B. Comment out lines 225 - 249 in the following file:
custos-integration-services/tenant-management-service-parent/tenant-management-service/src/main/java/org/apache/custos/tenant/management/tasks/TenantActivationTask.java
C. Apply the corresponding changes to the pom.xml file found at the following path:
custos-core-services/utility-services/custos-configuration-service/pom.xml
D. Now change the "iam.server.url" property in all the *-dev.properties and *-staging.properties files found at custos-core-services/utility-services/custos-configuration-service/src/main/resources, using the value obtained by running the commands below on your MASTER node:
```
kubectl delete all --all -n ingress-nginx
# wait 20-30 seconds and then proceed
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.44.0/deploy/static/provider/baremetal/deploy.yaml
# wait 20-30 seconds and then proceed; the next command prints the port
# associated with 443, which we will use below
kubectl get svc -n ingress-nginx
```
Using the port you found in the previous command, construct your final URL; it will be placed in all the files mentioned at the start of this step:
https://MASTER_NODE_URL:PORT/auth/
e.g. https://js-156-79.jetstream-cloud.org:30367/auth/
Set this URL as the value of the "iam.server.url" property in all the files mentioned above.
E. If any of the commands fails, retry it with sudo.
NOTE: make sure there is a custos folder with 777 permissions in the home directory of the master node before running the commands below.
- Build code
`mvn clean install -P container`
- Push code images to repo
`mvn dockerfile:push -P container`
- deploy artifacts
`mvn antrun:run -P scp-to-remote`
(Screenshot: Custos deployed on dev.)
F. Once this is done, check that all the services are deployed on the MASTER node using kubectl get all --all-namespaces
Everything will be Running/Ready except "deployment.apps/custos-messaging-core-service", which can be ignored.
Also, run the following command on the MASTER node (change the path to match your setup):
```
helm install cluster-management-core-service /PATH/custos/artifacts/cluster-management-core-service-1.1-SNAPSHOT.tgz -n keycloak
eg. helm install cluster-management-core-service /home/ssh_user/custos/artifacts/cluster-management-core-service-1.1-SNAPSHOT.tgz -n keycloak
```
![](https://github.com/airavata-courses/garuda/blob/project_4-dev/docs/custos-deployment-screenshots/after_manually_deploying_core-service.png)
G. Now we have to redeploy two services in the staging environment.
*** Redeploying service 1 - "iam-admin-core-service"
1. On the MASTER node, check the deployed services
a. helm list -n custos --short
b. Uninstall the iam-admin-core-service using
helm uninstall iam-admin-core-service -n custos
c. If you run the command from step a again, you won't find "iam-admin-core-service" in the list
2. On your INSTANCE VM, from which we deployed all the services
a. Change the environment in the root pom.xml file to "staging"
1. <!-- 1. Environment -->
<spring.profiles.active>staging</spring.profiles.active>
b. Deploy iam-admin-core-service again (we will deploy only this single service)
1. Go to the following folder
/custos-core-services/iam-admin-core-service
2. And run these commands again
- Build code
`mvn clean install -P container`
- Push code images to repo
`mvn dockerfile:push -P container`
- deploy artifacts
`mvn antrun:run -P scp-to-remote`
3. On the MASTER node, check the deployed services
a. helm list -n custos --short
You will find "iam-admin-core-service" in the list again
*** Redeploying service 2 - "identity-core-service"
1. On the MASTER node, check the deployed services
a. helm list -n custos --short
b. Uninstall the identity-core-service using
helm uninstall identity-core-service -n custos
c. If you run the command from step a again, you won't find "identity-core-service" in the list
2. On your INSTANCE VM, from which we deployed all the services
a. Change the environment in the root pom.xml file to "staging" (already set above, no need to do it again)
1. <!-- 1. Environment -->
<spring.profiles.active>staging</spring.profiles.active>
b. Deploy identity-core-service again (we will deploy only this single service)
1. Go to the following folder
/custos-core-services/identity-core-service
2. And run these commands again
- Build code
`mvn clean install -P container`
- Push code images to repo
`mvn dockerfile:push -P container`
- deploy artifacts
`mvn antrun:run -P scp-to-remote`
3. On the MASTER node, check the deployed services
a. helm list -n custos --short
You will find "identity-core-service" in the list again
(Screenshot: after redeploying the two services in staging.)
- In any browser, go to the URL we constructed in step D. You will see the Vault UI; log in using the root_token.
- Create a KV engine named 'secret':
  - Click on 'Enable new engine'
  - Select the 'KV' option and click Next
  - Enter 'secret' in the path field and select '1' in the version dropdown
  - Click on 'Enable engine'
- Create a KV engine named 'resourcesecret':
  - Click on 'Enable new engine'
  - Select the 'KV' option and click Next
  - Enter 'resourcesecret' in the path field and select '1' in the version dropdown
  - Click on 'Enable engine'
- Make a REST API call to register a tenant:
  - POST request to YOUR_HOST_NAME at the endpoint /tenant-management/v1.0.0/oauth2/tenant
  - e.g. https://js-156-79.jetstream-cloud.org:30367/tenant-management/v1.0.0/oauth2/tenant
  - In the body, pass the JSON object below; a curl sketch follows it:
{ "client_name":"garuda", "requester_email":"[email protected]", "admin_username":"rishabh", "admin_first_name":"Rishabh", "admin_last_name":"Jain", "admin_email":"[email protected]", "contacts":["[email protected]","[email protected]"], "redirect_uris":["http://localhost:8080/callback*", "https://js-156-79.jetstream-cloud.org/callback*"], "scope":"openid profile email org.cilogon.userinfo", "domain":"https://js-156-79.jetstream-cloud.org", "admin_password":"rishabh123", "client_uri":"https://js-156-79.jetstream-cloud.org", "logo_uri":"https://js-156-79.jetstream-cloud.org", "application_type":"web", "comment":"Custos super tenant for production" }
- You will receive a response in the following JSON format:
```
{
  "client_id": "CLIENT_ID",
  "client_secret": "CLIENT_SECRET",
  "is_activated": false,
  "client_id_issued_at": 1651714179000,
  "client_secret_expires_at": 0,
  "registration_client_uri": "https://custos.scigap.org/apiserver/tenant-management/v1.0.0/oauth2/tenant?client_id=custos-m482zzqpwc1jf9oog0zx-10000000",
  "token_endpoint_auth_method": "client_secret_basic",
  "msg": "Use Base64 encoded clientId:clientSecret as auth token for authorization, Credentials are activated after admin approval"
}
```
- Save the response received in the above step.
- Go to the Vault UI again
  - Click on 'secret'; you will find an entry
  - In the custos folder, edit the file and set superTenant to true: "superTenant": true
- Make one last REST API call to activate the tenant:
  - POST request to YOUR_HOST_NAME at the endpoint /tenant-management/v1.0.0/status
  - e.g. https://js-156-79.jetstream-cloud.org:30367/tenant-management/v1.0.0/status
  - With the following body (a curl sketch follows below):
{ "client_id":"RECEIVED_IN_PREVIOUS_POST_REQUEST", "status":"ACTIVE", "super_tenant":true, "updatedBy":"ADMIN_USERNAME_PASSED_IN_BODY_OF_PREVIOUS_REQUEST" }
  - You will receive a response in the following JSON format:
{ "tenant_id": "10000000", "status": "ACTIVE" }
Once we receive "ACTIVE" in the above API response, Custos has been successfully deployed.