diff --git a/content/en/docs/deployment/deployment.md b/content/en/docs/deployment/deployment.md index db6f18c1..db403c1d 100644 --- a/content/en/docs/deployment/deployment.md +++ b/content/en/docs/deployment/deployment.md @@ -37,7 +37,6 @@ plgd https://charts.plgd.dev // helm search repo plgd NAME CHART VERSION APP VERSION DESCRIPTION -plgd/plgd-dps 0.9.0 0.9.0 A Helm chart for plgd device provisioning service plgd/plgd-hub 2.7.15 2.7.15 A Helm chart for plgd-hub ``` diff --git a/content/en/docs/deployment/device-provisioning-service/device-provisioning-service.md b/content/en/docs/deployment/device-provisioning-service/device-provisioning-service.md index 1f775907..c4dd9123 100644 --- a/content/en/docs/deployment/device-provisioning-service/device-provisioning-service.md +++ b/content/en/docs/deployment/device-provisioning-service/device-provisioning-service.md @@ -61,29 +61,6 @@ mockoauthserver: For production, you need to set the OAuth server client credential flow, as is described in [Customize OAuth server client credential flow](/docs/deployment/device-provisioning-service/advanced). -{{< /warning >}} - -To allow download the Device Provisioning Service docker image by k8s, the following configuration needs to extend the configuration: - -```yaml -deviceProvisioningService: - image: - dockerConfigSecret: | - { - "auths": { - "ghcr.io": { - "auth": "" - } - } - } -``` - -{{< note >}} - -To access ghcr.io, please reach out to us at [connect@plgd.dev](mailto:connect@plgd.dev) in order to request permission for your GitHub account to access the plgd device provisioning server images. You can refer to the [documentation](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/) for instructions on how to allow access. - -{{< /note >}} - ## Configure Enrollment Groups The enrollment groups can be configured via deployment, utilizing the setup from the hub configuration to populate the values. 
@@ -155,14 +132,12 @@ To deploy the hub with the Device Provisioning Service, apply the following comm ```sh helm upgrade -i -n plgd --create-namespace -f withMock.yaml hub plgd/plgd-hub -helm upgrade -i -n plgd --create-namespace -f withMock.yaml dps plgd/plgd-dps ``` You can execute these commands multiple times to update the configuration. In such cases, you will need to restart the pods by deleting them: ```sh kubectl -n plgd delete $(kubectl -n plgd get pods -o name | grep "hub-plgd") -kubectl -n plgd delete $(kubectl -n plgd get pods -o name | grep "dps-plgd") ``` ## Final configuration with mock oauth server diff --git a/content/en/docs/deployment/device-provisioning-service/troubleshooting.md b/content/en/docs/deployment/device-provisioning-service/troubleshooting.md index 74526249..830efabe 100644 --- a/content/en/docs/deployment/device-provisioning-service/troubleshooting.md +++ b/content/en/docs/deployment/device-provisioning-service/troubleshooting.md @@ -27,14 +27,12 @@ If you encounter issues with the Device Provisioning Service or Hub, follow thes ```sh helm upgrade -i -n plgd --create-namespace -f ./withUpdatedMock.yaml hub plgd/plgd-hub - helm upgrade -i -n plgd --create-namespace -f ./withUpdatedMock.yaml dps plgd/plgd-dps ``` 3. Restart the pods by deleting them: ```sh kubectl -n plgd delete $(kubectl -n plgd get pods -o name | grep "hub-plgd") - kubectl -n plgd delete $(kubectl -n plgd get pods -o name | grep "dps-plgd") ``` These steps will enable debug logging and restart the necessary components, providing more detailed information for troubleshooting the issues with the Device Provisioning Service or Hub. @@ -46,7 +44,7 @@ If your device is unable to connect to DPS, follow these steps: 1. Check the DPS logs by running the following command: ```sh - kubectl -n plgd logs $(kubectl -n plgd get pods -o name | grep "dps-plgd") + kubectl -n plgd logs $(kubectl -n plgd get pods -o name | grep "hub-plgd") ``` 2. Check the device logs in the console. 
diff --git a/content/en/docs/deployment/device-provisioning-service/verify-device-onboarding.md b/content/en/docs/deployment/device-provisioning-service/verify-device-onboarding.md index 2350e8c8..14052d60 100644 --- a/content/en/docs/deployment/device-provisioning-service/verify-device-onboarding.md +++ b/content/en/docs/deployment/device-provisioning-service/verify-device-onboarding.md @@ -75,8 +75,8 @@ These steps will enable you to generate the necessary certificates and configure ```sh cd "$HOME" cat ./withMock.yaml | yq -e ".deviceProvisioningService.enrollmentGroups[0].attestationMechanism.x509.certificateChain=\"$(cat ./plgd_certs/intermediate_ca.crt)\"" > ./withUpdatedMock.yaml - helm upgrade -i -n plgd --create-namespace -f withUpdatedMock.yaml dps plgd/plgd-dps - kubectl -n plgd delete $(kubectl -n plgd get pods -o name | grep "dps-plgd") + helm upgrade -i -n plgd --create-namespace -f withUpdatedMock.yaml hub plgd/plgd-hub + kubectl -n plgd delete $(kubectl -n plgd get pods -o name | grep "hub-plgd") ``` Now, you can test the Device Provisioning Service with the following methods depending on the network trust level: diff --git a/content/en/docs/tutorials/disaster-recovery-replica-set.md b/content/en/docs/tutorials/disaster-recovery-replica-set.md index ac960448..41e7a2ad 100644 --- a/content/en/docs/tutorials/disaster-recovery-replica-set.md +++ b/content/en/docs/tutorials/disaster-recovery-replica-set.md @@ -291,7 +291,7 @@ Ensure that you have cert-manager installed on the standby cluster as well. The primary cluster will deploy the Hub with all APIs exposed on the `primary.plgd.cloud` domain. The CoAP gateway listens on NodePort `15684`, and the device provisioning service listens on NodePort `5684`. The MongoDB replica set is exposed via a LoadBalancer service type, requiring a client certificate (mTLS) to connect to MongoDB. 
-To deploy the plgd-hub and plgd-dps Helm charts on the primary cluster, use the following Helm command: +To deploy the plgd-hub Helm chart on the primary cluster, use the following Helm command: ```bash # Set variables @@ -387,6 +387,7 @@ resourcedirectory: publicConfiguration: coapGateway: "coaps+tcp://$DOMAIN:15684" deviceProvisioningService: + enabled: true apiDomain: "$DOMAIN" service: type: NodePort @@ -419,7 +420,6 @@ $(sed 's/^/ /' $MANUFACTURER_CERTIFICATE_CA) audience: "https://$DOMAIN" EOF helm upgrade -i -n plgd --create-namespace -f values.yaml hub plgd/plgd-hub -helm upgrade -i -n plgd --create-namespace -f values.yaml dps plgd/plgd-dps ``` Now we need to get the IP addresses of the MongoDB members and set them to the DNS. The external IP address of the LoadBalancer is used to connect to the MongoDB replica set from the other cluster. @@ -556,6 +556,7 @@ resourcedirectory: publicConfiguration: coapGateway: "coaps+tcp://$DOMAIN:15684" deviceProvisioningService: + enabled: true apiDomain: "$DOMAIN" service: type: NodePort @@ -588,7 +589,6 @@ $(sed 's/^/ /' $MANUFACTURER_CERTIFICATE_CA) audience: "https://$DOMAIN" EOF helm upgrade -i -n plgd --create-namespace -f values.yaml hub plgd/plgd-hub -helm upgrade -i -n plgd --create-namespace -f values.yaml dps plgd/plgd-dps ``` Next, we need to get the IP addresses of the MongoDB members and set them to the DNS server running on `192.168.1.1`, similar to the primary cluster. @@ -650,7 +650,6 @@ The final step is to run plgd pods on the standby cluster. Set the `global.stand ```bash helm upgrade -i -n plgd --create-namespace -f values.yaml --set mongodb.standbyTool.mode=active --set global.standby=false --set nats.enabled=true hub plgd/plgd-hub -helm upgrade -i -n plgd --create-namespace -f values.yaml --set mongodb.standbyTool.mode=active --set global.standby=false --set nats.enabled=true dps plgd/plgd-dps ``` After rotating the device provisioning endpoints, the devices will connect to the standby cluster.
@@ -661,7 +660,6 @@ When the primary cluster is back up, set the `global.standby` flag to `true`, di ```bash helm upgrade -i -n plgd --create-namespace -f values.yaml --set global.standby=true --set nats.enabled=false hub plgd/plgd-hub -helm upgrade -i -n plgd --create-namespace -f values.yaml --set global.standby=true --set nats.enabled=false dps plgd/plgd-dps ``` ### How to Switch Back to the Primary Cluster @@ -687,7 +685,6 @@ The final step is to run plgd pods on the standby cluster. Set the `global.stand ```bash helm upgrade -i -n plgd --create-namespace -f values.yaml --set mongodb.standbyTool.mode=standby --set global.standby=true --set nats.enabled=false hub plgd/plgd-hub -helm upgrade -i -n plgd --create-namespace -f values.yaml --set mongodb.standbyTool.mode=standby --set global.standby=true --set nats.enabled=false dps plgd/plgd-dps ``` #### Turn On plgd Pods on the Primary Cluster @@ -696,7 +693,6 @@ When the standby cluster is ready for devices, switch back to the primary cluste ```bash helm upgrade -i -n plgd --create-namespace -f values.yaml --set global.standby=false --set nats.enabled=true hub plgd/plgd-hub -helm upgrade -i -n plgd --create-namespace -f values.yaml --set global.standby=false --set nats.enabled=true dps plgd/plgd-dps ``` After rotating the device provisioning endpoints, the devices will connect to the primary cluster.