fixup! Update device provisioning service documentation
Danielius1922 committed Sep 18, 2024
1 parent 814c82d commit 7b335ad
Showing 5 changed files with 9 additions and 41 deletions.
1 change: 0 additions & 1 deletion content/en/docs/deployment/deployment.md
@@ -37,7 +37,6 @@ plgd https://charts.plgd.dev

// helm search repo plgd
NAME            CHART VERSION   APP VERSION   DESCRIPTION
plgd/plgd-dps   0.9.0           0.9.0         A Helm chart for plgd device provisioning service
plgd/plgd-hub   2.7.15          2.7.15        A Helm chart for plgd-hub

```
@@ -21,7 +21,7 @@ The basic deployment uses Mock OAuth Server, so it shall be used only for test/d
Before deploying the Device Provisioning Service on Kubernetes, make sure to follow the steps in [Hub](/docs/deployment/device-provisioning-service/hub) first. Then apply the changes from this page to the configuration. Once done, you can deploy the hub with the Device Provisioning Service.
{{< /note >}}

For Device Provisioning Service, all configuration values are documented [here](https://github.com/plgd-dev/device-provisioning-service/blob/main/charts/device-provisioning-service/README.md#values).
For Device Provisioning Service, all configuration values are documented [here](https://github.com/plgd-dev/hub/blob/main/charts/plgd-hub/README.md#values). Look for values starting with `deviceProvisioningService`.
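As a minimal sketch of such overrides (the keys shown are taken from the deployment examples further below; the domain is a placeholder, and the linked README remains the authoritative reference):

```yaml
# Minimal sketch of Device Provisioning Service overrides in the plgd-hub chart.
# "example.com" is a placeholder domain; consult the chart README for the full
# list of supported values.
deviceProvisioningService:
  enabled: true
  apiDomain: "example.com"
  service:
    type: NodePort
```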

## Device provider for Device Provisioning Service

@@ -61,29 +61,6 @@ mockoauthserver:
For production, you need to set up the OAuth server client credential flow, as described in [Customize OAuth server client credential flow](/docs/deployment/device-provisioning-service/advanced).
{{< /warning >}}
To allow Kubernetes to pull the Device Provisioning Service Docker image, extend the configuration with the following:
```yaml
deviceProvisioningService:
image:
dockerConfigSecret: |
{
"auths": {
"ghcr.io": {
"auth": "<DOCKER_AUTH_TOKEN>"
}
}
}
```
{{< note >}}
To access ghcr.io, please reach out to us at [[email protected]](mailto:[email protected]) to request access to the plgd Device Provisioning Service images for your GitHub account. Refer to the [documentation](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/) for instructions on pulling images from a private registry.
{{< /note >}}
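The `auth` field in `dockerConfigSecret` is the standard Docker registry credential, i.e. the base64 encoding of `<username>:<token>`. A sketch, assuming a GitHub personal access token with the `read:packages` scope (both placeholders below are hypothetical):

```sh
# Generate the value for the "auth" field of dockerConfigSecret.
# <GITHUB_USERNAME> and <GITHUB_TOKEN> are placeholders for your GitHub account
# and a personal access token with the read:packages scope.
echo -n "<GITHUB_USERNAME>:<GITHUB_TOKEN>" | base64
```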
## Configure Enrollment Groups
Enrollment groups can be configured via the deployment, reusing the hub configuration to populate their values.
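A minimal sketch of such an override (only the `attestationMechanism.x509.certificateChain` key is taken from the examples later on this page; fields not shown are assumed to be populated from the hub configuration):

```yaml
# Sketch of a single enrollment group whose devices attest with certificates
# signed by the given intermediate CA. The certificate content below is a
# placeholder; the remaining enrollment-group fields are left to the values
# derived from the hub configuration.
deviceProvisioningService:
  enrollmentGroups:
    - attestationMechanism:
        x509:
          certificateChain: |-
            -----BEGIN CERTIFICATE-----
            ...intermediate CA certificate...
            -----END CERTIFICATE-----
```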
@@ -155,14 +132,12 @@ To deploy the hub with the Device Provisioning Service, apply the following comm

```sh
helm upgrade -i -n plgd --create-namespace -f withMock.yaml hub plgd/plgd-hub
helm upgrade -i -n plgd --create-namespace -f withMock.yaml dps plgd/plgd-dps
```

You can execute these commands multiple times to update the configuration. In such cases, you will need to restart the pods by deleting them:

```sh
kubectl -n plgd delete $(kubectl -n plgd get pods -o name | grep "hub-plgd")
kubectl -n plgd delete $(kubectl -n plgd get pods -o name | grep "dps-plgd")
```

## Final configuration with Mock OAuth Server
@@ -27,14 +27,12 @@ If you encounter issues with the Device Provisioning Service or Hub, follow thes
```sh
helm upgrade -i -n plgd --create-namespace -f ./withUpdatedMock.yaml hub plgd/plgd-hub
helm upgrade -i -n plgd --create-namespace -f ./withUpdatedMock.yaml dps plgd/plgd-dps
```

3. Restart the pods by deleting them:

```sh
kubectl -n plgd delete $(kubectl -n plgd get pods -o name | grep "hub-plgd")
kubectl -n plgd delete $(kubectl -n plgd get pods -o name | grep "dps-plgd")
```

These steps will enable debug logging and restart the necessary components, providing more detailed information for troubleshooting the issues with the Device Provisioning Service or Hub.
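As an illustration only, the debug override in `withUpdatedMock.yaml` could look like the sketch below; it assumes the charts expose a per-service `log.level` value, so verify the exact key names in the chart README:

```yaml
# Hypothetical debug-logging overrides; the log.level keys are assumptions and
# must be checked against the chart's documented values.
deviceProvisioningService:
  log:
    level: debug
coapgateway:
  log:
    level: debug
```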
@@ -46,7 +44,7 @@ If your device is unable to connect to DPS, follow these steps:
1. Check the DPS logs by running the following command:

```sh
kubectl -n plgd logs $(kubectl -n plgd get pods -o name | grep "dps-plgd")
kubectl -n plgd logs $(kubectl -n plgd get pods -o name | grep "plgd-hub-device-provisioning-service")
```

2. Check the device logs in the console.
@@ -75,8 +75,8 @@ These steps will enable you to generate the necessary certificates and configure
```sh
cd "$HOME"
cat ./withMock.yaml | yq -e ".deviceProvisioningService.enrollmentGroups[0].attestationMechanism.x509.certificateChain=\"$(cat ./plgd_certs/intermediate_ca.crt)\"" > ./withUpdatedMock.yaml
helm upgrade -i -n plgd --create-namespace -f withUpdatedMock.yaml dps plgd/plgd-dps
kubectl -n plgd delete $(kubectl -n plgd get pods -o name | grep "dps-plgd")
helm upgrade -i -n plgd --create-namespace -f withUpdatedMock.yaml hub plgd/plgd-hub
kubectl -n plgd delete $(kubectl -n plgd get pods -o name | grep "hub-plgd")
```

Now, you can test the Device Provisioning Service with the following methods depending on the network trust level:
@@ -94,7 +94,7 @@ To set up a Zero Trust network, it is essential for the device to authenticate t
2. Run the example device with the device manufacturer certificate (IDevId):

```sh
docker run -it --rm -v $HOME/plgd_certs/device/pki_certs:/dps/bin/pki_certs ghcr.io/plgd-dev/device-provisioning-client/dps-cloud-server-debug:latest test-device "coaps+tcp://example.com:15684"
docker run -it --rm -v $HOME/plgd_certs/device/pki_certs:/dps/pki_certs ghcr.io/iotivity/iotivity-lite/dps-cloud-server-debug:latest test-device "coaps+tcp://example.com:15684"
```

{{< note >}}
@@ -122,7 +122,7 @@ In Trusted network device can skip validation of the Device Provisioning Service
1. Run the example device with the device manufacturer certificate (IDevId):

```sh
docker run -it --rm -v $HOME/plgd_certs/device/pki_certs:/dps/bin/pki_certs ghcr.io/plgd-dev/device-provisioning-client/dps-cloud-server-debug:latest test-device "coaps+tcp://example.com:15684" --no-verify-ca
docker run -it --rm -v $HOME/plgd_certs/device/pki_certs:/dps/pki_certs ghcr.io/iotivity/iotivity-lite/dps-cloud-server-debug:latest test-device "coaps+tcp://example.com:15684" --no-verify-ca
```

{{< warning >}}
10 changes: 3 additions & 7 deletions content/en/docs/tutorials/disaster-recovery-replica-set.md
@@ -291,7 +291,7 @@ Ensure that you have cert-manager installed on the standby cluster as well.

The primary cluster will deploy the Hub with all APIs exposed on the `primary.plgd.cloud` domain. The CoAP gateway listens on NodePort `15684`, and the device provisioning service listens on NodePort `5684`. The MongoDB replica set is exposed via a LoadBalancer service type, requiring a client certificate (mTLS) to connect to MongoDB.

To deploy the plgd-hub and plgd-dps Helm charts on the primary cluster, use the following Helm command:
To deploy the plgd-hub Helm charts on the primary cluster, use the following Helm command:

```bash
# Set variables
@@ -387,6 +387,7 @@ resourcedirectory:
publicConfiguration:
coapGateway: "coaps+tcp://$DOMAIN:15684"
deviceProvisioningService:
enabled: true
apiDomain: "$DOMAIN"
service:
type: NodePort
@@ -419,7 +420,6 @@ $(sed 's/^/ /' $MANUFACTURER_CERTIFICATE_CA)
audience: "https://$DOMAIN"
EOF
helm upgrade -i -n plgd --create-namespace -f values.yaml hub plgd/plgd-hub
helm upgrade -i -n plgd --create-namespace -f values.yaml dps plgd/plgd-dps
```

Now we need to get the IP addresses of the MongoDB members and add them to the DNS. The external IP address of the LoadBalancer is used to connect to the MongoDB replica set from the other cluster.
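A sketch of how to read those addresses back with standard `kubectl` (the `grep` filter assumes the MongoDB member services carry `mongodb` in their names):

```sh
# List services in the plgd namespace and show their external IPs;
# filter to the MongoDB member services.
kubectl -n plgd get services -o wide | grep mongodb
```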
@@ -556,6 +556,7 @@ resourcedirectory:
publicConfiguration:
coapGateway: "coaps+tcp://$DOMAIN:15684"
deviceProvisioningService:
enabled: true
apiDomain: "$DOMAIN"
service:
type: NodePort
@@ -588,7 +589,6 @@ $(sed 's/^/ /' $MANUFACTURER_CERTIFICATE_CA)
audience: "https://$DOMAIN"
EOF
helm upgrade -i -n plgd --create-namespace -f values.yaml hub plgd/plgd-hub
helm upgrade -i -n plgd --create-namespace -f values.yaml dps plgd/plgd-dps
```

Next, we need to get the IP addresses of the MongoDB members and add them to the DNS server running on `192.168.1.1`, similar to the primary cluster.
@@ -650,7 +650,6 @@ The final step is to run plgd pods on the standby cluster. Set the `global.stand

```bash
helm upgrade -i -n plgd --create-namespace -f values.yaml --set mongodb.standbyTool.mode=active --set global.standby=false --set nats.enabled=true hub plgd/plgd-hub
helm upgrade -i -n plgd --create-namespace -f values.yaml --set mongodb.standbyTool.mode=active --set global.standby=false --set nats.enabled=true dps plgd/plgd-dps
```

After rotating the device provisioning endpoints, the devices will connect to the standby cluster.
@@ -661,7 +660,6 @@ When the primary cluster is back up, set the `global.standby` flag to `true`, di

```bash
helm upgrade -i -n plgd --create-namespace -f values.yaml --set global.standby=true --set nats.enabled=false hub plgd/plgd-hub
helm upgrade -i -n plgd --create-namespace -f values.yaml --set global.standby=true --set nats.enabled=false dps plgd/plgd-dps
```

### How to Switch Back to the Primary Cluster
@@ -687,7 +685,6 @@ The final step is to run plgd pods on the standby cluster. Set the `global.stand

```bash
helm upgrade -i -n plgd --create-namespace -f values.yaml --set mongodb.standbyTool.mode=standby --set global.standby=true --set nats.enabled=false hub plgd/plgd-hub
helm upgrade -i -n plgd --create-namespace -f values.yaml --set mongodb.standbyTool.mode=standby --set global.standby=true --set nats.enabled=false dps plgd/plgd-dps
```

#### Turn On plgd Pods on the Primary Cluster
@@ -696,7 +693,6 @@ When the standby cluster is ready for devices, switch back to the primary cluste

```bash
helm upgrade -i -n plgd --create-namespace -f values.yaml --set global.standby=false --set nats.enabled=true hub plgd/plgd-hub
helm upgrade -i -n plgd --create-namespace -f values.yaml --set global.standby=false --set nats.enabled=true dps plgd/plgd-dps
```

After rotating the device provisioning endpoints, the devices will connect to the primary cluster.
