This module implements the RDK vision API in a `viam-labs:vision:yolov8` model.
This model leverages the Ultralytics inference library to allow for object detection and classification from YOLOv8 models.
Both locally deployed YOLOv8 models and models from web sources like HuggingFace can be used (HuggingFace models will be downloaded and used locally).
To use this module, follow these instructions to add a module from the Viam Registry and select the `viam-labs:vision:yolov8` model from the viam-labs YOLOv8 module.
> **Note:** Before configuring your vision service, you must create a machine.
Navigate to the Config tab of your robot's page in the Viam app. Click the Services subtab and click Create service. Select the `vision` type, then select the `viam-labs:vision:yolov8` model. Enter a name for your vision service and click Create.
On the new service panel, copy and paste the following attribute template into your vision service's attributes box:

```json
{
  "model_location": "<model_path>"
}
```
> **Note:** For more information, see Configure a Robot.
The following attributes are available for the `viam-labs:vision:yolov8` model:

| Name | Type | Inclusion | Description |
| ---- | ---- | --------- | ----------- |
| `model_location` | string | Required | Local path or HuggingFace model identifier |
HuggingFace model:

```json
{
  "model_location": "keremberke/yolov8n-hard-hat-detection"
}
```
Local YOLOv8 model:

```json
{
  "model_location": "/path/to/yolov8n.pt"
}
```
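Putting the attributes together, a complete service entry in the machine's JSON configuration might look like the following sketch. The service name `yolo` and camera name `cam` are illustrative placeholders, not values required by this module:

```json
{
  "name": "yolo",
  "type": "vision",
  "namespace": "rdk",
  "model": "viam-labs:vision:yolov8",
  "attributes": {
    "model_location": "keremberke/yolov8n-hard-hat-detection"
  },
  "depends_on": ["cam"]
}
```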
The YOLOv8 resource provides the following methods from Viam's built-in `rdk:service:vision` API:
Note: if using this method, any cameras you are using must be set in the `depends_on` array for the service configuration, for example:

```json
"depends_on": [
  "cam"
]
```