From 91f240bd5e55bd7a81dadb261697654c2dfe0d2f Mon Sep 17 00:00:00 2001
From: Hanxue Zhang <75412366+jjxjiaxue@users.noreply.github.com>
Date: Fri, 1 Sep 2023 14:37:12 +0800
Subject: [PATCH] Update getting_started.md

---
 docs/getting_started.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/getting_started.md b/docs/getting_started.md
index eb92767..9f22e1c 100644
--- a/docs/getting_started.md
+++ b/docs/getting_started.md
@@ -77,7 +77,7 @@ train.json
 - `q` and `a` are python list, with each element a string of either `question` or `answer`.
 - The `description` under `Perception` is a mapping between `c tag` (i.e. `<c,CAM_XXX,x,y>`) and its textual description of visual appearance.
-
+Note: The `c tag` label marks key objects selected during annotation that are meaningful for the self-driving of the ego vehicle. These include not only objects present in the ground truth but also objects absent from it, such as landmarks and traffic lights. Each key frame contains between three and six key objects. A `c tag` is organized as `<c,CAM_XXX,x,y>`, giving the object index, the corresponding camera (CAM_XXX), the x-coordinate, and the y-coordinate. The x and y coordinates refer to positions on the stitched image obtained by combining the outputs of the six cameras; the resulting stitched image has dimensions of 2880 x 1040.
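
As a rough illustration of the tag layout described in the added note, below is a minimal Python sketch that splits a `c tag` into its index, camera, and coordinates and sanity-checks the coordinates against the 2880 x 1040 stitched-image bounds. The concrete tag value (`<c1,CAM_FRONT,1075.5,382.8>`), the exact delimiter layout, and the regular expression are illustrative assumptions, not part of the dataset specification.

```python
import re

# Regex for the tag layout described in the note: index, camera name, x, y.
# The exact spelling (e.g. "c1" vs. "1", no spaces after commas) is an assumption.
TAG_PATTERN = re.compile(r"<(c\d+),(CAM_[A-Z_]+),([\d.]+),([\d.]+)>")

# Dimensions of the six-camera stitched image, per the note above.
STITCHED_WIDTH, STITCHED_HEIGHT = 2880, 1040


def parse_c_tag(tag: str) -> tuple[str, str, float, float]:
    """Split a c tag into (index, camera, x, y) and sanity-check the coordinates."""
    match = TAG_PATTERN.fullmatch(tag.strip())
    if match is None:
        raise ValueError(f"unrecognized c tag: {tag!r}")
    index, camera = match.group(1), match.group(2)
    x, y = float(match.group(3)), float(match.group(4))
    if not (0 <= x <= STITCHED_WIDTH and 0 <= y <= STITCHED_HEIGHT):
        raise ValueError(f"({x}, {y}) lies outside the {STITCHED_WIDTH}x{STITCHED_HEIGHT} stitched image")
    return index, camera, x, y


if __name__ == "__main__":
    # Hypothetical tag value, used only to exercise the parser.
    print(parse_c_tag("<c1,CAM_FRONT,1075.5,382.8>"))
```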