Hi everyone,
I am trying to use LiDAR-camera fusion for object detection when using the simulators (both CARLA and AWSIM). Right now I am focusing on the Autoware-CARLA interface. I launched the Autoware simulator interface with the following command:
ros2 launch autoware_launch e2e_simulator.launch.xml map_path:=$HOME/autoware_map/Town01 vehicle_model:=sample_vehicle sensor_model:=awsim_sensor_kit simulator_type:=carla carla_map:=Town01 perception_mode:=camera_lidar_fusion
Since I believe both AWSIM and the carla_ros_bridge only publish raw camera data to /sensing/camera/traffic_light/image_raw (for traffic light detection, not object detection), I modified part of the code here:
https://github.com/autowarefoundation/autoware.universe/blob/main/simulator/autoware_carla_interface/src/autoware_carla_interface/carla_ros.py#L137
to publish camera data to /sensing/camera/camera0/image_rect_color so that YOLOX can receive the sensor data.
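In effect, the change just routes the same image onto the topic the object-detection pipeline listens to. A standalone equivalent would look roughly like the sketch below; this is illustrative only (the actual code inside carla_ros.py is structured differently, and in my case I simply changed the topic string there), and note that it forwards the raw image unchanged, without any rectification:

```python
# Illustrative sketch only -- not the actual carla_ros.py code.
# Mirrors the default traffic-light camera topic onto the topic the
# object-detection (YOLOX) pipeline subscribes to. The raw image is
# forwarded unchanged, so no rectification is applied.
import rclpy
from rclpy.node import Node
from rclpy.qos import qos_profile_sensor_data
from sensor_msgs.msg import Image


class CameraRepublisher(Node):
    def __init__(self):
        super().__init__("camera_republisher")
        # Topic the object-detection pipeline expects; camera topics in
        # Autoware typically use sensor-data (best-effort) QoS.
        self.pub = self.create_publisher(
            Image,
            "/sensing/camera/camera0/image_rect_color",
            qos_profile_sensor_data,
        )
        # Topic the CARLA interface publishes by default; every message
        # is forwarded as-is.
        self.create_subscription(
            Image,
            "/sensing/camera/traffic_light/image_raw",
            self.pub.publish,
            qos_profile_sensor_data,
        )


def main():
    rclpy.init()
    rclpy.spin(CameraRepublisher())


if __name__ == "__main__":
    main()
```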
I also noticed that YOLOX is not launched by default, so I launched it with the following command:
ros2 launch autoware_tensorrt_yolox yolox.launch.xml
I verified that objects are properly labeled (a traffic cone I spawned) when I visualized the topic /tensorrt_yolox/out/image.
It looks like the topic /perception/object_recognition/detection/rois0, which I think is consumed by the perception module, is showing that the traffic cone is detected (as shown in the figure below).
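As an extra sanity check, one can subscribe to this topic directly with a minimal node like the sketch below. This is illustrative only; I am assuming the ROI topic carries tier4_perception_msgs/msg/DetectedObjectsWithFeature messages (which is what the YOLOX node appears to publish), so adjust the type if your setup differs:

```python
# Minimal sketch: confirm ROIs are actually arriving on rois0.
# Assumption: the topic type is
# tier4_perception_msgs/msg/DetectedObjectsWithFeature.
import rclpy
from rclpy.node import Node
from rclpy.qos import qos_profile_sensor_data
from tier4_perception_msgs.msg import DetectedObjectsWithFeature


class RoiChecker(Node):
    def __init__(self):
        super().__init__("roi_checker")
        self.create_subscription(
            DetectedObjectsWithFeature,
            "/perception/object_recognition/detection/rois0",
            self.on_rois,
            qos_profile_sensor_data,
        )

    def on_rois(self, msg):
        # Each feature_object carries a 2D ROI plus a classification.
        self.get_logger().info(f"received {len(msg.feature_objects)} ROI(s)")


def main():
    rclpy.init()
    rclpy.spin(RoiChecker())


if __name__ == "__main__":
    main()
```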
However, I am unsure what the next step is to properly use LiDAR-camera fusion, as the ego vehicle won't stop when the traffic cone is detected only by YOLOX and not by the LiDAR detection model. Are there any additional steps I need to take to properly enable sensor fusion?
Any guidance would be appreciated!