@plane-li In TIER IV, we use edge ECUs (ROS Cube) for the cameras and neural networks, and the results of the neural networks on the edge ECU are sent to the main Autoware ECU. @miursh @aohsato Can you share more details of our distributed perception system?
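To make the data flow concrete, here is a minimal sketch of an edge-ECU node in ROS 2 (rclpy): the camera frames stay local, and only the compact detection results are published toward the main ECU. The topic names and the `infer()` helper are hypothetical placeholders for illustration, not TIER IV's actual interfaces.

```python
# Minimal sketch: edge-ECU node that runs inference on camera frames and
# publishes only the detection results toward the main Autoware ECU.
# Topic names and infer() are hypothetical placeholders.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from autoware_auto_perception_msgs.msg import DetectedObjects


class EdgeCameraDetector(Node):
    def __init__(self):
        super().__init__('edge_camera_detector')
        # Camera frames arrive locally on the edge ECU.
        self.sub = self.create_subscription(
            Image, '/sensing/camera/front/image_raw', self.on_image, 1)
        # Only compact detection results cross the network to the main
        # ECU, keeping inter-ECU bandwidth low.
        self.pub = self.create_publisher(
            DetectedObjects, '/perception/camera/detected_objects', 1)

    def on_image(self, msg: Image) -> None:
        objects = self.infer(msg)  # run the CNN on the edge accelerator
        objects.header = msg.header
        self.pub.publish(objects)

    def infer(self, msg: Image) -> DetectedObjects:
        # Placeholder for the actual neural-network inference step.
        return DetectedObjects()


def main():
    rclpy.init()
    rclpy.spin(EdgeCameraDetector())


if __name__ == '__main__':
    main()
```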
-
The camera is an important sensor in Autoware: it detects objects with semantic information, lane markings, traffic lights, and traffic signs.
However, it needs more computing power, memory, and bandwidth than other modules. It would be more convenient and reasonable to put the camera detection pipeline into an independent launch file, which would allow users to deploy Autoware on a distributed hardware platform.
There are three main reasons:
1. Compared to LiDAR, which is used for both detection and localization, the camera is used only for detection and can easily be decoupled from the architecture.
2. Users of Autoware often change the number of cameras, the CNN model, and the visual detection algorithm. An independent pipeline makes it easier for users to do such research.
3. The hardware configurations of automated-driving companies/OEMs differ, and some of them select smart cameras such as Mobileye to handle visual detection. Accommodating this variety of hardware can help to commercialize Autoware.
The architecture we propose is that all the sub-modules, such as the camera driver, image processing, and the CNN, run on the same hardware with a hardware acceleration unit, started from their own independent launch file (see the sketch below). The perception module then receives the camera detection results for fusion.
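A sketch of what such a standalone launch file might look like with ROS 2 launch follows. Every package and executable name below is a hypothetical placeholder chosen to illustrate the grouping, not Autoware's actual node names.

```python
# Sketch of a standalone launch file for the camera detection pipeline.
# All package/executable names are hypothetical placeholders.
from launch import LaunchDescription
from launch_ros.actions import Node


def generate_launch_description():
    return LaunchDescription([
        # Camera driver: runs on the same hardware as the accelerator.
        Node(package='camera_driver', executable='camera_driver_node',
             name='camera_driver'),
        # Image preprocessing (rectification, resizing, etc.).
        Node(package='image_processing', executable='image_proc_node',
             name='image_proc'),
        # CNN-based detector using the local hardware acceleration unit.
        Node(package='camera_detector', executable='cnn_detector_node',
             name='cnn_detector',
             remappings=[('detected_objects',
                          '/perception/camera/detected_objects')]),
    ])
```

With this grouping, the main ECU only needs to subscribe to the detection-results topic for fusion, and the whole camera pipeline can be moved to different hardware, or replaced by a smart camera, without touching the rest of the stack.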