Hi, I am trying to use Depth-Anything-V2 with a mono camera to emulate an RGB-D image. I have managed to create the nodes and produce a neat depth image, but I have a problem with rtabmap reconstructing the point cloud: it produces an erratic cone-shaped 3D structure that starts at the middle of the frame and expands deeper toward the edges.
I wonder if anyone can help me point out what I missed.
Preliminary guess: should the depth information be in meters? I've used the full range of 16UC1 (0–65535). I've now realized there's a metric model for Depth-Anything-V2 as well, so I'm going to try to refactor my code with it.
If you use 16UC1, it should be in mm. If you use 32FC1, it should be in meters. How is the depth scale estimated? If the scale is not fixed between images while the camera is moving, visual odometry will fail.
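For reference, a minimal sketch of the two encodings, assuming `depth_m` is the HxW metric depth prediction in meters (the function and variable names here are illustrative, not taken from the attached node):

```python
import numpy as np
from cv_bridge import CvBridge

bridge = CvBridge()

def depth_to_msgs(depth_m, header):
    """Encode an HxW float depth map (meters) as 32FC1 and 16UC1 images."""
    # 32FC1: values directly in meters
    msg_m = bridge.cv2_to_imgmsg(depth_m.astype(np.float32), encoding="32FC1")
    msg_m.header = header

    # 16UC1: values in millimeters, clipped to the uint16 range
    depth_mm = np.clip(depth_m * 1000.0, 0.0, 65535.0).astype(np.uint16)
    msg_mm = bridge.cv2_to_imgmsg(depth_mm, encoding="16UC1")
    msg_mm.header = header
    return msg_m, msg_mm
```

Note that stretching a relative-depth output to the full 0–65535 range per frame changes the effective scale on every image, which is exactly the inconsistency mentioned above; the metric model avoids that.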
Launcher file
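The original launch file isn't reproduced here. As a point of reference, a hypothetical ROS 2 launch sketch wiring a depth node like the one below to rtabmap might look as follows; the package/executable names assume the Humble-era rtabmap_ros split and the topic names are assumptions, so adjust them to your setup:

```python
from launch import LaunchDescription
from launch_ros.actions import Node

def generate_launch_description():
    # Assumed topic names; remap to whatever the camera driver and depth node publish.
    remappings = [('rgb/image', '/camera/image_raw'),
                  ('rgb/camera_info', '/camera/camera_info'),
                  ('depth/image', '/camera/depth/image_raw')]
    return LaunchDescription([
        # Depth-Anything-V2 node (hypothetical package/executable names)
        Node(package='depth_anything_ros', executable='depth_anything_node',
             output='screen'),
        # Visual odometry and SLAM from rtabmap_ros
        Node(package='rtabmap_odom', executable='rgbd_odometry', output='screen',
             parameters=[{'frame_id': 'camera_link', 'approx_sync': True}],
             remappings=remappings),
        Node(package='rtabmap_slam', executable='rtabmap', output='screen',
             parameters=[{'frame_id': 'camera_link',
                          'subscribe_depth': True,
                          'approx_sync': True}],
             remappings=remappings,
             arguments=['-d']),  # delete the database on start
    ])
```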
Python Code: depth_anything_node.py
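Likewise, the attached depth_anything_node.py isn't shown here. A minimal sketch of such a node, assuming ROS 2 (rclpy) and the metric Depth-Anything-V2 checkpoint (model loading left as a stub; `infer_image` follows the upstream repo's example usage), could look like this:

```python
#!/usr/bin/env python3
# Hypothetical minimal version of a node like depth_anything_node.py,
# assuming ROS 2 (rclpy) and the *metric* Depth-Anything-V2 model.
# All names and topics are illustrative, not the author's actual code.
import numpy as np
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from cv_bridge import CvBridge


class DepthAnythingNode(Node):
    def __init__(self):
        super().__init__('depth_anything_node')
        self.bridge = CvBridge()
        self.model = self.load_model()
        self.pub = self.create_publisher(Image, 'camera/depth/image_raw', 10)
        self.sub = self.create_subscription(Image, 'camera/image_raw',
                                            self.on_image, 10)

    def load_model(self):
        # Stub: load the metric Depth-Anything-V2 checkpoint here (e.g. with torch)
        # and return an object whose infer_image(bgr) yields an HxW float depth
        # map in meters, as in the upstream repo's example usage.
        raise NotImplementedError

    def on_image(self, msg: Image):
        bgr = self.bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
        depth_m = self.model.infer_image(bgr)  # HxW float, meters (metric model)

        # 16UC1 must be in millimeters; clip to the uint16 range.
        depth_mm = np.clip(depth_m * 1000.0, 0.0, 65535.0).astype(np.uint16)
        out = self.bridge.cv2_to_imgmsg(depth_mm, encoding='16UC1')
        # Reuse the RGB header (stamp + frame_id) so rtabmap can pair RGB and depth.
        out.header = msg.header
        self.pub.publish(out)


def main():
    rclpy.init()
    rclpy.spin(DepthAnythingNode())


if __name__ == '__main__':
    main()
```

The details that matter most for rtabmap are the unit (millimeters for 16UC1, meters for 32FC1) and reusing the RGB header so the depth image shares the same timestamp and camera frame.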