Hi, I saw on the SceneFlow dataset website that you mention a customized FlyingThings3D subset in which some extremely hard samples were omitted. What is the criterion for an "extremely hard sample"? And if the original FlyingThings3D set is used instead, what impact does that have on the network?
IIRC "extremely hard" samples have disparity magnitudes > 300 pixels (I am not sure whether there was also a flow criterion, but I do not think so). I think training on the original full set should still work well, maybe a bit slower.
I traversed the FlyingThings3D subset, and it still contains a few samples in which more than 25% of the pixels have disparity magnitudes > 300. This makes me more curious about the screening criterion.
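A check like the one described above can be sketched as follows. This is only an illustration, not the authors' actual filtering script: the exact rule used to build the official subset is not documented here, so the 300-pixel limit comes from the comment above, and the 25% fraction threshold (`max_fraction`) is an assumption for demonstration purposes.

```python
import numpy as np

# Threshold mentioned in the discussion above; whether the official
# filter used the absolute value or a per-pixel fraction is an assumption.
DISPARITY_LIMIT = 300.0

def is_extremely_hard(disparity, limit=DISPARITY_LIMIT, max_fraction=0.25):
    """Return True if more than `max_fraction` of the pixels in a
    disparity map exceed `limit` in magnitude.

    `disparity` is any array-like of per-pixel disparity values
    (e.g. loaded from a FlyingThings3D .pfm file).
    """
    disparity = np.asarray(disparity, dtype=np.float64)
    frac = np.mean(np.abs(disparity) > limit)
    return frac > max_fraction

# Example: a map where 30% of pixels exceed the limit would be flagged.
easy = np.full((10, 10), 50.0)            # all disparities well below 300
hard = np.full((10, 10), 50.0)
hard[:3, :] = 400.0                       # 30% of pixels above 300
```

Running `is_extremely_hard` over every disparity map in the dataset and counting the flagged samples would reproduce the traversal described above.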