BoundingValueHierarchy serialization data not portable #42
Comments
To dump the layout of btQuantizedBvh, I printed its size and the offset of each field on each platform. (A sketch of that kind of dumping code appears after the tables below.)

Single-precision natives

For linux64/debug/sp: btQuantizedBvh size is 248 bytes:
m_bvhAabbMin at +8 bytes
m_bvhAabbMax at +24 bytes
m_bvhQuantization at +40 bytes
m_bulletVersion at +56 bytes
m_curNodeIndex at +60 bytes
m_useQuantization at +64 bytes
m_leafNodes at +72 bytes
m_contiguousNodes at +104 bytes
m_quantizedLeafNodes at +136 bytes
m_quantizedContiguousNodes at +168 bytes
m_traversalMode at +200 bytes
m_SubtreeHeaders at +208 bytes
m_subtreeHeaderCount at +240 bytes

For macOSX64/debug/sp: btQuantizedBvh size is 256 bytes:
m_bvhAabbMin at +16 bytes
m_bvhAabbMax at +32 bytes
m_bvhQuantization at +48 bytes
m_bulletVersion at +64 bytes
m_curNodeIndex at +68 bytes
m_useQuantization at +72 bytes
m_leafNodes at +80 bytes
m_contiguousNodes at +112 bytes
m_quantizedLeafNodes at +144 bytes
m_quantizedContiguousNodes at +176 bytes
m_traversalMode at +208 bytes
m_SubtreeHeaders at +216 bytes
m_subtreeHeaderCount at +248 bytes

For macOSX_ARM64/debug/sp: btQuantizedBvh size is 256 bytes:
m_bvhAabbMin at +16 bytes
m_bvhAabbMax at +32 bytes
m_bvhQuantization at +48 bytes
m_bulletVersion at +64 bytes
m_curNodeIndex at +68 bytes
m_useQuantization at +72 bytes
m_leafNodes at +80 bytes
m_contiguousNodes at +112 bytes
m_quantizedLeafNodes at +144 bytes
m_quantizedContiguousNodes at +176 bytes
m_traversalMode at +208 bytes
m_SubtreeHeaders at +216 bytes
m_subtreeHeaderCount at +248 bytes

For windows64\debug\sp: btQuantizedBvh size is 256 bytes:
m_bvhAabbMin at +16 bytes
m_bvhAabbMax at +32 bytes
m_bvhQuantization at +48 bytes
m_bulletVersion at +64 bytes
m_curNodeIndex at +68 bytes
m_useQuantization at +72 bytes
m_leafNodes at +80 bytes
m_contiguousNodes at +112 bytes
m_quantizedLeafNodes at +144 bytes
m_quantizedContiguousNodes at +176 bytes
m_traversalMode at +208 bytes
m_SubtreeHeaders at +216 bytes
m_subtreeHeaderCount at +248 bytes

Double-precision natives

For linux64/debug/dp: btQuantizedBvh size is 296 bytes:
m_bvhAabbMin at +8 bytes
m_bvhAabbMax at +40 bytes
m_bvhQuantization at +72 bytes
m_bulletVersion at +104 bytes
m_curNodeIndex at +108 bytes
m_useQuantization at +112 bytes
m_leafNodes at +120 bytes
m_contiguousNodes at +152 bytes
m_quantizedLeafNodes at +184 bytes
m_quantizedContiguousNodes at +216 bytes
m_traversalMode at +248 bytes
m_SubtreeHeaders at +256 bytes
m_subtreeHeaderCount at +288 bytes

For macOSX64/debug/dp: btQuantizedBvh size is 296 bytes:
m_bvhAabbMin at +8 bytes
m_bvhAabbMax at +40 bytes
m_bvhQuantization at +72 bytes
m_bulletVersion at +104 bytes
m_curNodeIndex at +108 bytes
m_useQuantization at +112 bytes
m_leafNodes at +120 bytes
m_contiguousNodes at +152 bytes
m_quantizedLeafNodes at +184 bytes
m_quantizedContiguousNodes at +216 bytes
m_traversalMode at +248 bytes
m_SubtreeHeaders at +256 bytes
m_subtreeHeaderCount at +288 bytes

For macOSX_ARM64/debug/dp: btQuantizedBvh size is 296 bytes:
m_bvhAabbMin at +8 bytes
m_bvhAabbMax at +40 bytes
m_bvhQuantization at +72 bytes
m_bulletVersion at +104 bytes
m_curNodeIndex at +108 bytes
m_useQuantization at +112 bytes
m_leafNodes at +120 bytes
m_contiguousNodes at +152 bytes
m_quantizedLeafNodes at +184 bytes
m_quantizedContiguousNodes at +216 bytes
m_traversalMode at +248 bytes
m_SubtreeHeaders at +256 bytes
m_subtreeHeaderCount at +288 bytes

For windows64\debug\dp: btQuantizedBvh size is 304 bytes:
m_bvhAabbMin at +16 bytes
m_bvhAabbMax at +48 bytes
m_bvhQuantization at +80 bytes
m_bulletVersion at +112 bytes
m_curNodeIndex at +116 bytes
m_useQuantization at +120 bytes
m_leafNodes at +128 bytes
m_contiguousNodes at +160 bytes
m_quantizedLeafNodes at +192 bytes
m_quantizedContiguousNodes at +224 bytes
m_traversalMode at +256 bytes
m_SubtreeHeaders at +264 bytes
m_subtreeHeaderCount at +296 bytes

The key difference: on Linux the first field starts at +8, while on Windows it starts at +16. macOS appears inconsistent: +16 for single precision, but +8 for double precision.
I'd love to know why. Since the correct amount of padding seems to be "it's complicated", I should also test Linux-on-Arm, both 32-bit and 64-bit. Unfortunately, I don't have access to 32-bit platforms (other than Linux-on-Arm) or to Android.
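For reference, here is a minimal sketch of the kind of layout dump shown above. It uses a stand-in struct rather than the real btQuantizedBvh (whose fields are protected), so the field names and types are illustrative only:

```cpp
#include <cstddef>
#include <cstdio>

// Stand-in with a layout loosely resembling the start of btQuantizedBvh;
// the real dump presumably instruments Bullet's own header.
struct LayoutProbe {
    void* vtablePlaceholder;    // a polymorphic class carries a vptr first
    float bvhAabbMin[4];
    float bvhAabbMax[4];
    float bvhQuantization[4];
    int bulletVersion;
    int curNodeIndex;
    bool useQuantization;
};

#define DUMP_OFFSET(type, field) \
    std::printf("  %s at +%zu bytes\n", #field, offsetof(type, field))

int main() {
    std::printf("LayoutProbe size is %zu bytes:\n", sizeof(LayoutProbe));
    DUMP_OFFSET(LayoutProbe, bvhAabbMin);
    DUMP_OFFSET(LayoutProbe, bvhAabbMax);
    DUMP_OFFSET(LayoutProbe, bvhQuantization);
    DUMP_OFFSET(LayoutProbe, bulletVersion);
    DUMP_OFFSET(LayoutProbe, curNodeIndex);
    DUMP_OFFSET(LayoutProbe, useQuantization);
    return 0;
}
```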
I hacked together a Clang build, and it matched GCC perfectly:

For linux64/debug/sp: btQuantizedBvh size is 248 bytes:
m_bvhAabbMin at +8 bytes
m_bvhAabbMax at +24 bytes
m_bvhQuantization at +40 bytes
m_bulletVersion at +56 bytes
m_curNodeIndex at +60 bytes
m_useQuantization at +64 bytes
m_leafNodes at +72 bytes
m_contiguousNodes at +104 bytes
m_quantizedLeafNodes at +136 bytes
m_quantizedContiguousNodes at +168 bytes
m_traversalMode at +200 bytes
m_SubtreeHeaders at +208 bytes
m_subtreeHeaderCount at +240 bytes

For linux64/debug/dp:

Debug_Dp_Libbulletjme version 21.0.0 initializing
btQuantizedBvh size is 296 bytes:
m_bvhAabbMin at +8 bytes
m_bvhAabbMax at +40 bytes
m_bvhQuantization at +72 bytes
m_bulletVersion at +104 bytes
m_curNodeIndex at +108 bytes
m_useQuantization at +112 bytes
m_leafNodes at +120 bytes
m_contiguousNodes at +152 bytes
m_quantizedLeafNodes at +184 bytes
m_quantizedContiguousNodes at +216 bytes
m_traversalMode at +248 bytes
m_SubtreeHeaders at +256 bytes
m_subtreeHeaderCount at +288 bytes
I built Libbulletjme on Raspbian with GCC 8 (hash 0fb79b5):

For linux_ARM32/debug/sp: btQuantizedBvh size is 180 bytes:
m_bvhAabbMin at +12 bytes
m_bvhAabbMax at +28 bytes
m_bvhQuantization at +44 bytes
m_bulletVersion at +60 bytes
m_curNodeIndex at +64 bytes
m_useQuantization at +68 bytes
m_leafNodes at +72 bytes
m_contiguousNodes at +92 bytes
m_quantizedLeafNodes at +112 bytes
m_quantizedContiguousNodes at +132 bytes
m_traversalMode at +152 bytes
m_SubtreeHeaders at +156 bytes
m_subtreeHeaderCount at +176 bytes

For linux_ARM32/debug/dp: btQuantizedBvh size is 232 bytes:
m_bvhAabbMin at +16 bytes
m_bvhAabbMax at +48 bytes
m_bvhQuantization at +80 bytes
m_bulletVersion at +112 bytes
m_curNodeIndex at +116 bytes
m_useQuantization at +120 bytes
m_leafNodes at +124 bytes
m_contiguousNodes at +144 bytes
m_quantizedLeafNodes at +164 bytes
m_quantizedContiguousNodes at +184 bytes
m_traversalMode at +204 bytes
m_SubtreeHeaders at +208 bytes
m_subtreeHeaderCount at +228 bytes
If the goal is robust portability, however, Libbulletjme's total reliance on bvh.serialize() is a problem.

Conclusion: Minie needs access to Bullet's "new" serialization system, the one that uses DNA.
I've been struggling to grok the examples in Bullet's "Extras/Serialize" folder. My first serialization of the test model using the new system ...
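For context, the world-level usage pattern of that system (as documented in the Bullet manual) looks roughly like the sketch below; the helper functions saveWorld/loadWorld are illustrative only, and whether a BVH serialized this way round-trips portably is exactly what's being investigated here:

```cpp
#include "btBulletDynamicsCommon.h"
// From Bullet's Extras/Serialize tree:
#include "BulletWorldImporter/btBulletWorldImporter.h"
#include <cstdio>

// Write an entire dynamics world to a .bullet file using the DNA-based
// serializer, then read it back with btBulletWorldImporter.
void saveWorld(btDynamicsWorld* world, const char* path) {
    const int maxBufferSize = 1024 * 1024 * 5; // generous upper bound
    btDefaultSerializer serializer(maxBufferSize);
    world->serialize(&serializer);

    if (FILE* file = std::fopen(path, "wb")) {
        std::fwrite(serializer.getBufferPointer(), 1,
                    serializer.getCurrentBufferSize(), file);
        std::fclose(file);
    }
}

void loadWorld(btDynamicsWorld* world, const char* path) {
    btBulletWorldImporter importer(world);
    importer.loadFile(path); // recreates collision shapes, bodies, etc.
}
```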
Bullet's DNA is essential to porting Bullet-serialized data structures between platforms. It encodes the field offsets of 80+ classes and structs, including 6 defined in "btQuantizedBvh.h".

I'm beginning to question the benefits of serializing and deserializing the BVH of a mesh shape at all. If it is beneficial, then the current approach (re-generating the BVH during load when the platforms don't match) seems optimal for applications that run on one platform only. For apps that benefit from BVH serialization and run on multiple platforms, I see several possible approaches.
I made BVH serialization optional at 720c1ec.
This issue dates back to Minie v0.7.2, when a workaround was implemented that discards serialized BVH data when the writer and reader are on different platforms.
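A minimal sketch of that kind of guard, purely for illustration (the names below are hypothetical, not Minie's actual API):

```cpp
#include <cstring>

// Hypothetical record of what was saved alongside the mesh data.
struct SavedBvhBlock {
    const char* writerPlatform; // e.g. "linux64/sp", recorded at save time
    const unsigned char* bytes; // raw serialized BVH, valid only on writerPlatform
    int numBytes;
};

// Reuse the serialized BVH only when the reading platform matches the
// writing platform exactly; otherwise the caller rebuilds the BVH from
// the mesh's triangle data.
bool canReuseSerializedBvh(const SavedBvhBlock& saved, const char* readerPlatform) {
    return saved.bytes != nullptr
        && std::strcmp(saved.writerPlatform, readerPlatform) == 0;
}
```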
The root cause is in Bullet's "btQuantizedBvh.h", where the size of class btQuantizedBvh varies between platforms. On Linux platforms (GCC compiler, on both x86_64 and arm64), the following assertions pass:
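Size checks of roughly this form would match the figures reported in the comments above (248 bytes single-precision, 296 bytes double-precision on Linux); this is a sketch, not the exact assertions from the diagnostic code:

```cpp
#include "BulletCollision/BroadphaseCollision/btQuantizedBvh.h"

// Sketch only: the 248/296-byte figures come from the Linux (GCC/Clang)
// dumps above and are NOT portable constants -- that is the whole problem.
#ifdef BT_USE_DOUBLE_PRECISION
static_assert(sizeof(btQuantizedBvh) == 296,
              "btQuantizedBvh layout differs from the Linux dp layout");
#else
static_assert(sizeof(btQuantizedBvh) == 248,
              "btQuantizedBvh layout differs from the Linux sp layout");
#endif
```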
On macOS (LLVM compiler) and Windows (Visual C++ compiler), the same assertions often fail, as inferred from bvh.serialize() return values. Perhaps some padding is needed for data portability.
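As a generic illustration of that padding idea (this is not Bullet code), explicit pad bytes can pin down a layout that different compilers would otherwise pad differently:

```cpp
#include <cstdint>

// Every field and pad byte is spelled out, so the struct is 48 bytes
// on typical ABIs regardless of compiler-chosen padding rules.
struct PortableBvhHeader {
    double aabbMin[4];            // 32 bytes
    std::int32_t bulletVersion;   //  4 bytes
    std::int32_t curNodeIndex;    //  4 bytes
    std::uint8_t useQuantization; //  1 byte
    std::uint8_t padding[7];      //  7 explicit pad bytes
};

static_assert(sizeof(PortableBvhHeader) == 48,
              "PortableBvhHeader layout drifted");
```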