A bug when converting ONNX to TRT #79

Open · Son-Goku-gpu opened this issue Nov 4, 2023 · 1 comment

@Son-Goku-gpu

Hi, thanks for your great work. I borrowed some of your bev_pool_v2 code, specifically the registered op g.op('custom::BEVPoolV2TRT2'), its wrapper class, its CUDA implementation, and its Python API, all of which embed well into my project, so I can convert my .pth checkpoint to .onnx successfully.
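For context, the wrapper pattern mentioned above usually looks something like the following; this is a minimal sketch assuming the common torch.autograd.Function approach, with placeholder class and argument names rather than the project's actual code:

```python
import torch


class BEVPoolV2TRT(torch.autograd.Function):
    """Hypothetical wrapper class (names are placeholders).

    `forward` would call the real CUDA kernel; `symbolic` emits the
    custom ONNX node that the TensorRT plugin is meant to match.
    """

    @staticmethod
    def forward(ctx, feat, ranks):
        # Placeholder for the actual bev_pool_v2 CUDA implementation.
        return feat

    @staticmethod
    def symbolic(g, feat, ranks):
        # The node type must match the plugin name registered on the
        # TensorRT side, here 'BEVPoolV2TRT2' in the 'custom' domain.
        return g.op('custom::BEVPoolV2TRT2', feat, ranks)


def bev_pool_v2_trt(feat, ranks):
    # Python API wrapper used in the model's forward pass.
    return BEVPoolV2TRT.apply(feat, ranks)
```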
However, when I go on to convert the .onnx to a .engine, I hit the error below:

[screenshot: TensorRT ONNX parser error reporting that the plugin cannot be found]
Actually, I followed your instructions to install the TensorRT plugins and MMDeploy, and I also load "TensorRT/lib/libtensorrt_ops.so" with ctypes.CDLL(OS_PATH) before parsing the .onnx, so I assumed the plugins were already registered. But the error message shows that the plugin still cannot be found. Is there any step I missed when importing the plugins? Can you share any ideas? Thanks.
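For reference, the conversion step described above looks roughly like the following; this is a minimal sketch assuming TensorRT's Python API, with placeholder file paths. Printing the plugin registry right after loading the .so is one way to check whether BEVPoolV2TRT2 actually got registered:

```python
import ctypes

import tensorrt as trt

# Load the compiled plugin library first so that its plugin creators
# self-register with TensorRT's global plugin registry.
ctypes.CDLL('TensorRT/lib/libtensorrt_ops.so')  # placeholder path

logger = trt.Logger(trt.Logger.INFO)
# Also register TensorRT's built-in plugins under the default ('')
# namespace before parsing.
trt.init_libnvinfer_plugins(logger, '')

# Sanity check: the custom creator should appear in this list.
creators = trt.get_plugin_registry().plugin_creator_list
print([c.name for c in creators])

builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open('model.onnx', 'rb') as f:  # placeholder path
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
```

If BEVPoolV2TRT2 is missing from that list, the .so either failed to load or registered its creator under a name, version, or namespace that the parser is not matching.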

@qingwan7 commented Nov 7, 2023

How did you build the environment for exporting ONNX? Can you share it?
