
Jetson: does a GPU-enabled build also require TensorRT support? #2676

Open
captain0005 opened this issue Oct 28, 2024 · 0 comments

@captain0005

I am running on a Jetson Orin NX (arm). I want to build a GPU-enabled version of inspireface, so I enabled global CUDA support, but at runtime the GPU is not actually used, and the library also reports that TensorRT is missing. To use the GPU in this arm environment, do I also need to enable the TensorRT build option?

The messages below are what the library prints when I call it:
== Pikachu v0.1.4 ==
[any_net.h][72]: You have forced the global use of MNN_CUDA as the neural network inference backend
[inference_helper_mnn.cpp][Initialize][168]: Enable CUDA
The device support i8sdot:1, support fp16:1, support i8mm: 0
Can't Find type=2 backend, use 0 instead
Can't Find type=2 backend, use 0 instead
[any_net.h][72]: You have forced the global use of MNN_CUDA as the neural network inference backend
[inference_helper_mnn.cpp][Initialize][168]: Enable CUDA
Can't Find type=2 backend, use 0 instead
Can't Find type=2 backend, use 0 instead
[any_net.h][72]: You have forced the global use of MNN_CUDA as the neural network inference backend
[inference_helper_mnn.cpp][Initialize][168]: Enable CUDA
Can't Find type=2 backend, use 0 instead
Can't Find type=2 backend, use 0 instead
[any_net.h][72]: You have forced the global use of MNN_CUDA as the neural network inference backend
[inference_helper_mnn.cpp][Initialize][168]: Enable CUDA
Can't Find type=2 backend, use 0 instead
Can't Find type=2 backend, use 0 instead
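For context, the repeated `Can't Find type=2 backend, use 0 instead` line is MNN reporting that the requested backend (`MNN_FORWARD_CUDA`, enum value 2) was not compiled into the library, so inference falls back to CPU (`MNN_FORWARD_CPU`, value 0). A minimal sketch of a build configuration that compiles MNN's GPU backends — `-DMNN_CUDA` and `-DMNN_TENSORRT` are MNN's own CMake options, but how InspireFace's build forwards them to its bundled MNN is an assumption and may differ in your tree:

```shell
# Sketch, assumptions noted: -DMNN_CUDA=ON compiles MNN's CUDA backend
# (MNN_FORWARD_CUDA, the "type=2" in the log above); -DMNN_TENSORRT=ON
# additionally requires the TensorRT SDK that JetPack ships on the Orin NX.
cmake -S MNN -B MNN/build -DMNN_CUDA=ON -DMNN_TENSORRT=ON
cmake --build MNN/build -j"$(nproc)"
```

If MNN itself was built without the CUDA backend, forcing `MNN_CUDA` at runtime produces exactly the fallback message shown in the log, so verifying that the linked MNN was compiled with `-DMNN_CUDA=ON` is worth checking before adding TensorRT support.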
