I'm running this on the ARM platform of a Jetson Orin NX and want to build a GPU-enabled version of InspireFace. I enabled global CUDA support at build time, but at runtime the GPU is never actually used, and the library also reports that TensorRT is missing. To use the GPU in this ARM environment, do I also need to enable a TensorRT build option?
Below is the output printed when I call the library:
== Pikachu v0.1.4 ==
[any_net.h][72]: You have forced the global use of MNN_CUDA as the neural network inference backend
[inference_helper_mnn.cpp][Initialize][168]: Enable CUDA
The device support i8sdot:1, support fp16:1, support i8mm: 0
Can't Find type=2 backend, use 0 instead
Can't Find type=2 backend, use 0 instead
[any_net.h][72]: You have forced the global use of MNN_CUDA as the neural network inference backend
[inference_helper_mnn.cpp][Initialize][168]: Enable CUDA
Can't Find type=2 backend, use 0 instead
Can't Find type=2 backend, use 0 instead
[any_net.h][72]: You have forced the global use of MNN_CUDA as the neural network inference backend
[inference_helper_mnn.cpp][Initialize][168]: Enable CUDA
Can't Find type=2 backend, use 0 instead
Can't Find type=2 backend, use 0 instead
[any_net.h][72]: You have forced the global use of MNN_CUDA as the neural network inference backend
[inference_helper_mnn.cpp][Initialize][168]: Enable CUDA
Can't Find type=2 backend, use 0 instead
Can't Find type=2 backend, use 0 instead
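In MNN's logs, "type=2" corresponds to the CUDA forward type, and "use 0 instead" means it silently fell back to the CPU backend, which usually indicates the MNN CUDA backend library was not built or is not visible at runtime. A quick sanity check is a sketch like the one below; the library names (`libMNN_CUDA`, `libnvinfer`) are assumptions based on typical MNN and JetPack layouts, not something this log confirms:

```shell
#!/bin/sh
# Hypothetical diagnostic for the Jetson: check whether the dynamic linker
# can see the MNN CUDA backend and the TensorRT runtime (libnvinfer).
# Library names here are assumptions based on common MNN/JetPack builds.

if ldconfig -p 2>/dev/null | grep -q 'libMNN_CUDA'; then
    echo "MNN CUDA backend library found"
else
    echo "MNN CUDA backend library NOT found (rebuild MNN with CUDA enabled?)"
fi

if ldconfig -p 2>/dev/null | grep -q 'libnvinfer'; then
    echo "TensorRT runtime (libnvinfer) found"
else
    echo "TensorRT runtime (libnvinfer) NOT found (install via JetPack SDK?)"
fi
```

If the CUDA backend library is missing, forcing MNN_CUDA in InspireFace cannot work regardless of any TensorRT option, since MNN falls back to CPU exactly as the log shows.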