
Model formats


In order to deploy CNN models in FastPathology, only a selected set of formats is supported. Hence, if a model is trained in, for instance, Keras or PyTorch, it needs to be converted before it can be used. The supported formats are: TensorFlow (SavedModel), OpenVINO (.xml/.bin/.mapping and .onnx), and TensorRT (.uff and .onnx). Here are some examples of how to convert between formats:

  • TensorFlow/Keras -> ONNX: use the tf2onnx tool/library (see the sketch after this list).
  • Keras (TF1, .h5 format) -> TensorFlow, example.
  • PyTorch -> TorchScript -> ONNX -> OpenVINO, example.
  • Matlab -> ONNX -> OpenVINO, example.
  • TensorFlow -> TensorRT (UFF), example.
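
For the first route, tf2onnx offers a Python API in addition to its command-line interface. A minimal sketch, assuming a trained Keras model stored at the hypothetical path my_model.h5 with a single input tensor:

```python
import tensorflow as tf
import tf2onnx

# Load the trained Keras model ("my_model.h5" is a hypothetical path);
# compile=False skips restoring the training configuration.
model = tf.keras.models.load_model("my_model.h5", compile=False)

# Describe the model input; the name "input" is an arbitrary choice.
spec = (tf.TensorSpec(model.inputs[0].shape, tf.float32, name="input"),)

# Convert the model and write the ONNX file in one call.
tf2onnx.convert.from_keras(model, input_signature=spec, opset=13,
                           output_path="my_model.onnx")
```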

Note that the recommended format is ONNX, which can be used by both OpenVINO and TensorRT. OpenVINO runs on Intel processors, using either the CPU or the integrated GPU. TensorRT can be used for inference on a dedicated (NVIDIA) GPU.
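
As ONNX is the recommended route, exporting a PyTorch model directly to ONNX is often the simplest path. A minimal sketch using torch.onnx.export; the model, input shape, and file name below are placeholder assumptions:

```python
import torch
import torchvision

# Any torch.nn.Module will do; a torchvision ResNet is used purely as a stand-in.
model = torchvision.models.resnet18(pretrained=True)
model.eval()

# Dummy input defining the expected shape (batch, channels, height, width);
# 256x256 is an assumption, adjust it to your model.
dummy = torch.randn(1, 3, 256, 256)

torch.onnx.export(
    model, dummy, "model.onnx",
    input_names=["input"], output_names=["output"],
    opset_version=11,
    dynamic_axes={"input": {0: "batch"}, "output": {0: "batch"}},  # variable batch size
)
```

The resulting .onnx file can then be used directly by OpenVINO or TensorRT.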

TensorFlow models can also be used and support both CPU and GPU inference. In addition, for AMD GPUs, FAST supports the ROCm inference engine. However, it has not been tested rigorously with FastPathology, and will likely not work (yet). Lastly, an alternative model format for TensorRT is UFF, but I would recommend everyone to use ONNX instead, as it is more actively maintained and supports many more layers and operations (the list of all supported ONNX ops can be seen here).
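
For completeness, converting a TF2-era Keras .h5 model to the TensorFlow SavedModel format is a two-liner (the linked example above covers the TF1 route); the paths here are assumptions:

```python
import tensorflow as tf

# Load the HDF5-format model ("model.h5" is a hypothetical path);
# compile=False skips restoring the training configuration.
model = tf.keras.models.load_model("model.h5", compile=False)

# In TF2, saving to a directory path writes the SavedModel format.
model.save("saved_model_dir")
```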
