Releases: PINTO0309/onnx2tf

1.22.0

18 May 12:58
  • Docker Image (arm64, Apple Silicon Mac)
    The distributed Docker image was not compatible with ARM environments such as Apple Silicon Macs.
    Although Docker offers emulation of x86/amd64 environments, running onnx2tf under this emulation fails with the error shown below:

    $ root@39d07181ce27:/# onnx2tf -h
    > Illegal instruction

    This issue is suspected to be caused by PyPI TensorFlow's dependency on x86-specific instruction sets.

    To address this, I've augmented the release process with GitHub Actions to build and push Docker images for the arm64 architecture. This makes it possible to run onnx2tf with Docker on arm64 hosts.
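As an aside, the arch mismatch can be avoided by selecting the image platform that matches the host. A minimal sketch (a hypothetical helper, not part of onnx2tf) of mapping the host machine type to a Docker platform string:

```python
import platform

def docker_platform(machine: str = "") -> str:
    # Hypothetical helper (not part of onnx2tf): map the host machine
    # type to the Docker platform string so the native image is pulled
    # instead of running the x86 build under emulation, which is what
    # triggered the "Illegal instruction" crash.
    machine = (machine or platform.machine()).lower()
    if machine in ("arm64", "aarch64"):
        return "linux/arm64"
    return "linux/amd64"

print(docker_platform())
```

A wrapper script could then pass this value to `docker run --platform` to pin the correct image variant.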

What's Changed

Full Changelog: 1.21.6...1.22.0

1.21.6

17 May 14:17
  • MatMulInteger
    Currently, MatMulInteger is implemented as tf.matmul with int32 inputs/outputs, which leads to the generation of Flex(Batch)MatMul ops.

    When -rtpo MatMulInteger is specified, the inputs of MatMulInteger are cast to float32 instead, allowing the node to be converted to the builtin FullyConnected or BatchMatMul ops.

    (Screenshots: ONNX input graph, converted model before the change, and after.)
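Why the cast is safe: int8-by-int8 products accumulated over a modest inner dimension stay well within float32's 24-bit exact-integer range, so the float32 path reproduces the integer matmul exactly. A toy check (illustrative only, not the converter's code):

```python
import struct

def to_f32(x):
    # Round a Python float to IEEE float32 precision.
    return struct.unpack("f", struct.pack("f", x))[0]

def matmul_int(a, b):
    # Exact integer matmul (MatMulInteger semantics with zero points = 0).
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def matmul_float32(a, b):
    # Same matmul, but with inputs cast to float32 and float32 accumulation.
    out = []
    for i in range(len(a)):
        row = []
        for j in range(len(b[0])):
            acc = 0.0
            for k in range(len(b)):
                acc = to_f32(acc + to_f32(a[i][k]) * to_f32(b[k][j]))
            row.append(acc)
        out.append(row)
    return out

# int8-range inputs: every product and partial sum is < 2**24, hence exact.
a = [[-128, 127, 5], [3, -7, 100]]
b = [[90, -2], [-128, 64], [17, 33]]
assert [[int(v) for v in row] for row in matmul_float32(a, b)] == matmul_int(a, b)
```

For larger inner dimensions or wider integer types the sums can exceed float32's exact range, which is why this rewrite is opt-in via -rtpo rather than the default.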

What's Changed

Full Changelog: 1.21.5...1.21.6

1.21.4

16 May 11:04
  • Add -odrqt, --output-dynamic-range-quantized-tflite option.
    While output_integer_quantized_tflite already enables dynamic range quantized output, that option also triggers checks for calibration data, which is only required for full integer quantization, and therefore raises an error when no calibration data is provided.

    This is undesirable when only dynamic range quantization is wanted.

    A new option (-odrqt, --output-dynamic-range-quantized-tflite) enables dynamic range quantized output alone, without requiring calibration data.

    Before:

    $ onnx2tf -i some_model_with_non_regular_input_shape.onnx -oiqt
    (other output omitted)
    Model conversion started ============================================================
    INFO: input_op_name: input shape: [1] dtype: float32
    ERROR: For INT8 quantization, the input data type must be Float32. Also, if --custom_input_op_name_np_data_path is not specified, all input OPs must assume 4D tensor image data. INPUT Name: input INPUT Shape: [1] INPUT dtype: float32
    

    After:

    $ onnx2tf -i some_model_with_non_regular_input_shape.onnx -odrqt
    (other output omitted)
    saved_model output started ==========================================================
    saved_model output complete!
    WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
    W0000 00:00:1715853625.734342    7691 tf_tfl_flatbuffer_helpers.cc:390] Ignored output_format.
    W0000 00:00:1715853625.734397    7691 tf_tfl_flatbuffer_helpers.cc:393] Ignored drop_control_dependency.
    Float32 tflite output complete!
    W0000 00:00:1715853629.274694    7691 tf_tfl_flatbuffer_helpers.cc:390] Ignored output_format.
    W0000 00:00:1715853629.274724    7691 tf_tfl_flatbuffer_helpers.cc:393] Ignored drop_control_dependency.
    Float16 tflite output complete!
    W0000 00:00:1715853631.535535    7691 tf_tfl_flatbuffer_helpers.cc:390] Ignored output_format.
    W0000 00:00:1715853631.535568    7691 tf_tfl_flatbuffer_helpers.cc:393] Ignored drop_control_dependency.
    Dynamic Range Quantization tflite output complete!
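The distinction the new flag draws can be sketched as follows (a hypothetical helper for illustration, not onnx2tf's actual code): dynamic range quantization quantizes weights only and derives activation ranges at runtime, so no calibration samples are needed, while full integer quantization must observe activation ranges up front.

```python
# Hypothetical sketch: whether each quantization flag requires
# calibration data. Not onnx2tf's implementation.
QUANT_FLAGS = {
    "-oiqt": {"needs_calibration": True},   # full integer: activation ranges must be observed
    "-odrqt": {"needs_calibration": False}, # dynamic range: weights only, ranges at runtime
}

def check_calibration(flag: str, has_calibration_data: bool) -> None:
    # Raise only when full integer quantization is requested without data.
    if QUANT_FLAGS[flag]["needs_calibration"] and not has_calibration_data:
        raise ValueError(f"{flag}: calibration data required for full integer quantization")

check_calibration("-odrqt", has_calibration_data=False)  # passes: no calibration needed
```

This mirrors why the -oiqt run above errored out on a non-image input shape while the -odrqt run completed.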
    

What's Changed

Full Changelog: 1.21.3...1.21.4

1.21.3

16 May 08:54
68de5a2
  • Significantly faster flatbuffer update speed with -coion option
    Currently, the copy_onnx_input_output_names_to_tflite flag converts the tflite model to JSON for modification and then converts it back. For large models, these conversions take a long time and consume a large amount of disk space.

    FlatBuffers provides a Python API that allows model files to be read and written directly as Python objects. Running flatc --python --gen-object-api generates this object API from the downloaded schema file; it is then used to read the model, add the signature defs, and write the model back.
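A toy analogy using stdlib json (purely illustrative; the real change uses the generated FlatBuffers object API, not JSON): the round-trip path re-encodes the entire model as text and parses it back just to touch one field, while the object path mutates only what changes.

```python
import json

# Stand-in for a large tflite model as an in-memory object.
model = {"subgraphs": [{"tensors": list(range(10_000))}], "signature_defs": []}
sig = {"inputs": ["input"], "outputs": ["output"]}

# Round-trip path: serialize everything to text, parse it back, then edit.
# The intermediate text grows with the whole model, not with the edit.
text = json.dumps(model)
edited = json.loads(text)
edited["signature_defs"].append(sig)

# Direct path: mutate the loaded object in place; the cost is
# proportional to the edit and no intermediate text is materialized.
model["signature_defs"].append(sig)

assert model == edited
print(len(text))  # intermediate text scales with model size
```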

What's Changed

New Contributors

Full Changelog: 1.21.2...1.21.3

1.21.2

15 May 02:50
f7918ca

What's Changed

  • Added automatic error correction for GatherElements by @PINTO0309 in #630

Full Changelog: 1.21.1...1.21.2

1.21.1

13 May 10:04
3807991
  • Constant
    • Bring Constant layers that are not connected to the model into the model.

    • It is assumed that the -nuo option is specified, because running onnxsim would remove these constants from the ONNX file.

    • Wrap constants in a Lambda layer and force them into the model.

    • toy_with_constant.onnx.zip

    • Convert test

      onnx2tf -i toy_with_constant.onnx -nuo -cotof
      (Screenshots: ONNX graph and converted TFLite graph.)

    • Inference test

      import tensorflow as tf
      import numpy as np
      from pprint import pprint
      
      interpreter = tf.lite.Interpreter(model_path="saved_model/toy_with_constant_float32.tflite")
      interpreter.allocate_tensors()
      
      input_details = interpreter.get_input_details()
      output_details = interpreter.get_output_details()
      
      interpreter.set_tensor(
          tensor_index=input_details[0]['index'],
          value=np.ones(tuple(input_details[0]['shape']), dtype=np.float32)
      )
      interpreter.invoke()
      
      variable_output = interpreter.get_tensor(output_details[0]['index'])
      constant_output = interpreter.get_tensor(output_details[1]['index'])
      
      print("=================")
      print("Variable Output:")
      pprint(variable_output)
      print("=================")
      print("Constant Output:")
      pprint(constant_output)

      Output:

      =================
      Variable Output:
      array([[-0.02787317, -0.05505124,  0.05421712,  0.03526559, -0.14131774,
               0.0019211 ,  0.08399964,  0.00433664, -0.00984338, -0.03370604]],
            dtype=float32)
      =================
      Constant Output:
      array([1., 2., 3., 4., 5.], dtype=float32)
      
  • Constant outputs removed from ONNX during conversion #627
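The Lambda-wrapping trick above can be sketched framework-free (a hypothetical stand-in, not the Keras Lambda layer onnx2tf actually emits): a layer that ignores its runtime input and emits a baked-in constant can be carried through a graph that only propagates connected ops.

```python
# Illustrative sketch: a "Lambda-like" wrapper that ignores its input
# and returns a constant, letting an otherwise-unconnected constant
# ride along as a regular graph output.
def make_constant_layer(value):
    def layer(_inputs):
        # The runtime input is deliberately unused.
        return value
    return layer

const_layer = make_constant_layer([1.0, 2.0, 3.0, 4.0, 5.0])
print(const_layer(None))  # → [1.0, 2.0, 3.0, 4.0, 5.0]
```

This matches the inference test above, where the constant output `[1., 2., 3., 4., 5.]` is produced regardless of the model input.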

1.21.0

07 May 10:43
429627c
  • Fix Typo
  • Change API parameter names

What's Changed

Full Changelog: 1.20.10...1.21.0

1.20.10

07 May 06:55
a851283
  • YOLOvN ONNX
  • Fixed to restore metadata
  • pip install -U "sng4onnx>=1.0.4" "sne4onnx>=1.0.13"

What's Changed

Full Changelog: 1.20.9...1.20.10

1.20.9

06 May 04:46
30f25f7

What's Changed

  • [experimental] Support for dynamic Tile, dynamic Reshape by @PINTO0309 in #623

Full Changelog: 1.20.8...1.20.9

1.20.8

05 May 16:42
bf8e894
onnx2tf \
-i maskrcnn_resnet50_fpn.onnx \
-onimc boxes.55 onnx::Shape_3316 3315 onnx::Loop_3751


What's Changed

  • Improved conversion stability of subgraphs of If operations. by @PINTO0309 in #622

Full Changelog: 1.20.7...1.20.8