
Releases: dgasmith/opt_einsum

v3.4.0

26 Sep 13:40
c15aec2

3.4.0 / 2024-09-26

NumPy has been removed as a dependency of opt_einsum, allowing for more flexible installs. Type hints have been added to the code base, and the documentation has been overhauled and migrated to MkDocs: https://dgasmith.github.io/opt_einsum/

New Features

  • #160 Migrates docs to MkDocs Material and GitHub pages hosting.
  • #161 Adds Python type annotations to the code base.
  • #204 Removes NumPy as a hard dependency.

Enhancements

  • #154 Prevents an infinite recursion error when the memory_limit was set very low for the dp algorithm.
  • #155 Adds flake8 spell checking for the docstrings.
  • #159 Migrates to GitHub actions for CI.
  • #174 Prevents double contracts of floats in dynamic paths.
  • #196 Allows backend=None, which is equivalent to backend='auto'.
  • #208 Switches to ConfigParser instead of SafeConfigParser for Python 3.12 compatibility.
  • #228 backend='jaxlib' is now an alias for the jax library.
  • #237 Switches to ruff for formatting and linting.
  • #238 Removes numpy-specific keyword args from being explicitly defined in contract and uses **kwargs instead.
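A minimal sketch of the backend-selection behavior described in #196: backend=None is treated the same as backend='auto', which dispatches based on the types of the arrays passed in. This assumes NumPy is installed, even though it is no longer a hard dependency (#204); the shapes are illustrative.

```python
import numpy as np
import opt_einsum as oe

a = np.random.rand(3, 4)
b = np.random.rand(4, 5)

# These calls are equivalent for NumPy inputs.
c1 = oe.contract("ij,jk->ik", a, b)                # default: backend='auto'
c2 = oe.contract("ij,jk->ik", a, b, backend=None)  # alias for 'auto' (#196)

assert np.allclose(c1, c2)
```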

Bug Fixes

  • #195 Fixes a bug where dp would not work for scalar-only contractions.
  • #200 Fixes a bug where parse_einsum_input would not correctly respect shape-only contractions.
  • #222 Fixes an error in parse_einsum_input where an output subscript specified multiple times was not correctly caught.
  • #229 Fixes a bug where empty contraction lists in PathInfo would cause an error.

v3.3.0

19 Jul 22:38
c826bb7

Adds an object backend for optimized contractions on arbitrary Python objects.

New Features

  • (#145) Adds an object-based backend so that contract(backend='object') can be used on arbitrary objects such as SymPy symbols.

Enhancements

  • (#140) Better error messages when the requested contract backend cannot be found.
  • (#141) Adds a check with RandomOptimizers to ensure the objects are not accidentally reused for different contractions.
  • (#149) Limits the remaining category of the contract_path output to showing at most 20 tensors, preventing issues with quadratically scaling memory requirements and print-line counts for large contractions.
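A minimal sketch of the object backend added in #145, using plain Python ints in a NumPy object array rather than SymPy symbols. The backend builds the contraction from builtin operators, so any objects supporting + and * should work; the shapes and values here are illustrative.

```python
import numpy as np
import opt_einsum as oe

a = np.array([[1, 2], [3, 4]], dtype=object)
b = np.array([[5, 6], [7, 8]], dtype=object)

# Contract using builtin Python operators rather than a numeric backend.
c = oe.contract("ij,jk->ik", a, b, backend="object")
```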

v3.2.1

15 Apr 14:52
7756811

Bug Fixes

  • (#131) Fixes an einstein subscript error message to point to the correct value.
  • (#135) Lazily loads JAX similar to other backends.

v3.2.0

03 Mar 00:32
4b81186

Small fixes for the dp path and support for a new mars backend.

New Features

  • (#109) Adds mars backend support.

Enhancements

  • (#110) New 'auto-hq' and 'random-greedy-128' paths.
  • (#119) Fixes several edge cases in the dp path.

Bug fixes

  • (#127) Fixes an issue where Python 3.6 features are required while Python 3.5 is opt_einsum's stated minimum version.

v3.1.0

30 Sep 20:53

Adds a new dynamic programming algorithm to the suite of paths.
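A sketch of selecting the dynamic programming algorithm via the optimize keyword of contract_path; the chain contraction and shapes here are illustrative, not from the release notes.

```python
import numpy as np
import opt_einsum as oe

arrays = [np.random.rand(8, 8) for _ in range(4)]

# Request the dynamic programming path finder explicitly.
path, info = oe.contract_path("ab,bc,cd,de->ae", *arrays, optimize="dp")
```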

v3.0.1

22 Aug 13:10
f0ec70f

Alters setup.py to correctly state that opt_einsum requires Python 3.5+. This now correctly affects PyPI and other pip mirror downloads; v3.0.0 will be removed from PyPI to prevent further issues.

v3.0.0

12 Aug 17:46
4f340ec

This release moves opt_einsum to be backend-agnostic while adding support for additional backends such as JAX and Autograd. Support for Python 2.7 has been dropped, and Python 3.5 becomes the new minimum version; a Python deprecation policy equivalent to NumPy's has been adopted.

New Features

  • (#78) A new random optimizer has been implemented which uses Boltzmann weighting to explore alternative near-minimum paths using greedy-like schemes. This provides fairly large path performance enhancements with only a linear overhead in path-finding time.
  • (#78) A new PathOptimizer class has been implemented to provide a framework for building new optimizers. For example, custom cost functions can now be provided in the greedy formalism, enabling custom optimizers without a large amount of additional code.
  • (#81) The backend="auto" keyword has been implemented for contract, allowing automatic detection of the correct backend to use based on the tensors provided in the contraction.
  • (#88) Autograd and Jax support have been implemented.
  • (#96) Deprecates Python 2 functionality and devops improvements.

Enhancements

  • (#84) The contract_path function can now accept shape tuples rather than full tensors.
  • (#84) The contract_path automated path algorithm decision technology has been refactored to a standalone function.

v2.3.2

09 Dec 20:33

Bug Fixes:

  • (#77) Fixes a PyTorch v1.0 JIT tensor shape issue.

v2.3.1

01 Dec 16:23
34c149a

Bug Fixes:

  • Minor tweak to release procedure.

v2.3.0

01 Dec 15:37
02faa68

This release primarily focuses on expanding the suite of available path technologies to provide better optimization characteristics for 4-20 tensors while decreasing the time to find paths for 50-200+ tensors. See Path Overview for more information.

New Features:

  • (#60) A new greedy implementation has been added which is up to two orders of magnitude faster for 200 tensors.
  • (#73) Adds a new branch path that uses greedy ideas to prune the optimal exploration space, providing a better path than greedy at a lower search cost than optimal.
  • (#73) Adds a new auto keyword to the opt_einsum.contract path option. This keyword automatically chooses the best path technology that takes under 1ms to execute.

Enhancements:

  • (#61) The opt_einsum.contract path keyword has been changed to optimize to more closely match NumPy. path will be deprecated in the future.
  • (#61) The opt_einsum.contract_path now returns an opt_einsum.contract.PathInfo object that can be queried for the scaling, flops, and intermediates of the path. The print representation of this object is identical to before.
  • (#61) The memory_limit is now unlimited by default, based on community feedback.
  • (#66) The Torch backend will now use tensordot when using a version of Torch which includes this functionality.
  • (#68) Indices can now be any hashable object when provided in the "Interleaved Input" syntax.
  • (#74) Allows the default transpose operation to be overridden to take advantage of more advanced tensor transpose libraries.
  • (#73) The optimal path is now significantly faster.
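A sketch of the optimize keyword and the PathInfo object described in #61 and #73, querying attributes available on current releases; the equation and shapes are illustrative.

```python
import numpy as np
import opt_einsum as oe

a, b, c = (np.random.rand(10, 10) for _ in range(3))

# optimize='auto' picks the best path technology for the problem size (#73).
path, info = oe.contract_path("ij,jk,kl->il", a, b, c, optimize="auto")

flops = info.opt_cost                # total scalar operations along this path
largest = info.largest_intermediate  # size of the biggest intermediate tensor
print(info)                          # the print form is unchanged from before
```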

Bug fixes:

  • (#72) Fixes the "Interleaved Input" syntax and adds documentation.