As part of this paper I have made a Python package - `cotengra` - that is specifically aimed at contracting large, complex tensor networks, mostly in the form of `PathOptimizer` instances for `opt_einsum`. I thought (A) it might be of interest to `opt_einsum` users, but more practically (B) some of the stuff in it might be useful to incorporate into `opt_einsum` (though other bits are kinda dependency heavy).
Just to describe some of those 'experimental' bits:
`SliceFinder` and `SlicedContractor` - see #95

This is an implementation of contraction slicing (breaking contractions into many lower-memory chunks with as little overhead as possible). It's pure Python and, cf. #95, probably a good fit for `opt_einsum`. Anyhow, if anyone wants to try it and/or suggest improvements they now can!
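Just to illustrate the underlying idea, here's a minimal hand-rolled sketch of slicing a single index using plain `opt_einsum` and NumPy (not cotengra's actual `SliceFinder` API - the equation and shapes are made up for the example):

```python
import numpy as np
import opt_einsum as oe

# A small example contraction where slicing over the index 'k' breaks the
# work into independent, lower-memory chunks whose results are summed.
eq = "ak,bk,kc->abc"
A = np.random.rand(8, 16)
B = np.random.rand(8, 16)
C = np.random.rand(16, 8)

# full contraction for reference
full = oe.contract(eq, A, B, C)

# sliced version: fix k to a single value, contract the smaller network,
# and accumulate - each chunk only ever sees size-1 'k' intermediates
sliced = np.zeros_like(full)
for k in range(A.shape[1]):
    sliced += oe.contract("a,b,c->abc", A[:, k], B[:, k], C[k, :])

assert np.allclose(full, sliced)
```

`SliceFinder` automates the hard part - choosing *which* indices to slice so that memory drops below a target with as little extra floating point overhead as possible.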
A `ContractionTree` object
This 'tree' is really the core object that says how much space and time a contraction will take. It has a few nice advantages compared to handling an explicit path + `PathInfo`:
- Less redundancy - e.g. the trees for `path=[(0, 1), (0, 1), (0, 1)]` and `[(2, 3), (0, 1), (0, 1)]` are the same (see the sketch after this list). As well as then being able to check `tree1 == tree2`, this allows the terms to be rearranged so that as many constant tensors come last, memory use is best, etc.
- With a few tweaks it might be a fast way to generate (or replicate the same information as) the `PathInfo` - i.e. figuring out remaining indices etc.
- It's very easy to generate visualisations from.
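As a rough sketch of the redundancy point (this is a toy canonicalisation, not cotengra's actual `ContractionTree`), a path can be reduced to the set of tensor groupings it induces, and the two example paths above then compare equal:

```python
def path_to_tree(path, n):
    """Canonicalise an opt_einsum-style pairwise path over n tensors into
    the set of tensor groupings (tree nodes) it induces. A toy sketch,
    not cotengra's ContractionTree."""
    terms = [frozenset([i]) for i in range(n)]
    nodes = set(terms)
    for ci, cj in path:
        # pop the larger position first so the smaller one stays valid,
        # then append the merged term, mirroring opt_einsum's convention
        hi, lo = max(ci, cj), min(ci, cj)
        merged = terms.pop(hi) | terms.pop(lo)
        terms.append(merged)
        nodes.add(merged)
    return frozenset(nodes)

# the two example paths above describe exactly the same contraction tree
tree1 = path_to_tree([(0, 1), (0, 1), (0, 1)], n=4)
tree2 = path_to_tree([(2, 3), (0, 1), (0, 1)], n=4)
assert tree1 == tree2
```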
On the other hand these are kinda low-level and not really killer features if you just want to perform contractions.
`HyperOptimizer` stuff
This is an extension of the `RandomOptimizer`, but where you use Bayesian optimization / Gaussian processes to tune any algorithmic parameters and select between multiple possible path algorithms - so like guided sampling.
For example, the default mode selects between a tunable greedy (bottom-up) and a tunable hypergraph partitioning (top-down) method, to achieve very good performance across lots of different types of tensor expression.
It needs quite a lot of dependencies and is fairly heavy to run (basically only worthwhile if you want very high-quality contraction paths for large tensor networks), so maybe best as a separate package.
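For anyone who wants to try it, usage looks roughly like this (a sketch assuming cotengra's current keyword names such as `methods`, `max_repeats` and `progbar`, which may change; the equation and arrays are just placeholders):

```python
import numpy as np
import opt_einsum as oe
import cotengra as ctg  # pulls in the optional heavier dependencies

# a HyperOptimizer is itself a PathOptimizer, so it can be dropped
# straight into opt_einsum's `optimize=` argument
opt = ctg.HyperOptimizer(
    methods=["greedy", "kahypar"],  # candidate path algorithms to sample from
    max_repeats=128,                # number of trial trees to generate
    progbar=True,
)

eq = "ab,bc,cd,de->ae"
arrays = [np.random.rand(4, 4) for _ in range(4)]

path, info = oe.contract_path(eq, *arrays, optimize=opt)
print(info.opt_cost, info.largest_intermediate)
```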
@jcmgray these seem like great features, but often require networkx or other dependencies. Due to the install base, it doesn't seem like we can reasonably require anything beyond NumPy. Are there features without extra dependencies that you think are good candidates?