Error when using PyTorch backend on large contraction #92
Hi @emprice. This is sadly a limitation on the pytorch end. To clarify with regard to your points:
Some ways forward:
@emprice Great that you are adding this to pytorch directly!
You can also try slicing here - will hopefully get time to add to
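For reference, the slicing trick mentioned above can be illustrated with a plain NumPy example (a minimal sketch, not the actual opt_einsum machinery): fix one contracted index, perform the smaller contraction for each of its values, and sum the results. The peak intermediate shrinks by a factor of the sliced dimension's size, at the cost of repeating the contraction.

```python
import numpy as np

# Full contraction: a matrix product, where j is the contracted index.
a = np.random.rand(4, 3)
b = np.random.rand(3, 5)
full = np.einsum("ij,jk->ik", a, b)

# Sliced version: fix j to one value at a time, contract the smaller
# pieces (here an outer product), and accumulate the partial results.
sliced = sum(
    np.einsum("i,k->ik", a[:, j], b[j, :])
    for j in range(a.shape[1])
)

assert np.allclose(full, sliced)
```

The same idea applies to a large tensor network: slicing a few well-chosen indices can bring the memory footprint of the biggest intermediate under control.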
Great project! I'm looking forward to using it in my research.
I came across this when I was trying to optimize a tensor (as in the quimb documentation) and wanted to try my hand at a contraction manually. My machine was using 88GB RAM to perform the contraction, which is simply too much, and I wanted to see which step of the contraction was causing a problem, since none of my tensors are very large.
When I tried to put together a MWE, everything seemed to work great until I actually tried to do the contraction using the PyTorch backend. It's important for my application to use PyTorch because I need to do the optimization step (are there other backends that would let me do that?) for a time-evolved state. I get the error:
RuntimeError: only lowercase letters a-z allowed as indices
Now, this is obviously a limitation in PyTorch itself, not opt_einsum, but I'm wondering if there might be a clever workaround that I've missed? I could imagine that it would be possible to take a pre-computed contraction path and, for each intermediate einsum call, use indices that start over at "a", which might mitigate or eliminate the problem, depending on how many indices are contracted in a given step. If it helps, this is the contraction I was trying, which I generated for a 2-dimensional quimb TensorNetwork:
abc,dbef,gehi,jhkl,mkn,aopq,drpst,gusvw,jxvyz,mAyB,oCDE,rFDGH,uIGJK,xLJMN,AOMP,CQRS,FTRUV,IWUXY,LZXÀÁ,OÂÀÃ,QÄÅ,TÄÆÇ,WÆÈÉ,ZÈÊË,ÂÊÌ,ÍÎcÏ,ÐÎÑfÒ,ÓÑÔiÕ,ÖÔ×lØ,Ù×nÚ,ÍÛÜqÝ,ÐÞÜßtà,Óáßâwã,Öäâåzæ,ÙçåBè,ÛéêEë,ÞìêíHî,áïíðKñ,äòðóNô,çõóPö,é÷øSù,ìúøûVü,ïýûþYÿ,òĀþāÁĂ,õăāÃĄ,÷ąÅĆ,úąćÇĈ,ýćĉÉĊ,ĀĉċËČ,ăċÌč,ĎďÏ,ĐďđÒ,ĒđēÕ,ĔēĕØ,ĖĕÚ,ĎėĘÝ,ĐęĘĚà,ĒěĚĜã,ĔĝĜĞæ,ĖğĞè,ėĠġë,ęĢġģî,ěĤģĥñ,ĝĦĥħô,ğĨħö,ĠĩĪù,ĢīĪĬü,ĤĭĬĮÿ,ĦįĮİĂ,ĨıİĄ,ĩIJĆ,īIJijĈ,ĭijĴĊ,įĴĵČ,ıĵč->
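The relabeling idea suggested above can be sketched in a few lines (a hypothetical illustration, not opt_einsum's internals): since each pairwise step in a contraction path touches far fewer than 26 distinct indices, its subscripts can be remapped to a-z before being handed to the backend, even when the full network uses exotic Unicode symbols like those in the expression above.

```python
import string

def relabel_to_ascii(subscripts):
    """Remap the index symbols of a single einsum expression to 'a'-'z'.

    This only works if the expression uses at most 26 distinct symbols,
    which is exactly why the relabeling must happen per pairwise step of
    the contraction path rather than for the whole network at once.
    """
    mapping = {}
    out = []
    for ch in subscripts:
        if ch in ",->":          # structural characters pass through
            out.append(ch)
            continue
        if ch not in mapping:    # assign the next free lowercase letter
            if len(mapping) >= 26:
                raise ValueError("more than 26 distinct indices in one step")
            mapping[ch] = string.ascii_lowercase[len(mapping)]
        out.append(mapping[ch])
    return "".join(out)

# One pairwise step pulled from a larger contraction:
print(relabel_to_ascii("ÍÎcÏ,ÐÎÑfÒ->ÍcÏÐÑfÒ"))  # abcd,ebfgh->acdefgh
```

Since the relabeling is a pure renaming of bound indices, the numerical result of each pairwise contraction is unchanged.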
Thanks in advance for any help! Depending on the feasibility of the above, I might be able to help out in developing functionality that would allow contractions like this one, since they're important for my project.