For gcp_opt we are using the default number of steps per iteration, so max_iters=2 with 1000 steps per iteration is still a lot of steps. Looking at a few problems, reducing it to 1 epoch still seems to yield decent results for demonstration purposes. I'm uncertain how much hyper-parameter tuning is really worthwhile for demo notebooks anyway.
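A minimal sketch of what the reduced setting could look like, assuming pyttb's stochastic GCP solvers expose `max_iters` (epochs) and `epoch_iters` (steps per epoch) as the defaults above suggest; the tensor, rank, and Gaussian objective here are placeholder choices, not necessarily what the notebook uses:

```python
import pyttb as ttb
from pyttb.gcp.handles import Objectives
from pyttb.gcp.optimizers import Adam

# Placeholder demo tensor; the notebook's actual data would go here.
X = ttb.tenrand((10, 10, 10))

# One epoch of the default 1000 steps instead of max_iters=2
# (parameter names assumed from the defaults discussed above).
optimizer = Adam(max_iters=1, epoch_iters=1000)

result, initial_guess, info = ttb.gcp_opt(
    data=X, rank=2, objective=Objectives.GAUSSIAN, optimizer=optimizer
)
```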
For HOSVD it looks like most of the time is spent at the end of the notebook, where the decompositions are run multiple times for timing. We could reduce the 10 repetitions to 5 and still get a reasonable spread, or possibly just run them once; I'm not sure how insightful the iterative timing actually is.
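For reference, the reduced timing loop might look something like this; `n_repeats`, the tensor `X`, and the tolerance are placeholders, and the `ttb.hosvd(X, tol)` call is assumed to match the one in the notebook:

```python
import time
import pyttb as ttb

# Placeholder stand-ins for the notebook's tensor and tolerance.
X = ttb.tenrand((50, 50, 50))
tol = 1e-4

n_repeats = 5  # reduced from 10; should still give a reasonable spread
times = []
for _ in range(n_repeats):
    start = time.perf_counter()
    ttb.hosvd(X, tol)  # assumed call signature: (input_tensor, tol, ...)
    times.append(time.perf_counter() - start)

print(f"min/mean over {n_repeats} runs: "
      f"{min(times):.3f}s / {sum(times) / len(times):.3f}s")
```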
Consider changing some tutorial parameters so that these tutorials run faster without undermining their educational value.