Replies: 2 comments 1 reply
-
Adam + SR is a recipe for disaster; use it at your own risk. SR should be used with SGD, maybe with momentum if you are very careful.
QuTiP matrices are to NetKet matrices as UK cars are to EU cars. It would be lovely if the UK adopted the standard of the rest of the world instead of exporting its weird standard to its colonies. I don't understand why you wouldn't do this check directly in NetKet:

```python
>>> import jax.numpy as jnp
>>> jnp.sum(jnp.abs(lind.to_sparse() @ vs.to_matrix().reshape(-1))**2)
DeviceArray(1.43480886e-05, dtype=float64)
```

which is correct (this is after running your script). By contrast,

```python
>>> vs.expect(lind.H @ lind)
1.085e-09 ± 3.329e-27 [σ²=1.193e-49, R̂=0.9984]
```

is underestimated because the sampling is wrong. If you increase the number of samples you get the good result:

```python
>>> vs.n_samples = 50000
>>> vs.expect(lind.H @ lind)
0.0000050 ± 0.0000050 [σ²=0.0000012, R̂=1.0000]
```
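For what it's worth, the exact check can also be reproduced with plain numpy, without QuTiP or NetKet. This is a sketch of my own construction (not NetKet's internals): it builds the Liouvillian superoperator for a single spin with H = 0 and jump operator σ⁻, using row-major vectorisation so that vec(AρB) = (A ⊗ Bᵀ) vec(ρ), and verifies that ‖L vec(ρ)‖² vanishes on the all-down steady state:

```python
import numpy as np

# single spin, basis |up> = (1, 0), |down> = (0, 1)
sm = np.array([[0.0, 0.0], [1.0, 0.0]])   # jump operator sigma^-
I2 = np.eye(2)
n = sm.conj().T @ sm                      # sigma^+ sigma^- = |up><up|

# Lindblad dissipator in row-major vectorisation:
# vec(c rho c^dag) - 1/2 vec(c^dag c rho) - 1/2 vec(rho c^dag c)
L = (np.kron(sm, sm.conj())
     - 0.5 * np.kron(n, I2)
     - 0.5 * np.kron(I2, n.T))

rho_ss = np.diag([0.0, 1.0])              # |down><down|
residual = np.sum(np.abs(L @ rho_ss.reshape(-1)) ** 2)
print(residual)                           # 0.0 for the true steady state
```

The `reshape(-1)` matches the row-major flattening used in the `vs.to_matrix()` check above; with column-stacking the kron factors would swap.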
I find this very weird, because those three things all share the same code path.

```python
# variational state, run "in one go"
vs = nk.vqs.MCMixedState(sa, ma, n_samples=5000, n_samples_diag=5000, seed=123)
vs.init_parameters(nk.nn.initializers.normal(stddev=0.001), seed=123)

ss = nk.SteadyState(lind, op, variational_state=vs, preconditioner=sr)
log1 = nk.logging.RuntimeLog()
ss.run(n_iter=50, out=log1, obs={'z': nk.operator.spin.sigmaz(hi, 0)})

# variational state, run one iteration at a time
vs2 = nk.vqs.MCMixedState(sa, ma, n_samples=5000, n_samples_diag=5000, seed=123)
vs2.init_parameters(nk.nn.initializers.normal(stddev=0.001), seed=123)

ss = nk.SteadyState(lind, op, variational_state=vs2, preconditioner=sr)
data = []
for i in tqdm(range(50)):
    ss.run(n_iter=1, out="test", show_progress=False)
    data.append(ss.estimate(nk.operator.spin.sigmaz(hi, 0)).Mean)

print(vs.expect(nk.operator.spin.sigmaz(hi, 0)))
```

and I check that the parameters are the same by doing

```python
import jax
import jax.numpy as jnp

print(jax.tree_multimap(lambda a, b: jnp.allclose(a, b), vs.parameters, vs2.parameters))
```

I'm running the latest release of NetKet (3.0b3), but this stuff has been stable for a while now.
-
Hi, I haven't checked this in detail, but what happens is that the ML model in NetKet does not evaluate all the matrix elements of the density matrix (or of the ground-state vector, in the Hamiltonian case). Instead, it restricts itself to a corner of the Hilbert space and uses the matrix elements there to compute expectation values. However, matrix elements that never appear in the samples can be wrongly estimated by the neural quantum state.
I witnessed this while working on the Bose-Hubbard Hamiltonian in the following paper: https://journals.jps.jp/doi/abs/10.7566/JPSJ.89.094002
This is essentially a consequence of learning the wave function only in the corner of the Hilbert space that is relevant for the ground or steady state; outside this corner, you are not guaranteed a correct representation of the wavefunction.
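This failure mode can be mimicked with a toy Monte Carlo estimate that has nothing to do with NetKet itself: when a configuration carrying most of the local estimator is rarely (or never) visited by the sampler, the sample mean badly underestimates the exact expectation, and only grows towards it as the number of samples increases. A minimal numpy sketch, with all numbers invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

p = np.array([0.999, 0.001])   # sampling distribution: state 1 is rarely visited
f = np.array([0.0, 1.0e3])     # local estimator: state 1 carries the whole contribution

exact = float(p @ f)           # exact expectation value = 1.0

# small sample: state 1 is typically never drawn, so the estimate collapses towards 0
small = f[rng.choice(2, size=100, p=p)].mean()

# large sample: state 1 appears often enough for the mean to approach the exact value
large = f[rng.choice(2, size=200_000, p=p)].mean()

print(exact, small, large)
```

The same mechanism explains why the error bar on the small-sample estimate is also misleadingly tiny: the variance is computed from samples that all sit in the well-learned corner.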
…On Wed, 14 Jul 2021 at 09:34, skothegh ***@***.***> wrote:
Hi, I am not using SR. It's commented out because I regularly switch between SGD and Adam; I forgot to delete the comment in the minimal example.
The transposition is taken care of. If the code returns the correct density matrix, I also get the correct result in QuTiP. I tend to use QuTiP for exact calculations because I am more familiar with it, and I didn't know about the NetKet options.
That you get the same results for all three is incredibly weird. I only reported what I saw.
--
Best wishes,
Vladimir Vargas-Calderón
<https://www.researchgate.net/profile/Vladimir_Vargas-Calderon>
PhD Physics Student @ Universidad Nacional de Colombia
--
*Aviso legal:* El contenido de este mensaje y los archivos adjuntos son
confidenciales y de uso exclusivo de la Universidad Nacional de Colombia.
Se encuentran dirigidos sólo para el uso del destinatario al cual van
enviados. La reproducción, lectura y/o copia se encuentran prohibidas a
cualquier persona diferente a este y puede ser ilegal. Si usted lo ha
recibido por error, infórmenos y elimínelo de su correo. Los Datos
Personales serán tratados conforme a la Ley 1581 de 2012 y a nuestra
Política de Datos Personales que podrá consultar en la página web
www.unal.edu.co <http://www.unal.edu.co/>.* *Las opiniones, informaciones,
conclusiones y cualquier otro tipo de dato contenido en este correo
electrónico, no relacionados con la actividad de la Universidad Nacional de
Colombia, se entenderá como personales y de ninguna manera son avaladas por
la Universidad.
|
-
Hola,
when calculating the steady state of a simple toy model using the Adam optimizer, I found that while LdagL converged towards 0 fairly quickly, I tended to get density matrices that differed wildly and randomly from the expected density matrix.
To get a better clue of what was going on, I loaded the resulting density matrix into a QuTiP script, calculated LdagL exactly, and found that it was usually not zero.
When calculating all expectation values at every iteration (ss.run(n_iter=1) for 50 steps), I found that the result always converged perfectly to the correct density matrix. When I only measured at the end of the calculation, the density matrix was wrong again.
Below you find the minimal example. The Lindbladian only contains a dissipative part where all jump operators are sigmam(); the Hamiltonian is 0. This should result in a simple steady state with all spins pointing down.
The script calculates the steady-state density matrix three times: once "in one go" (n_iter=50), once without measuring anything, and once measuring sigmax(0) at each step. At the end of each calculation it measures sigmaz(0). You will find that the last calculation always correctly estimates sz = -1, while the first two can result in anything (including -1, sometimes).
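As an independent sanity check of the expected answer (a sketch of my own in plain numpy, not part of the original script), one can verify exactly for a small chain that with H = 0 and a σ⁻ jump operator on every site, the Liouvillian annihilates the all-down product state and ⟨σz⟩ on site 0 is -1:

```python
import numpy as np

# two spins, basis |up> = (1, 0), |down> = (0, 1)
sm = np.array([[0.0, 0.0], [1.0, 0.0]])    # sigma^-
sz = np.diag([1.0, -1.0])
I2 = np.eye(2)

def dissipator(c, dim):
    """Lindblad dissipator superoperator, row-major vec: vec(A rho B) = (A kron B^T) vec(rho)."""
    n = c.conj().T @ c
    Id = np.eye(dim)
    return (np.kron(c, c.conj())
            - 0.5 * np.kron(n, Id)
            - 0.5 * np.kron(Id, n.T))

# jump operators sigma^- on each of the two sites (Hamiltonian is 0)
c_ops = [np.kron(sm, I2), np.kron(I2, sm)]
L = sum(dissipator(c, 4) for c in c_ops)

rho_ss = np.zeros((4, 4))
rho_ss[3, 3] = 1.0                         # |down, down><down, down|

residual = np.sum(np.abs(L @ rho_ss.reshape(-1)) ** 2)
sz0 = np.trace(rho_ss @ np.kron(sz, I2)).real
print(residual, sz0)                       # 0.0 and -1.0
```

Any converged run of the variational script should therefore report sz close to -1; a value far from that signals the density matrix, not the observable, is wrong.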