
Error getting latent molecule representation - JTNNVAE fails to initialize #26

Open
jgmeyerucsd opened this issue Sep 17, 2018 · 3 comments


@jgmeyerucsd

I'm using Python 3.6 with torch 0.4.1, and I'm trying to use your model to map molecules into latent space, but when I run this:

python gen_latent.py --data ../LINCS/cp_trt.smi --vocab data/vocab.txt --hidden 450 --depth 3 --latent 56 --model molvae/MPNVAE-h450-L56-d3-beta0.005/model.iter-4

I get the following output:

/home/jgmeyer2/anaconda3/envs/vangan/lib/python3.6/site-packages/torch/nn/functional.py:52: UserWarning: size_average and reduce args will be deprecated, please use reduction='sum' instead.
warnings.warn(warning.format(ret))
Traceback (most recent call last):
File "gen_latent.py", line 41, in
model = JTNNVAE(vocab, hidden_size, latent_size, depth)
File "/home/jgmeyer2/icml18-jtnn/jtnn/jtnn_vae.py", line 40, in init
self.T_mean = nn.Linear(hidden_size, latent_size / 2)
File "/home/jgmeyer2/anaconda3/envs/vangan/lib/python3.6/site-packages/torch/nn/modules/linear.py", line 41, in init
self.weight = Parameter(torch.Tensor(out_features, in_features))
TypeError: new() received an invalid combination of arguments - got (float, int), but expected one of:

  • (torch.device device)
  • (torch.Storage storage)
  • (Tensor other)
  • (tuple of ints size, torch.device device)
    didn't match because some of the arguments have invalid types: (float, int)
  • (object data, torch.device device)
    didn't match because some of the arguments have invalid types: (float, int)

It seems that torch doesn't like the way nn.Linear is initialized?
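
For context, the root cause is Python 3's true division: `latent_size / 2` evaluates to a float, and nn.Linear only accepts integer feature counts. A minimal sketch, using the sizes from the command above (hidden 450, latent 56); any PyTorch from 0.4 onward behaves the same way:

```python
# Minimal reproduction of the failure (sizes taken from the command line above).
import torch.nn as nn

hidden_size, latent_size = 450, 56
print(latent_size / 2)  # 28.0 -- Python 3 "/" is true division and returns a float

try:
    nn.Linear(hidden_size, latent_size / 2)  # non-integer out_features
except TypeError as err:
    print("TypeError:", err)  # same class of error as in the traceback
```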

@NamanChuriwala

Try
self.T_mean = nn.Linear(hidden_size, int(latent_size / 2))
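
Floor division is an equivalent fix, assuming latent_size is even (it is 56 in the command above):

```python
self.T_mean = nn.Linear(hidden_size, latent_size // 2)  # "//" keeps the result an int
```

This is presumably why the original line worked under Python 2, where `/` on two ints is already integer division.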

@jgmeyerucsd
Author

jgmeyerucsd commented Sep 18, 2018 via email

@jgmeyerucsd
Author

OK, wrapping that in int() as you describe does work.

I was confused because even though I changed jtnn_vae.py, saved it, and re-imported it into my Jupyter notebook, it did not change the kernel's definition of JTNNVAE().

I realized this because it kept printing the same error (got (float, int)) and pointing to the same line, even though I had moved the line, commented it out, or deleted it.

Thanks for your help.
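
For the stale-definition problem, here is a sketch of one way to pick up the edited module in a running kernel without restarting it, assuming the package is importable as jtnn.jtnn_vae (matching the path in the traceback):

```python
import importlib
import jtnn.jtnn_vae

importlib.reload(jtnn.jtnn_vae)      # re-executes the edited jtnn_vae.py
from jtnn.jtnn_vae import JTNNVAE    # rebind the name to the reloaded class

# Re-create the model afterwards; objects built before the reload still use the old class.
# model = JTNNVAE(vocab, hidden_size, latent_size, depth)
```

Restarting the kernel also works; the key point is that a plain re-import does not re-execute a module that is already loaded.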
