Gpu batch #35
base: master
Conversation
This could benefit from two levels of CUDA support: one fully on GPU (all train and test) and one batched?
@Jeanselme I think this still requires the model to be moved to CPU before:
-                       risk=str(r+1)).detach().numpy())
+        loss += float(losses.conditional_loss(self.torch_model,
+                      x_val, t_val, e_val, elbo=False,
+                      risk=str(r+1)).detach().cpu().numpy())
I don't think this needs to be detached and moved to CPU; torch doesn't track this variable.
One has to be careful with the test set's size, as it will not easily fit on the GPU (either batch it or put the model on CPU).
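The batching concern above can be sketched as a helper that streams the test set through the GPU one batch at a time, so the full dataset never has to fit in device memory. `batched_loss`, `loss_fn`, and the tensor names here are illustrative assumptions, not this repository's API:

```python
import torch

def batched_loss(model, loss_fn, x, t, e, batch_size=256, device="cuda"):
    """Accumulate loss_fn over (x, t, e) in batches, keeping only one
    batch on `device` at a time (hypothetical helper, not the repo's API)."""
    total = 0.0
    with torch.no_grad():  # evaluation only, no gradient tracking needed
        for i in range(0, len(x), batch_size):
            xb = x[i:i + batch_size].to(device)
            tb = t[i:i + batch_size].to(device)
            eb = e[i:i + batch_size].to(device)
            # .item() moves the scalar result to CPU and casts it to float
            total += loss_fn(model, xb, tb, eb).item()
    return total
```

Passing `device="cpu"` gives the other option mentioned above (evaluating entirely on CPU) with the same code path.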
A more elegant way of obtaining the value from a unit-length tensor.
Codecov Report
@@            Coverage Diff             @@
##           master      #35      +/-   ##
==========================================
- Coverage   53.18%   52.76%   -0.43%
==========================================
  Files           7        7
  Lines         831      851      +20
==========================================
+ Hits          442      449       +7
- Misses        389      402      +13
Continue to review full report at Codecov.
@@ -179,8 +186,7 @@ def train_dsm(model,
                                         elbo=False,
                                         risk=str(r+1))

-        valid_loss = valid_loss.detach().cpu().numpy()
-        costs.append(float(valid_loss))
+        costs.append(valid_loss.item())
Let's use float().
.item() automatically moves the value to CPU if necessary and casts it.
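For a single-element tensor the two are equivalent: `float()` on such a tensor delegates to `.item()`, which copies the value to CPU (if it lives on GPU) and returns a plain Python scalar. A minimal sketch:

```python
import torch

# A scalar (0-d) tensor, e.g. a summed validation loss.
valid_loss = torch.tensor([1.5, 2.5]).sum()

a = valid_loss.item()   # Python float; moves to CPU if needed
b = float(valid_loss)   # equivalent for single-element tensors

assert a == b == 4.0
assert isinstance(a, float)
```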
I am not sure why you use .item()?
@@ -74,9 +74,9 @@ def pretrain_dsm(model, t_train, e_train, t_valid, e_valid,
         valid_loss = 0
         for r in range(model.risks):
             valid_loss += unconditional_loss(premodel, t_valid, e_valid, str(r+1))
-        valid_loss = valid_loss.detach().cpu().numpy()
+        valid_loss = valid_loss.item()
Let's use float().
Allow putting only one batch at a time on the GPU, to limit memory use.
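The training-side version of that pattern can be sketched as follows, assuming a generic `(model, loss_fn, optimizer)` triple rather than this repository's actual training API: the dataset stays on CPU, and only the current minibatch is copied to the device for the forward/backward pass.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

def train_epoch(model, loss_fn, optimizer, dataset, device, batch_size=128):
    """One epoch where only the active minibatch occupies GPU memory
    (illustrative sketch; names are placeholders, not the repo's API)."""
    loader = DataLoader(dataset, batch_size=batch_size, shuffle=True)
    for xb, tb, eb in loader:
        # Copy just this minibatch to the training device.
        xb, tb, eb = xb.to(device), tb.to(device), eb.to(device)
        optimizer.zero_grad()
        loss = loss_fn(model, xb, tb, eb)
        loss.backward()
        optimizer.step()
```

The trade-off is extra host-to-device transfer per batch in exchange for a memory footprint bounded by `batch_size` instead of the full dataset.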