This repository has been archived by the owner on Mar 19, 2021. It is now read-only.
Add epoch-averaged (train and dev) values of the summarized metrics to TensorBoard. After a few epochs it is hard to tell anything from the per-batch graphs.
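The request above can be sketched independently of the summary-writing machinery: accumulate the per-batch scalars over one epoch and emit a single averaged value per metric at epoch end. This is a minimal sketch; the class and method names are invented, and in a real setup the result of `averages()` would be written to TensorBoard (e.g. via `tf.summary.scalar`) once per epoch.

```python
# Hypothetical sketch: accumulate per-batch metric values and produce one
# epoch-averaged scalar per metric, to be logged to TensorBoard at epoch end.

class EpochAverager:
    """Running mean of per-batch metric values within one epoch."""

    def __init__(self):
        self._totals = {}  # metric name -> weighted sum of batch values
        self._counts = {}  # metric name -> number of examples seen

    def update(self, metrics, batch_size=1):
        # metrics: dict of per-batch scalars, e.g. {"loss": 0.42, "acc": 0.91}
        for name, value in metrics.items():
            self._totals[name] = self._totals.get(name, 0.0) + value * batch_size
            self._counts[name] = self._counts.get(name, 0) + batch_size

    def averages(self):
        # Epoch-level averages, weighted by batch size.
        return {name: self._totals[name] / self._counts[name]
                for name in self._totals}

    def reset(self):
        # Call between epochs so train and dev epochs don't mix.
        self._totals.clear()
        self._counts.clear()
```

At the end of each epoch, the caller would write `averages()` to the summary writer and then `reset()` before the next epoch starts.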
IMO the per-batch graphs can be informative to find out what's going on when things don't work. For instance, by looking at the per batch gradient norms or spikes in the loss / accuracy.
At this point graph compatibility is everything. I guess the ramification here would be that we need three additional Optional ops in TaggerGraph, right?
If graph compatibility means loading an old model with a newly written graph, then we need four optional ops, since the variable val_epoch will be missing when calling the restore op of a new graph. So it would also need to be a placeholder.
If it only means being able to load both graphs on the Rust side, then it's three optional ops.
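The backward-compatibility concern in the comments above can be sketched as a partition of the graph's variables: restore from the old checkpoint only what exists in it, and initialize anything new (such as a val_epoch counter) fresh. The function name and variable names here are hypothetical; with a TensorFlow 1.x graph this would correspond to building `tf.compat.v1.train.Saver(var_list=...)` over the shared variables rather than failing the full restore op.

```python
# Hypothetical sketch: split a new graph's variables into those that can be
# restored from an old checkpoint and those that must be freshly initialized.

def split_restore_vars(graph_vars, checkpoint_vars):
    """Partition graph variables into (restore_from_checkpoint, init_fresh)."""
    ckpt = set(checkpoint_vars)
    restore = [v for v in graph_vars if v in ckpt]
    fresh = [v for v in graph_vars if v not in ckpt]
    return restore, fresh
```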
All in all, it may be a good idea to rewrite the graph after training, producing a dedicated inference graph with a stable interface; these compatibility problems would then only apply to training graphs.
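The rewrite-for-inference idea amounts to pruning the trained graph down to the nodes reachable from the inference outputs, so training-only ops (optimizer, epoch counters, summaries) drop out of the serialized interface. This is a toy sketch over a plain adjacency dict with invented node names; a real implementation would operate on a TensorFlow GraphDef, e.g. via `tf.compat.v1.graph_util.extract_sub_graph`.

```python
# Hypothetical sketch: keep only graph nodes reachable (via input edges)
# from the inference outputs, discarding training-only ops.

def extract_inference_graph(nodes, outputs):
    # nodes: dict mapping node name -> list of its input node names
    # outputs: names of the inference output nodes to keep
    keep = set()
    stack = list(outputs)
    while stack:
        name = stack.pop()
        if name in keep:
            continue
        keep.add(name)
        stack.extend(nodes.get(name, []))
    return {name: inputs for name, inputs in nodes.items() if name in keep}
```

On a toy graph where `top_k` is the inference output, pruning drops `loss`, `train_op`, and a training-only `val_epoch` node while keeping the `inputs -> logits -> top_k` chain intact.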