Thank you very much for your book; it has been a great help for me in getting started with RL. ^^
Describe the bug
When executing example code 4.7 (vanilla_dqn, without any changes), the warning message below appears.
To Reproduce
OS and environment: Ubuntu 20.04
SLM Lab git SHA (run `git rev-parse HEAD` to get it): 5fa5ee3 (from the file "SLM-lab/data/vanilla_dqn_boltzmann_cartpole_2022_07_15_092012/vanilla_dqn_boltzmann_cartpole_t0_spec.json")
Spec file used: SLM-lab/slm_lab/spec/benchmark/dqn/dqn_cartpole.json
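The report does not state the exact launch command; assuming the standard SLM Lab invocation (spec file, spec name, lab mode), it would look like:

python run_lab.py slm_lab/spec/benchmark/dqn/dqn_cartpole.json vanilla_dqn_boltzmann_cartpole train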
Additional context
After the warning occurred, training proceeded much more slowly than with other methods (it took over an hour, versus about 15 minutes for SARSA), and the result is also strange: mean_returns_ma gradually decreases to about 50 after 30k frames.
I wonder whether the result of this trial is related to the warning.
Error logs
[2022-07-15 09:20:14,002 PID:245693 INFO logger.py info] Running RL loop for trial 0 session 3
[2022-07-15 09:20:14,006 PID:245693 INFO __init__.py log_summary] Trial 0 session 3 vanilla_dqn_boltzmann_cartpole_t0_s3 [train_df] epi: 0 t: 0 wall_t: 0 opt_step: 0 frame: 0 fps: 0 total_reward: nan total_reward_ma: nan loss: nan lr: 0.01 explore_var: 5 entropy_coef: nan entropy: nan grad_norm: nan
/home/eric/miniconda3/envs/lab/lib/python3.7/site-packages/torch/optim/lr_scheduler.py:100: UserWarning:
Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
(the same warning is repeated several times in the log)
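For reference, here is a minimal sketch of the call order PyTorch expects, written in generic PyTorch rather than SLM Lab's actual training loop (the model, optimizer, and scheduler settings are placeholders for illustration):

import torch
import torch.nn as nn

model = nn.Linear(4, 2)  # placeholder network
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1000, gamma=0.9)

for step in range(10):
    optimizer.zero_grad()
    loss = model(torch.randn(8, 4)).pow(2).mean()  # dummy loss for illustration
    loss.backward()
    optimizer.step()    # update parameters first...
    scheduler.step()    # ...then advance the learning-rate schedule

The warning fires when scheduler.step() is called before the first optimizer.step(); PyTorch then skips the first value of the learning-rate schedule.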